Podcasts about GCP

  • 389 podcasts
  • 2,251 episodes
  • 49 min average episode duration
  • 5 new episodes per week
  • Latest episode: Sep 28, 2022

POPULARITY

2015–2022 (trend chart)


Latest podcast episodes about GCP

The Grit City Podcast
GCP: Streetbeefs Scrapyard - LIVE at The Scrapyard!

Sep 28, 2022 · 39:46


This time the GCP guys were at the latest Streetbeefs Scrapyard event. Streetbeefs Scrapyard is a Pacific Northwest backyard fighting organization located in Gig Harbor, a branch of Streetbeefs, the largest backyard fight group in the USA, created in 2008. Steve brought the Streetbeefs concept "Guns Down, Gloves Up!" to the PNW in 2020.

00:16 – Justin comes up with the newest business idea to pass on to Steve, Scott talks about being excited to record this episode, and Justin briefly describes Streetbeefs Scrapyard. They play part of a fight so listeners can get a feel for what it's like, Buddy V introduces himself, and he explains how he ended up getting started in Scrapyard.

9:47 – Buddy V talks about the lives the fighters have outside of fighting, why he fights, and how he got involved in the interviews he does for Scrapyard. He shares where people can find him online, Jeff and Scott encourage Justin to become a fighter himself, and they talk to Anomaly about his involvement with the club.

18:58 – Anomaly talks about the types of fighting he enjoys watching at the event, whether he's refereeing, and how they go through the rules before each event. He explains why it's important as a ref to watch the losing fighter, Scott and Justin guess who they think will win the current fight, and they talk to Lights Out, who shares the advice he would pass on to new fighters.

30:16 – They tune into another live fight, encourage the fighters in the last round to keep it going, and chat with Firechicken during intermission. Firechicken talks about the development he's done since the last time GCP was out, the different fight styles he sees, and who his favorite is in the hall of fame. He talks about how people can get involved and the importance of understanding what each fighter wants to achieve by fighting.

Special Guest: Steve Hagara.

Gut Check Project
#82 Tryptophan, IPAs, and Crow Funerals

Sep 25, 2022 · 40:25


Were you ever told that eating turkey was a solid source of tryptophan? Well, it is, and so are many other foods! And that is GREAT news for all of us. Tryptophan is an essential amino acid, meaning that YOU NEED to consume it in order to achieve optimal health. Join us today as Ken also walks us through how crows hold a real mourning ceremony for their fallen, and why IPAs may not be just the beer that brew enthusiasts enjoy. Join Ken & Eric on the GCP and PLEASE LIKE AND SHARE with your friends and fam!

Global Captive Podcast
GCP #73: Ali Quinlivan and Bermuda Captive Conference report

Sep 25, 2022 · 37:17


In episode 73 of the Global Captive Podcast, supported by legacy specialists R&Q, Richard interviews Ali Quinlivan, Head of International Insurance & Captives at Google, about her 20-year career in captive management and consulting. Listeners are also introduced to Luke Harrison, who has joined GCP as Senior Reporter and recently attended the Bermuda Captive Conference. Luke shares interviews with Scott Reynolds, President & CEO of American Hardware & Lumber Insurance and President of the Bermuda Captive Owners Association, as well as conference committee members Leslie Robinson, Luis Delgado and Grainne Richmond.

You can subscribe to the Global Captive Podcast on iTunes, Apple Podcasts, Spotify or any other podcast app. Contact Richard: richard@globalcaptivepodcast.com. Visit the website: https://www.globalcaptivepodcast.com

52 Weeks of Cloud
Enterprise MLOps Interview-Simon Stiebellehner

Sep 23, 2022 · 56:16


If you enjoyed this video, here are additional resources to look at:

  • Coursera + Duke Specialization: Building Cloud Computing Solutions at Scale: https://www.coursera.org/specializations/building-cloud-computing-solutions-at-scale
  • Python, Bash, and SQL Essentials for Data Engineering Specialization: https://www.coursera.org/specializations/python-bash-sql-data-engineering-duke
  • AWS Certified Solutions Architect - Professional (SAP-C01) Cert Prep: 1 Design for Organizational Complexity: https://www.linkedin.com/learning/aws-certified-solutions-architect-professional-sap-c01-cert-prep-1-design-for-organizational-complexity/design-for-organizational-complexity?autoplay=true
  • O'Reilly Book: Practical MLOps: https://www.amazon.com/Practical-MLOps-Operationalizing-Machine-Learning/dp/1098103017
  • O'Reilly Book: Python for DevOps: https://www.amazon.com/gp/product/B082P97LDW/
  • O'Reilly Book: Developing on AWS with C#: A Comprehensive Guide on Using C# to Build Solutions on the AWS Platform: https://www.amazon.com/Developing-AWS-Comprehensive-Solutions-Platform/dp/1492095877
  • Pragmatic AI: An Introduction to Cloud-based Machine Learning: https://www.amazon.com/gp/product/B07FB8F8QP/
  • Pragmatic AI Labs Book: Python Command-Line Tools: https://www.amazon.com/gp/product/B0855FSFYZ
  • Pragmatic AI Labs Book: Cloud Computing for Data Analysis: https://www.amazon.com/gp/product/B0992BN7W8
  • Pragmatic AI Book: Minimal Python: https://www.amazon.com/gp/product/B0855NSRR7
  • Pragmatic AI Book: Testing in Python: https://www.amazon.com/gp/product/B0855NSRR7
  • Subscribe to Pragmatic AI Labs YouTube Channel: https://www.youtube.com/channel/UCNDfiL0D1LUeKWAkRE1xO5Q
  • Subscribe to 52 Weeks of AWS Podcast: https://52-weeks-of-cloud.simplecast.com
  • View content on noahgift.com: https://noahgift.com/
  • View content on Pragmatic AI Labs Website: https://paiml.com/

[00:00.000 --> 00:02.260] Hey, three, two, one, there we go, we're live.[00:02.260 --> 00:07.260] All right, so welcome Simon to Enterprise ML Ops interviews.[00:09.760 --> 00:13.480] The goal of these interviews is to get people exposed[00:13.480 --> 00:17.680] to real professionals who are doing work in ML Ops.[00:17.680 --> 00:20.360] It's such a cutting edge field[00:20.360 --> 00:22.760] that I think a lot of people are very curious about.[00:22.760 --> 00:23.600] What is it?[00:23.600 --> 00:24.960] You know, how do you do it?[00:24.960 --> 00:27.760] And very honored to have Simon here.[00:27.760 --> 00:29.200] And do you wanna introduce yourself[00:29.200 --> 00:31.520] and maybe talk a little bit about your background?[00:31.520 --> 00:32.360] Sure.[00:32.360 --> 00:33.960] Yeah, thanks again for inviting me.[00:34.960 --> 00:38.160] My name is Simon Stiebellehner or Simon.[00:38.160 --> 00:40.440] I am originally from Austria,[00:40.440 --> 00:43.120] but currently working in the Netherlands and Amsterdam[00:43.120 --> 00:46.080] at Transaction Monitoring Netherlands.[00:46.080 --> 00:48.780] Here I am the lead ML Ops engineer.[00:49.840 --> 00:51.680] What are we doing at TMNL actually?[00:51.680 --> 00:55.560] We are a data processing company actually.[00:55.560 --> 00:59.320] We are owned by the five large banks of Netherlands.[00:59.320 --> 01:02.080] And our purpose is kind of what the name says.[01:02.080 --> 01:05.920] We are basically lifting specifically anti money laundering.[01:05.920 --> 01:08.040] So anti money laundering models that run[01:08.040 --> 01:11.440] on pseudonymized transactions of businesses[01:11.440 --> 01:13.240] we get from these five 
banks[01:13.240 --> 01:15.760] to detect unusual patterns on that transaction graph[01:15.760 --> 01:19.000] that might indicate money laundering.[01:19.000 --> 01:20.520] That's a natural what we do.[01:20.520 --> 01:21.800] So as you can imagine,[01:21.800 --> 01:24.160] we are really focused on building models[01:24.160 --> 01:27.280] and obviously ML Ops is a big component there[01:27.280 --> 01:29.920] because that is really the core of what you do.[01:29.920 --> 01:32.680] You wanna do it efficiently and effectively as well.[01:32.680 --> 01:34.760] In my role as lead ML Ops engineer,[01:34.760 --> 01:36.880] I'm on the one hand the lead engineer[01:36.880 --> 01:38.680] of the actual ML Ops platform team.[01:38.680 --> 01:40.200] So this is actually a centralized team[01:40.200 --> 01:42.680] that builds out lots of the infrastructure[01:42.680 --> 01:47.320] that's needed to do modeling effectively and efficiently.[01:47.320 --> 01:50.360] But also I am the craft lead[01:50.360 --> 01:52.640] for the machine learning engineering craft.[01:52.640 --> 01:55.120] These are actually in our case, the machine learning engineers,[01:55.120 --> 01:58.360] the people working within the model development teams[01:58.360 --> 01:59.360] and cross functional teams[01:59.360 --> 02:01.680] actually building these models.[02:01.680 --> 02:03.640] That's what I'm currently doing[02:03.640 --> 02:05.760] during the evenings and weekends.[02:05.760 --> 02:09.400] I'm also lecturer at the University of Applied Sciences, Vienna.[02:09.400 --> 02:12.080] And there I'm teaching data mining[02:12.080 --> 02:15.160] and data warehousing to master students, essentially.[02:16.240 --> 02:19.080] Before TMNL, I was at bold.com,[02:19.080 --> 02:21.960] which is the largest eCommerce retailer in the Netherlands.[02:21.960 --> 02:25.040] So I always tend to see the Amazon of the Netherlands[02:25.040 --> 02:27.560] or been a lux actually.[02:27.560 --> 02:30.920] It is still the biggest eCommerce retailer in the Netherlands[02:30.920 --> 02:32.960] even before Amazon actually.[02:32.960 --> 02:36.160] And there I was an expert machine learning engineer.[02:36.160 --> 02:39.240] So doing somewhat comparable stuff,[02:39.240 --> 02:42.440] a bit more still focused on the actual modeling part.[02:42.440 --> 02:44.800] Now it's really more on the infrastructure end.[02:45.760 --> 02:46.760] And well, before that,[02:46.760 --> 02:49.360] I spent some time in consulting, leading a data science team.[02:49.360 --> 02:50.880] That's actually where I kind of come from.[02:50.880 --> 02:53.360] I really come from originally the data science end.[02:54.640 --> 02:57.840] And there I kind of started drifting towards ML Ops[02:57.840 --> 02:59.200] because we started building out[02:59.200 --> 03:01.640] a deployment and serving platform[03:01.640 --> 03:04.440] that would as consulting company would make it easier[03:04.440 --> 03:07.920] for us to deploy models for our clients[03:07.920 --> 03:10.840] to serve these models, to also monitor these models.[03:10.840 --> 03:12.800] And that kind of then made me drift further and further[03:12.800 --> 03:15.520] down the engineering lane all the way to ML Ops.[03:17.000 --> 03:19.600] Great, yeah, that's a great background.[03:19.600 --> 03:23.200] I'm kind of curious in terms of the data science[03:23.200 --> 03:25.240] to ML Ops journey,[03:25.240 --> 03:27.720] that I think would be a great discussion[03:27.720 --> 03:29.080] to dig into a little bit.[03:30.280 --> 
03:34.320] My background is originally more on the software engineering[03:34.320 --> 03:36.920] side and when I was in the Bay Area,[03:36.920 --> 03:41.160] I did individual contributor and then ran companies[03:41.160 --> 03:44.240] at one point and ran multiple teams.[03:44.240 --> 03:49.240] And then as the data science field exploded,[03:49.240 --> 03:52.880] I hired multiple data science teams and worked with them.[03:52.880 --> 03:55.800] But what was interesting is that I found that[03:56.840 --> 03:59.520] I think the original approach of data science[03:59.520 --> 04:02.520] from my perspective was lacking[04:02.520 --> 04:07.240] in that there wasn't really like deliverables.[04:07.240 --> 04:10.520] And I think when you look at a software engineering team,[04:10.520 --> 04:12.240] it's very clear there's deliverables.[04:12.240 --> 04:14.800] Like you have a mobile app and it has to get better[04:14.800 --> 04:15.880] each week, right?[04:15.880 --> 04:18.200] Where else, what are you doing?[04:18.200 --> 04:20.880] And so I would love to hear your story[04:20.880 --> 04:25.120] about how you went from doing kind of more pure data science[04:25.120 --> 04:27.960] to now it sounds like ML Ops.[04:27.960 --> 04:30.240] Yeah, yeah, actually.[04:30.240 --> 04:33.800] So back then in consulting one of the,[04:33.800 --> 04:36.200] which was still at least back then in Austria,[04:36.200 --> 04:39.280] data science and everything around it was still kind of[04:39.280 --> 04:43.720] in this infancy back then 2016 and so on.[04:43.720 --> 04:46.560] It was still really, really new to many organizations,[04:46.560 --> 04:47.400] at least in Austria.[04:47.400 --> 04:50.120] There might be some years behind in the US and stuff.[04:50.120 --> 04:52.040] But back then it was still relatively fresh.[04:52.040 --> 04:55.240] So in consulting, what we very often struggled with was[04:55.240 --> 04:58.520] on the modeling end, problems could be solved,[04:58.520 --> 05:02.040] but actually then easy deployment,[05:02.040 --> 05:05.600] keeping these models in production at client side.[05:05.600 --> 05:08.880] That was always a bit more of the challenge.[05:08.880 --> 05:12.400] And so naturally kind of I started thinking[05:12.400 --> 05:16.200] and focusing more on the actual bigger problem that I saw,[05:16.200 --> 05:19.440] which was not so much building the models,[05:19.440 --> 05:23.080] but it was really more, how can we streamline things?[05:23.080 --> 05:24.800] How can we keep things operating?[05:24.800 --> 05:27.960] How can we make that move easier from a prototype,[05:27.960 --> 05:30.680] from a PUC to a productionized model?[05:30.680 --> 05:33.160] Also how can we keep it there and maintain it there?[05:33.160 --> 05:35.480] So personally I was really more,[05:35.480 --> 05:37.680] I saw that this problem was coming up[05:38.960 --> 05:40.320] and that really fascinated me.[05:40.320 --> 05:44.120] So I started jumping more on that exciting problem.[05:44.120 --> 05:45.080] That's how it went for me.[05:45.080 --> 05:47.000] And back then we then also recognized it[05:47.000 --> 05:51.560] as a potential product in our case.[05:51.560 --> 05:54.120] So we started building out that deployment[05:54.120 --> 05:56.960] and serving and monitoring platform, actually.[05:56.960 --> 05:59.520] And that then really for me, naturally,[05:59.520 --> 06:01.840] I fell into that rabbit hole[06:01.840 --> 06:04.280] and I also never wanted to get out of it again.[06:05.680 --> 
06:09.400] So the system that you built initially,[06:09.400 --> 06:10.840] what was your stack?[06:10.840 --> 06:13.760] What were some of the things you were using?[06:13.760 --> 06:17.000] Yeah, so essentially we had,[06:17.000 --> 06:19.560] when we talk about the stack on the backend,[06:19.560 --> 06:20.560] there was a lot of,[06:20.560 --> 06:23.000] so the full backend was written in Java.[06:23.000 --> 06:25.560] We were using more from a user perspective,[06:25.560 --> 06:28.040] the contract that we kind of had,[06:28.040 --> 06:32.560] our goal was to build a drag and drop platform for models.[06:32.560 --> 06:35.760] So basically the contract was you package your model[06:35.760 --> 06:37.960] as an MLflow model,[06:37.960 --> 06:41.520] and then you basically drag and drop it into a web UI.[06:41.520 --> 06:43.640] It's gonna be wrapped in containers.[06:43.640 --> 06:45.040] It's gonna be deployed.[06:45.040 --> 06:45.880] It's gonna be,[06:45.880 --> 06:49.680] there will be a monitoring layer in front of it[06:49.680 --> 06:52.760] based on whatever the dataset is you trained it on.[06:52.760 --> 06:55.920] You would automatically calculate different metrics,[06:55.920 --> 06:57.360] different distributional metrics[06:57.360 --> 06:59.240] around your variables that you are using.[06:59.240 --> 07:02.080] And so we were layering this approach[07:02.080 --> 07:06.840] to, so that eventually every incoming request would be,[07:06.840 --> 07:08.160] you would have a nice dashboard.[07:08.160 --> 07:10.040] You could monitor all that stuff.[07:10.040 --> 07:12.600] So stackwise it was actually MLflow.[07:12.600 --> 07:15.480] Specifically MLflow models a lot.[07:15.480 --> 07:17.920] Then it was Java in the backend, Python.[07:17.920 --> 07:19.760] There was a lot of Python,[07:19.760 --> 07:22.040] especially PySpark component as well.[07:23.000 --> 07:25.880] There was a, it's been quite a while actually,[07:25.880 --> 07:29.160] there was a quite some part written in Scala.[07:29.160 --> 07:32.280] Also, because there was a component of this platform[07:32.280 --> 07:34.800] was also a bit of an auto ML approach,[07:34.800 --> 07:36.480] but that died then over time.[07:36.480 --> 07:40.120] And that was also based on PySpark[07:40.120 --> 07:43.280] and vanilla Spark written in Scala.[07:43.280 --> 07:45.560] So we could facilitate the auto ML part.[07:45.560 --> 07:48.600] And then later on we actually added that deployment,[07:48.600 --> 07:51.480] the easy deployment and serving part.[07:51.480 --> 07:55.280] So that was kind of, yeah, a lot of custom build stuff.[07:55.280 --> 07:56.120] Back then, right?[07:56.120 --> 07:59.720] There wasn't that much MLOps tooling out there yet.[07:59.720 --> 08:02.920] So you need to build a lot of that stuff custom.[08:02.920 --> 08:05.280] So it was largely custom built.[08:05.280 --> 08:09.280] Yeah, the MLflow concept is an interesting concept[08:09.280 --> 08:13.880] because they provide this package structure[08:13.880 --> 08:17.520] that at least you have some idea of,[08:17.520 --> 08:19.920] what is gonna be sent into the model[08:19.920 --> 08:22.680] and like there's a format for the model.[08:22.680 --> 08:24.720] And I think that part of MLflow[08:24.720 --> 08:27.520] seems to be a pretty good idea,[08:27.520 --> 08:30.080] which is you're creating a standard where,[08:30.080 --> 08:32.360] you know, if in the case of,[08:32.360 --> 08:34.720] if you're using scikit learn or something,[08:34.720 --> 08:37.960] you 
don't necessarily want to just throw[08:37.960 --> 08:40.560] like a pickled model somewhere and just say,[08:40.560 --> 08:42.720] okay, you know, let's go.[08:42.720 --> 08:44.760] Yeah, that was also our thinking back then.[08:44.760 --> 08:48.040] So we thought a lot about what would be a,[08:48.040 --> 08:51.720] what would be, what could become the standard actually[08:51.720 --> 08:53.920] for how you package models.[08:53.920 --> 08:56.200] And back then MLflow was one of the little tools[08:56.200 --> 08:58.160] that was already there, already existent.[08:58.160 --> 09:00.360] And of course there was data bricks behind it.[09:00.360 --> 09:02.680] So we also made a bet on that back then and said,[09:02.680 --> 09:04.920] all right, let's follow that packaging standard[09:04.920 --> 09:08.680] and make it the contract how you would as a data scientist,[09:08.680 --> 09:10.800] then how you would need to package it up[09:10.800 --> 09:13.640] and submit it to the platform.[09:13.640 --> 09:16.800] Yeah, it's interesting because the,[09:16.800 --> 09:19.560] one of the, this reminds me of one of the issues[09:19.560 --> 09:21.800] that's happening right now with cloud computing,[09:21.800 --> 09:26.800] where in the cloud AWS has dominated for a long time[09:29.480 --> 09:34.480] and they have 40% market share, I think globally.[09:34.480 --> 09:38.960] And Azure's now gaining and they have some pretty good traction[09:38.960 --> 09:43.120] and then GCP's been down for a bit, you know,[09:43.120 --> 09:45.760] in that maybe the 10% range or something like that.[09:45.760 --> 09:47.760] But what's interesting is that it seems like[09:47.760 --> 09:51.480] in the case of all of the cloud providers,[09:51.480 --> 09:54.360] they haven't necessarily been leading the way[09:54.360 --> 09:57.840] on things like packaging models, right?[09:57.840 --> 10:01.480] Or, you know, they have their own proprietary systems[10:01.480 --> 10:06.480] which have been developed and are continuing to be developed[10:06.640 --> 10:08.920] like Vertex AI in the case of Google,[10:09.760 --> 10:13.160] the SageMaker in the case of Amazon.[10:13.160 --> 10:16.480] But what's interesting is, let's just take SageMaker,[10:16.480 --> 10:20.920] for example, there isn't really like this, you know,[10:20.920 --> 10:25.480] industry wide standard of model packaging[10:25.480 --> 10:28.680] that SageMaker uses, they have their own proprietary stuff[10:28.680 --> 10:31.040] that kind of builds in and Vertex AI[10:31.040 --> 10:32.440] has their own proprietary stuff.[10:32.440 --> 10:34.920] So, you know, I think it is interesting[10:34.920 --> 10:36.960] to see what's gonna happen[10:36.960 --> 10:41.120] because I think your original hypothesis which is,[10:41.120 --> 10:44.960] let's pick, you know, this looks like it's got some traction[10:44.960 --> 10:48.760] and it wasn't necessarily tied directly to a cloud provider[10:48.760 --> 10:51.600] because Databricks can work on anything.[10:51.600 --> 10:53.680] It seems like that in particular,[10:53.680 --> 10:56.800] that's one of the more sticky problems right now[10:56.800 --> 11:01.800] with MLopsis is, you know, who's the leader?[11:02.280 --> 11:05.440] Like, who's developing the right, you know,[11:05.440 --> 11:08.880] kind of a standard for tooling.[11:08.880 --> 11:12.320] And I don't know, maybe that leads into kind of you talking[11:12.320 --> 11:13.760] a little bit about what you're doing currently.[11:13.760 --> 11:15.600] Like, do you have any 
thoughts about the, you know,[11:15.600 --> 11:18.720] current tooling and what you're doing at your current company[11:18.720 --> 11:20.920] and what's going on with that?[11:20.920 --> 11:21.760] Absolutely.[11:21.760 --> 11:24.200] So at my current organization,[11:24.200 --> 11:26.040] Transaction Monitor Netherlands,[11:26.040 --> 11:27.480] we are fully on AWS.[11:27.480 --> 11:32.000] So we're really almost cloud native AWS.[11:32.000 --> 11:34.840] And so that also means everything we do on the modeling side[11:34.840 --> 11:36.600] really evolves around SageMaker.[11:37.680 --> 11:40.840] So for us, specifically for us as MLops team,[11:40.840 --> 11:44.680] we are building the platform around SageMaker capabilities.[11:45.680 --> 11:48.360] And on that end, at least company internal,[11:48.360 --> 11:52.880] we have a contract how you must actually deploy models.[11:52.880 --> 11:56.200] There is only one way, what we call the golden path,[11:56.200 --> 11:59.800] in that case, this is the streamlined highly automated path[11:59.800 --> 12:01.360] that is supported by the platform.[12:01.360 --> 12:04.360] This is the only way how you can actually deploy models.[12:04.360 --> 12:09.360] And in our case, that is actually a SageMaker pipeline object.[12:09.640 --> 12:12.680] So in our company, we're doing large scale batch processing.[12:12.680 --> 12:15.040] So we're actually not doing anything real time at present.[12:15.040 --> 12:17.040] We are doing post transaction monitoring.[12:17.040 --> 12:20.960] So that means you need to submit essentially DAX, right?[12:20.960 --> 12:23.400] This is what we use for training.[12:23.400 --> 12:25.680] This is what we also deploy eventually.[12:25.680 --> 12:27.720] And this is our internal contract.[12:27.720 --> 12:32.200] You need to provision a SageMaker in your model repository.[12:32.200 --> 12:34.640] You got to have one place,[12:34.640 --> 12:37.840] and there must be a function with a specific name[12:37.840 --> 12:41.440] and that function must return a SageMaker pipeline object.[12:41.440 --> 12:44.920] So this is our internal contract actually.[12:44.920 --> 12:46.600] Yeah, that's interesting.[12:46.600 --> 12:51.200] I mean, and I could see like for, I know many people[12:51.200 --> 12:53.880] that are using SageMaker in production,[12:53.880 --> 12:58.680] and it does seem like where it has some advantages[12:58.680 --> 13:02.360] is that AWS generally does a pretty good job[13:02.360 --> 13:04.240] at building solutions.[13:04.240 --> 13:06.920] And if you just look at the history of services,[13:06.920 --> 13:09.080] the odds are pretty high[13:09.080 --> 13:12.880] that they'll keep getting better, keep improving things.[13:12.880 --> 13:17.080] And it seems like what I'm hearing from people,[13:17.080 --> 13:19.080] and it sounds like maybe with your organization as well,[13:19.080 --> 13:24.080] is that potentially the SDK for SageMaker[13:24.440 --> 13:29.120] is really the win versus some of the UX tools they have[13:29.120 --> 13:32.680] and the interface for Canvas and Studio.[13:32.680 --> 13:36.080] Is that what's happening?[13:36.080 --> 13:38.720] Yeah, so I think, right,[13:38.720 --> 13:41.440] what we try to do is we always try to think about our users.[13:41.440 --> 13:44.880] So how do our users, who are our users?[13:44.880 --> 13:47.000] What capabilities and skills do they have?[13:47.000 --> 13:50.080] And what freedom should they have[13:50.080 --> 13:52.640] and what abilities should they have to 
develop models?[13:52.640 --> 13:55.440] In our case, we don't really have use cases[13:55.440 --> 13:58.640] for stuff like Canvas because our users[13:58.640 --> 14:02.680] are fairly mature teams that know how to do their,[14:02.680 --> 14:04.320] on the one hand, the data science stuff, of course,[14:04.320 --> 14:06.400] but also the engineering stuff.[14:06.400 --> 14:08.160] So in our case, things like Canvas[14:08.160 --> 14:10.320] do not really play so much role[14:10.320 --> 14:12.960] because obviously due to the high abstraction layer[14:12.960 --> 14:15.640] of more like graphical user interfaces,[14:15.640 --> 14:17.360] drag and drop tooling,[14:17.360 --> 14:20.360] you are also limited in what you can do,[14:20.360 --> 14:22.480] or what you can do easily.[14:22.480 --> 14:26.320] So in our case, really, it is the strength of the flexibility[14:26.320 --> 14:28.320] that the SageMaker SDK gives you.[14:28.320 --> 14:33.040] And in general, the SDK around most AWS services.[14:34.080 --> 14:36.760] But also it comes with challenges, of course.[14:37.720 --> 14:38.960] You give a lot of freedom,[14:38.960 --> 14:43.400] but also you're creating a certain ask,[14:43.400 --> 14:47.320] certain requirements for your model development teams,[14:47.320 --> 14:49.600] which is also why we've also been working[14:49.600 --> 14:52.600] about abstracting further away from the SDK.[14:52.600 --> 14:54.600] So our objective is actually[14:54.600 --> 14:58.760] that you should not be forced to interact with the raw SDK[14:58.760 --> 15:00.600] when you use SageMaker anymore,[15:00.600 --> 15:03.520] but you have a thin layer of abstraction[15:03.520 --> 15:05.480] on top of what you are doing.[15:05.480 --> 15:07.480] That's actually something we are moving towards[15:07.480 --> 15:09.320] more and more as well.[15:09.320 --> 15:11.120] Because yeah, it gives you the flexibility,[15:11.120 --> 15:12.960] but also flexibility comes at a cost,[15:12.960 --> 15:15.080] comes often at the cost of speeds,[15:15.080 --> 15:18.560] specifically when it comes to the 90% default stuff[15:18.560 --> 15:20.720] that you want to do, yeah.[15:20.720 --> 15:24.160] And one of the things that I have as a complaint[15:24.160 --> 15:29.160] against SageMaker is that it only uses virtual machines,[15:30.000 --> 15:35.000] and it does seem like a strange strategy in some sense.[15:35.000 --> 15:40.000] Like for example, I guess if you're doing batch only,[15:40.000 --> 15:42.000] it doesn't matter as much,[15:42.000 --> 15:45.000] which I think is a good strategy actually[15:45.000 --> 15:50.000] to get your batch based predictions very, very strong.[15:50.000 --> 15:53.000] And in that case, maybe the virtual machines[15:53.000 --> 15:56.000] make a little bit less of a complaint.[15:56.000 --> 16:00.000] But in the case of the endpoints with SageMaker,[16:00.000 --> 16:02.000] the fact that you have to spend up[16:02.000 --> 16:04.000] these really expensive virtual machines[16:04.000 --> 16:08.000] and let them run 24 seven to do online prediction,[16:08.000 --> 16:11.000] is that something that your organization evaluated[16:11.000 --> 16:13.000] and decided not to use?[16:13.000 --> 16:15.000] Or like, what are your thoughts behind that?[16:15.000 --> 16:19.000] Yeah, in our case, doing real time[16:19.000 --> 16:22.000] or near real time inference is currently not really relevant[16:22.000 --> 16:25.000] for the simple reason that when you think a bit more[16:25.000 --> 16:28.000] about the money 
laundering or anti money laundering space,[16:28.000 --> 16:31.000] typically when, right,[16:31.000 --> 16:34.000] all every individual bank must do anti money laundering[16:34.000 --> 16:37.000] and they have armies of people doing that.[16:37.000 --> 16:39.000] But on the other hand,[16:39.000 --> 16:43.000] the time it actually takes from one of their systems,[16:43.000 --> 16:46.000] one of their AML systems actually detecting something[16:46.000 --> 16:49.000] that's unusual that then goes into a review process[16:49.000 --> 16:54.000] until it eventually hits the governmental institution[16:54.000 --> 16:56.000] that then takes care of the cases that have been[16:56.000 --> 16:58.000] at least twice validated that they are indeed,[16:58.000 --> 17:01.000] they look very unusual.[17:01.000 --> 17:04.000] So this takes a while, this can take quite some time,[17:04.000 --> 17:06.000] which is also why it doesn't really matter[17:06.000 --> 17:09.000] whether you ship your prediction within a second[17:09.000 --> 17:13.000] or whether it takes you a week or two weeks.[17:13.000 --> 17:15.000] It doesn't really matter, hence for us,[17:15.000 --> 17:19.000] that problem so far thinking about real time inference[17:19.000 --> 17:21.000] has not been there.[17:21.000 --> 17:25.000] But yeah, indeed, for other use cases,[17:25.000 --> 17:27.000] for also private projects,[17:27.000 --> 17:29.000] we've also been considering SageMaker Endpoints[17:29.000 --> 17:31.000] for a while, but exactly what you said,[17:31.000 --> 17:33.000] the fact that you need to have a very beefy machine[17:33.000 --> 17:35.000] running all the time,[17:35.000 --> 17:39.000] specifically when you have heavy GPU loads, right,[17:39.000 --> 17:43.000] and you're actually paying for that machine running 2047,[17:43.000 --> 17:46.000] although you do have quite fluctuating load.[17:46.000 --> 17:49.000] Yeah, then that definitely becomes quite a consideration[17:49.000 --> 17:51.000] of what you go for.[17:51.000 --> 17:58.000] Yeah, and I actually have been talking to AWS about that,[17:58.000 --> 18:02.000] because one of the issues that I have is that[18:02.000 --> 18:07.000] the AWS platform really pushes serverless,[18:07.000 --> 18:10.000] and then my question for AWS is,[18:10.000 --> 18:13.000] so why aren't you using it?[18:13.000 --> 18:16.000] I mean, if you're pushing serverless for everything,[18:16.000 --> 18:19.000] why is SageMaker nothing serverless?[18:19.000 --> 18:21.000] And so maybe they're going to do that, I don't know.[18:21.000 --> 18:23.000] I don't have any inside information,[18:23.000 --> 18:29.000] but it is interesting to hear you had some similar concerns.[18:29.000 --> 18:32.000] I know that there's two questions here.[18:32.000 --> 18:37.000] One is someone asked about what do you do for data versioning,[18:37.000 --> 18:41.000] and a second one is how do you do event based MLOps?[18:41.000 --> 18:43.000] So maybe kind of following up.[18:43.000 --> 18:46.000] Yeah, what do we do for data versioning?[18:46.000 --> 18:51.000] On the one hand, we're running a data lakehouse,[18:51.000 --> 18:54.000] where after data we get from the financial institutions,[18:54.000 --> 18:57.000] from the banks that runs through massive data pipeline,[18:57.000 --> 19:01.000] also on AWS, we're using glue and step functions actually for that,[19:01.000 --> 19:03.000] and then eventually it ends up modeled to some extent,[19:03.000 --> 19:06.000] sanitized, quality checked in our data 
lakehouse,[19:06.000 --> 19:10.000] and there we're actually using hoodie on top of S3.[19:10.000 --> 19:13.000] And this is also what we use for versioning,[19:13.000 --> 19:16.000] which we use for time travel and all these things.[19:16.000 --> 19:19.000] So that is hoodie on top of S3,[19:19.000 --> 19:21.000] when then pipelines,[19:21.000 --> 19:24.000] so actually our model pipelines plug in there[19:24.000 --> 19:27.000] and spit out predictions, alerts,[19:27.000 --> 19:29.000] what we call alerts eventually.[19:29.000 --> 19:33.000] That is something that we version based on unique IDs.[19:33.000 --> 19:36.000] So processing IDs, we track pretty much everything,[19:36.000 --> 19:39.000] every line of code that touched,[19:39.000 --> 19:43.000] is related to a specific row in our data.[19:43.000 --> 19:46.000] So we can exactly track back for every single row[19:46.000 --> 19:48.000] in our predictions and in our alerts,[19:48.000 --> 19:50.000] what pipeline ran on it,[19:50.000 --> 19:52.000] which jobs were in that pipeline,[19:52.000 --> 19:56.000] which code exactly was running in each job,[19:56.000 --> 19:58.000] which intermediate results were produced.[19:58.000 --> 20:01.000] So we're basically adding lineage information[20:01.000 --> 20:03.000] to everything we output along that line,[20:03.000 --> 20:05.000] so we can track everything back[20:05.000 --> 20:09.000] using a few tools we've built.[20:09.000 --> 20:12.000] So the tool you mentioned,[20:12.000 --> 20:13.000] I'm not familiar with it.[20:13.000 --> 20:14.000] What is it called again?[20:14.000 --> 20:15.000] It's called hoodie?[20:15.000 --> 20:16.000] Hoodie.[20:16.000 --> 20:17.000] Hoodie.[20:17.000 --> 20:18.000] Oh, what is it?[20:18.000 --> 20:19.000] Maybe you can describe it.[20:19.000 --> 20:22.000] Yeah, hoodie is essentially,[20:22.000 --> 20:29.000] it's quite similar to other tools such as[20:29.000 --> 20:31.000] Databricks, how is it called?[20:31.000 --> 20:32.000] Databricks?[20:32.000 --> 20:33.000] Delta Lake maybe?[20:33.000 --> 20:34.000] Yes, exactly.[20:34.000 --> 20:35.000] Exactly.[20:35.000 --> 20:38.000] It's basically, it's equivalent to Delta Lake,[20:38.000 --> 20:40.000] just back then when we looked into[20:40.000 --> 20:42.000] what are we going to use.[20:42.000 --> 20:44.000] Delta Lake was not open sourced yet.[20:44.000 --> 20:46.000] Databricks open sourced a while ago.[20:46.000 --> 20:47.000] We went for Hoodie.[20:47.000 --> 20:50.000] It essentially, it is a layer on top of,[20:50.000 --> 20:53.000] in our case, S3 that allows you[20:53.000 --> 20:58.000] to more easily keep track of what you,[20:58.000 --> 21:03.000] of the actions you are performing on your data.[21:03.000 --> 21:08.000] So it's essentially very similar to Delta Lake,[21:08.000 --> 21:13.000] just already before an open sourced solution.[21:13.000 --> 21:15.000] Yeah, that's, I didn't know anything about that.[21:15.000 --> 21:16.000] So now I do.[21:16.000 --> 21:19.000] So thanks for letting me know.[21:19.000 --> 21:21.000] I'll have to look into that.[21:21.000 --> 21:27.000] The other, I guess, interesting stack related question is,[21:27.000 --> 21:29.000] what are your thoughts about,[21:29.000 --> 21:32.000] I think there's two areas that I think[21:32.000 --> 21:34.000] are interesting and that are emerging.[21:34.000 --> 21:36.000] Oh, actually there's, there's multiple.[21:36.000 --> 21:37.000] Maybe I'll just bring them all up.[21:37.000 --> 21:39.000] So we'll do one by one.[21:39.000 
--> 21:42.000] So these are some emerging areas that I'm, that I'm seeing.[21:42.000 --> 21:49.000] So one is the concept of event driven, you know,[21:49.000 --> 21:54.000] architecture versus, versus maybe like a static architecture.[21:54.000 --> 21:57.000] And so I think obviously you're using step functions.[21:57.000 --> 22:00.000] So you're a fan of, of event driven architecture.[22:00.000 --> 22:04.000] Maybe we start, we'll start with that one is what are your,[22:04.000 --> 22:08.000] what are your thoughts on going more event driven in your organization?[22:08.000 --> 22:09.000] Yeah.[22:09.000 --> 22:13.000] In, in, in our case, essentially everything works event driven.[22:13.000 --> 22:14.000] Right.[22:14.000 --> 22:19.000] So since we on AWS, we're using event bridge or cloud watch events.[22:19.000 --> 22:21.000] I think now it's called everywhere.[22:21.000 --> 22:22.000] Right.[22:22.000 --> 22:24.000] This is how we trigger pretty much everything in our stack.[22:24.000 --> 22:27.000] This is how we trigger our data pipelines when data comes in.[22:27.000 --> 22:32.000] This is how we trigger different, different lambdas that parse our[22:32.000 --> 22:35.000] certain information from your log, store them in different databases.[22:35.000 --> 22:40.000] This is how we also, how we, at some point in the back in the past,[22:40.000 --> 22:44.000] how we also triggered new deployments when new models were approved in[22:44.000 --> 22:46.000] your model registry.[22:46.000 --> 22:50.000] So basically everything we've been doing is, is fully event driven.[22:50.000 --> 22:51.000] Yeah.[22:51.000 --> 22:56.000] So, so I think this is a key thing you bring up here is that I've,[22:56.000 --> 23:00.000] I've talked to many people who don't use AWS, who are, you know,[23:00.000 --> 23:03.000] all alternatively experts at technology.[23:03.000 --> 23:06.000] And one of the things that I've heard some people say is like, oh,[23:06.000 --> 23:13.000] well, AWS is in as fast as X or Y, like Lambda is in as fast as X or Y or,[23:13.000 --> 23:17.000] you know, Kubernetes or, but, but the point you bring up is exactly the[23:17.000 --> 23:24.000] way I think about AWS is that the true advantage of AWS platform is the,[23:24.000 --> 23:29.000] is the tight integration with the services and you can design event[23:29.000 --> 23:31.000] driven workflows.[23:31.000 --> 23:33.000] Would you say that's, that's absolutely.[23:33.000 --> 23:34.000] Yeah.[23:34.000 --> 23:35.000] Yeah.[23:35.000 --> 23:39.000] I think designing event driven workflows on AWS is incredibly easy to do.[23:39.000 --> 23:40.000] Yeah.[23:40.000 --> 23:43.000] And it also comes incredibly natural and that's extremely powerful.[23:43.000 --> 23:44.000] Right.[23:44.000 --> 23:49.000] And simply by, by having an easy way how to trigger lambdas event driven,[23:49.000 --> 23:52.000] you can pretty much, right, pretty much do everything and glue[23:52.000 --> 23:54.000] everything together that you want.[23:54.000 --> 23:56.000] I think that gives you a tremendous flexibility.[23:56.000 --> 23:57.000] Yeah.[23:57.000 --> 24:00.000] So, so I think there's two things that come to mind now.[24:00.000 --> 24:07.000] One is that, that if you are developing an ML ops platform that you[24:07.000 --> 24:09.000] can't ignore Lambda.[24:09.000 --> 24:12.000] So I, because I've had some people tell me, oh, well, we can do this and[24:12.000 --> 24:13.000] this and this better.[24:13.000 --> 24:17.000] It's like, yeah, but if 
you're going to be on AWS, you have to understand[24:17.000 --> 24:18.000] why people use Lambda.[24:18.000 --> 24:19.000] It isn't speed.[24:19.000 --> 24:24.000] It's, it's the ease of, ease of developing very rich solutions.[24:24.000 --> 24:25.000] Right.[24:25.000 --> 24:26.000] Absolutely.[24:26.000 --> 24:28.000] And then the glue between, between what you are building eventually.[24:28.000 --> 24:33.000] And you can even almost your, the thoughts in your mind turn into Lambda.[24:33.000 --> 24:36.000] You know, like you can be thinking and building code so quickly.[24:36.000 --> 24:37.000] Absolutely.[24:37.000 --> 24:41.000] Everything turns into which event do I need to listen to and then I trigger[24:41.000 --> 24:43.000] a Lambda and that Lambda does this and that.[24:43.000 --> 24:44.000] Yeah.[24:44.000 --> 24:48.000] And the other part about Lambda that's pretty, pretty awesome is that it[24:48.000 --> 24:52.000] hooks into services that have infinite scale.[24:52.000 --> 24:56.000] Like so SQS, like you can't break SQS.[24:56.000 --> 24:59.000] Like there's nothing you can do to ever take SQS down.[24:59.000 --> 25:02.000] It handles unlimited requests in and unlimited requests out.[25:02.000 --> 25:04.000] How many systems are like that?[25:04.000 --> 25:05.000] Yeah.[25:05.000 --> 25:06.000] Yeah, absolutely.[25:06.000 --> 25:07.000] Yeah.[25:07.000 --> 25:12.000] So then this kind of a followup would be that, that maybe data scientists[25:12.000 --> 25:17.000] should learn Lambda and step functions in order to, to get to[25:17.000 --> 25:18.000] MLOps.[25:18.000 --> 25:21.000] I think that's a yes.[25:21.000 --> 25:25.000] If you want to, if you want to put the foot into MLOps and you are on AWS,[25:25.000 --> 25:31.000] then I think there is no way around learning these fundamentals.[25:31.000 --> 25:32.000] Right.[25:32.000 --> 25:35.000] There's no way around learning things like what is a Lambda?[25:35.000 --> 25:39.000] How do I, how do I create a Lambda via Terraform or whatever tool you're[25:39.000 --> 25:40.000] using there?[25:40.000 --> 25:42.000] And how do I hook it up to an event?[25:42.000 --> 25:47.000] And how do I, how do I use the AWS SDK to interact with different[25:47.000 --> 25:48.000] services?[25:48.000 --> 25:49.000] So, right.[25:49.000 --> 25:53.000] I think if you want to take a step into MLOps from, from coming more from[25:53.000 --> 25:57.000] the data science and it's extremely important to familiarize yourself[25:57.000 --> 26:01.000] with how do you, at least the fundamentals, how do you architect[26:01.000 --> 26:03.000] basic solutions on AWS?[26:03.000 --> 26:05.000] How do you glue services together?[26:05.000 --> 26:07.000] How do you make them speak to each other?[26:07.000 --> 26:09.000] So yeah, I think that's quite fundamental.[26:09.000 --> 26:14.000] Ideally, ideally, I think that's what the platform should take away from you[26:14.000 --> 26:16.000] as a, as a pure data scientist.[26:16.000 --> 26:19.000] You don't, should not necessarily have to deal with that stuff.[26:19.000 --> 26:23.000] But if you're interested in, if you want to make that move more towards MLOps,[26:23.000 --> 26:27.000] I think learning about infrastructure and specifically in the context of AWS[26:27.000 --> 26:31.000] about the services and how to use them is really fundamental.[26:31.000 --> 26:32.000] Yeah, it's good.[26:32.000 --> 26:33.000] Because this is automation eventually.[26:33.000 --> 26:37.000] And if you want to automate, if you want 
to automate your complex processes,[26:37.000 --> 26:39.000] then you need to learn that stuff.[26:39.000 --> 26:41.000] How else are you going to do it?[26:41.000 --> 26:42.000] Yeah, I agree.[26:42.000 --> 26:46.000] I mean, that's really what, what, what Lambda step functions are is their[26:46.000 --> 26:47.000] automation tools.[26:47.000 --> 26:49.000] So that's probably the better way to describe it.[26:49.000 --> 26:52.000] That's a very good point you bring up.[26:52.000 --> 26:57.000] Another technology that I think is an emerging technology is the[26:57.000 --> 26:58.000] managed file system.[26:58.000 --> 27:05.000] And the reason why I think it's interesting is that, so I 20 plus years[27:05.000 --> 27:11.000] ago, I was using file systems in the university setting when I was at[27:11.000 --> 27:14.000] Caltech and then also in film, film industry.[27:14.000 --> 27:22.000] So film has been using managed file servers with parallel processing[27:22.000 --> 27:24.000] farms for a long time.[27:24.000 --> 27:27.000] I don't know how many people know this, but in the film industry,[27:27.000 --> 27:32.000] the, the, the architecture, even from like 2000 was there's a very[27:32.000 --> 27:38.000] expensive file server and then there's let's say 40,000 machines or 40,000[27:38.000 --> 27:39.000] cores.[27:39.000 --> 27:40.000] And that's, that's it.[27:40.000 --> 27:41.000] That's the architecture.[27:41.000 --> 27:46.000] And now what's interesting is I see with data science and machine learning[27:46.000 --> 27:52.000] operations that like that, that could potentially happen in the future is[27:52.000 --> 27:57.000] actually a managed NFS mount point with maybe Kubernetes or something like[27:57.000 --> 27:58.000] that.[27:58.000 --> 28:01.000] Do you see any of that on the horizon?[28:01.000 --> 28:04.000] Oh, that's a good question.[28:04.000 --> 28:08.000] I think for our, for our, what we're currently doing, that's probably a[28:08.000 --> 28:10.000] bit further away.[28:10.000 --> 28:15.000] But in principle, I could very well imagine that in our use case, not,[28:15.000 --> 28:17.000] not quite.[28:17.000 --> 28:20.000] But in principle, definitely.[28:20.000 --> 28:26.000] And then maybe a third, a third emerging thing I'm seeing is what's going[28:26.000 --> 28:29.000] on with open AI and hugging face.[28:29.000 --> 28:34.000] And that has the potential, but maybe to change the game a little bit,[28:34.000 --> 28:38.000] especially with hugging face, I think, although both of them, I mean,[28:38.000 --> 28:43.000] there is that, you know, in the case of pre trained models, here's a[28:43.000 --> 28:48.000] perfect example is that an organization may have, you know, maybe they're[28:48.000 --> 28:53.000] using AWS even for this, they're transcribing videos and they're going[28:53.000 --> 28:56.000] to do something with them, maybe they're going to detect, I don't know,[28:56.000 --> 29:02.000] like, you know, if you recorded customers in your, I'm just brainstorm,[29:02.000 --> 29:05.000] I'm not seeing your company did this, but I'm just creating a hypothetical[29:05.000 --> 29:09.000] situation that they recorded, you know, customer talking and then they,[29:09.000 --> 29:12.000] they transcribe it to text and then run some kind of a, you know,[29:12.000 --> 29:15.000] criminal detection feature or something like that.[29:15.000 --> 29:19.000] Like they could build their own models or they could download the thing[29:19.000 --> 29:23.000] that was released two days ago or 
a day ago from open AI that transcribes[29:23.000 --> 29:29.000] things, you know, and then, and then turn that transcribe text into[29:29.000 --> 29:34.000] hugging face, some other model that summarizes it and then you could[29:34.000 --> 29:38.000] feed that into a system. So it's, what is, what is your, what are your[29:38.000 --> 29:42.000] thoughts around some of these pre trained models and is your, are you[29:42.000 --> 29:48.000] thinking of in terms of your stack, trying to look into doing fine tuning?[29:48.000 --> 29:53.000] Yeah, so I think pre trained models and especially the way that hugging face,[29:53.000 --> 29:57.000] I think really revolutionized the space in terms of really kind of[29:57.000 --> 30:02.000] platformizing the entire business around or the entire market around[30:02.000 --> 30:07.000] pre trained models. I think that is really quite incredible and I think[30:07.000 --> 30:10.000] really for the ecosystem a changing way how to do things.[30:10.000 --> 30:16.000] And I believe that looking at the, the costs of training large models[30:16.000 --> 30:19.000] and looking at the fact that many organizations are not able to do it[30:19.000 --> 30:23.000] for, because of massive costs or because of lack of data.[30:23.000 --> 30:29.000] I think this is a, this is a clear, makes it very clear how important[30:29.000 --> 30:33.000] such platforms are, how important sharing of pre trained models actually is.[30:33.000 --> 30:37.000] I believe it's a, we are only at the, quite at the beginning actually of that.[30:37.000 --> 30:42.000] And I think we're going to see that nowadays you see it mostly when it[30:42.000 --> 30:47.000] comes to fairly generalized data format, images, potentially videos, text,[30:47.000 --> 30:52.000] speech, these things. But I believe that we're going to see more marketplace[30:52.000 --> 30:57.000] approaches when it comes to pre trained models in a lot more industries[30:57.000 --> 31:01.000] and in a lot more, in a lot more use cases where data is to some degree[31:01.000 --> 31:05.000] standardized. Also when you think about, when you think about banking,[31:05.000 --> 31:10.000] for example, right? When you think about transactions to some extent,[31:10.000 --> 31:14.000] transaction, transaction data always looks the same, kind of at least at[31:14.000 --> 31:17.000] every bank. Of course you might need to do some mapping here and there,[31:17.000 --> 31:22.000] but also there is a lot of power in it. But because simply also thinking[31:22.000 --> 31:28.000] about sharing data is always a difficult thing, especially in Europe.[31:28.000 --> 31:32.000] Sharing data between organizations is incredibly difficult legally.[31:32.000 --> 31:36.000] It's difficult. Sharing models is a different thing, right?[31:36.000 --> 31:40.000] Basically, similar to the concept of federated learning. Sharing models[31:40.000 --> 31:44.000] is significantly easier legally than actually sharing data.[31:44.000 --> 31:48.000] And then applying these models, fine tuning them and so on.[31:48.000 --> 31:52.000] Yeah, I mean, I could just imagine. I really don't know much about[31:52.000 --> 31:56.000] banking transactions, but I would imagine there could be several[31:56.000 --> 32:01.000] kinds of transactions that are very normal. And then there's some[32:01.000 --> 32:06.000] transactions, like if you're making every single second,[32:06.000 --> 32:11.000] you're transferring a lot of money. And it happens just[32:11.000 --> 32:14.000] very quickly. 
It's like, wait, why are you doing this? Why are you transferring money[32:14.000 --> 32:20.000] constantly? What's going on? Or the huge sum of money only[32:20.000 --> 32:24.000] involves three different points in the network. Over and over again,[32:24.000 --> 32:29.000] just these three points are constantly... And so once you've developed[32:29.000 --> 32:33.000] a model that is anomaly detection, then[32:33.000 --> 32:37.000] yeah, why would you need to develop another one? I mean, somebody already did it.[32:37.000 --> 32:41.000] Exactly. Yes, absolutely, absolutely. And that's[32:41.000 --> 32:45.000] definitely... That's encoded knowledge, encoded information in terms of the model,[32:45.000 --> 32:49.000] which is not personally... Well, abstracts away from[32:49.000 --> 32:53.000] but personally identifiable data. And that's really the power. That is something[32:53.000 --> 32:57.000] that, yeah, as I've said before, you can share significantly easier and you can[32:57.000 --> 33:03.000] apply to your use cases. The kind of related to this in[33:03.000 --> 33:09.000] terms of upcoming technologies is, I think, dealing more with graphs.[33:09.000 --> 33:13.000] And so is that something from a stackwise that your[33:13.000 --> 33:19.000] company's investigated resource can do? Yeah, so when you think about[33:19.000 --> 33:23.000] transactions, bank transactions, right? And bank customers.[33:23.000 --> 33:27.000] So in our case, again, it's a... We only have pseudonymized[33:27.000 --> 33:31.000] transaction data, so actually we cannot see anything, right? We cannot see names, we cannot see[33:31.000 --> 33:35.000] iPads or whatever. We really can't see much. But[33:35.000 --> 33:39.000] you can look at transactions moving between[33:39.000 --> 33:43.000] different entities, between different accounts. You can look at that[33:43.000 --> 33:47.000] as a network, as a graph. And that's also what we very frequently do.[33:47.000 --> 33:51.000] You have your nodes in your network, these are your accounts[33:51.000 --> 33:55.000] or your presence, even. And the actual edges between them,[33:55.000 --> 33:59.000] that's what your transactions are. So you have this[33:59.000 --> 34:03.000] massive graph, actually, that also we as TMNL, as Transaction Montenegro,[34:03.000 --> 34:07.000] are sitting on. We're actually sitting on a massive transaction graph.[34:07.000 --> 34:11.000] So yeah, absolutely. For us, doing analysis on top of[34:11.000 --> 34:15.000] that graph, building models on top of that graph is a quite important[34:15.000 --> 34:19.000] thing. And like I taught a class[34:19.000 --> 34:23.000] a few years ago at Berkeley where we had to[34:23.000 --> 34:27.000] cover graph databases a little bit. And I[34:27.000 --> 34:31.000] really didn't know that much about graph databases, although I did use one actually[34:31.000 --> 34:35.000] at one company I was at. But one of the things I learned in teaching that[34:35.000 --> 34:39.000] class was about the descriptive statistics[34:39.000 --> 34:43.000] of a graph network. 
And it[34:43.000 --> 34:47.000] is actually pretty interesting, because I think most of the time everyone talks about[34:47.000 --> 34:51.000] median and max min and standard deviation and everything.[34:51.000 --> 34:55.000] But then with a graph, there's things like centrality[34:55.000 --> 34:59.000] and I forget all the terms off the top of my head, but you can see[34:59.000 --> 35:03.000] if there's a node in the network that's[35:03.000 --> 35:07.000] everybody's interacting with. Absolutely. You can identify communities[35:07.000 --> 35:11.000] of people moving around a lot of money all the time. For example,[35:11.000 --> 35:15.000] you can detect different metric features eventually[35:15.000 --> 35:19.000] doing computations on your graph and then plugging in some model.[35:19.000 --> 35:23.000] Often it's feature engineering. You're computing between the centrality scores[35:23.000 --> 35:27.000] across your graph or your different entities. And then[35:27.000 --> 35:31.000] you're building your features actually. And then you're plugging in some[35:31.000 --> 35:35.000] model in the end. If you do classic machine learning, so to say[35:35.000 --> 35:39.000] if you do graph deep learning, of course that's a bit different.[35:39.000 --> 35:43.000] So basically that could for people that are analyzing[35:43.000 --> 35:47.000] essentially networks of people or networks, then[35:47.000 --> 35:51.000] basically a graph database would be step one is[35:51.000 --> 35:55.000] generate the features which could be centrality.[35:55.000 --> 35:59.000] There's a score and then you then go and train[35:59.000 --> 36:03.000] the model based on that descriptive statistic.[36:03.000 --> 36:07.000] Exactly. So one way how you could think about it is[36:07.000 --> 36:11.000] whether we need a graph database or not, that always depends on your specific use case[36:11.000 --> 36:15.000] and what database. We're actually also running[36:15.000 --> 36:19.000] that using Spark. You have graph frames, you have[36:19.000 --> 36:23.000] graph X actually. So really stuff in Spark built for[36:23.000 --> 36:27.000] doing analysis on graphs.[36:27.000 --> 36:31.000] And then what you usually do is exactly what you said. You are trying[36:31.000 --> 36:35.000] to build features based on that graph.[36:35.000 --> 36:39.000] Based on the attributes of the nodes and the attributes on the edges and so on.[36:39.000 --> 36:43.000] And so I guess in terms of graph databases right[36:43.000 --> 36:47.000] now, it sounds like maybe the three[36:47.000 --> 36:51.000] main players maybe are there's Neo4j which[36:51.000 --> 36:55.000] has been around for a long time. There's I guess Spark[36:55.000 --> 36:59.000] and then there's also, I forgot what the one is called for AWS[36:59.000 --> 37:03.000] is it? Neptune, that's Neptune.[37:03.000 --> 37:07.000] Have you played with all three of those and did you[37:07.000 --> 37:11.000] like Neptune? Neptune was something we, Spark of course we actually currently[37:11.000 --> 37:15.000] using for exactly that. Also because it allows us to do[37:15.000 --> 37:19.000] to keep our stack fairly homogeneous. 
We did[37:19.000 --> 37:23.000] also PUC in Neptune a while ago already[37:23.000 --> 37:27.000] and well Neptune you definitely have essentially two ways[37:27.000 --> 37:31.000] how to query Neptune either using Gremlin or SparkQL.[37:31.000 --> 37:35.000] So that means the people, your data science[37:35.000 --> 37:39.000] need to get familiar with that which then is already one bit of a hurdle[37:39.000 --> 37:43.000] because usually data scientists are not familiar with either.[37:43.000 --> 37:47.000] But also what we found with Neptune[37:47.000 --> 37:51.000] is also that it's not necessarily built for[37:51.000 --> 37:55.000] as an analytics graph database. It's not necessarily made for[37:55.000 --> 37:59.000] that. And that then become, then it's sometimes, at least[37:59.000 --> 38:03.000] for us, it has become quite complicated to handle different performance considerations[38:03.000 --> 38:07.000] when you actually do fairly complex queries across that graph.[38:07.000 --> 38:11.000] Yeah, so you're bringing up like a point which[38:11.000 --> 38:15.000] happens a lot in my experience with[38:15.000 --> 38:19.000] technology is that sometimes[38:19.000 --> 38:23.000] the purity of the solution becomes the problem[38:23.000 --> 38:27.000] where even though Spark isn't necessarily[38:27.000 --> 38:31.000] designed to be a graph database system, the fact is[38:31.000 --> 38:35.000] people in your company are already using it. So[38:35.000 --> 38:39.000] if you just turn on that feature now you can use it and it's not like[38:39.000 --> 38:43.000] this huge technical undertaking and retraining effort.[38:43.000 --> 38:47.000] So even if it's not as good, if it works, then that's probably[38:47.000 --> 38:51.000] the solution your company will use versus I agree with you like a lot of times[38:51.000 --> 38:55.000] even if a solution like Neo4j is a pretty good example of[38:55.000 --> 38:59.000] it's an interesting product but[38:59.000 --> 39:03.000] you already have all these other products like do you really want to introduce yet[39:03.000 --> 39:07.000] another product into your stack. Yeah, because eventually[39:07.000 --> 39:11.000] it all comes with an overhead of course introducing it. That is one thing[39:11.000 --> 39:15.000] it requires someone to maintain it even if it's a[39:15.000 --> 39:19.000] managed service. Somebody needs to actually own it and look after it[39:19.000 --> 39:23.000] and then as you said you need to retrain people to also use it effectively.[39:23.000 --> 39:27.000] So it comes at significant cost and that is really[39:27.000 --> 39:31.000] something that I believe should be quite critically[39:31.000 --> 39:35.000] assessed. What is really the game you have? How far can you go with[39:35.000 --> 39:39.000] your current tooling and then eventually make[39:39.000 --> 39:43.000] that decision. 
At least personally, I'm really not a fan of thinking tooling-first. I believe in looking at your organization, looking at the people and what skills are there, looking at how effectively those people are actually performing certain activities and processes, and then carefully thinking about what really makes sense. Because it's one thing to pick a tool, but people need to adopt and use it, and eventually it should really speed them up and improve how they develop.

Yeah, that's great advice, and it's hard to appreciate how good it is until you've been burned introducing new technology. I've had experiences before where one of the mistakes I made was putting too many different technologies into an organization, and the problem is that once you get enough complexity, it can really explode. And this is the part that gets scary: let's take Spark, for example. How hard is it to hire somebody who knows Spark? Pretty easy. How hard is it going to be to hire somebody who knows Spark, then hire another person who knows the Gremlin query language for Neptune, then hire another person who knows Kubernetes, then hire another? After a while, if you have so many different kinds of tools, you have to hire so many different kinds of people that all productivity grinds to a stop. 
So it's the hiring as well.

Absolutely. I mean, it's virtually impossible to find someone who is really well versed in Gremlin, for example; it's incredibly hard. And tech hiring is hard by itself already, so you really need to think about what you can hire for and what expertise you can realistically build up.

So that's why I think, even with some of the limitations of its ML platform, the advantage of using AWS is that you have a huge pool of people to hire from. Same thing with Spark: there's a lot I don't like about Spark, but a lot of people use it. So if you use AWS and Spark, let's say, which you do, then you're going to have a much easier time hiring people and a much easier time training people, and there's tons of documentation about it. I think you're very wise to think that way, but a lot of people don't. They say, oh, I've got to use the latest, greatest stuff, and this and this and this, and then their company starts to get into trouble because they can't hire people, they can't maintain systems, and productivity starts to degrade.

Also something not to ignore is the cognitive load you put on a team that needs to manage a broad range of very different tools or services. You suddenly need an incredible breadth of expertise in that team, and that means you're also going to create single points of failure if you don't really scale up the team. When you go for new tooling, you should really look at it from a holistic perspective, not only at whether it's the latest and greatest.

In terms of Europe versus the US, have you spent much time in the US at all?

Not at all actually. I'm flying to the US on Monday, but no, not at all.

That also would be an interesting comparison, in that the culture of the United States is really, I would say, a survival-of-the-fittest culture: you work seven days a week, you don't go on vacation, and you're proud of it. I'm not saying that's a good thing; I think it's a bad thing. A lot of times the critique people have about Europe is, oh, people take vacation all the time and all this, and as someone who has spent time in both, I would say yes, that's a better approach. 
A better approach is that people should feel relaxed, because especially with the kind of work you do in MLOps, you need people to feel comfortable and happy. And the question I was getting to is that I wonder if there is a more productive culture for MLOps in Europe versus the US in terms of maintaining systems and building software. What the US has really been good at, I guess, is coming up with new ideas, and lots of new services get generated, but the quality and longevity are not necessarily the same. In the context of what we just talked about, if you're trying to build a team with low turnover and very high quality output, it seems like organizations could learn from the European approach to building and maintaining systems for MLOps.

I think there's definitely some truth in that, especially when you look at the median tenure of a tech person in an organization. I think that is actually still significantly lower in the US; in the Bay Area I believe it's somewhere around a year and two months or something like that, compared to Europe, where I believe it's still fairly low as well. 
Here in tech, of course, people also like to switch companies more often, but I would say the average is still more around two years or so with the same company, also in tech, which I think is a bit longer than you would typically see in the US. From my perspective, having built up most of the current team, I think it's super important to hire good people, people that fit the team and the company culture-wise, but also not to have them in a sprint all the time. It's about having a sustainable way of working, in my opinion, and that sustainable way means you should definitely take your vacation. In Europe we usually have quite generous vacation, even by law; in the Netherlands you get 20 days a year by law, but most companies give you 25, and many IT companies 30 per year, which is quite nice. And I do take it. So culture-wise, everyone likes to take vacations, whether that's C-level or an engineer on a team, and in many companies a healthy work-life balance is really encouraged. Of course it's not only about vacations but also growth opportunities, letting people explore and develop themselves, and not always pushing for max performance. I always see it as a partnership: the organization wants to get something from an employee, but the employee should also be encouraged and developed in that organization.

Getup Kubicast
#100 - Recapitulando o Kubicast

Getup Kubicast

Play Episode Listen Later Sep 22, 2022 79:07


Kubicast has reached 100 episodes, and to record the milestone we invited the whole internet plus a few special guests! The idea was to run the show in an ASK ME ANYTHING format, but we ended up revisiting the most memorable episodes, which opened the door to topics like running containers on Windows, YAML files, and the usual Java jokes! Thanks to everyone who took part in this episode and to all our other listeners, old and new. It is very gratifying to keep running this podcast, which started in 2018 without much skill for the job and has only improved since! If you are just arriving, welcome! Kubicast is produced by Getup, a company specializing in Kubernetes. Every episode is available on the Getup website and on the major digital audio platforms, and some are also on YouTube. Episodes revisited in this Kubicast: #6 - O que NÃO esperar de Kubernetes; #19 - KubeCon Day 1 - Lightning Talks; #35 - The day we recorded with Kelsey Hightower; #51 - Maratona KubeCon 2020; #60 - Windows Containers; #69 - Nomad vs Kubernetes; #92 - Kubernetes 1.24 is out!; #93 - Por dentro do Tsuru; #95 - FOMGO - Fear of missing Gomex; #97 - Segue o fio com Leandro Damascena. Guest recommendations: Manifesto (Netflix series), Succession (HBO series), Dentro da Mente de um Gato (Netflix documentary), Ruptura (Apple TV+ series), The Sandman (Netflix series).

RunAs Radio
Developer Practices that Help SysAdmins with Rick Taylor

RunAs Radio

Play Episode Listen Later Sep 21, 2022 42:19


What can a sysadmin learn from a developer? Richard chats with Rick Taylor about his experiences learning from developers to write better code - sysadmin code, of course, like PowerShell, Python, and even YAML. Rick talks about how PowerShell code works across all the clouds and how organizations need well-managed PowerShell the same way developers create well-managed compiled code. The conversation explores the various developer techniques that can help sysadmins be more productive - call it DevOps if you like, but it mostly looks like getting work done! Links: PowerShell for GCP, PowerShell for AWS, PowerShell for Azure, Azure DevOps, GitHub. Recorded August 9, 2022

Screaming in the Cloud
Azul and the Current State of the Java Ecosystem with Scott Sellers

Screaming in the Cloud

Play Episode Listen Later Sep 20, 2022 36:35


About ScottWith more than 28 years of successful leadership in building high technology companies and delivering advanced products to market, Scott provides the overall strategic leadership and visionary direction for Azul Systems.Scott has a consistent proven track record of vision, leadership, and success in enterprise, consumer and scientific markets. Prior to co-founding Azul Systems, Scott founded 3dfx Interactive, a graphics processor company that pioneered the 3D graphics market for personal computers and game consoles. Scott served at 3dfx as Vice President of Engineering, CTO and as a member of the board of directors and delivered 7 award-winning products and developed 14 different graphics processors. After a successful initial public offering, 3dfx was later acquired by NVIDIA Corporation.Prior to 3dfx, Scott was a CPU systems architect at Pellucid, later acquired by MediaVision. Before Pellucid, Scott was a member of the technical staff at Silicon Graphics where he designed high-performance workstations.Scott graduated from Princeton University with a bachelor of science, earning magna cum laude and Phi Beta Kappa honors. Scott has been granted 8 patents in high performance graphics and computing and is a regularly invited keynote speaker at industry conferences.Links Referenced:Azul: https://www.azul.com/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: I come bearing ill tidings. Developers are responsible for more than ever these days. Not just the code that they write, but also the containers and the cloud infrastructure that their apps run on. Because serverless means it's still somebody's problem. And a big part of that responsibility is app security from code to cloud. And that's where our friend Snyk comes in. Snyk is a frictionless security platform that meets developers where they are - Finding and fixing vulnerabilities right from the CLI, IDEs, Repos, and Pipelines. Snyk integrates seamlessly with AWS offerings like code pipeline, EKS, ECR, and more! As well as things you're actually likely to be using. Deploy on AWS, secure with Snyk. Learn more at Snyk.co/scream That's S-N-Y-K.co/screamCorey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire-drill at 4 in the morning. Software problems should drive innovation and collaboration, NOT stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production, but hide that that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your Features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest on this promoted episode today is Scott Sellers, CEO and co-founder of Azul. 
Scott, thank you for joining me.Scott: Thank you, Corey. I appreciate the opportunity in talking to you today.Corey: So, let's start with what you're doing these days. What is Azul? What do you folks do over there?Scott: Azul is an enterprise software and SaaS company that is focused on delivering more efficient Java solutions for our customers around the globe. We've been around for 20-plus years, and as an entrepreneur, we've really gone through various stages of different growth and different dynamics in the market. But at the end of the day, Azul is all about adding value for Java-based enterprises, Java-based applications, and really endearing ourselves to the Java community.Corey: This feels like the sort of space where there are an awful lot of great business cases to explore. When you look at what's needed in that market, there are a lot of things that pop up. The surprising part to me is that this is the direction that you personally went in. You started your career as a CPU architect, to my understanding. You were then one of the co-founders of 3dfx before it got acquired by Nvidia.You feel like you've spent your career more as a hardware guy than working on the SaaS side of the world. Is that a misunderstanding of your path, or have things changed, or is this just a new direction? Help me understand how you got here from where you were.Scott: I'm not exactly sure what the math would say because I continue to—can't figure out a way to stop time. But you're correct that my academic background, I was an electrical engineer at Princeton and started my career at Silicon Graphics. And that was when I did a lot of fantastic and fascinating work building workstations and high-end graphics systems, you know, back in the day when Silicon Graphics really was the who's who here in Silicon Valley. And so, a lot of my career began in the context of hardware. As you mentioned, I was one of the founders of graphics company called 3dfx that was one of, I think, arguably the pioneer in terms of bringing 3d graphics to the masses, if you will.And we had a great run of that. That was a really fun business to be a part of just because of what was going on in the 3d world. And we took that public and eventually sold that to Nvidia. And at that point, my itch, if you will, was really learning more about the enterprise segment. I'd been involved with professional graphics with SGI, I had been involved with consumer graphics with 3dfx.And I was fascinated just to learn about the enterprise segment. And met a couple people through a mutual friend around the 2001 timeframe, and they started talking about this thing called Java. And you know, I had of course heard about Java, but as a consumer graphics guy, didn't have a lot of knowledge about it or experience with it. And the more I learned about it, recognized that what was going on in the Java world—and credit to Sun for really creating, obviously, not only language, but building a community around Java—and recognized that new evolutions of developer paradigms really only come around once a decade if then, and was convinced and really got excited about the opportunity to ride the wave of Java and build a company around that.Corey: One of the blind spots that I have throughout the entire world of technology—and to be fair, I have many of them, but the one most relevant to this conversation, I suppose, is the Java ecosystem as a whole. 
I come from a background of being a grumpy Unix sysadmin—because I've never met a happy one of those in my entire career—and as a result, scripting languages is where everything that I worked with started off. And on the rare occasions, I worked in Java shops, it was, “Great. We're going to go—here's a WAR file. Go ahead and deploy this with Tomcat,” or whatever else people are going to use. But basically, “Don't worry your pretty little head about that.”At most, I have to worry about how to configure a heap or whatnot. But it's from the outside looking in, not having to deal with that entire ecosystem as a whole. And what I've seen from that particular perspective is that every time I start as a technologist, or even as a consumer trying to install some random software package in the depths of the internet, and I have to start thinking about Java, it always feels like I'm about to wind up in a confusing world. There are a number of software packages that I installed back in, I want to say the early-2010s or whatnot. “Oh, you need to have a Java runtime installed on your Mac,” for example.And okay, going through Oracle site, do I need the JRE? Do I need the JDK? Oh, there's OpenJDK, which kind of works, kind of doesn't. Amazon got into the space with Corretto, which because that sounds nothing whatsoever, like Java, but strange names coming from Amazon is basically par for the course for those folks. What is the current state of the Java ecosystem, for those of us who have—basically the closest we've ever gotten is JavaScript, which is nothing alike except for the name.Scott: And you know, frankly, given the protection around the name Java—and you know, that is a trademark that's owned by Oracle—it's amazing to me that JavaScript has been allowed to continue to be called JavaScript because as you point out, JavaScript has nothing to do with Java per se.Corey: Well, one thing they do have in common I found out somewhat recently is that Oracle also owns the trademark for JavaScript.Scott: Ah, there you go. Maybe that's why it continues.Corey: They're basically a law firm—three law firms in a trench coat, masquerading as a tech company some days.Scott: Right. But anyway, it is a confusing thing because you know, I think, arguably, JavaScript, by the numbers, probably has more programmers than any other language in the world, just given its popularity as a web language. But to your question about Java specifically, it's had an evolving life, and I think the state where it is today, I think it's in the most exciting place it's ever been. And I'll walk you through kind of why I believe that to be the case.But Java has evolved over time from its inception back in the days when it was called, I think it was Oak when it was originally conceived, and Sun had eventually branded it as Java. And at the time, it truly was owned by Sun, meaning it was proprietary code; it had to be licensed. And even though Sun gave it away, in most cases, it still at the end of the day, it was a commercially licensed product, if you will, and platform. And if you think about today's world, it would not be conceivable to create something that became so popular with programmers that was a commercially licensed product today. 
It almost would be mandated that it would be open-source to be able to really gain the type of traction that Java has gained.And so, even though Java was really garnering interest, you know, not only within the developer community, but also amongst commercial entities, right, everyone—and the era now I'm talking about is around the 2000 era—all of the major software vendors, whether it was obviously Sun, but then you had Oracle, you had IBM, companies like BEA, were really starting to blossom at that point. It was a—you know, you could almost not find a commercial software entity that was not backing Java. But it was still all controlled by Sun. And all that success ultimately led to a strong outcry from the community saying this has to be open-source; this is too important to be beholden to a single vendor. And that decision was made by Sun prior to the Oracle acquisition, they actually open-sourced the Java runtime code and they created an open-source project called OpenJDK.And to Oracle's credit, when they bought Sun—which I think at the time when you really look back, Oracle really did not have a lot of track record, if you will, of being involved with an open-source community—and I think when Oracle acquired Sun, there was a lot of skepticism as to what's going to happen to Java. Is Oracle going to make this thing, you know, back to the old days, proprietary Oracle, et cetera? And really—Corey: I was too busy being heartbroken over Solaris at that point to pay much attention to the Java stuff, but it felt like it was this—sort of the same pattern, repeated across multiple ecosystems.Scott: Absolutely. And even though Sun had also open-sourced Solaris, with the OpenSolaris project, that was one of the kinds of things that it was still developed very much in a closed environment, and then they would kind of throw some code out into the open world. And no one really ran OpenSolaris because it wasn't fully compatible with Solaris. And so, that was a faint attempt, if you will.But Java was quite different. It was truly all open-sourced, and the big difference that—and again, I give Oracle a lot of credit for this because this was a very important time in the evolution of Java—that Oracle, maintained Sun's commitment to not only continue to open-source Java but most importantly, develop it in the open community. And so, you know, again, back and this is the 2008, ‘09, ‘10 timeframe, the evolution of Java, the decisions, the standards, you know, what goes in the platform, what doesn't, decisions about updates and those types of things, that truly became a community-led world and all done in the open-source. And credit to Oracle for continuing to do that. And that really began the transition away from proprietary implementations of Java to one that, very similar to Linux, has really thrived because of the true open-source nature of what Java is today.And that's enabled more and more companies to get involved with the evolution of Java. If you go to the OpenJDK page, you'll see all of the not only, you know, incredibly talented individuals that are involved with the evolution of Java, but again, a who's who in pretty much every major commercial entities in the enterprise software world is also somehow involved in the OpenJDK community. And so, it really is a very vibrant, evolving standard. 
And some of the tactical things that have happened along the way in terms of changing how versions of Java are released still also very much in the context of maintaining compatibility and finding that careful balance of evolving the platform, but at the same time, recognizing that there is a lot of Java applications out there, so you can't just take a right-hand turn and forget about the compatibility side of things. But we as a community overall, I think, have addressed that very effectively, and the result has been now I think Java is more popular than ever and continues to—we liken it kind of to the mortar and the brick walls of the enterprise. It's a given that it's going to be used, certainly by most of the enterprises worldwide today.Corey: There's a certain subset of folk who are convinced the Java, “Oh, it's this a legacy programming language, and nothing modern or forward-looking is going to be built in it.” Yeah, those people generally don't know what the internal language stack looks like at places like oh, I don't know, AWS, Google, and a few others, it is very much everywhere. But it also feels, on some level, like, it's a bit below the surface-level of awareness for the modern full-stack developer in some respects, right up until suddenly it's very much not. How is Java evolving in a cloud these days?Scott: Well, what we see happening—you know, this is true for—you know, I'm a techie, so I can talk about other techies. I mean as techies, we all like the new thing, right? I mean, it's not that exciting to talk about a language that's been around for 20-plus years. But that doesn't take away from the fact that we still all use keyboards. I mean, no one really talks about what keyboard they use anymore—unless you're really into keyboards—but at the end of the day, it's still a fundamental tool that you use every single day.And Java is kind of in the same situation. The reason that Java continues to be so fundamental is that it really comes back to kind of reinventing the wheel problem. Are there are other languages that are more efficient to code in? Absolutely. Are there other languages that, you know, have some capabilities that the Java doesn't have? Absolutely.But if you have the ability to reinvent everything from scratch, sure, go for it. And you also don't have to worry about well, can I find enough programmers in this, you know, new hot language, okay, good luck with that. You might be able to find dozens, but when you need to really scale a company into thousands or tens of thousands of developers, good luck finding, you know, everyone that knows, whatever your favorite hot language of the day is.Corey: It requires six years experience in a four-year-old language. Yeah, it's hard to find that, sometimes.Scott: Right. And you know, the reality is, is that really no application ever is developed from scratch, right? Even when an application is, quote, new, immediately, what you're using is frameworks and other things that have written long ago and proven to be very successful.Corey: And disturbing amounts of code copied and pasted from Stack Overflow.Scott: Absolutely.Corey: But that's one of those impolite things we don't say out loud very often.Scott: That's exactly right. So, nothing really is created from scratch anymore. And so, it's all about building blocks. 
And this is really where this snowball of Java is difficult to stop because there is so much third-party code out there—and by that, I mean, you know, open-source, commercial code, et cetera—that is just so leveraged and so useful to very quickly be able to take advantage of and, you know, allow developers to focus on truly new things, not reinventing the wheel for the hundredth time. And that's what's kind of hard about all these other languages is catching up to Java with all of the things that are immediately available for developers to use freely, right, because most of its open-source. That's a pretty fundamental Catch-22 about when you start talking about the evolution of new languages.Corey: I'm with you so far. The counterpoint though is that so much of what we're talking about in the world of Java is open-source; it is freely available. The OpenJDK, for example, says that right on the tin. You have built a company and you've been in business for 20 years. I have to imagine that this is not one of those stories where, “Oh, all the things we do, we give away for free. But that's okay. We make it up in volume.” Even the venture capitalist mindset tends to run out of patience on those kinds of timescales. What is it you actually do as a business that clearly, obviously delivers value for customers but also results in, you know, being able to meet payroll every week?Scott: Right? Absolutely. And I think what time has shown is that, with one very notable exception and very successful example being Red Hat, there are very, very few pure open-source companies whose business is only selling support services for free software. Most successful businesses that are based on open-source are in one-way shape or form adding value-added elements. And that's our strategy as well.The heart of everything we do is based on free code from OpenJDK, and we have a tremendous amount of business that we are following the Red Hat business model where we are selling support and long-term access and a huge variety of different operating system configurations, older Java versions. Still all free software, though, right, but we're selling support services for that. And that is, in essence, the classic Red Hat business model. And that business for us is incredibly high growth, very fast-moving, a lot of that business is because enterprises are tired of paying the very high price to Oracle for Java support and they're looking for an open-source alternative that is exactly the same thing, but comes in pure open-source form and with a vendor that is as reputable as Oracle. So, a lot of our businesses based on that.However, on top of that, we also have value-added elements. And so, our product that is called Azul Platform Prime is rooted in OpenJDK—it is OpenJDK—but then we've added value-added elements to that. And what those value-added elements create is, in essence, a better Java platform. And better in this context means faster, quicker to warm up, elimination of some of the inconsistencies of the Java runtime in terms of this nasty problem called garbage collection which causes applications to kind of bounce around in terms of performance limitations. And so, creating a better Java is another way that we have monetized our company is value-added elements that are built on top of OpenJDK. And I'd say that part of the business is very typical for the majority of enterprise software companies that are rooted in open-source. 
They're typically adding value-added components on top of the open-source technology, and that's our similar strategy as well.And then the third evolution for us, which again is very tried-and-true, is evolving the business also to add SaaS offerings. So today, the majority of our customers, even though they deploy in the cloud, they're stuck customer-managed and so they're responsible for where do I want to put my Java runtime on building out my stack and cetera, et cetera. And of course, that could be on-prem, but like I mentioned, the majority are in the cloud. We're evolving our product offerings also to have truly SaaS-based solutions so that customers don't even need to manage those types of stacks on their own anymore.Corey: On some level, it feels like we're talking about two different things when we talk about cloud and when we talk about programming languages, but increasingly, I'm starting to see across almost the entire ecosystem that different languages and different cloud providers are in many ways converging. How do you see Java changing as cloud-native becomes the default rather than the new thing?Scott: Great question. And I think the thing to recognize about, really, most popular programming languages today—I can think of very few exceptions—these languages were created, envisioned, implemented if you will, in a day when cloud was not top-of-mind, and in many cases, certainly in the case of Java, cloud didn't even exist when Java was originally conceived, nor was that the case when you know, other languages, such as Python, or JavaScript, or on and on. So, rethinking how these languages should evolve in very much the context of a cloud-native mentality is a really important initiative that we certainly are doing and I think the Java community is doing overall. And how you architect not only the application, but even the Java runtime itself can be fundamentally different if you know that the application is going to be deployed in the cloud.And I'll give you an example. Specifically, in the world of any type of runtime-based language—and JavaScript is an example of that; Python is an example of that; Java is an example of that—in all of those runtime-based environments, what that basically means is that when the application is run, there's a piece of software that's called the runtime that actually is running that application code. And so, you can think about it as a middleware piece of software that sits between the operating system and the application itself. And so, that runtime layer is common across those languages and those platforms that I mentioned. That runtime layer is evolving, and it's evolving in a way that is becoming more and more cloud-native in it's thinking.The process itself of actually taking the application, compiling it into whatever underlying architecture it may be running on—it could be an x86 instance running on Amazon; it could be, you know, for example, an ARM64, which Amazon has compute instances now that are based on an ARM64 processor that they call Graviton, which is really also kind of altering the price-performance of the compute instances on the AWS platform—that runtime layer magically takes an application that doesn't have to be aware of the underlying hardware and transforms that into a way that can be run. 
And that's a very expensive process; it's called just-in-time compiling, and that just-in-time compilation, in today's world—which wasn't really based on cloud thinking—every instance, every compute instance that you deploy, that same JIT compilation process is happening over and over again. And even if you deploy 100 instances for scalability, every one of those 100 instances is doing that same work. And so, it's very inefficient and very redundant. Contrast that to a cloud-native thinking: that compilation process should be a service; that service should be done once.The application—you know, one instance of the application is actually run and there are the other ninety-nine should just reuse that compilation process. And that shared compiler service should be scalable and should be able to scale up when applications are launched and you need more compilation resources, and then scaled right back down when you're through the compilation process and the application is more moving into the—you know, to the runtime phase of the application lifecycle. And so, these types of things are areas that we and others are working on in terms of evolving the Java runtime specifically to be more cloud-native.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig secures your cloud from source to run. They believe, as do I, that DevOps and security are inextricably linked. If you wanna learn more about how they view this, check out their blog, it's definitely worth the read. To learn more about how they are absolutely getting it right from where I sit, visit Sysdig.com and tell them that I sent you. That's S Y S D I G.com. And my thanks to them for their continued support of this ridiculous nonsense.Corey: This feels like it gets even more critical when we're talking about things like serverless functions across basically all the cloud providers these days, where there's the whole setup, everything in the stack, get it running, get it listening, ready to go, to receive a single request and then shut itself down. It feels like there are a lot of operational efficiencies possible once you start optimizing from a starting point of yeah, this is what that environment looks like, rather than us big metal servers sitting in a rack 15 years ago.Scott: Yeah. I think the evolution of serverless appears to be headed more towards serverless containers as opposed to serverless functions. Serverless functions have a bunch of limitations in terms of when you think about it in the context of a complex, you know, microservices-based deployment framework. It's just not very efficient, to spin up and spin down instances of a function if that actually is being—it is any sort of performance or latency-sensitive type of applications. If you're doing something very rarely, sure, it's fine; it's efficient, it's elegant, et cetera.But any sort of thing that has real girth to it—and girth probably means that's what's driving your application infrastructure costs, that's what's driving your Amazon bill every month—those types of things typically are not going to be great for starting and stopping functional instances. And so, serverless is evolving more towards thinking about the container itself not having to worry about the underlying operating system or the instance on Amazon that it's running on. And that's where, you know, we see more and more of the evolution of serverless is thinking about it at a container-level as opposed to a functional level. 
And that appears to be a really healthy steady state, so it gets the benefits of not having to worry about all the underlying stuff, but at the same time, doesn't have the downside of trying to start and stop functional influences at a given point in time.Corey: It seems to me that there are really two ways of thinking about cloud. The first is what I think a lot of companies do their first outing when they're going into something like AWS. “Okay, we're going to get a bunch of virtual machines that they call instances in AWS, we're going to run things just like it's our data center except now data transfer to the internet is terrifyingly expensive.” The more quote-unquote, “Cloud-native” way of thinking about this is what you're alluding to where there's, “Here's some code that I wrote. I want to throw it to my cloud provider and just don't tell me about any of the infrastructure parts. Execute this code when these conditions are met and leave me alone.”Containers these days seem to be one of our best ways of getting there with a minimum of fuss and friction. What are you seeing in the enterprise space as far as adoption of those patterns go? Or are we seeing cloud repatriation showing up as a real thing and I'm just not in the right place to see it?Scott: Well, I think as a cloud journey evolves, there's no question that—and in fact it's even silly to say that cloud is here to stay because I think that became a reality many, many years ago. So really, the question is, what are the challenges now with cloud deployments? Cloud is absolutely a given. And I think you stated earlier, it's rare that, whether it's a new company or a new application, at least in most businesses that don't have specific regulatory requirements, that application is highly, highly likely to be envisioned to be initially and only deployed in the cloud. That's a great thing because you have so many advantages of not having to purchase infrastructure in advance, being able to tap into all of the various services that are available through the cloud providers. No one builds databases anymore; you're just tapping into the service that's provided by Azure or AWS, or what have you.And, you know, just that specific example is a huge amount of savings in terms of just overhead, and license costs, and those types of stuff, and there's countless examples of that. And so, the services that are available in the cloud are unquestioned. So, there's countless advantages of why you want to be in the cloud. The downside, however, the cloud that is, if at the end of the day, AWS, Microsoft with Azure, Google with GCP, they are making 30% margin on that cloud infrastructure. And in the days of hardware, when companies would actually buy their servers from Dell, or HP, et cetera, those businesses are 5% margin.And so, where's that 25% going? Well, the 25% is being paid for by the users of cloud, and as a result of that, when you look at it purely from an operational cost perspective, it is more expensive to run in the cloud than it is back in the legacy days, right? And that's not to say that the industry has made the wrong choice because there's so many advantages of being in cloud, there's no doubt about it. And there should be—you know, and the cloud providers deserve to take some amount of margin to provide the services that they provide; there's no doubt about that. 
The question is, how do you do the best of all worlds?And you know, there is a great blog by a couple of the partners in Andreessen Horowitz, they called this the Cloud Paradox. And the Cloud Paradox really talks about the challenges. It's really a Catch-22; how do you get all the benefits of cloud but do that in a way that is not overly taxing from a cost perspective? And a lot of it comes down to good practices and making sure that you have the right monitoring and culture within an enterprise to make sure that cloud cost is a primary thing that is discussed and metric, but then there's also technologies that can help so that you don't have to even think about what you really don't ever want to do: repatriating, which is about the concept of actually moving off the cloud back to the old way of doing things. So certainly, I don't believe repatriation is a practical solution for ongoing and increasing cloud costs. I believe technology is a solution to that.And there are technologies such as our product, Azul Platform Prime, that in essence, allows you to do more with less, right, get all the benefits of cloud, deploy in your Amazon environment, deploy in your Azure environment, et cetera, but imagine if instead of needing a hundred instances to handle your given workload, you could do that with 50 or 60. Tomorrow, that means that you can start savings and being able to do that simply by changing your JVM from a standard OpenJDK or Oracle JVM to something like Platform Prime, you can immediately start to start seeing the benefits from that. And so, a lot of our business now and our growth is coming from companies that are screaming under the ongoing cloud costs and trying to keep them in line, and using technology like Azul Platform Prime to help mitigate those costs.Corey: I think that there is a somewhat foolish approach that I'm seeing taken by a lot of folks where there are some companies that are existentially anti-cloud, if for no other reason than because if the cloud wins, then they don't really have a business anymore. The problem I see with that is that it seems that their solution across the board is to turn back the clock where if I'm going to build a startup, it's time for me to go buy some servers and a rack somewhere and start negotiating with bandwidth providers. I don't see that that is necessarily viable for almost anyone. We aren't living in 1995 anymore, despite how much some people like to pretend we are. It seems like if there are workloads—for which I agree, cloud is not necessarily an economic fit, first, I feel like the market will fix that in the fullness of time, but secondly, on an individual workload belonging in a certain place is radically different than, “Oh, none of our stuff should live on cloud. Everything belongs in a data center.” And I just think that companies lose all credibility when they start pretending that it's any other way.Scott: Right. I'd love to see the reaction of the venture capitalists' face when an entrepreneur walks in and talks about how their strategy for deploying their SaaS service is going to be buying hardware and renting some space in the local data center.Corey: Well, there is a good cost control method, if you think about it. I mean very few engineers are going to accidentally spin up an $8 million cluster in a data center a second time, just because there's no space left for it.Scott: And you're right; it does happen in the cloud as well. 
It's just, I agree with you completely that as part of the evolution of cloud, in general, is an ever-improving aspect of cost and awareness of cost and building in technologies that help mitigate that cost. So, I think that will continue to evolve. I think, you know, if you really think about the cloud journey, cost, I would say, is still in early phases of really technologies and practices and processes of allowing enterprises to really get their head around cost. I'd still say it's a fairly immature industry that is evolving quickly, just given the importance of it.And so, I think in the coming years, you're going to see a radical improvement in terms of cost awareness and technologies to help with costs, that again allows you to the best of all worlds. Because, you know, if you go back to the Dark Ages and you start thinking about buying servers and infrastructure, then you are really getting back to a mentality of, “I've got to deploy everything. I've got to buy software for my database. I've got to deploy it. What am I going to do about my authentication service? So, I got to buy this vendor's, you know, solution, et cetera.” And so, all that stuff just goes away in the world of cloud, so it's just not practical, in this day and age I think, to think about really building a business that's not cloud-native from the beginning.Corey: I really want to thank you for spending so much time talking to me about how you view the industry, the evolution we've seen in the Java ecosystem, and what you've been up to. If people want to learn more, where's the best place for them to find you?Scott: Well, there's a thing called a website that you may not have heard of, it's really cool.Corey: Can I build it in Java?Scott: W-W-dot—[laugh]. Yeah. Azul website obviously has an awful lot of information about that, Azul is spelled A-Z-U-L, and we sometimes get the question, “How in the world did you name a company—why did you name it Azul?”And it's kind of a funny story because back in the days of Azul when we thought about, hey, we want to be big and successful, and at the time, IBM was the gold standard in terms of success in the enterprise world. And you know, they were Big Blue, so we said, “Hey, we're going to be a little blue. Let's be Azul.” So, that's where we began. So obviously, go check out our site.We're very present, also, in the Java community. We're, you know, many developer conferences and talks. We sponsor and run many of what's called the Java User Groups, which are very popular 10-, 20-person meetups that happen around the globe on a regular basis. And so, you know, come check us out. And I appreciate everyone's time in listening to the podcast today.Corey: No, thank you very much for spending as much time with me as you have. It's appreciated.Scott: Thanks, Corey.Corey: Scott Sellers, CEO and co-founder of Azul. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an entire copy of the terms and conditions from Oracle's version of the JDK.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. 
Visit duckbillgroup.com to get started. Announcer: This has been a HumblePod production. Stay humble.

The Grit City Podcast
Saturday Night Grit - Wolverine West Fireworks At The Washington State Fair

The Grit City Podcast

Play Episode Listen Later Sep 19, 2022 91:08


On this episode, the guys talk about hanging out at the fair and meeting with Chad from Wolverine West Fireworks (thanks, Cnote!!) while there. They also discuss Werewolf movies, plans for the fall, and The Grit City Comic Show 01:17 – Jeff kicks off the podcast with the safe word of the day, Justin welcomes Tacompton Files back on Facebook, and his and Jeff's recent experience setting off fireworks at the Puyallup Fair. He plays their recording the did at the fair, GCP patron Cnote shares how he got involved with setting off fireworks, and Chad talks about what Wolverine West Fireworks does. He talks about how he got started in fireworks, how long he's been doing it, and the different events he sets fireworks off for. 21:50 – Chad talks about how interested people can get involved, the areas they serve, and Justin talks about where they ate before the fair. He talks about the haps of the food during the fair, wandering around while enjoying the food, and the different animals they saw. Jeff drops the pumpkin stats, Justin talks about the super pumpkin winners, and Jeff talks about rumors of how Joel Holland won in the past. 44:38 – Scott and Jeff talk about car fundraisers as kids where people paid to hit a car, the different pricing at the events, and Justin talks about his love of meeting listeners. He talks about his favorite stoner shows, getting the honor of lighting off the fireworks, and Jeff talks about the reward people who light off fireworks feel when people cheer after they go off. Justin talks about the art he found at the event, taking metal shop when younger, and gives a shout-out to the Peterson Brothers. 66:40 – Justin talks about his continuing werewolf movie quest, watching back-to-back of The Wolfman movies, and what stuck with him from the classic film. They talk about the psychological perspective behind the werewolf stories, plans for visiting haunted houses and corn mazes this fall, and Jeff talks about his grandkids making a horror movie. Justin talks about paintball bus rides, Jeff talks about making a music video petting goats, and the props he got for making the video. Special Guest: Wolverine West Fireworks.

Let’s Talk Cloud Networking - Unscripted
EP34 - Cloud Cost Management - Increase Cloud ROI - Enhance Application Security

Let’s Talk Cloud Networking - Unscripted

Play Episode Listen Later Sep 17, 2022 46:57


Experience your firewall working for you, not the other way around. Aviatrix integrates and works with security vendors such as Palo Alto Networks, Check Point, Fortinet, Cisco, F5, etc. Aviatrix's business value is to simplify the way organizations enhance their public cloud security posture (Azure, GCP, AWS, OCI, etc.) while reducing the overall cost. In this candid podcast you'll find out how Aviatrix FireNet helps you simply consume virtual firewalls, with a proven recipe adopted by hundreds of customers in the cloud. Now you can simplify security by leaving the heavy lifting to Aviatrix. The YouTube version is available here --- Send in a voice message: https://anchor.fm/netjoints/message

BLUEPRINT
Brandon Evans: Cloud Security - Threats and Opportunities

BLUEPRINT

Play Episode Listen Later Sep 13, 2022 50:39


Ever wonder how a cloud and application security expert views risks of cloud workloads? Well, wonder no more, because on this episode we have Brandon Evans - SANS Certified Instructor and lead author of SEC510: Public Cloud Security. We cover the why and how of moving applications to the cloud, the key considerations for a successful cloud security posture, and how building your infrastructure with a cloud-native mindset can and should lead to an improved security posture. BONUS: Be sure to stay tuned to the end of the episode for a very special announcement from Brandon on the new SANS Cloud Ace podcast, coming to all podcast directories on September 28. Our Guest - Brandon Evans: Brandon works for Zoom Video Communications, where he leads their internal application security training. As an application developer for most of his professional career, he moved into security full-time largely because of his many formal trainings through SANS. He's a contributor to the OWASP Serverless Top 10 Project and a co-leader for the Nashville OWASP chapter. Brandon is lead author for SEC510: Public Cloud Security: AWS, Azure, and GCP and a contributor and instructor for SEC540: Cloud Security and DevSecOps Automation. Resources: sans.org/cloud - SANS Cloud Resources; https://brandone.github.io/pixel-puzzles/ - Brandon's Pixel Puzzle game. Sponsor's Note: Support for the Blueprint podcast comes from the SANS Institute. If you like the topics covered in this podcast and would like to learn more about blue team fundamentals such as host and network data collection, threat detection, alert triage, incident management, threat intelligence, and more, check out my new course SEC450: Blue Team Fundamentals. This course is designed to bring attendees the information that every SOC analyst and blue team member needs to know to hit the ground running, including 15 labs that get you hands-on with tools for threat intel, SIEM, incident management, automation and much more; this course has everything you need to launch your blue team career. Check out the details at sansurl.com/450. Hope to see you in class! Follow SANS Cyber Defense: Twitter | LinkedIn | YouTube. Follow John Hubbard: Twitter | LinkedIn. Join us in Scottsdale, AZ or virtually for the 2022 SANS Institute Blue Team Summit & Training. At the SANS Blue Team Summit, enhance your current skill set and become even better at defending your organization, and hear the latest ways to mitigate the most recent attacks!

Gut Check Project
#81 Longevity Supplements (continued) and Finale!

Gut Check Project

Play Episode Listen Later Sep 13, 2022 47:15


We've covered A LOT in the space of longevity and health span… BECAUSE YOU ARE WORTH IT! In this series finale, Ken and Eric give context to supplement choices and a wrap on the series covering your ability to live your life longer, more full, and healthier than you may have thought possible. Join Ken & Eric on the GCP, today! Be sure to like and share this episode, and thank you for being a part of the KBMD Health family!

Ask Drone U
ADU 01273: Can I use my drone feed for referencing GCP instead of a base station?

Ask Drone U

Play Episode Listen Later Sep 12, 2022 11:45 Very Popular


Are base stations absolutely necessary? Can pilots use their drones to reference ground points for mapping missions? In today's episode we discuss the use of base stations and whether pilots can make do without one as they work on mapping deliverables. Are there any benefits to doing so? Today's episode is brought to you by Drone U courses. Over many years of developing comprehensive training content to build skilled pilots, Drone U has developed more than 40 courses across diverse applications and domains. Access them all through a single Drone U membership. Visit our Drone U course page for more details on available courses. Today's question is from Scott, who would like to know if it's possible for pilots to use their drone feed for referencing points on the ground for mapping deliverables, and whether a base station is required. Thanks for the question, Scott. Today's episode is a short one where we address the use and advantages of formal GCP gear as opposed to referencing the ground from video feed/footage. We discuss some common errors that can occur from using the drone feed to generate reference points and speak about the difference between absolute and relative accuracy. We also discuss alternatives that pilots can use in the absence of base stations and state the trade-offs of these alternatives. Tune in today to learn more about improving your accuracy in mapping deliverables. Get Your Biggest and Most Common Drone Certificate Questions Answered by Downloading this FREE Part 107 PDF. Make sure to get yourself the all-new Drone U landing pad! Get your questions answered: https://thedroneu.com/. If you enjoy the show, the #1 thing you can do to help us out is to subscribe to it on iTunes. Can we ask you to do that for us real quick? While you're there, leave us a 5-star review, if you're inclined to do so. Thanks! https://itunes.apple.com/us/podcast/ask-drone-u/id967352832. Become a Drone U Member. Access to over 30 courses, great resources, and our incredible community. Follow Us Site – https://thedroneu.com/ Facebook – https://www.facebook.com/droneu Instagram – https://instagram.com/thedroneu/ Twitter – https://twitter.com/thedroneu YouTube – https://www.youtube.com/c/droneu Timestamps: [2:17] Today's question on mapping and whether there are advantages to using GCPs from a drone [4:45] Paul elaborates on areas where measurements can be erroneous during mapping [7:00] An example to demonstrate the difference between absolute and relative accuracy [8:44] Can missions be completed without a base station? Are they an absolute necessity in mapping? What other alternatives would work?
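To make the absolute-versus-relative accuracy distinction concrete (a toy sketch with hypothetical coordinates, not from the episode): absolute accuracy compares mapped points against surveyed truth, while relative accuracy compares distances between points, so a map that is shifted but true to scale can score poorly on the first and well on the second.

```python
# Toy illustration (hypothetical coordinates, in metres) of absolute vs. relative accuracy.
import math

surveyed = {"cp1": (0.0, 0.0), "cp2": (100.0, 0.0), "cp3": (100.0, 80.0)}  # "truth"
mapped   = {"cp1": (1.2, 0.9), "cp2": (101.1, 1.0), "cp3": (101.3, 80.8)}  # map-derived

def absolute_rmse():
    """How far each mapped point sits from its true position (a global shift counts)."""
    errs = [math.dist(surveyed[k], mapped[k]) for k in surveyed]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

def relative_error(a, b):
    """Difference in point-to-point distance; a shifted but true-to-scale map scores well here."""
    return abs(math.dist(mapped[a], mapped[b]) - math.dist(surveyed[a], surveyed[b]))

print(f"absolute RMSE: {absolute_rmse():.2f} m")                      # large if the whole map is offset
print(f"relative error cp1-cp2: {relative_error('cp1', 'cp2'):.2f} m")  # small if scale and shape are right
```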

UBC News World
This IAM & Cloud Computing Consultancy Protects Your GCP From Cyber Criminals

UBC News World

Play Episode Listen Later Sep 12, 2022 2:56


Minimize your GCP blast radius with Britive. More details at https://www.britive.com/blog/3-frictionless-strategies-to-boost-your-gcp-iam
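As a rough way to eyeball blast radius (an illustrative sketch, not Britive's product): count how many principals hold broad primitive roles in a project by parsing the JSON emitted by the real `gcloud projects get-iam-policy` command. The project ID below is hypothetical and an authenticated gcloud CLI is assumed.

```python
# Rough blast-radius check: flag principals holding broad primitive roles in a GCP project.
# Assumes the gcloud CLI is installed and authenticated; the project ID is hypothetical.
import json
import subprocess

PROJECT_ID = "my-sample-project"
BROAD_ROLES = {"roles/owner", "roles/editor"}

policy = json.loads(
    subprocess.check_output(
        ["gcloud", "projects", "get-iam-policy", PROJECT_ID, "--format=json"]
    )
)

for binding in policy.get("bindings", []):
    if binding["role"] in BROAD_ROLES:
        for member in binding["members"]:
            print(f"{member} holds {binding['role']} -- consider narrower, just-in-time access")
```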

The Grit City Podcast
GCP: Saturday Night Grit - Wocka, Wocka!

The Grit City Podcast

Play Episode Listen Later Sep 12, 2022 71:54


On this Saturday Night Grit, Jeff, Justin, and Scott talk about the upcoming Grit City Comic Show, Justin's continuing werewolf movie project, upcoming guests, area restaurants with new owners, fair food, and more! 00:15 – The show kicks off with Scott talking about Nate Diaz fighting the night of the recording, Jeff reveals the code word of the day, and talks about having fun in the Discord. Justin gives a shout-out to the GCP's dedicated listeners, comic book grading services they'll offer at Grit City Comic Show, and their plans to have Ken back on the podcast. They make plans for a GCP after party during the event, special guests to have at the party, and Justin shares Mark Monlux's Kickstarter for his Atomic Age Alien Series. 18:07 – Justin gives props to Michael J. Fox in the Teen Wolf movie, Jeff and Scott share their recent man date at The Valley, and who they saw perform there. They discuss plans to meet with the Peterson brothers to talk about their next adventure, reflect on Rainier Beer commercials from the past, and other upcoming guests they're going to have on. 35:25 – Justin talks about the recent TikTok celebrity he met, plans to have them on the podcast, and encourages listeners to keep track of what's happening at their local libraries. He talks about visiting the Tacoma Moon Festival, Jeff talks about partying it up in Tacoma, and enjoying the late night food at Matador. Justin dives into Grit City Grub, talks about two older Tacoma restaurants with new owners, and the soft re-opening of the Pine Cone Café. 53:04 – Justin talks about who took over Spuds Pizza Parlor, what changes people should expect with the restaurant, and why chicken and jojo's are one of his fave comfort foods. Scott talks about making wonton nachos, they discuss plans to visit the fair to enjoy some fair food, and Justin closes out talking about who they plan to speak with at the Union Club next week.

Wrestle Addict Radio
Fretzlemania 84- Summerslam 2002 Review w Nate the FN Great

Wrestle Addict Radio

Play Episode Listen Later Sep 9, 2022 64:11


#MrFretz is joined by Nate the FN Great from Brace For Impact & The Gamechanger Podcast to review Summerslam 2002. In proper GCP fashion, it falls off the rails almost immediately and the lads review a bona fide classic. Learn about being "Canadianized", Stephanie McMahon's "maniacal laughter", who's better: Team Canada or The UnAmericans, and a collection of absolute bangers including: Rey Mysterio vs Kurt Angle, Edge vs Eddy Guerrero, RVD vs Chris Benoit, Brock vs Rock, and the unsanctioned Street Fight between Triple H and Shawn Michaels! Follow Fretz on Twitter/Instagram/Tiktok @Fretzlemania Follow Nate @RealFNGame Follow WAR on Twitter @Addict_Wrestle Join our EXCLUSIVE $5 Patreon: www.patreon.com/wrestleaddictradio Merch: https://fretzlemania.myteespring.co/ Patreons get 15% off! Read our exclusive blog with reviews, fan fiction and more! https://writteninwar.wordpress.com/ Join our Discord Server: https://discord.gg/hWUGvp85 --- Send in a voice message: https://anchor.fm/wrestleaddictradionetwork/message

Fretzlemania
Summerslam 2002 Review w Nate the FN Great

Fretzlemania

Play Episode Listen Later Sep 9, 2022 64:08


#MrFretz is joined by Nate the FN Great from Brace For Impact & The Gamechanger Podcast to review Summerslam 2002. In proper GCP fashion, it falls off the rails almost immediately and the lads review a bona fide classic. Learn about being "Canadianized", Stephanie McMahon's "maniacal laughter", who's better: Team Canada or The UnAmericans, and a collection of absolute bangers including: Rey Mysterio vs Kurt Angle, Edge vs Eddy Guerrero, RVD vs Chris Benoit, Brock vs Rock, and the unsanctioned Street Fight between Triple H and Shawn Michaels! Follow Fretz on Twitter/Instagram/Tiktok @Fretzlemania Follow Nate @RealFNGame Follow WAR on Twitter @Addict_Wrestle Join our EXCLUSIVE $5 Patreon: www.patreon.com/wrestleaddictradio Merch: https://fretzlemania.myteespring.co/ Patreons get 15% off! Read our exclusive blog with reviews, fan fiction and more! https://writteninwar.wordpress.com/ Join our Discord Server: https://discord.gg/hWUGvp85 --- Send in a voice message: https://anchor.fm/fretzlemania/message

Rant With Ant
Fretzlemania 84- Summerslam 2002 Review w Nate the FN Great

Rant With Ant

Play Episode Listen Later Sep 9, 2022 64:11


#MrFretz is joined by Nate the FN Great from Brace For Impact & The Gamechanger Podcast to review Summerslam 2002. In proper GCP fashion, it falls off the rails almost immediately and the lads review a bona fide classic. Learn about being "Canadianized", Stephanie McMahon's "maniacal laughter", who's better: Team Canada or The UnAmericans, and a collection of absolute bangers including: Rey Mysterio vs Kurt Angle, Edge vs Eddy Guerrero, RVD vs Chris Benoit, Brock vs Rock, and the unsanctioned Street Fight between Triple H and Shawn Michaels! Follow Fretz on Twitter/Instagram/Tiktok @Fretzlemania Follow Nate @RealFNGame Follow WAR on Twitter @Addict_Wrestle Join our EXCLUSIVE $5 Patreon: www.patreon.com/wrestleaddictradio Merch: https://fretzlemania.myteespring.co/ Patreons get 15% off! Read our exclusive blog with reviews, fan fiction and more! https://writteninwar.wordpress.com/ Join our Discord Server: https://discord.gg/hWUGvp85 --- Send in a voice message: https://anchor.fm/wrestleaddictradionetwork/message

Iron Sysadmin Podcast
Episode 125a - Cloud Terminology and technologies

Iron Sysadmin Podcast

Play Episode Listen Later Sep 9, 2022 74:59


Welcome to Episode 125
Main Topic: Intro to Cloud
- Private Cloud (is it really a cloud? What makes it different from “my computers”?)
- Public Cloud
- Hybrid Cloud
Understanding Cloud Acronyms and Terminology
- IaaS - Infrastructure as a Service
- PaaS - Platform as a Service
- SaaS - Software as a Service
- MaaS - Marc as a Service
- Instance, Container, Serverless
- Load Balancer, Floating/Virtual IP, Virtual Network, WAF, Firewall, Interconnects, VPNs
- Block Storage, Object Storage, Hot Storage, Cold Storage, Filer or File Storage, Disk Image
- API, Web-Based Console, Cloud Management Portal, CLI Tools
https://www.lucidchart.com/blog/cloud-terminology-glossary
Types of services: Compute Terminology, Network Terminology, Storage Terminology, Interfaces
Watch us live on the 2nd and 4th Thursday of every month! Subscribe and hit the bell! https://www.youtube.com/IronSysadminPodcast OR https://twitch.tv/IronSysadminPodcast
Discord Community: https://discord.gg/wmxvQ4c2H6
Find us on Twitter and Facebook! https://www.facebook.com/ironsysadmin https://www.twitter.com/ironsysadmin
Subscribe wherever you find podcasts! And don't forget about our Patreon! https://patreon.com/ironsysadmin
Intro and Outro music credit: Tri Tachyon, Digital MK 2 - http://freemusicarchive.org/music/Tri-Tachyon/
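To make one of those glossary terms concrete (an illustrative sketch, not from the episode): object storage is written and read whole by key over an API, unlike block storage, which is attached and mounted like a disk. The bucket name below is hypothetical and boto3 credentials are assumed.

```python
# A concrete taste of "object storage" from the glossary: objects are addressed by key
# over an API rather than mounted like block storage. Bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-terminology-demo"  # hypothetical, must already exist

# write an object (no filesystem, no mount point -- just a key and bytes)
s3.put_object(Bucket=BUCKET, Key="notes/episode125.txt", Body=b"IaaS, PaaS, SaaS...")

# read it back by key
body = s3.get_object(Bucket=BUCKET, Key="notes/episode125.txt")["Body"].read()
print(body.decode())
```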

Google Cloud Cast
T3E7 - Apagão no mercado de TI: como reduzir o gap de profissionais do setor

Google Cloud Cast

Play Episode Listen Later Sep 8, 2022 40:46


Digital transformation is no longer a trend - it is a reality and an urgent matter for companies. And to keep up with this growing digital demand, one category of worker has become highly sought after in the job market: technology professionals. The Brazilian Association of Information and Communication Technology Companies (Brasscom) projects a demand for 797 thousand IT professionals in the country by 2025. Today, Brazil graduates around 53 thousand people with this profile per year. In the seventh episode of the third season, our hosts Daniel Leite and Marcelo Gomes, sales executives at Google Cloud, talk about this "blackout" in the IT market. Joining the conversation are Google's Head of Cloud Education for Latin America, Fabio La Selva, and Carmela Borst, CEO and founder of SoulCode Academy, a Brazilian edtech founded in 2020 with the purpose of generating social impact and employability through technology education. Google Cloud Cast is the official Google Cloud podcast in Brazil, in which we discuss topics such as digital transformation, innovation, and the journey to the cloud with executives, specialists, and special guests. Check out the links from this episode: Capacita+ portal: https://bit.ly/3AXuLOF SoulCode Academy: https://soulcodeacademy.org/ Meet Patrícia, 39 years old and a mother of three, who took part in the SoulCode Academy bootcamp in partnership with Google Cloud and became a data engineer qualified in GCP: https://bit.ly/3wGrhhh Read the full Brasscom study pointing to a demand for 797 thousand technology professionals by 2025: https://bit.ly/3R7p9XV Brazil stagnates in the digital competitiveness ranking (Source: The Shift): https://bit.ly/3CMkXsf See the full Digital 2022: Global Overview Report study on internet access numbers around the world: https://bit.ly/3PVxlJq Did you enjoy the episode or have a suggestion? Share it with us by email at googlecloudcast@google.com

Getup Kubicast
#99 - Kubernetes 1.25 - O que há de novo?

Getup Kubicast

Play Episode Listen Later Sep 8, 2022 43:22


With some of his Getup colleagues, João Brito comments on the most relevant changes in Kubernetes version 1.25. Among them are the definitive removal of PSP (PodSecurityPolicy), the deprecation of GlusterFS support, and the death of Autoscaling v2beta1. There is also the arrival, still in alpha, of the Linux user namespace feature (not the Kubernetes namespace!), and the graduation to stable of the Pod Security Admission and Local Ephemeral Storage Capacity Isolation features. Another piece of news is that the PDB (Pod Disruption Budget) moves to its default version, which is why we recommend keeping production deployments with at least two replicas so an upgrade, for example, doesn't turn into a headache. Amid the observations about the new version, the crew also talked about the pros and cons of working with a managed cluster vs. an on-premise cluster, and whether there is any feature gate that is missed in a production cluster.
LINKS for what was discussed on the show: Article by Karol Valencia of Aqua Security: https://blog.aquasec.com/kubernetes-version-1.25 KubiLab - Adonai Costa's video tutorial on KEDA: https://gtup.me/KubilabKeda
RECOMMENDATIONS from the participants: One Punch Man (manga), Attack on Titan (manga series), The Sandman (film), Narradores de Javé (film), Five Days at Memorial (series on Apple TV+)
INVITATION! We are getting close to Kubicast #100 and we are going to celebrate this milestone in a very special way! In an "ASK ME ANYTHING" format, the audience will be able to ask all of their questions about Kubernetes and related topics! Sign up to participate: https://getup.io/participe-do-kubicast-100. The event takes place on 9/15 at 7 pm on Zoom.
ABOUT KUBICAST: Kubicast is produced by Getup, a Kubernetes specialist. All podcast episodes are available on the Getup website and on the main digital audio platforms. Some of them are also recorded on YouTube. #DevOps #Kubernetes #Containers #Kubicast
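The two-replica advice pairs naturally with a PodDisruptionBudget. As a hedged sketch (hypothetical names; a reachable cluster and a recent official Kubernetes Python client with policy/v1 support are assumed), this is roughly what keeping at least one pod up during drains and upgrades looks like:

```python
# Sketch only: create a PodDisruptionBudget for a Deployment labelled app=web that
# runs 2+ replicas, so voluntary disruptions (node drains, upgrades) leave one pod up.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

pdb = client.V1PodDisruptionBudget(
    metadata=client.V1ObjectMeta(name="web-pdb"),
    spec=client.V1PodDisruptionBudgetSpec(
        min_available=1,  # only meaningful if the Deployment actually runs 2+ replicas
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
    ),
)

client.PolicyV1Api().create_namespaced_pod_disruption_budget(
    namespace="default", body=pdb
)
```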

Gut Check Project
#80 Longevity and Supplements

Gut Check Project

Play Episode Listen Later Sep 6, 2022 59:19


Stop wasting time & money. Supplement stores offer all kinds of options. But are you using supplements to improve your health? How informed are you for what you actually need and know that works? The GCP longevity series has covered a lot of basics. The basics have to be addressed for health span success. So IF you are already working to achieve the basics for your own health, be sure to tune into this episode of the GCP and learn what unbiased research shows your next steps to consider are for health supplementation, AND LEARN WHY! Ken brings a host of ideas with the science to Eric for you to make the best decisions for your health. Please Like & Share the Gut Check Project and THANK YOU for being a part of KBMD Health!

The Grit City Podcast
GCP: Saturday Night Grit - Beer, Weed, and Titmouse

The Grit City Podcast

Play Episode Listen Later Sep 5, 2022 97:48


Jeff, Justin, and Scott chat about Jaws, werewolf movies, bbqing, Washington State Fair, Grit City Comic Show, and much more! 00:00 – They kick off the podcast talking about the Washington State Fair, Jeff shares the word of the day, and they dive into cute animal talk. Scott talks about watching Jaws the original in 3D, Jeff talks about Jaws being his kryptonite, and they discuss how evil otters are. Justin talks about his spooky project, the #1 werewolf movie everyone should see, and they debate on if the Twilight movies count as werewolf movies. 24:33 – Justin reviews the movie Wolfcop, Jeff talks about what frustrates him the most in movies when they're trying to fill space, and Scott talks about the different services offered on Amazon Prime. Justin shares how he felt about the House of the Dragon, his recent bbq with family, and his love of the baked beans with apple pie filling the family brought. Justin talks about his love of cooking in the Traeger and plans to visit the Washington State Fair. 48:28 – Justin talks about the types of games he likes to play at the fair, the goat events at the Evergreen Fair, and they jump into Jeff's Capades. Justin and Scott talk about throwing up at the fair, accidents on fair rides, and the GCP bling they'll have at the upcoming Grit City Comic Show. Justin shares their idea for listeners to be able to give bad advice during the event for free stickers, the type of people that will be there, and if there will be after parties at the event. 72:05 – Justin reflects on Brogan's first strip club experience, the types of strip clubs in Portland, and strip club laws in Washington. Scott talks about the beautiful beaches in Mississippi, Justin talks about the construction from Fife to Tacoma being complete, and the recent article about the dating app meetup that went wrong.

Dr. Lotte: Science with Soul
Global Consciousness Project with Roger Nelson PhD.

Dr. Lotte: Science with Soul

Play Episode Listen Later Sep 3, 2022 57:19


Roger Nelson, PhD, is the Director of the Global Consciousness Project (GCP). He studied physics and sculpture at the University of Rochester, and experimental psychology at New York University and Columbia. He is the author or co-author of 100 technical papers and three books: Connected: The Emergence of Global Consciousness, Der Welt-Geist: wie wir alle miteinander verbunden sind, and Die Welt-Kraft in Dir (German) with Georg Kindel. He was Professor of Psychology at Johnson State College in northern Vermont, and in 1980 joined Princeton University's PEAR lab to coordinate research. His focus is on mental interactions, anomalous information transfer, and effects on random systems by individuals and groups. He created the GCP in 1997, building a world-spanning random number generator network designed to gather evidence of coalescing global consciousness. He lives in Princeton, NJ, and his website is

Gate City Podcast
GCP Is Back. "It's been a while."

Gate City Podcast

Play Episode Listen Later Sep 2, 2022 68:49


After an incomplete season 2, what better time to bring GCP back than football season? We are joined by Ross Martin (our first 2-time guest) to talk a little App State UNC and to figure out what's going on with Hog's hair. Tox gives his thoughts on the upcoming Panthers season during Keep Podding. Has he changed his thoughts on head coach Matt Rhule? And if you haven't seen the Netflix shows about the Rise and Fall of And1 or the Manti Te'o catfishing, take a trip down nostalgia lane with us and watch those over the weekend. A little life reset is good every now and then. It feels good to be back. Hang with us as we get back into it, and continue to learn, grow, and progress.

Getup Kubicast
Kubicast #98 - Kubernetes no Azure e .NET com Renato Groffe

Getup Kubicast

Play Episode Listen Later Sep 1, 2022 46:37


In this episode, we bring in the illustrious Renato Groffe, the guy behind the Azure, .NET, and general software development livestreams. A Senior Software Engineer and Microsoft MVP, Renato has been working with .NET since the very beginning! To explore everything he knows about the subject, we talked about .NET on Kubernetes, Linux containers, the benefits of running Kubernetes on AKS, tracing to find bugs in microservices, and Azure DevOps.
LINKS for what we discussed in the episode: Kubicast #60 - http://gtup.me/kubicast-60 Renato's LinkedIn - https://www.linkedin.com/in/renatogroffe/ Renato's Medium - https://renatogroffe.medium.com/ Coding Night - https://www.youtube.com/codingnight Canal .Net - https://www.youtube.com/canaldotnet GitHub - https://github.com/renatogroffe Azure na prática - https://azurenapratica.com/ - https://www.youtube.com/azurenapratica
RECOMMENDATIONS from the show: following the NBA games; Al-Andalus: O Legado - a documentary on the History channel; Três anúncios para um crime (Three Billboards Outside Ebbing, Missouri) - a film streaming on Star+.
REMINDER! We are getting close to Kubicast #100 and we are going to celebrate this milestone in a special way! Stay tuned!
ABOUT KUBICAST: Kubicast is produced by Getup, a Kubernetes specialist. All podcast episodes are available on the Getup website and on the main digital audio platforms. Some of them are also recorded on YouTube. #DevOps #Kubernetes #Containers #Kubicast #AzureDevOps #.NET

Google Cloud Platform Podcast
GKE Turns 7 with Tim Hockin

Google Cloud Platform Podcast

Play Episode Listen Later Aug 31, 2022 38:04


Tim Hockin joins Kaslin Fields and Anthony Bushong to celebrate GKE's seventh birthday! Tim starts with a brief background on GKE from its beginnings in 2015 and its relationship to Borg to the visions Google developers had for the software. GKE is meant to help companies focus on what they're good at and leave the rest to Google's managed Kubernetes service. Tim talks about his acting gig in a Kubernetes documentary, including some fun facts about Kubernetes' early days and the significance of the number seven. Over time, the teams working on open source Kubernetes and GKE have worked together, with advances in the open source software influencing updates in GKE. Kubernetes 1.25 was released the day this episode was recorded, and Tim describes how much work and thought goes into building these updates. GKE offers GCP users unique ways to leverage Kubernetes capabilities like scaling, and Tim shares stories about the evolution of some of these tools and his experiences with networking. Talking with the Kubernetes community has helped refine GKE multi-cluster tools to help companies solve real problems, and Tim tells us more about other features and updates coming with future iterations of GKE. KubeCon is in October, so come by and learn more! Tim Hockin Tim Hockin is a Principal Software Engineer working with Kubernetes at Google Cloud. Cool things of the week What's new with Google Cloud blog Power Your Business with Modern Cloud Apps: Strategies and Best Practices site Securing apps for Googlers using Anthos Service Mesh blog Interview GKE site Kubernetes site Anthos site Borg: The Predecessor to Kubernetes blog Enabling multi-cluster Gateways docs Cloud Load Balancing site Multi-cluster Services docs Keynote: From One to Many, the Road to Multicluster - Kaslin Fields, Developer Advocate, Google Cloud video GCP Podcast Episode 272: GKE Turns Six with Anthony Bushong, Gari Singh, and Kaslin Fields podcast What's something cool you're working on? Kaslin is working on NEXT and KubeCon stuff. Anthony is working on GKE Essentials and getting ready to go on leave. Hosts Kaslin Fields and Anthony Bushong
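A minimal sketch of what "leave the rest to the managed service" feels like in practice (illustrative, not from the episode): once the real `gcloud container clusters get-credentials` command has written your kubeconfig, the official Kubernetes Python client can inspect a GKE cluster like any other cluster. The namespace and cluster details are whatever your kubeconfig points at.

```python
# Sketch: inspect a GKE cluster with the official Kubernetes Python client, assuming
# `gcloud container clusters get-credentials <cluster>` has already set up kubeconfig.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Nodes are managed for you on GKE, but you can still list them like any cluster.
for node in core.list_node().items:
    print("node:", node.metadata.name)

# System workloads GKE runs on your behalf live in kube-system.
for pod in core.list_namespaced_pod(namespace="kube-system").items:
    print("pod :", pod.metadata.name, pod.status.phase)
```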

Cloud N Clear
EP 133 / TAMR PARTNERS WITH SADA TO FIND THE RIGHT CUSTOMERS AND NAVIGATE THE GOOGLE CLOUD ECOSYSTEM

Cloud N Clear

Play Episode Listen Later Aug 30, 2022 28:20


The Grit City Podcast
GCP: Saturday Night Grit - Tuna Fish Milkshake

The Grit City Podcast

Play Episode Listen Later Aug 29, 2022 98:12


00:00 – Scott kicks off this Saturday Night Grit discussing his recent obsessions, Jeff presents the crew with the word of the day, and Robo Brogan makes an appearance. Justin gives a shout-out to GCP's friend Rusty, talks about Rusty's upcoming event, and reflects on their first episodes. He talks about his recent camping trip, the hiking he did while there, and the amazing food prepped by the camping chefs. 24:15 – Scott shares his idea around pod-camping, what he's enjoying for a drink, and Justin talks about the Wodka he took camping. Brandon joins the conversation, talks about his plan to start a podcast, and Justin talks about the benefits he finds on the Facebook page Tacompton Files. Scott expresses his appreciation for the site, Brandon talks about his not-so-great recent vacations to Florida and Vegas, and losing his dog while he was gone. 49:37 – Justin gives another shout-out to patron Erik P, talks about GCP sponsoring him in this year's Unleashed Stadium Bowl, and the booth they're going to have at the Grit City Comic Show coming up in October. Justin discusses sasquatch hunting while camping, hauntings he experienced while working as a security guard, and introduces the Grit City Grub bit. He talks about the amazing Mexican restaurants on 72nd, Los Tamales Mexican pizza, and the other food he enjoyed from there. 72:36 – Jeff talks about the logo he's made for Grit City Chronic, talks about getting bent on a Tacoma Aroma TikTok, and Justin plays the new Pickleball song “A Dinkin' Problem”. They give a shout-out to The Cow Cats, talk about why listener Al wasn't listening to them live, and they dive into Bad Life Advice. Special Guest: Tacompton Files.

CISO Tradecraft
#93 - How to Become a Cyber Security Expert

CISO Tradecraft

Play Episode Listen Later Aug 29, 2022 29:43


How do you become a Cyber Security Expert? Hello and welcome to another episode of CISO Tradecraft, the podcast that provides you with the information, knowledge, and wisdom to be a more effective cybersecurity leader. My name is G. Mark Hardy, and today we're going to talk about how to provide advice and mentoring to help people understand how to become a cybersecurity expert. As always, please follow us on LinkedIn, and subscribe to our podcasts. As a security leader, part of your role is to develop your people. That may not be written anywhere in your job description and will probably never come up in a formal interview or evaluation, but after years of being entrusted with leadership positions, I have learned that what differentiates true leaders from those who merely accomplish a great deal is making the effort to develop their people. Now, you may have heard the phrase, "take care of your people," but I'll take issue with that. I take care of my dog. I take care of a family member who is sick, injured, or incapacitated. Why? Because they are not capable of performing all of life's requirements on their own. For the most part, your people can do this. If you are constantly doing things for people who could have otherwise done them themselves, you run the risk of creating learned helplessness syndrome. People, and even animals, can become conditioned not to do what they otherwise could do out of a belief that someone else will do it for them. I am NOT going to get political here, so don't worry about that. Rather, I want to point out that effective leaders develop their people so that they may become independent actors and eventually become effective leaders themselves. In my opinion, you should measure your success by the promotion rate of the people entrusted to you, not by your own personal career advancement or financial success. That brings me to the subject of today's podcast -- how do you counsel and mentor others on how to become a cyber security expert? If you are listening to this podcast, there's a very good chance that you already are an expert in our field, but if not, keep listening and imagine that you are mentoring yourself, because these lessons can apply to you without having to seek out a mentor. Some people figure it out, and when asked their secret, they're like Bill Murray in the movie Stripes: "We trained ourselves, sir!" But most of the time, career mastery involves learning from a number of others. Today on CISO Tradecraft we are going to analyze the question, "How do you become a Cyber Security Expert?" I'm going to address this topic as if I were addressing someone in search of an answer. Don't tune out early because you feel you've already accomplished this. Keep listening so you can get a sense of what more you could be doing for your direct reports and any proteges you may have. Let's start at the beginning. Imagine being a high school kid with absolutely zero work experience (other than maybe a paper route -- do kids still do that?). You see someone who tells you they have a cool job where they get paid to ethically hack into computers. Later on, you meet a second person who says they make really good money stopping bad actors from breaking into banks. Somehow these ideas stick in your brain, and you start to say to yourself, you know, both of those jobs sound pretty cool. You begin to see yourself having a career in Cyber Security. You definitely prefer it to jobs that require a lot of manual labor and start at low pay.
So, you start thinking, "How can I gain the skills necessary to land a dream job in cyber security that also pays well?" At CISO Tradecraft we believe that there are really four building blocks that create subject matter experts in most jobs. The four building blocks are: getting an education, getting certifications, getting relevant job experience, and building your personal brand. So, let's explore these in detail. Number 1: Getting an education. When most people think about getting an education after high school, they usually talk about getting an associate's or a bachelor's degree. If you were to look at most Chief Information Security Officers, you will see the majority of them earn a bachelor's degree in Computer Science, an Information Systems or Technology degree from a college of business such as a BS in Management of Information Systems (MIS) or Computer Information Systems, or more recently a related discipline such as a degree in Cyber Security. An associate degree is a great start for many, particularly if you don't have the money to pay for a four-year university degree right out of high school. Tuition and debt can rack up pretty quickly, leaving some students deeply in debt, and for some, that huge bill is a non-starter. Fortunately, community colleges offer quality educational opportunities at very competitive rates relative to four-year degree institutions. For example, Baltimore County Community College charges $122 per credit hour for in-county residents. A couple of miles away, Johns Hopkins University charges $2,016 per credit hour. Now, that's a HUGE difference -- over 16 times if you do the math. Now, Hopkins does have some wonderful facilities and excellent faculty, but when it comes to first- and second-year undergraduate studies, is the quality and content of the education THAT different? Well, that's up to you to decide. The important take-away is, no one should decide NOT to pursue a cybersecurity education because of lack of money. You can get started at any age on an associate degree, and that may give you enough to go on to get your first job. However, if you want to continue on to a bachelor's degree, don't give up. Later I'll explain a program that has been around since 2000 and has provided over 3,300 students with scholarships AND job placement after graduation. Back to those going directly for a bachelor's degree. Now, the good news is that your chosen profession is likely to pay quite well, so not only are you likely to be able to pay off the investment you make in your education, but it will return dividends many times what you paid, for the rest of your career. Think of financing a degree like financing a house. In exchange for your monthly mortgage payment, you get to enjoy a roof over your head and anything else you do with your home. As a cybersecurity professional, in exchange for your monthly student loan payment, you get to earn well-above-average income relative to your non-security peers, and hopefully enjoy a rewarding career. And, like the right house, the value of your career should increase over time, making your investment in your own education one of your best-performing assets. Does this mean that you 100% need a bachelor's degree to get a job in cyber? No, it does not. There are plenty of cyber professionals who speak at Black Hat and DEF CON and have never obtained a college degree.
However, if ten applicants are going for an extremely competitive job and only seven of the ten applicants have a college degree in IT or Cyber, you shouldn't be surprised when HR shortens the list of qualified applicants to only the top five applicants, all having college degrees. It may not be fair, but it's common. Plus, a U.S. Census Bureau study showed that folks who have a bachelor's degree make half a million dollars more over a career than those with an associate degree, and 1.6 times what a high school diploma holder may earn over a lifetime. So, if you want more career opportunities and want to monetize your future, get past that HR checkbox that looks for a 4-year degree. Now, some people (usually those who don't want to do academic work) will say that a formal education isn't necessary for success. After all, Bill Gates and Mark Zuckerberg were college dropouts, and they're both worth billions. True, but it's a false argument that there's a cause-and-effect relationship there. Both were undergraduates at Harvard University when they developed their business ideas. So, if someone wants to assert a degree isn't necessary, counter that you'll agree once they've been accepted into Harvard and have produced a viable business plan as a teenager while attending classes. You see, completing four years of education in a field of study proves a few things. I've interviewed candidates who said they took all of the computer science and cybersecurity courses they wanted and didn't feel a need to "waste time" with fuzzy studies such as history and English composition. Okay, I'll accept that that person had a more focused education. But consider the precedent here. When a course looked uninteresting or difficult, that candidate just passed on the opportunity. In the world of jobs and careers, there are going to be tasks that are uninteresting or difficult, and no one wants to do them, but they have to get done. As a boss, do you want someone who has shown the perseverance to stick with a difficult course and completed it with an A (or maybe even a B), or do you want someone who passed when the going got a little rough? The business world isn't academia, where you're free to pick and choose whether to complete requirements. Stuff has to get done, and someone who has a modified form of learned helplessness will most likely not follow through when that boring task comes due. Remember I said I was going to tell you how to deal with the unfortunate situation where a prospective student doesn't have enough money to pay for college? There are a couple of ways to meet that challenge. It's time to talk to your rich uncle about paying for college. That uncle is Uncle Sam. Uncle Sam can easily finance your college so you can earn your degrees in Cyber Security. However, Uncle Sam will want you to work for the government in return for paying for your education. Two example scholarships that you could look into are the Reserve Officer Training Corps (ROTC) and Scholarship for Service (SFS). ROTC is an officer accession program offered at more than 1,700 colleges and universities across the United States to prepare young adults to become officers in the U.S. Military. For scholarship students, ROTC pays 100% of tuition, fees, books, and a modest stipend for living expenses.
A successful degree program can qualify an Army second lieutenant for a Military Occupation Specialty (or MOS) such as a 17A Cyber Operations Officer, a 17B Cyber and Electronic Warfare Officer, or a 17D Cyber Capabilities Development Officer, a great start to a cybersecurity career. For the Navy, a graduating Ensign may commission as an 1810 Cryptologic Warfare Officer, 1820 Information Professional Officer, 1830 Intelligence Officer, or an 1840 Cyber Warfare Engineer.  The Navy uses designators rather than MOS's to delineate career patterns.  These designators have changed significantly over the last dozen years and may continue to evolve.  The Marine Corps has a 1702 cyberspace officer MOS.  Note that the Navy and the Marine Corps share a commissioning source in NROTC (Navy ROTC), and unlike the Army that has over 1,000 schools that participate in AROTC and the Air Force that has 1,100 associated universities in 145 detachments, there are only 63 Navy ROTC units or consortiums, although cross-town affiliates include nearly one hundred more colleges and universities. There are a lot of details that pertain to ROTC, and if you're serious about entering upon a military officer career, it's well worth the time and effort to do your research.  Not all ROTC students receive a scholarship; some receive military instruction throughout their four years and are offered a commission upon graduation.  Three- and four-year scholarship students incur a military obligation at the beginning of sophomore year, two-year scholarship students at the beginning of junior year, and one-year scholarship students at the start of senior year.  The military obligation today is eight years, usually the first four of which are on active duty; the rest may be completed in the reserves.  If you flunk out of school, you are rewarded with an enlistment rather than a commission.  These numbers were different when I was in ROTC, and they may have changed since this podcast was recorded, so make sure you get the latest information to make an informed decision. What if you want to serve your country but you're not inclined to serve in the military, or have some medical condition that may keep you from vigorous physical activity, or had engaged in recreational chemical use or other youthful indiscretions that may have disqualified you from further ROTC consideration?  There is another program worth investigating.   The National Science Foundation provides educational grants through the Scholarship For Service program or SFS for short.  SFS is a government scholarship that will pay up to 3 years of costs for undergraduate and even graduate (MS or PhD) educational degree programs.  It's understood that government agencies do not have the flexibility to match private sector salaries in cyber security.  However, by offering scholarships up front, qualified professionals may choose to stay in government service; hence SFS continues as a sourcing engine for Federal employees.  Unlike ROTC, a participant in SFS will incur an obligation to work in a non-DoD branch of the Federal government for a duration equal to the number of years of scholarship provided. In addition to tuition and education-related fees, undergraduate scholarship recipients receive $25,000 in annual academic stipends, while graduate students receive $34,000 per year.  In addition, an additional $6,000 is provided for certifications, and even travel to the SFS Job Fair in Washington DC. That job fair is an interesting affair.  
I was honored to be the keynote speaker at the SFS job fair back in 2008.  I saw entities and agencies of the Federal government that I didn't even know existed, but they all had a cybersecurity requirement, and they all were actively hiring.  SFS students qualify for "excepted service" appointments, which means they can be hired through an expedited process.  These have been virtual the last couple of years due to COVID-19 but expect in-person events to resume in the future. I wrote a recommendation for a young lady whom I've known since she was born (her mom is a childhood friend of mine), and as an electrical engineering student in her sophomore year, she was selected for a two-year SFS scholarship.  A good way to make mom and dad happy knowing they're not going to be working until 80 to pay off their kid's education bills. In exchange for a two-year scholarship, SFS will usually require a student to complete a summer internship between the first and second years of school and then work two years in a government agency after graduation.  The biggest benefit to the Scholarship for Service is you can work at a variety of places.  So, if your dream is to be a nation state hacker for the NSA, CIA, or the FBI then this offers a great chance of getting in.  These three-letter agencies heavily recruit from these programs.  As I mentioned, there are a lot of other agencies as well.  You could find work at the State Department, Department of Health and Human Services, the Department of Education, the Federal Reserve Board, and I think I remember the United States Agency for International Development (USAID).  Federal executive agencies, Congress, interstate agencies, and even state, local, or tribal governments can satisfy the service requirement.  So, you can get paid to go to college and have a rewarding job in the government that builds a nice background for your career. How would you put all this together?  I spent nine years as an advisor to the National CyberWatch Center.  Founded as CyberWatch I in 2005, it started as a Washington D.C. and Mid-Atlantic regional effort to increase the quantity and quality of the information assurance workforce.  In 2009, we received a National Science Foundation award and grants that allowed the program to go nationwide.  Today, over 370 colleges and universities are in the program.  So why the history lesson? What we did was align curriculum between two-year colleges and four-year universities, such that a student who took the designated courses in an associate degree program would have 100% of those credits transfer to the four-year university.  That is HUGE.  Without getting into the boring details, schools would certify to the Committee on National Security Systems (CNSS) (formerly known as the National Security Telecommunications and Information Systems Security Committee or NSTISSC) national training standard for INFOSEC professionals known as NSTISSI 4011.  Now with the help of an SFS scholarship, a student with little to no financial resources can earn an associate degree locally, proceed to a bachelor's degree from a respected university, have a guaranteed job coming out of school, and HAVE NO STUDENT DEBT.  Parents, are you listening carefully?  Successfully following that advice can save $100,000 and place your child on course for success. OK, so let's fast forward 3 years and say that you are getting closer to finishing a degree in Cyber Security or Computer Science.  Is there anything else that you can do while performing a summer internship?  
That brings us to our second building block: getting certifications. Number Two: Getting a Certification. Earning certifications is another key step to demonstrate that you have technical skills in cyber security. Technology changes rapidly. That means that universities typically don't provide specialized training in Windows 11, Oracle databases, Amazon Web Services, or the latest programming language. Thus, while you may come out of a computer science degree knowing how to write C++ and JavaScript, there are still a lot of skills you will need to be effective in the workforce. Additionally, most colleges teach only the free version of software. In class you don't expect to learn how to deploy antivirus software to thousands of endpoints from a vendor that would be in a Gartner Magic Quadrant, yet that is exactly what you might encounter in the workplace. So, let's look at some certifications that can help you establish your expertise as a cyber professional. We usually recommend entry-level certifications from CompTIA as a great starting point. CompTIA has some good certifications that can teach you the basics in technology. For example: CompTIA A+ can teach you how to work an IT Help Desk. CompTIA Network+ can teach you about troubleshooting, configuring, and managing networks. CompTIA Linux+ can help you learn how to perform as a system administrator supporting Linux systems. CompTIA Server+ ensures you have the skills to work in data centers as well as on-premises or hybrid environments. Remember, it's really hard to protect a technology that you know nothing about, so these are easy ways to get great experience with a technology. If you want a certification such as these from CompTIA, we recommend going to a bookstore such as Amazon, buying the official study guidebook, and setting a goal to read every day. Once you have read the official study guide, go and buy a set of practice exam questions from a site like Whizlabs or Udemy. Note this usually retails for about $10. So far this represents a total cost of about $50 ($40 to buy a book and $10 to buy practice exams). For that small investment, you can gain the knowledge base to pass a certification. You just need to pay for the exam and meet eligibility requirements. Now after you get a good grasp of important technologies such as servers, networks, and operating systems, we recommend adding several types of certifications to your resume. The first is a certification in the cloud. One notable example of that is AWS Certified Solutions Architect - Associate. Note you can find solutions architect certifications from Azure and GCP, but AWS is the most popular cloud provider, so we recommend starting there. Learning how the cloud works is extremely important. Chances are you will be asked to defend it, and you need to understand what an EC2 instance is, the types of storage used to make backups, and how to provide proper access control. So, spend the time and get certified. One course author who provides a great course is Adrian Cantrill. You can find his course link for AWS Solutions Architect in our show notes or by visiting learn.cantrill.io. The course costs $40 and has some of the best diagrams you will ever see in IT. Once again, go through a course like this and supplement with practice exam questions before going for the official certification. The last type of certification we will mention is an entry-level cyber security certification.
We usually see college students pick up a Security+ or Certified Ethical Hacker certification as a foundation to establish their knowledge in cyber security. Now, the one thing that you really gain out of Security+ is a list of technical terms and concepts in cyber security. You need to be able to understand the difference between access control, authentication, and authorization if you are to consult with a developer on what is needed before allowing access to a site. These types of certifications will help you to speak fluently as a cyber professional. That means you get more job offers, better opportunities, and interesting work. It's next to impossible to establish yourself as a cyber expert if you don't even understand the technical jargon correctly. Number Three: Getting Relevant Job Experience. OK, so you have a college degree and an IT certification or two. What's next? At this point in time, you are eligible for most entry-level jobs. So, let's find interesting work in cyber security. If you are looking for jobs in cyber security, there are two places we recommend. The first is LinkedIn. Almost all companies post there and there's a wealth of opportunities. Build out an interesting profile and look professional. Then apply, apply, apply. It will take a while to find the role you want. Also post that you are looking for opportunities and need help finding your first role. You will be surprised at how helpful the cyber community is. Here's a pro tip: add some hashtags with your post to increase its visibility. Another interesting place to consider is your local government. The government spends a lot of time investing in its employees. So go there, work a few years, and gain valuable experience. You can start by going to a government jobs webpage such as USAJobs.gov and searching for the career codes that map to cyber security. For example, search using the keyword “2210” to find the job family of Information Technology Management, where most cyber security opportunities can be found. If you find that you get one of these government jobs, be sure to look into college repayment programs. Most government jobs will help you pay off student loans, finance master's degrees in Cyber Security, or pay for your certifications. It's a great win-win to learn the trade. Once you get into an organization and begin working your first job out of college, you then generally get one big opportunity to set the direction of your career. What type of cyber professional do you want to be? Usually, we see most cyber careerists fall into one of three basic paths: offensive security, defensive security, and security auditing. The reason these three are the most common is that they have the largest number of job opportunities. So, from a pure numbers game, it's likely where you will spend the bulk of your career, although we do recommend cross-training. Mike Miller, the vCISO for Appalachia Technologies, put out a great LinkedIn post on this where he goes into more detail. Note we have a link to it in our show notes. Here are some of our own thoughts on these three common cyber pathways: Offensive Security is for those who like to find vulnerabilities in things before the bad guys do. It's fun to learn how to hack and take jobs in penetration testing and on the red team. Usually if you choose this career, you will spend time learning offensive tools like Nmap, Kali Linux, Metasploit, Burp Suite, and others.
You need to know how technology works, common flaws such as the OWASP Top Ten web application security risks, and how to find those vulnerabilities in technology. Once you do, there's a lot of interesting work awaiting. Note if these roles interest you, then try to obtain the Offensive Security Certified Professional (OSCP) certification to gain relevant skill sets that you can use at work. Defensive Security is for the protectors. These are the people who work in the Security Operations Center (SOC) or on Incident Response Teams. They look for anomalies, intrusions, and signals across the whole IT network. If something is wrong, they need to find it and identify how to fix it. Similar to offensive security professionals, they need to understand technology, but they differ in the types of tools they need to look at. You can find a defender looking at logs. Logs can come from an Intrusion Detection System, a firewall, a SIEM, antivirus, data loss prevention tools, an EDR, and many other sources. Defenders become experts in one or more of these tools, which need to be constantly monitored. Note if you are interested in these types of opportunities, look for cyber certifications such as the MITRE ATT&CK Defender (MAD) or the SANS GIAC Certified Incident Handler (GCIH) to gain relevant expertise. Security Auditing is a third common discipline. Usually reporting to the Governance, Risk, and Compliance organization, this role is typically the least technical. This discipline is about understanding a relevant standard or regulation and making sure the organization follows the intent of that standard or regulation. You will spend a lot of time learning the standards, policies, and best practices of an industry. You will perform risk assessments and third-party reviews to understand how we certify as an industry. If you would like to learn about the information systems auditing process, governance and management of IT systems, business processes such as Disaster Recovery and Business Continuity Management, and compliance activities, then we recommend obtaining the Certified Information Systems Auditor (CISA) certification from ISACA. OK, so you have a degree, you have certifications, you are in a promising job role: what's next? If you want to really become an expert, we recommend you focus on… Number Four: Building your personal brand. Essentially, find a way to give back to the industry by blogging, writing open-source software, creating a podcast, building cybersecurity tutorials, creating YouTube videos, or presenting a lecture topic on cyber security to your local OWASP chapter. Every time you do, you will get smarter on a subject. Imagine spending three hours a week reading books in cyber security. If you did that for ten years, think of how many books you could read and how much smarter you would become. Now as you share that knowledge with others, two things happen. The first is that people begin to recognize you as an industry expert. You will get invited to opportunities to connect with other smart people, which allows you to become even smarter. If you spend your time listening to smart people and reading their works, it rubs off. You will absorb knowledge from them that will spark new ideas and increase your understanding. The second thing is that when you present your ideas to others you often get feedback. Sometimes you learn that you are actually misunderstanding something. Other times you get different viewpoints.
Yes, this works in the financial sector, but it doesn't work in the government sector or in the university setting. This feedback also helps you become smarter as you understand more angles of approaching a problem. Trust us, the greatest minds in cyber spend a lot of time researching, learning, and teaching others. They all know G Mark's law, which I wrote nearly twenty years ago: "Half of what you know about security will be obsolete in eighteen months." OK, so let's recap a bit. If you want to become an expert in something, then you should do four things: 1) get a college education so that you have the greatest number of opportunities open to you, 2) get certifications to build up your technical knowledge base, 3) find relevant job experiences that allow you to grow your skill sets, and 4) finally, share what you know and build your personal brand. All of these make you smarter and will help you become a cyber expert. Thanks again for listening to us at CISO Tradecraft. We wish you the best on your journey as you Learn to Earn. If you enjoyed the show, tell one person about it this week. It could be your child, a friend looking to get into cyber security, or even a coworker. We would love to help more people, and we need your help to reach a larger audience. This is your host, G. Mark Hardy, and thanks again for listening and stay safe out there.
References:
https://www.todaysmilitary.com/education-training/rotc-programs
www.sfs.opm.gov
https://www.comptia.org/home
https://www.whizlabs.com/
https://www.udemy.com/
https://learn.cantrill.io/p/aws-certified-solutions-architect-associate-saa-c03
https://www.linkedin.com/feed/update/urn:li:activity:6965305453987737600/
https://www.offensive-security.com/pwk-oscp/
https://mitre-engenuity.org/cybersecurity/mad/
https://www.giac.org/certifications/certified-incident-handler-gcih/
https://www.ccbcmd.edu/Costs-and-Paying-for-College/Tuition-and-fees/In-County-tuition-and-fees.aspx
https://www.educationcorner.com/value-of-a-college-degree.html
https://www.collegexpress.com/lists/list/us-colleges-with-army-rotc/2580/
https://www.af.mil/About-Us/Fact-Sheets/Display/Article/104478/air-force-reserve-officer-training-corps/
https://www.netc.navy.mil/Commands/Naval-Service-Training-Command/NROTC
https://armypubs.army.mil/pub/eforms/DR_a/NOCASE-DA_FORM_597-3-000-EFILE-2.pdf
https://niccs.cisa.gov/sites/default/files/documents/SFS%20Flyer%20FINAL.pdf
https://www.nationalcyberwatch.org/