Podcasts about systems manager

  • 40 podcasts
  • 50 episodes
  • 36m avg duration
  • 1 new episode per month
  • Latest: May 28, 2025

Popularity (chart, 2017–2024)


Best podcasts about systems manager

Latest podcast episodes about systems manager

Career Zone Podcast
Careers in Product

May 28, 2025 · 12:33


In this week's episode, Oliver Laity, Careers Information and Systems Manager, is joined by Eve Carney and Allan Stewart from the University's Digital Transformation Team to discuss product management as a career. Eve, Head of Product, and Allan, Product Manager, explain what a product management role involves and share their career journeys. They offer advice to students on how to get into this field, the skills to develop, and where to find opportunities. Useful links: Book onto a skills session via Handshake. Find out more about the sessions we run on our Personal and Professional Development webpage (https://www.exeter.ac.uk/students/careers/events/personalandprofessionaldevelopment/). Contact the University's Digital Team to find out about work experience opportunities in product management.

AWS Morning Brief
Systems Manager Rip-Off Manager

May 12, 2025 · 5:08


AWS Morning Brief for the week of May 12th, with Corey Quinn. Links:
  • Amazon Connect external voice pricing changes
  • AWS Marketplace now supports SaaS products from all deployment locations
  • Amazon Q Developer elevates the IDE experience with new agentic coding experience
  • Amazon Q Developer in GitHub (in preview) accelerates code generation
  • In the works – AWS South America (Chile) Region
  • Monitoring network traffic in AWS Lambda functions
  • Announcing the end of support for AWS DynamoDB Session State Provider
  • WordFinder app: Harnessing generative AI on AWS for aphasia communication
  • Accelerating government efficiency with AWS Enterprise Support
  • Introducing the AWS Zero Trust Accelerator for Government
Sponsor: The Duckbill Group: https://www.duckbillgroup.com/
Join us for Office Hours! https://www.duckbillgroup.com/officehours/

The EdUp Experience
LIVE from Ellucian LIVE 2025 - with Mike Stone, Systems Manager, Lois Kellermann, Programmer/Analyst, & George Kriss, CIO, Kaskaskia College

Apr 9, 2025 · 15:07


It's YOUR time to #EdUp. In this episode, recorded LIVE from Ellucian LIVE 2025 in Orlando, Florida, YOUR guests are Mike Stone, Systems Manager, Lois Kellermann, Programmer/Analyst, & George Kriss, CIO, Kaskaskia College. YOUR host is Dr. Joe Sallustio.
How did Kaskaskia College complete their SaaS modernization in just 12 months? What challenges did they face during their on-premise to SaaS transition? How did they manage change across the institution during implementation? What benefits have they seen from their Ellucian Colleague modernization? Why was their financial aid implementation surprisingly smooth?
Topics include: winning the Ellucian Impact Award for SaaS modernization; moving from SQL to a Postgres database seamlessly; creating an inclusive college-wide project rather than just an IT initiative; using effective communication strategies for change management; and freeing up IT resources to focus on strategic student-facing initiatives.
Listen in to #EdUp. Do YOU want to accelerate YOUR professional development? Do YOU want to get exclusive early access to ad-free episodes, extended episodes, bonus episodes, original content, invites to special events, & more? Then BECOME AN #EdUp PREMIUM SUBSCRIBER TODAY - $19.99/month or $199.99/year (Save 17%)! Want YOUR org to cover costs? Email: EdUp@edupexperience.com
Thank YOU so much for tuning in. Join us on the next episode for YOUR time to EdUp! Connect with YOUR EdUp Team - Elvin Freytes & Dr. Joe Sallustio. Join YOUR EdUp community at The EdUp Experience! We make education YOUR business!

CPM Customer Success: Tips for Office of Finance Executives on their Corporate Performance Management journey
From Health Check to High Impact: Gabby Morales on Driving OneStream Success for Global Organizations

Feb 19, 2025 · 24:20


When Gabby Morales joined IPS Corporation as Global Financial Systems Manager, she expected to find a robust OneStream platform in action. Instead, she discovered a system that was vastly underutilized—with just one active internal user. Rather than settling for the status quo, Gabby took action. In this episode of CPM Customer Success, Gabby shares her experience conducting a OneStream health assessment, identifying key gaps, and leading a strategic reimplementation to transform OneStream into a powerful tool for financial efficiency and decision-making. She dives into the importance of extensibility, user adoption strategies, and the role of training in maximizing ROI. If you're a finance leader, system administrator, or anyone looking to unlock the full potential of OneStream, this conversation is packed with real-world insights, best practices, and success strategies to help you drive impact across your organization.

The DEI Discussions - Powered by Harrington Starr
FinTech's DEI Discussions #OnTour @ PAY360 | John Mozie, Card Systems Manager at Valero Energy

Jun 18, 2024 · 9:50


Welcome to FinTech's DEI Discussions, live from the PAY360 event in London! In this episode, Nadia sits down with John Mozie, Card Systems Manager at Valero Energy. John discusses why we should create a nurturing environment that fosters personal and professional growth. He emphasises the importance of being proud of who you are and expressing yourself freely to bring valuable ideas to the table. The conversation delves into how organisational culture, whether in old or new companies, impacts innovation. John highlights the need for effective communication and collaboration to bridge the gap between functionality and profitability, ensuring systems are future-ready. He also touches on the role of regulations and technology in the evolving FinTech landscape. FinTech's DEI Discussions is powered by Harrington Starr, global leaders in Financial Technology Recruitment. For more episodes or recruitment advice, please visit our website www.harringtonstarr.com

Material Handling Masters Podcast
MHEDA Talks: Emerging Leaders

Jun 7, 2024


In this episode of MHEDA Talks, Shari Altergott talks with previous attendees of the Emerging Leaders Conference about their journeys within the material handling industry. As we eagerly anticipate another successful year of the Emerging Leaders Conference, these individuals share their personal journeys of growth and the invaluable experiences they've gained within the leadership community. Guests: Garrett King, Project Engineering Manager, SilMan Industries; Rod Szalay, Engineering Manager, Conveyor & Caster; Stephanie Garrett, Sales, Pricing, and Systems Manager, Toyota Material Handling. Interested in attending the Emerging Leaders Conference? Join us for a fast-paced, fun, and interactive summer leadership experience designed for young (or young at heart!) material handling professionals. This year's theme, “Common Ground Leadership,” is centered on providing insights on how leaders and their teams can work together with shared goals of organizational success, personal fulfillment, and the ability to embrace failure as a function of innovative thinking. Join us on July 25 at the Emerging Leaders Conference and together we will find the common ground to help you become a better leader. LEARN MORE ABOUT THE EMERGING LEADERS CONFERENCE HERE. About the MHEDA Talks Podcast with Shari Altergott: Stay tuned for more episodes of the MHEDA Talks podcast series, where Shari will interview industry thought leaders on issues and trends affecting MHEDA members. Shari Altergott has over 20 years of experience within the Material Handling Industry. During her career she has held several positions within marketing, sales, business development and executive leadership. Today she leads The CX Edge, a customer experience consulting firm focused specifically on the material handling industry. Over her time, she has developed full-scope marketing functions that manage corporate initiatives related to customer experience, CRM, social networking, brand development, advertising and lead generation. Learn more at cx-edge.com.

Public Works Podcast
James Didawick: Utility Systems Manager @ Inboden Environmental Services. President @ Virginia Rural Water Association

Dec 22, 2023 · 30:58


Join us in this episode as we engage in a fascinating conversation with James, the Utility Systems Manager at Inboden Environmental Services and the President of Virginia Rural Water Association. Delve into the nuances that make rural water systems distinct from their larger city counterparts. James provides valuable insights into the unique challenges faced by rural systems and explains why they often require more support than their larger counterparts. With an impressive 37 years in the industry, James shares his wealth of experience, attributing his efficiency to the strategic use of time management and organizational skills. Gain valuable perspectives on the intricacies of managing water systems, and discover the importance of these critical resources in rural communities. Give the show a listen and remember to thank your local Public Works Professionals. Become a supporter of this podcast: https://www.spreaker.com/show/public-works-podcast/support.

WBEN Extras
NASA exploration ground systems manager and Western New York native Shawn Quinn at Wednesday's Space Fair and Trade Show at SUNY Buffalo State

Aug 23, 2023 · 7:52


Godz Amongst Men Podcast
Always Business Ft. Lanisha Thadison (Nurse / Entrepreneur / Business Systems Manager)

Jun 27, 2023 · 84:28


Godz chop it up with the Godezz Lanisha about making moves while being a Mother, Wife, and businesswoman. We talk about the different systems that you need to run an efficient business.

Private Markets 360°
Ep. 3 - Finding efficiency with technology (with Nick Fox of AEA Investors)

Apr 25, 2023 · 29:31


In this episode, Brandon and Jocelyn speak with Nick Fox, Data and Systems Manager at AEA Investors. They discuss the continuing digital transformation of private equity portfolio management, and Nick talks about his experience in creating and scaling effective technology systems. Subscribe to the Private Markets 360 newsletter: www.spglobal.com/PrivateMarkets360

AWS Morning Brief
Corey Invades Seattle

Mar 2, 2023 · 2:56


Last week in security news: US Military emails leaked on an exposed server, How to monitor and query IAM resources at scale, the Tool of the Week, and more! Links:
  • If you're in Seattle, come to Outer Planet Brewing this Sunday at 7PM and let Corey buy you a drink.
  • Aiden Steele writes at length about using a recent enhancement to Systems Manager to pass out a role to all of your EC2 instances.
  • US Military emails leaked on an exposed server
  • Amazon Detective launches an interactive workshop for investigating potential security issues
  • How to monitor and query IAM resources at scale – Part 1
  • Tool of the week: a break-glass role to limit production access to the AWS console

Le Podcast AWS en Français
Quoi de neuf ?

Feb 24, 2023 · 17:11


AWS announced 83 new features over the last 15 days. I've picked out 7 to share with you. In my biased review of the news, I selected a new service for telcos, new EC2 instances based on Graviton3 (AWS's Arm silicon), and a new option for managing your instances with Systems Manager. There is also a new way to deploy your Kubernetes pods. And we'll talk about improvements to the DynamoDB and EventBridge APIs that give you an immediate benefit, without you having to change anything on your side.


The ChurchGear Podcast
Mariners Church's Production Systems Manager [Evan Woertz]

Nov 7, 2022 · 46:15


Many church techs leave church production to work with an integrator. Evan made this move and then returned to the church tech life! Evan Woertz is the production systems manager at Mariners Church. He previously worked with Summit Integration. In this episode you'll hear:
1:00 In-laws and a bait and switch show disaster
6:00 Five Truths and a lie with Evan Woertz
14:50 How he got into church tech production
18:15 His time working as an integrator
35:00 Stewarding Mariners gear and budget
40:40 Tech Takeaway on working with the people you have
Plugs: Hang out with Evan on Instagram.
Resources for your Church Tech Ministry: Does your church have used gear that you need to convert into new ministry dollars? We can make you an offer here. Do you need some production gear but lack the budget to buy new gear? You can get Certified Church Owned gear here.
Connect with us: Follow us on Facebook. Hang out with us on Instagram. See all the ways we can serve your church on our Website. Get our best gear sent to your inbox each Monday before it goes public via the Early Service.

Screaming in the Cloud
Dynamic Configuration Through AWS AppConfig with Steve Rice

Oct 11, 2022 · 35:54


About Steve: Steve Rice is Principal Product Manager for AWS AppConfig. He is surprisingly passionate about feature flags and continuous configuration. He lives in the Washington DC area with his wife, 3 kids, and 2 incontinent dogs.

Links Referenced: AWS AppConfig: https://go.aws/awsappconfig

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire-drill at 4 in the morning. Software problems should drive innovation and collaboration, NOT stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature flags let developers push code to production, but hide that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.

Corey: Forget everything you know about SSH and try Tailscale. Imagine if you didn't need to manage PKI or rotate SSH keys every time someone leaves. That'd be pretty sweet, wouldn't it? With Tailscale SSH, you can do exactly that. Tailscale gives each server and user device a node key to connect to its VPN, and it uses the same node key to authorize and authenticate. Basically, you're SSHing the same way you manage access to your app. What's the benefit here? Built-in key rotation, permissions as code, connectivity between any two devices, reduced latency, and there's a lot more, but there's a time limit here. You can also ask users to reauthenticate for that extra bit of security. Sounds expensive? Nope, I wish it were. Tailscale is completely free for personal use on up to 20 devices. To learn more, visit snark.cloud/tailscale. Again, that's snark.cloud/tailscale.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This is a promoted guest episode. What does that mean? Well, it means that some people don't just want me to sit here and throw slings and arrows their way, they would prefer to send me a guest specifically, and they do pay for that privilege, which I appreciate. Paying me is absolutely a behavior I wish to endorse. Today's victim who has decided to contribute to slash sponsor my ongoing ridiculous nonsense is, of all companies, AWS. And today I'm talking to Steve Rice, who's the principal product manager on AWS AppConfig. Steve, thank you for joining me.

Steve: Hey, Corey, great to see you. Thanks for having me. Looking forward to a conversation.

Corey: As am I. Now, AppConfig does something super interesting, which I'm not aware of any other service or sub-service doing. You are under the umbrella of AWS Systems Manager, but you're not going to market with Systems Manager AppConfig. You're just AWS AppConfig. Why?

Steve: So, AppConfig is part of AWS Systems Manager. Systems Manager has, I think, 17 different features associated with it.
Some of them have an individual name that is associated with Systems Manager, some of them don't. We just happen to be one that doesn't. AppConfig is a service that's been around for a while internally before it was launched externally a couple years ago, so I'd say that's probably the origin of the name and the service. I can tell you more about the origin of the service if you're curious.Corey: Oh, I absolutely am. But I just want to take a bit of a detour here and point out that I make fun of the sub-service names in Systems Manager an awful lot, like Systems Manager Session Manager and Systems Manager Change Manager. And part of the reason I do that is not just because it's funny, but because almost everything I found so far within the Systems Manager umbrella is pretty awesome. It aligns with how I tend to think about the world in a bunch of different ways. I have yet to see anything lurking within the Systems Manager umbrella that has led to a tee-hee-hee bill surprise level that rivals, you know, the GDP of Guam. So, I'm a big fan of the entire suite of services. But yes, how did AppConfig get its name?Steve: [laugh]. So, AppConfig started about six years ago, now, internally. So, we actually were part of the region services department inside of Amazon, which is in charge of launching new services around the world. We found that a centralized tool for configuration associated with each service launching was really helpful. So, a service might be launching in a new region and have to enable and disable things as it moved along.And so, the tool was sort of built for that, turning on and off things as the region developed and was ready to launch publicly; then the regions launch publicly. It turned out that our internal customers, which are a lot of AWS services and then some Amazon services as well, started to use us beyond launching new regions, and started to use us for feature flagging. Again, turning on and off capabilities, launching things safely. And so, it became massively popular; we were actually a top 30 service internally in terms of usage. And two years ago, we thought we really should launch this externally and let our customers benefit from some of the goodness that we put in there, and some of—those all come from the mistakes we've made internally. And so, it became AppConfig. In terms of the name itself, we specialize in application configuration, so that's kind of a mouthful, so we just changed it to AppConfig.Corey: Earlier this year, there was a vulnerability reported around I believe it was AWS Glue, but please don't quote me on that. And as part of its excellent response that AWS put out, they said that from the time that it was disclosed to them, they had patched the service and rolled it out to every AWS region in which Glue existed in a little under 29 hours, which at scale is absolutely magic fast. That is superhero speed and then some because you generally don't just throw something over the wall, regardless of how small it is when we're talking about something at the scale of AWS. I mean, look at who your customers are; mistakes will show. This also got me thinking that when you have Adam, or previously Andy, on stage giving a keynote announcement and then they mention something on stage, like, “Congratulations. It's now a very complicated service with 14 adjectives in his name because someone's paid by the syllable. 
Great.”Suddenly, the marketing pages are up, the APIs are working, it's showing up in the console, and it occurs to me only somewhat recently to think about all of the moving parts that go on behind this. That is far faster than even the improved speed of CloudFront distribution updates. There's very clearly something going on there. So, I've got to ask, is that you?Steve: Yes, a lot of that is us. I can't take credit for a hundred percent of what you're talking about, but that's how we are used. We're essentially used as a feature-flagging service. And I can talk generically about feature flagging. Feature flagging allows you to push code out to production, but it's hidden behind a configuration switch: a feature toggle or a feature flag. And that code can be sitting out there, nobody can access it until somebody flips that toggle. Now, the smart way to do it is to flip that toggle on for a small set of users. Maybe it's just internal users, maybe it's 1% of your users. And so, the features available, you can—Corey: It's your best slash worst customers [laugh] in that 1%, in some cases.Steve: Yeah, you want to stress test the system with them and you want to be able to look and see what's going to break before it breaks for everybody. So, you release us to a small cohort, you measure your operations, you measure your application health, you measure your reputational concerns, and then if everything goes well, then you maybe bump it up to 2%, and then 10%, and then 20%. So, feature flags allow you to slowly release features, and you know what you're releasing by the time it's at a hundred percent. It's tempting for teams to want to, like, have everybody access it at the same time; you've been working hard on this feature for a long time. But again, that's kind of an anti-pattern. You want to make sure that on production, it behaves the way you expect it to behave.Corey: I have to ask what is the fundamental difference between feature flags and/or dynamic configuration. Because to my mind, one of them is a means of achieving the other, but I could also see very easily using the terms interchangeably. Given that in some of our conversations, you have corrected me which, first, how dare you? Secondly, okay, there's probably a reason here. What is that point of distinction?Steve: Yeah. Typically for those that are not eat, sleep, and breathing dynamic configuration—which I do—and most people are not obsessed with this kind of thing, feature flags is kind of a shorthand for dynamic configuration. It allows you to turn on and off things without pushing out any new code. So, your application code's running, it's pulling its configuration data, say every five seconds, every ten seconds, something like that, and when that configuration data changes, then that app changes its behavior, again, without a code push or without restarting the app.So, dynamic configuration is maybe a superset of feature flags. Typically, when people think feature flags, they're thinking of, “Oh, I'm going to release a new feature, so it's almost like an on-off switch.” But we see customers using feature flags—and we use this internally—for things like throttling limits. Let's say you want to be able to throttle TPS transactions per second. Or let's say you want to throttle the number of simultaneous background tasks, and say, you know, I just really don't want this creeping above 50; bad things can start to happen.But in a period of stress, you might want to actually bring that number down. 
Well, you can push out these changes with dynamic configuration—which is, again, any type of configuration, not just an on-off switch—you can push this out and adjust the behavior and see what happens. Again, I'd recommend pushing it out to 1% of your users, and then 10%. But it allows you to have these dials and switches to do that. And, again, generically, that's dynamic configuration. It's not as fun to term as feature flags; feature flags is sort of a good mental picture, so I do use them interchangeably, but if you're really into the whole world of this dynamic configuration, then you probably will care about the difference.Corey: Which makes a fair bit of sense. It's the question of what are you talking about high level versus what are you talking about implementation detail-wise.Steve: Yep. Yep.Corey: And on some level, I used to get… well, we'll call it angsty—because I can't think of a better adjective right now—about how AWS was reluctant to disclose implementation details behind what it did. And in the fullness of time, it's made a lot more sense to me, specifically through a lens of, you want to be able to have the freedom to change how something works under the hood. And if you've made no particular guarantee about the implementation detail, you can do that without potentially worrying about breaking a whole bunch of customer expectations that you've inadvertently set. And that makes an awful lot of sense.The idea of rolling out changes to your infrastructure has evolved over the last decade. Once upon a time you'd have EC2 instances, and great, you want to go ahead and make a change there—or this actually predates EC2 instances. Virtual machines in a data center or heaven forbid, bare metal servers, you're not going to deploy a whole new server because there's a new version of the code out, so you separate out your infrastructure from the code that it runs. And that worked out well. And increasingly, we started to see ways of okay, if we want to change the behavior of the application, we'll just push out new environment variables to that thing and restart the service so it winds up consuming those.And that's great. You've rolled it out throughout your fleet. With containers, which is sort of the next logical step, well, okay, this stuff gets baked in, we'll just restart containers with a new version of code because that takes less than a second each and you're fine. And then Lambda functions, it's okay, we'll just change the deployment option and the next invocation will wind up taking the brand new environment variables passed out to it. How do feature flags feature into those, I guess, three evolving methods of running applications in anger, by which I mean, of course, production?Steve: [laugh]. Good question. And I think you really articulated that well.Corey: Well, thank you. I should hope so. I'm a storyteller. At least I fancy myself one.Steve: [laugh]. Yes, you are. Really what you talked about is the evolution of you know, at the beginning, people were—well, first of all, people probably were embedding their variables deep in their code and then they realized, “Oh, I want to change this,” and now you have to find where in my code that is. And so, it became a pattern. Why don't we separate everything that's a configuration data into its own file? But it'll get compiled at build time and sent out all at once.There was kind of this breakthrough that was, why don't we actually separate out the deployment of this? 
We can separate the deployment from code from the deployment of configuration data, and have the code be reading that configuration data on a regular interval, as I already said. So now, as the environments have changed—like you said, containers and Lambda—that ability to make tweaks at microsecond intervals is more important and more powerful. So, there certainly is still value in having things like environment variables that get read at startup. We call that static configuration as opposed to dynamic configuration.And that's a very important element in the world of containers that you talked about. Containers are a bit ephemeral, and so they kind of come and go, and you can restart things, or you might spin up new containers that are slightly different config and have them operate in a certain way. And again, Lambda takes that to the next level. I'm really excited where people are going to take feature flags to the next level because already today we have people just fine-tuning to very targeted small subsets, different configuration data, different feature flag data, and allows them to do this like at we've never seen before scale of turning this on, seeing how it reacts, seeing how the application behaves, and then being able to roll that out to all of your audience.Now, you got to be careful, you really don't want to have completely different configurations out there and have 10 different, or you know, 100 different configurations out there. That makes it really tough to debug. So, you want to think of this as I want to roll this out gradually over time, but eventually, you want to have this sort of state where everything is somewhat consistent.Corey: That, on some level, speaks to a level of operational maturity that my current deployment adventures generally don't have. A common reference I make is to my lasttweetinaws.com Twitter threading app. And anyone can visit it, use it however they want.And it uses a Route 53 latency record to figure out, ah, which is the closest region to you because I've deployed it to 20 different regions. Now, if this were a paid service, or I had people using this in large volume and I had to worry about that sort of thing, I would probably approach something that is very close to what you describe. In practice, I pick a devoted region that I deploy something to, and cool, that's sort of my canary where I get things working the way I would expect. And when that works the way I want it to I then just push it to everything else automatically. Given that I've put significant effort into getting deployments down to approximately two minutes to deploy to everything, it feels like that's a reasonable amount of time to push something out.Whereas if I were, I don't know, running a bank, for example, I would probably have an incredibly heavy process around things that make changes to things like payment or whatnot. Because despite the lies, we all like to tell both to ourselves and in public, anything that touches payments does go through waterfall, not agile iterative development because that mistake tends to show up on your customer's credit card bills, and then they're also angry. I think that there's a certain point of maturity you need to be at as either an organization or possibly as a software technology stack before something like feature flags even becomes available to you. Would you agree with that, or is this something everyone should use?Steve: I would agree with that. 
Definitely, a small team that has communication flowing between the two probably won't get as much value out of a gradual release process because everybody kind of knows what's going on inside of the team. Once your team scales, or maybe your audience scales, that's when it matters more. You really don't want to have something blow up with your users. You really don't want to have people getting paged in the middle of the night because of a change that was made. And so, feature flags do help with that.So typically, the journey we see is people start off in a maybe very small startup. They're releasing features at a very fast pace. They grow and they start to build their own feature flagging solution—again, at companies I've been at previously have done that—and you start using feature flags and you see the power of it. Oh, my gosh, this is great. I can release something when I want without doing a big code push. I can just do a small little change, and if something goes wrong, I can roll it back instantly. That's really handy.And so, the basics of feature flagging might be a homegrown solution that you all have built. If you really lean into that and start to use it more, then you probably want to look at a third-party solution because there's so many features out there that you might want. A lot of them are around safeguards that makes sure that releasing a new feature is safe. You know, again, pushing out a new feature to everybody could be similar to pushing out untested code to production. You don't want to do that, so you need to have, you know, some checks and balances in your release process of your feature flags, and that's what a lot of third parties do.It really depends—to get back to your question about who needs feature flags—it depends on your audience size. You know, if you have enough audience out there to want to do a small rollout to a small set first and then have everybody hit it, that's great. Also, if you just have, you know, one or two developers, then feature flags are probably something that you're just kind of, you're doing yourself, you're pushing out this thing anyway on your own, but you don't need it coordinated across your team.Corey: I think that there's also a bit of—how to frame this—misunderstanding on someone's part about where AppConfig starts and where it stops. When it was first announced, feature flags were one of the things that it did. And that was talked about on stage, I believe in re:Invent, but please don't quote me on that, when it wound up getting announced. And then in the fullness of time, there was another announcement of AppConfig now supports feature flags, which I'm sitting there and I had to go back to my old notes. Like, did I hallucinate this? Which again, would not be the first time I'd imagine such a thing. But no, it was originally how the service was described, but now it's extra feature flags, almost like someone would, I don't know, flip on a feature-flag toggle for the service and now it does a different thing. What changed? What was it that was misunderstood about the service initially versus what it became?Steve: Yeah, I wouldn't say it was a misunderstanding. I think what happened was we launched it, guessing what our customers were going to use it as. We had done plenty of research on that, and as I mentioned before we had—Corey: Please tell me someone used it as a database. Or am I the only nutter that does stuff like that?Steve: We have seen that before. We have seen something like that before.Corey: Excellent. 
Excellent, excellent. I approve.Steve: And so, we had done our due diligence ahead of time about how we thought people were going to use it. We were right about a lot of it. I mentioned before that we have a lot of usage internally, so you know, that was kind of maybe cheating even for us to be able to sort of see how this is going to evolve. What we did announce, I guess it was last November, was an opinionated version of feature flags. So, we had people using us for feature flags, but they were building their own structure, their own JSON, and there was not a dedicated console experience for feature flags.What we announced last November was an opinionated version that structured the JSON in a way that we think is the right way, and that afforded us the ability to have a smooth console experience. So, if we know what the structure of the JSON is, we can have things like toggles and validations in there that really specifically look at some of the data points. So, that's really what happened. We're just making it easier for our customers to use us for feature flags. We still have some customers that are kind of building their own solution, but we're seeing a lot of them move over to our opinionated version.Corey: This episode is brought to us in part by our friends at Datadog. Datadog's SaaS monitoring and security platform that enables full stack observability for developers, IT operations, security, and business teams in the cloud age. Datadog's platform, along with 500 plus vendor integrations, allows you to correlate metrics, traces, logs, and security signals across your applications, infrastructure, and third party services in a single pane of glass.Combine these with drag and drop dashboards and machine learning based alerts to help teams troubleshoot and collaborate more effectively, prevent downtime, and enhance performance and reliability. Try Datadog in your environment today with a free 14 day trial and get a complimentary T-shirt when you install the agent.To learn more, visit datadoghq/screaminginthecloud to get. That's www.datadoghq/screaminginthecloudCorey: Part of the problem I have when I look at what it is you folks do, and your use cases, and how you structure it is, it's similar in some respects to how folks perceive things like FIS, the fault injection service, or chaos engineering, as is commonly known, which is, “We can't even get the service to stay up on its own for any [unintelligible 00:18:35] period of time. What do you mean, now let's intentionally degrade it and make it work?” There needs to be a certain level of operational stability or operational maturity. When you're still building a service before it's up and running, feature flags seem awfully premature because there's no one depending on it. You can change configuration however your little heart desires. In most cases. I'm sure at certain points of scale of development teams, you have a communications problem internally, but it's not aimed at me trying to get something working at 2 a.m. in the middle of the night.Whereas by the time folks are ready for what you're doing, they clearly have that level of operational maturity established. So, I have to guess on some level, that your typical adopter of AppConfig feature flags isn't in fact, someone who is, “Well, we're ready for feature flags; let's go,” but rather someone who's come up with something else as a stopgap as they've been iterating forward. Usually something homebuilt. 
And it might very well be you have the exact same biggest competitor that I do in my consulting work, which is of course, Microsoft Excel as people try to build their own thing that works in their own way.Steve: Yeah, so definitely a very common customer of ours is somebody that is using a homegrown solution for turning on and off things. And they really feel like I'm using the heck out of these feature flags. I'm using them on a daily or weekly basis. I would like to have some enhancements to how my feature flags work, but I have limited resources and I'm not sure that my resources should be building enhancements to a feature-flagging service, but instead, I'd rather have them focusing on something, you know, directly for our customers, some of the core features of whatever your company does. And so, that's when people sort of look around externally and say, “Oh, let me see if there's some other third-party service or something built into AWS like AWS AppConfig that can meet those needs.”And so absolutely, the workflows get more sophisticated, the ability to move forward faster becomes more important, and do so in a safe way. I used to work at a cybersecurity company and we would kind of joke that the security budget of the company is relatively low until something bad happens, and then it's, you know, whatever you need to spend on it. It's not quite the same with feature flags, but you do see when somebody has a problem on production, and they want to be able to turn something off right away or make an adjustment right away, then the ability to do that in a measured way becomes incredibly important. And so, that's when, again, you'll see customers starting to feel like they're outgrowing their homegrown solution and moving to something that's a third-party solution.Corey: Honestly, I feel like so many tools exist in this space, where, “Oh, yeah, you should definitely use this tool.” And most people will use that tool. The second time. Because the first time, it's one of those, “How hard could that be out? I can build something like that in a weekend.” Which is sort of the rallying cry of doomed engineers who are bad at scoping.And by the time that they figure out why, they have to backtrack significantly. There's a whole bunch of stuff that I have built that people look at and say, “Wow, that's a really great design. What inspired you to do that?” And the absolute honest answer to all of it is simply, “Yeah, I worked in roles for the first time I did it the way you would think I would do it and it didn't go well.” Experience is what you get when you didn't get what you wanted, and this is one of those areas where it tends to manifest in reasonable ways.Steve: Absolutely, absolutely.Corey: So, give me an example here, if you don't mind, about how feature flags can improve the day-to-day experience of an engineering team or an engineer themselves. Because we've been down this path enough, in some cases, to know the failure modes, but for folks who haven't been there that's trying to shave a little bit off of their journey of, “I'm going to learn from my own mistakes.” Eh, learn from someone else's. What are the benefits that accrue and are felt immediately?Steve: Yeah. So, we kind of have a policy that the very first commit of any new feature ought to be the feature flag. That's that sort of on-off switch that you want to put there so that you can start to deploy your code and not have a long-lived branch in your source code. 
But you can have your code there, it reads whether that configuration is on or off. You start with it off.And so, it really helps just while developing these things about keeping your branches short. And you can push the mainline, as long as the feature flag is off and the feature is hidden to production, which is great. So, that helps with the mess of doing big code merges. The other part is around the launch of a feature.So, you talked about Andy Jassy being on stage to launch a new feature. Sort of the old way of doing this, Corey, was that you would need to look at your pipelines and see how long it might take for you to push out your code with any sort of code change in it. And let's say that was an hour-and-a-half process and let's say your CEO is on stage at eight o'clock on a Friday. And as much as you like to say it, “Oh, I'm never pushing out code on a Friday,” sometimes you have to. The old way—Corey: Yeah, that week, yes you are, whether you want to or not.Steve: [laugh]. Exactly, exactly. The old way was this idea that I'm going to time my release, and it takes an hour-and-a-half; I'm going to push it out, and I'll do my best, but hopefully, when the CEO raises her arm or his arm up and points to a screen that everything's lit up. Well, let's say you're doing that and something goes wrong and you have to start over again. Well, oh, my goodness, we're 15 minutes behind, can you accelerate things? And then you start to pull away some of these blockers to accelerate your pipeline or you start editing it right in the console of your application, which is generally not a good idea right before a really big launch.So, the new way is, I'm going to have that code already out there on a Wednesday [laugh] before this big thing on a Friday, but it's hidden behind this feature flag, I've already turned it on and off for internals, and it's just waiting there. And so, then when the CEO points to the big screen, you can just flip that one small little configuration change—and that can be almost instantaneous—and people can access it. So, that just reduces the amount of stress, reduces the amount of risk in pushing out your code.Another thing is—we've heard this from customers—customers are increasing the number of deploys that they can do per week by a very large percentage because they're deploying with confidence. They know that I can push out this code and it's off by default, then I can turn it on whenever I feel like it, and then I can turn it off if something goes wrong. So, if you're into CI/CD, you can actually just move a lot faster with a number of pushes to production each week, which again, I think really helps engineers on their day-to-day lives. The final thing I'm going to talk about is that let's say you did push out something, and for whatever reason, that following weekend, something's going wrong. The old way was oop, you're going to get a page, I'm going to have to get on my computer and go and debug things and fix things, and then push out a new code change.And this could be late on a Saturday evening when you're out with friends. If there's a feature flag there that can turn it off and if this feature is not critical to the operation of your product, you can actually just go in and flip that feature flag off until the next morning or maybe even Monday morning. So, in theory, you kind of get your free time back when you are implementing feature flags. 
So, I think those are the big benefits for engineers in using feature flags.Corey: And the best way to figure out whether someone is speaking from a position of experience or is simply a raving zealot when they're in a position where they are incentivized to advocate for a particular way of doing things or a particular product, as—let's be clear—you are in that position, is to ask a form of the following question. Let's turn it around for a second. In what scenarios would you absolutely not want to use feature flags? What problems arise? When do you take a look at a situation and say, “Oh, yeah, feature flags will make things worse, instead of better. Don't do it.”Steve: I'm not sure I wouldn't necessarily don't do it—maybe I am that zealot—but you got to do it carefully.Corey: [laugh].Steve: You really got to do things carefully because as I said before, flipping on a feature flag for everybody is similar to pushing out untested code to production. So, you want to do that in a measured way. So, you need to make sure that you do a couple of things. One, there should be some way to measure what the system behavior is for a small set of users with that feature flag flipped to on first. And it could be some canaries that you're using for that.You can also—there's other mechanisms you can do that to: set up cohorts and beta testers and those kinds of things. But I would say the gradual rollout and the targeted rollout of a feature flag is critical. You know, again, it sounds easy, “I'll just turn it on later,” but you ideally don't want to do that. The second thing you want to do is, if you can, is there some sort of validation that the feature flag is what you expect? So, I was talking about on-off feature flags; there are things, as when I was talking about dynamic configuration, that are things like throttling limits, that you actually want to make sure that you put in some other safeguards that say, “I never want my TPS to go above 1200 and never want to set it below 800,” for whatever reason, for example. Well, you want to have some sort of validation of that data before the feature flag gets pushed out. Inside Amazon, we actually have the policy that every single flag needs to have some sort of validation around it so that we don't accidentally fat-finger something out before it goes out there. And we have fat-fingered things.Corey: Typing the wrong thing into a command structure into a tool? “Who would ever do something like that?” He says, remembering times he's taken production down himself, exactly that way.Steve: Exactly, exactly, yeah. And we've done it at Amazon and AWS, for sure. And so yeah, if you have some sort of structure or process to validate that—because oftentimes, what you're doing is you're trying to remediate something in production. Stress levels are high, it is especially easy to fat-finger there. So, that check-and-balance of a validation is important.And then ideally, you have something to automatically roll back whatever change that you made, very quickly. So AppConfig, for example, hooks up to CloudWatch alarms. If an alarm goes off, we're actually going to roll back instantly whatever that feature flag was to its previous state so that you don't even need to really worry about validating against your CloudWatch. It'll just automatically do that against whatever alarms you have.Corey: One of the interesting parts about working at Amazon and seeing things in Amazonian scale is that one in a million events happen thousands of times every second for you folks. 
What lessons have you learned by deploying feature flags at that kind of scale? Because one of my problems and challenges with deploying feature flags myself is that in some cases, we're talking about three to five users a day for some of these things. That's not really enough usage to get insights into various cohort analyses or A/B tests.Steve: Yeah. As I mentioned before, we build these things as features into our product. So, I just talked about the CloudWatch alarms. That wasn't there originally. Originally, you know, if something went wrong, you would observe a CloudWatch alarm and then you decide what to do, and one of those things might be that I'm going to roll back my configuration.So, a lot of the mistakes that we made that caused alarms to go off necessitated us building some automatic mechanisms. And you know, a human being can only react so fast, but an automated system there is going to be able to roll things back very, very quickly. So, that came from some specific mistakes that we had made inside of AWS. The validation that I was talking about as well. We have a couple of ways of validating things.You might want to do a syntactic validation, which really you're validating—as I was saying—the range between 100 and 1000, but you also might want to have sort of a functional validation, or we call it a semantic validation so that you can make sure that, for example, if you're switching to a new database, that you're going to flip over to your new database, you can have a validation there that says, “This database is ready, I can write to this table, it's truly ready for me to switch.” Instead of just updating some config data, you're actually going to be validating that the new target is ready for you. So, those are a couple of things that we've learned from some of the mistakes we made. And again, not saying we aren't making mistakes still, but we always look at these things inside of AWS and figure out how we can benefit from them and how our customers, more importantly, can benefit from these mistakes.Corey: I would say that I agree. I think that you have threaded the needle of not talking smack about your own product, while also presenting it as not the global panacea that everyone should roll out, willy-nilly. That's a good balance to strike. And frankly, I'd also say it's probably a good point to park the episode. If people want to learn more about AppConfig, how you view these challenges, or even potentially want to get started using it themselves, what should they do?Steve: We have an informational page at go.aws/awsappconfig. That will tell you the high-level overview. You can search for our documentation and we have a lot of blog posts to help you get started there.Corey: And links to that will, of course, go into the [show notes 00:31:21]. Thank you so much for suffering my slings, arrows, and other assorted nonsense on this. I really appreciate your taking the time.Steve: Corey thank you for the time. It's always a pleasure to talk to you. Really appreciate your insights.Corey: You're too kind. Steve Rice, principal product manager for AWS AppConfig. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment. But before you do, just try clearing your cookies and downloading the episode again. 
You might be in the 3% cohort for an A/B test, and you [want to 00:32:01] listen to the good one instead.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
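A rough illustration of the pattern Steve describes in this episode (application code polling AWS AppConfig for flag data every few seconds and checking a toggle before exposing a feature), sketched in Python with boto3. The application, environment, and profile identifiers and the "new-checkout" flag name are hypothetical, and the JSON shape of the flag payload is assumed rather than taken from the episode.

```python
import json
import time

import boto3

# The AppConfig Data API: open a configuration session once, then poll it.
client = boto3.client("appconfigdata")

session = client.start_configuration_session(
    ApplicationIdentifier="my-app",                  # hypothetical
    EnvironmentIdentifier="production",              # hypothetical
    ConfigurationProfileIdentifier="feature-flags",  # hypothetical
    RequiredMinimumPollIntervalInSeconds=15,
)
token = session["InitialConfigurationToken"]
flags = {}

while True:
    # Poll for the latest configuration. The body comes back empty when
    # nothing has changed since the previous poll, so keep the cached copy.
    response = client.get_latest_configuration(ConfigurationToken=token)
    token = response["NextPollConfigurationToken"]
    body = response["Configuration"].read()
    if body:
        flags = json.loads(body)

    # The feature ships dark: the code path is already in production but
    # stays hidden until someone flips the flag, with no push or restart.
    if flags.get("new-checkout", {}).get("enabled", False):
        print("serving the new checkout path")
    else:
        print("serving the existing checkout path")

    time.sleep(response.get("NextPollIntervalInSeconds", 15))
```

In a real service the polling loop would live in a background thread or sidecar and the flag check would sit at the decision point in the request path; the point is simply that the toggle is read at runtime rather than baked in at deploy time.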
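The safeguards discussed later in the conversation (gradual rollout plus automatic rollback when a CloudWatch alarm fires) are configured on the AppConfig side rather than in application code. A minimal sketch, again in Python with boto3; the strategy name, application and environment IDs, and the alarm and role ARNs are all hypothetical.

```python
import boto3

appconfig = boto3.client("appconfig")

# Roll a configuration change out linearly over 30 minutes, then "bake" for
# another 15 minutes before the deployment is considered complete.
appconfig.create_deployment_strategy(
    Name="linear-30min-15min-bake",   # hypothetical name
    DeploymentDurationInMinutes=30,
    GrowthType="LINEAR",
    GrowthFactor=10.0,                # grow by 10% of targets per interval
    FinalBakeTimeInMinutes=15,
    ReplicateTo="NONE",
)

# Attach a CloudWatch alarm to the environment. If the alarm goes off during
# the deployment or bake window, AppConfig rolls the change back on its own.
appconfig.update_environment(
    ApplicationId="abc1234",          # hypothetical application ID
    EnvironmentId="def5678",          # hypothetical environment ID
    Monitors=[
        {
            "AlarmArn": "arn:aws:cloudwatch:us-east-1:111122223333:alarm:my-app-errors",
            "AlarmRoleArn": "arn:aws:iam::111122223333:role/appconfig-monitor-role",
        }
    ],
)
```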

Mums On Cloud Nine
How To Decide Which Job Is Right For You And When (part 3)

Aug 8, 2022 · 18:49


How To Decide Which Job Is Right For You and When (Part 3). In the third of our four-part Parents Flex Up mini-series, I am talking about why I believe that Salesforce CRM is the living, beating heart of any business, and whether that effectively makes it recession proof. If you want to have a successful career as a Salesforce professional and you have an appetite for learning, then your potential really is limitless. This mini-series of Mums on Cloud Nine is all about educating and inspiring parents to thrive in a Salesforce career so that you can earn more, work less and love your life. If you enjoyed this episode please follow, share and leave a review to help others find us.
Highlights from this episode:
(02:08) Are Salesforce jobs safe in a recession?
(04:54) Adding value with Salesforce
(07:21) The impacts of the pandemic
(08:54) So which Salesforce role is right for you?
(12:13) The role of the Systems Manager
(17:55) Coming in Part Four...
Find out more about how Supermums empowers women around the globe with training and recruitment services. Join us to train, volunteer, sponsor or hire our amazing women in tech. Visit www.supermums.org. Find out about our free short courses to start or progress your career in tech: https://supermums.org/accelerate-your-salesforce-career/. Download our positive affirmation screensavers to remind yourself how to be a Mum on Cloud Nine: https://supermums.org/screensavers/. Supermums helps women to boost their Salesforce career from starting out to progressing up their career ladder. Sign up to their weekly newsletter to benefit from weekly tips, events and insight: https://supermums.org/insights/newsletter/

Whiteboard.fm
Luca Orio – Design Systems Manager at Netflix – Whiteboard.fm #038

Dec 20, 2021 · 24:11


Our guest for today is Luca Orio, Design Systems Manager at Netflix. He's an Italian designer based out of San Francisco. He's been a designer for 19 years now, and in this interview he shares some of the greatest and rarest insights about what you experience as a designer and how to get the most out of it. An amazing thing about Luca: before entering the design space, he was a drummer in a metal band that became quite popular and signed with labels. But his love and passion for design are what made him choose this path. Make sure you watch the entire episode, you're going to love it! ⭐ Subscribe to Whiteboard.fm to stay updated with more interviews and clips: https://www.youtube.com/whiteboardfm?sub_confirmation=1

The DotCom Magazine Entrepreneur Spotlight
Justino, Paulo. Founder & CEO, FCJ Venture Builder, A DotCom Magazine Exclusive Interview

Oct 26, 2021 · 26:07


About Paulo Justino and FCJ Venture Builder: He got his start in 1986, at the age of 19, in a furniture company, Soartes Coloniais Ind. Com. Ltda. In 1991, he launched Doctor Work, a software product aimed at medical offices that was distributed by Brasoft, and he sold the rights to the software to a company that presented it in Trisoft's investment program. He worked for 10 years as IT and Systems Manager in wholesale companies, and acted as a consultant in the downsizing process of the municipalities of Uberlândia and Uberaba. In 1999, he launched PowerCity, software for managing city halls and public agencies. He served for 15 years as Commercial Director in technology companies, and was responsible for the process with the SLTI of the Ministry of Planning for the creation and release of the Jaguar public software. In 2013, he became one of the founding shareholders of FCJ Participações S.A., where he is the current CEO. Venture builders are organizations that build startups using their own resources, breaking with traditional models such as venture capital funds, accelerators and incubators. Venture builders are also known as startup factories, since this is a model that shares resources such as infrastructure, marketing support, legal and accounting, among others. Unlike traditional models, the Venture Builder 4.0 licensed by FCJ embodies the culture of open innovation: instead of creating its own startups, FCJ seeks these solutions in the market to work side by side with entrepreneurs in developing these ideas.

Screaming in the Cloud
Open Core, Real-Time Observability Born in the Cloud with Martin Mao

Screaming in the Cloud

Play Episode Listen Later Jun 22, 2021 41:41


About MartinMartin Mao is the co-founder and CEO of Chronosphere. He was previously at Uber, where he led the development and SRE teams that created and operated M3. Prior to that, he was a technical lead on the EC2 team at AWS and has also worked for Microsoft and Google. He and his family are based in our Seattle hub and he enjoys playing soccer and eating meat pies in his spare time.Links: Chronosphere: https://chronosphere.io/ Email: contact@chronosphere.io TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks.Corey: If your mean time to WTF for a security alert is more than a minute, it's time to look at Lacework. Lacework will help you get your security act together for everything from compliance service configurations to container app relationships, all without the need for PhDs in AWS to write the rules. If you're building a secure business on AWS with compliance requirements, you don't really have time to choose between antivirus or firewall companies to help you secure your stack. That's why Lacework is built from the ground up for the Cloud: low effort, high visibility and detection. To learn more, visit lacework.com.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I've often talked about observability, or as I tend to think of it when people aren't listening, hipster monitoring. Today, we have a promoted episode from a company called Chronosphere, and I'm joined today by Martin Mao, their CEO and co-founder. 
Martin, thank you for coming on the show and suffering my slings and arrows.Martin: Thanks for having me on the show, Corey, and looking forward to our conversation today.Corey: So, before we dive into what you're doing now, I'm always a big sucker for origin stories. Historically, you worked at Microsoft and Google, but then you really sort of entered my sphere of things that I find myself having to care about when I'm lying awake at night and the power goes out by working on the EC2 team over at AWS. Tell me a little bit about that. You've hit the big three cloud providers at this point. What was that like?Martin: Yeah, it was an amazing experience, I was a technical lead on one of the EC2 teams, and I think when an opportunity like that comes up on such a core foundational project for the cloud, you take it. So, it was an amazing opportunity to be a part of leading that team at a fairly early stage of AWS and also helping them create a brand new service from scratch, which was AWS Systems Manager, which was targeted at fleet-wide management of EC2 instances, so—Corey: I'm a tremendous fan of Systems Manager, but I'm still looking for the person who named Systems Manager Session Manager because, at this point, I'm about to put a bounty out on them. Wonderful service; terrible name.Martin: That was not me. So, yes. But yeah, no, it was a great experience, for sure, and I think just seeing how AWS operated from the inside was an amazing learning experience for me. And being able to create foundational pieces for the cloud was also an amazing experience. So, only good things to say about my time at AWS.Corey: And then after that, you left and you went to Uber where you led development and SRE teams that created and operated something called M3. Alternately, I'm misreading your bio, and you bought an M3 from BMW and went to drive for Uber. Which is it?Martin: I wish it was the second one, but unfortunately, it is the first one. So yes, I did leave AWS and joined Uber in 2015 to lead a core part of their monitoring and eventually larger observability team. And that team did go on to build open-source projects such as M3—which perhaps we should have thought about the name and the conflict with the car when we named it at the time—and other projects such as Jaeger for distributed tracing as well, and a logging backend system, too. So, yeah, definitely spent many years there building out their observability stack.Corey: We're going to tie a theme together here. You were at Microsoft, you were at Google, you were at AWS, you were at Uber, and you look at all of this and decide, “All right. My entire career has been spent in large companies doing massive globally scaled things. I'm going to go build a small startup.” What made you decide that, all right, this is something I'm going to pursue?Martin: So, definitely never part of the plan. As you mentioned, a lot of big tech companies, and I think I always got a lot of joy building large distributed systems, handling lots of load, and solving problems at a really grand scale. And I think the reason for doing a startup was really the situation that we were in. So, at Uber as I mentioned, myself and my co-founder led the core part of the observability team there, and we were lucky to happen to solve the problem, not just for Uber but for the broader community, especially the community adopting cloud-native architecture. 
And it just so happened that we were solving the problem of Uber in 2015, but the rest of the industry has similar problems today.So, it was almost the perfect opportunity to solve this now for a broader range of companies out there. And we already had a lot of the core technology built-in open-source as well. So, it was more of an opportunity rather than a long-term plan or anything of that sort, Corey.Corey: So, before we dive into the intricacies of what you've built, I always like to ask people this question because it turns out that the only thing that everyone agrees on is that everyone else is wrong. What is the dividing line, if any, between monitoring and observability?Martin: That's a great question, and I don't know if there's an easy answer.Corey: I mean, my cynical approach is that, “Well, if you call it monitoring, you don't get to bring in SRE-style salaries. Call it observability and no one knows what the hell we're talking about, so sure, it's a blank check at that point.” It's cynical, and probably not entirely correct. So, I'm curious to get your take on it.Martin: Yeah, for sure. So, you know, there's definitely a lot of overlap there, and there's not really two separate things. In my mind at least, monitoring, which has been around for a very long time, has always been around notification and having visibility into your systems. And then as the system's got more complex over time, being able to understand that and not just have visibility into it but understand it a little bit more required, perhaps, additional new data types to go and solve those problems. And that's how, in my mind, monitoring sort of morphed into observability. So, perhaps one is a subset of the other, and they're not competing concepts there. But at least that's my opinion. I'm sure there are plenty out there that would, perhaps, disagree with that.Corey: On some level, it almost hits to the adage of, past a certain point of scale with distributed systems, it's never a question of is the app up or down, it's more a question of how down is it? At least that's how it was explained to me at one point, and it was someone who was incredibly convincing, so I smiled and nodded and never really thought to question it any deeper than that. But I look back at the large-scale environments I've been in, and yeah, things are always on fire, on some level, and ideally, there are ways to handle and mitigate that. Past a certain point, the approach of small-scale systems stops working at large scale. I mean, I see that over in the costing world where people will put tools up on GitHub of, “Hey, I ran this script, and it works super well on my 10 instances.”And then you try and run the thing on 10,000 instances, and the thing melts into the floor, hits rate limits left and right because people don't think in terms of those scales. So, it seems like you're sort of going from the opposite end. Well, this is how we know things work at large scale; let's go ahead and build that out as an initially smaller team. Because I'm going to assume, not knowing much about Chronosphere yet, that it's the sort of thing that will help a company before they get to the hyperscaler stage.Martin: A hundred percent, and you're spot on there, Corey. And it's not even just a company going from small-stage, small-scale simple systems to more complicated ones, actually, if you think about this shift in the cloud right now, it's really going from cloud to cloud-native. 
So, going from VMs to container on the infrastructure tier, and going from monoliths to microservices. So, it's not even the growth of the company, necessarily, or the growth of the load that the system has to handle, but this shift to containers and microservices heavily accelerates the growth of the amount of data that gets produced, and that is causing a lot of these problems.Corey: So, Uber was famous for disrupting, effectively, the taxi market. What made you folks decide, “I know. We're going to reinvent observability slash monitoring while we're at it, too.” What was it about existing approaches that fell down and, I guess, necessitated you folks to build your own?Martin: Yeah, great question, Corey. And actually, it goes to the first part; we were disrupting the taxi industry, and I think the ability for Uber to iterate extremely fast and respond as a business to changing market conditions was key to that disruption. So, monitoring and observability was a key part of that because you can imagine it was providing all of the real-time visibility to not only what was happening in our infrastructure and applications, but the business as well. So, it really came out of a necessity more than anything else. We found that in order to be more competitive, we had to adopt what is probably today known as cloud-native architecture, adopt running on containers and microservices so that we can move faster, and along with that, we found that all of the existing monitoring tools we were using, weren't really built for this type of environment. And it was that that was the forcing function for us to create our own technologies that were really purpose-built for this modern type of environment that gave us the visibility we needed to, to be competitive as a company and a business.Corey: So, talk to me a little bit more about what observability is. I hear people talking about it in terms of having three pillars; I hear people talking about it, to be frank, in a bunch of ways so that they're trying to, I guess, appropriate the term to cover what they already are doing or selling because changing vocabulary is easier than changing an entire product philosophy. What is it?Martin: Yeah, we actually had a very similar view on observability, and originally we thought that it is a combination of metrics, logs, and traces, and that's a very common view. You have the three pillars, it's almost like three checkboxes; you tick them off, and you have, quote-unquote, “Observability.” And that's actually how we looked at the problem at Uber, and we built solutions for each one of those and we checked all three boxes. What we've come to realize since then is perhaps that was not the best way to look at it because we had all three, but what we realized is that actually just having all three doesn't really help you with the ultimate goal of what you want from this platform, and having more of each of the types of data didn't really help us with that, either. So, taking a step back from there and when we really looked at it, the lesson that we learned in our view on observability is really more from an end-user perspective, rather than a data type or data input perspective.And really, from an end-user perspective, if you think about why you want to use your monitoring tool or your observability tool, you really want to be notified of issues and remediate them as quickly as possible. And to do that, it really just comes down to answering three questions. “Can I get notified when something is wrong? Yes or no? 
Do I even know something is wrong?”The second question is, “Can I triage it quickly to know what the impact is? Do I know if it's impacting all of my customers or just a subset of them, and how bad is the issue? Can I go back to sleep if I'm being paged at two o'clock in the morning?”And the third one is, “Can I figure out the underlying root cause to the problem and go and actually fix it?” So, this is how we think about the problem now, is from the end-user perspective. And it's not that you don't need metrics, logs, or distributed traces to solve the problem, but we are now orienting our solution around solving the problem for the end-user, as opposed to just orienting our solution around the three data types, per se.Corey: I'm going to self-admit to a fun billing experience I had once with a different monitoring vendor whom I will not name because it turns out, you can tell stories, you can name names, but doing both gets you in trouble. It was a more traditional approach in a simpler time, and they wound up sending me a message saying, “Oh, we're hitting rate limits on CloudWatch. Go ahead and open a ticket asking for them to raise it.” And in a rare display of foresight, AWS respond to my ticket with a, “We can do this, but understand at this level of concurrency, it will cost something like $90,000 a month on increased charges, with that frequency, for that many metrics.” And that was roughly twice what our AWS bill was in those days, and, “Oh.” So, I'm curious as to how you can offer predictable pricing when you can have things that emit so much data so quickly. I believe you when you say you can do it; I'm just trying to understand the philosophy of how that works.Martin: As I said earlier, we started to approach this by trying to solve it in a very engineering fashion where we just wanted to create more efficient backend technology so that it would be cheaper for the increased amount of data. What we realized over time is that no matter how much cheaper we make it, the amount of data being produced, especially from monitoring and observability, kept increasing, and not even in a linear fashion but in an exponential fashion. And because of that, it really switched the problem not to how efficiently can we store this, it really changed our focus of the problem to how our users using this data, and do they even understand the data that's being produced? So, in addition to the couple of properties I mentioned earlier, around cost accounting and rate-limiting—those are definitely required—the other things we try to make available for our end-users is introspection tools such that they understand the type of data that's being produced. It's actually very easy in the monitoring and observability world to write a single line of code that actually produces a lot of data, and most developers don't understand that that single line of code produces so much data.So, our approach to this is to provide a tool so that developers can introspect and understand what is produced on the backend side, not what is being inputted from their code, and then not only have an understanding of that but also dynamic ways to deal with it. So that again, when they hit the rate limit, they don't just have to monitor it less, they understand that, “Oh, I inserted this particular label and now I have 20 times the amount of data that I needed before. 
Do I really need that particular label in there? And if not, perhaps dropping it dynamically on the server-side is a much better way of dealing with that problem than having to roll back your code and change your metric instrumentation.” So, for us, the way to deal with it is not to just make the backend even more efficient, but really to have end-users understand the data that they're producing, and make decisions on which parts of it are really useful and which parts of it they perhaps don't want, or perhaps want to retain for shorter periods of time, for example, and then allow them to actually implement those changes on that data on the backend. And that is really how the end-users control the bills and the cost themselves.Corey: So, there are a number of different companies in the observability space that have different approaches to what they solve for. In some cases, to be very honest, it seems like, well, I have 15 different observability and monitoring tools. Which ones do you replace? And the answer is, “Oh, we're number 16.” And it's easy to be cynical and down on that entire approach, but then you start digging into it and they're actually right.I didn't expect that to be the case. What was your perspective that made you look around the, let's be honest, fairly crowded landscape of observability companies' tools that gave insight into the health status and well-being of various applications in different ways, and say, “You know, no one's quite gotten this right, yet. I have a better idea.”Martin: Yeah, you're completely correct, and perhaps the previous environments that everybody was operating in, there were a lot of different tools for different purposes. A company would purchase an infrastructure monitoring tool, or perhaps even a network monitoring tool, and then they would have, perhaps, an APM solution for the applications, and then perhaps BI tools for the business. So, there was always historically a collection of different tools to go and solve this problem. And I think, again, what has really happened recently with this shift to cloud-native is that the need for a lot of this data to be in a single tool has become more important than ever. So, you think about your microservices running on a single container today: if a single container dies in isolation without knowing, perhaps, which microservice was running on it, that doesn't mean very much, and just having that visibility is not going to be enough, just like if you don't know which business use case that microservice was serving, that's not going to be very useful for you, either.So, with cloud-native architecture, there is more of a need to have all of this data and visibility in a single tool, which hasn't historically happened. And also, none of the existing tools today—so if you think about both the existing APM solutions out there and the existing hosted solutions that exist in the world today, none of them were really built for a cloud-native environment because you can think about even the timing that these companies were created at, you know, back in the early 2010s, Kubernetes and containers weren't really a thing. So, a lot of these tools weren't really built for the modern architecture that we see most companies shifting towards. So, the opportunity was really to build something for where we think the industry and everyone's technology stack was going to be, as opposed to where the technology stack has been in the past.
And that was really the opportunity there, and it just so happened that we had built a lot of these solutions for a similar type environment for Uber many years before. So, leveraging a lot of our lessons learned there put us in a good spot to build a new solution that we believe is fairly different from everything else that exists today in the market, and it's going to be a good fit for companies moving forward.Corey: So, on your website, one of the things that you, I assume, put up there just to pick a fight—because if there's one thing these people love, it's fighting—is a use case is outgrowing Prometheus. The entire story behind Prometheus is, “Oh, it scales forever. It's what the hyperscalers would use. This came out of the way that Google does things.” And everyone talks about Google as if it's this mythical Valhalla place where everything is amazing and nothing ever goes wrong. I've seen the conference talks. And that's great. What does outgrowing Prometheus look like?Martin: Yeah, that's a great question, Corey. So, if you look at Prometheus—and it is the graduated and the recommended monitoring tool for cloud-native environments—if you look at it and the way it scales, actually, it's a single binary solution, which is great because it's really easy to get started. You deploy a single instance, and you have ingestion, storage, and visibility, and dashboarding, and alerting, all packaged together into one solution, and that's definitely great. And it can scale by itself to a certain point and is definitely the recommended starting point, but as you really start to grow your business, increase your cluster sizes, increase the number of applications you have, actually isn't a great fit for horizontal scale. So, by default, there isn't really a high availability and horizontal scale built into Prometheus by default, and that's why other projects in the CNCF, such as Cortex and Thanos were created to solve some of these problems.So, we looked at the problem in a similar fashion, and when we created M3, the open-source metrics platform that came out of Uber, it was also approaching it from this different perspective where we built it to be horizontally scalable, and highly reliable from the beginning, but yet, we don't really want it to be a, let's say, competing project with Prometheus. So, it is actually something that works in tandem with Prometheus, in the sense that it can ingest Prometheus metrics and you can issue Prometheus query language queries against it, and it will fulfill those. But it is really built for a more scalable environment. And I would say that once a company starts to grow and they run into some of these pain points and these pain points are surrounding how reliable a Prometheus instance is, how you can scale it up beyond just giving it more resources on the VM that it runs on, vertical scale runs out at a certain point. Those are some of the pain points that a lot of companies do run into and need to solve eventually. And there are various solutions out there, both in open-source and in the commercial world, that are designed to solve those pain points. M3 being one of the open-source ones and, of course, Chronosphere being one of the commercial ones.Corey: This episode is sponsored in part by Salesforce. Salesforce invites you to “Salesforce and AWS: Whats Ahead for Architects, Admins and Developers” on June 24th at 10AM, Pacific Time. 
Its a virtual event where you'll get a first look at the latest innovations of the Salesforce and AWS partnership, and have an opportunity to have your questions answered. Plus you'll get to enjoy an exclusive performance from Grammy Award winning artist The Roots! I think they're talking about a band, not people with super user access to a system. Registration is free at salesforce.com/whatsahead.Corey: Now, you've also gone ahead and more or less dangled raw meat in front of a tiger in some respects here because one of the things that you wind up saying on your site of why people would go with Chronosphere is, “Ah, this doesn't allow for bill spike overages as far as what the Chronosphere bill is.” And that's awesome. I love predictable pricing. It's sort of the antithesis of cloud bills. But there is the counterargument, too, which is with many approaches to monitoring, I don't actually care what my monitoring vendor is going to charge me because they wind up costing me five times more, just in terms of CloudWatch charges. How does your billing work? And how do you avoid causing problems for me on the AWS side, or other cloud provider? I mean, again, GCP and Azure are not immune from this.Martin: So, if you look at the built-in solutions by the cloud providers, a lot of those metrics and monitoring you get from those like CloudWatch or Stackdriver, a lot of it you get included for free with your AWS bill already. It's only if you want additional data and additional retention, do you choose to pay more there. So, I think a lot of companies do use those solutions for the default set of monitoring that they want, especially for the AWS services, but generally, a lot of companies have custom monitoring requirements outside of that in the application tier, or even more detailed monitoring in the infrastructure that is required, especially if you think about Kubernetes.Corey: Oh, yeah. And then I see people using CloudWatch as basically a monitoring, or metric, or log router, which at its price point, don't do that. [laugh]. It doesn't end well for anyone involved.Martin: A hundred percent. So, our solution and our approach is a little bit different. So, it doesn't actually go through CloudWatch or any of these other inbuilt cloud-hosted solutions as a router because, to your point, there's a lot of cost there as well. It actually goes and collects the data from the infrastructure tier or the applications. And what we have found is that not only does the bill for monitoring climb exponentially—and not just as you grow; especially as you shift towards cloud-native architecture—our very first take of solving that problem is to make the backend a lot more efficient than before so it just is cheaper overall.And we approached it that way at Uber, and we had great results there. So, when we created an—originally before M3, 8% of Uber's infrastructure bill was spent on monitoring all the infrastructure and the application. And by the time we were done with M3, the cost was a little over 1%. So, the very first solution was just make it more efficient. And that worked for a while, but what we saw is that over time, this grew again.And there wasn't any more efficiency, we could crank out of the backend storage system. There's only so much optimization you can do to the compression algorithms in the backend and how much you can get there. 
So, what we realized the problem shifted towards was not, can we store this data more efficiently because we're already reaching limitations there, and what we noticed was more towards getting the users of this data—so individual developers themselves—to start to understand what data is being produced, how they're using it, whether it's even useful, and then taking control from that perspective. And this is not a problem isolated to the SRE team or the observability team anymore; if you think about modern DevOps practices, every developer needs to take control of monitoring their own applications. So, this responsibility is really in the hands of the developers.And the way we approached this from a Chronosphere perspective is really in four steps. The first one is that we have cost accounting so that every developer, and every team, and the central observability team know how much data is being produced. Because it's actually a hard thing to measure, especially in the monitoring world. It's—Corey: Oh, yeah. Even AWS bills get this wrong. Like if you're sending data between one availability zone to another in the same region, it charges a penny to leave an AZ and a penny to enter an AZ in that scenario. And the way that they reflect this on the bill is they double it. So, if you're sending one gigabyte across AZ link in a month, you'll see two gigabytes on the bill and that's how it's reflected. And that is just a glimpse of the monstrosity that is the AWS billing system. But yeah, exposing that to folks so they can understand how much data their application is spitting off? Forget it. That never happens.Martin: Right. Right. And it's not even exposing it to the company as a whole, it's to each use case, to each developer so they know how much data they are producing themselves. They know how much of the bill is being consumed. And then the second step in that is to put up bumper lanes to that so that once you hit the limit, you don't just get a surprise bill at the end of the month.When each developer hits that limit, they rate-limit themselves and they only impact their own data; there is no impact to the other developers or to the other teams, or to the rest of the company. So, we found that those two were necessary initial steps, and then there were additional steps beyond that, to help deal with this problem.Corey: So, in order for this to work within a multi-day lag, in some cases, it's a near certainty that you're looking at what is happening and the expense that is being incurred in real-time, not waiting for it to pass its way through the AWS billing system and then do some tag attribution back.Martin: A hundred percent. It's in real-time for the stream of data. And as I mentioned earlier, for the monitoring data we are collecting, it goes straight from the customer environment to our backend so we're not waiting for it to be routed through the cloud providers because, rightly so, there is a multi-day or multi-hour delay there. So, as the data is coming straight to our backend, we are actively in real-time measuring that and cost accounting it to each individual team. And in real-time, if the usage goes above what is allocated, will actually limit that particular team or that particular developer, and prevent them by default from using more. And with that mechanism, you can imagine that's how the bill is controlled and controlled in real-time.Corey: So, help me understand, on some level; is your architecture then agent-based? 
Is it a library that gets included in the application code itself? All of the above and more? Something else entirely? Or is this just such a ridiculous question that you can't believe that no one has ever asked it before?Martin: No, it's a great question, Corey, and would love to give some more insight there. So, it is an agent that runs in the customer environment because it does need to be something there that goes and collects all the data we're interested in to send it to the backend. This agent is unlike a lot of APM agents out there where it does, sort of, introspection, things like that. We really believe in the power of the open-source community, and in particular, open-source standards like the Prometheus format for metrics. So, what this agent does is it actually goes and discovers Prometheus endpoints exposed by the infrastructure and applications, and scrapes those endpoints to collect the monitoring data to send to the backend.And that is the only piece of software that runs in our customer environments. And then from that point on, all of the data is in our backend, and that's where we go and process it and get visibility into the end-users as well as store it and make it available for alerting and dashboarding purposes as well.Corey: So, when did you found Chronosphere? I know that you folks recently raised a Series B—congratulations on that, by the way; that generally means, at least if I understand the VC world correctly, that you've established product-market fit and now we're talking about let's scale this thing. My experience in startup land was, “Oh, we've raised a Series B, that means it's probably time to bring in the first DevOps hire.” And that was invariably me, and I wound up screaming and freaking out for three months, and then things were better. So, that was my exposure to Series B.But it seems like, given what you do, you probably had a few SRE folks kicking around, even on the product team because everything you're saying so far absolutely resonates with the experiences someone who has run these large-scale things in production. No big surprise there. Is that where you are? I mean, how long have you been around?Martin: Yeah, so we've been around for a couple of years thus far—so still a relatively new company, for sure. A lot of the core team were the team that both built the underlying technology and also ran it in production the many years at Uber, and that team is now here at Chronosphere. So, you can imagine from the very beginning, we had DevOps and SREs running this hosted platform for us. And it's the folks that actually built the technology and ran it for years running it again, outside of Uber now. And then to your first question, yes, we did establish fairly early on, and I think that is also because we could leverage a lot of the technology that we had built at Uber, and it sort of gave us a boost to have a product ready for the market much faster.And what we're seeing in the industry right now is the adoption of cloud-native is so fast that it's sort of accelerating a need of a new monitoring solution that historical solutions, perhaps, cannot handle a lot of the use cases there. It's a new architecture, it's a new technology stack, and we have the solution purpose-built for that particular stack. 
So, we are seeing fairly fast acceleration and adoption of our product right now.Corey: One problem that an awful lot of monitoring slash observability companies have gotten into in the last few years—at least it feels this way, and maybe I'm wildly incorrect—is that it seems that the target market is the Ubers of the world, the hyperscalers where once you're at that scale, then you need a tool like this, but if you're just building a standard three-tier web app, oh, you're nowhere near that level of scale. And the problem with go-to-market in those stories inherently seems that by the time you are a hyperscalers, you have already built a somewhat significant observability apparatus, otherwise you would not have survived or stayed up long enough to become a hyperscalers. How do you find that the on-ramp looks? I mean, your website does talk about, “When you outgrow Prometheus.” Is there a certain point of scale that customers should be at before they start looking at things like Chronosphere?Martin: I think if you think about the companies that are born in the cloud today and how quickly they are running and they are iterating their technology stack, monitoring is so critical to that. It's the real-time visibility of these changes that are going out multiple times a day is critical to the success and growth of a lot of new companies. And because of how critical that piece is, we're finding that you don't have to be a giant hyperscalers like Uber to need technology like this. And as you rightly pointed out, you need technology like this as you scale up. And what we're finding is that while a lot of large tech companies can invest a lot of resources into hiring these teams and building out custom software themselves, generally, it's not a great investment on their behalf because those are not companies that are selling monitoring technology as their core business.So generally, what we find is that it is better for companies to perhaps outsource or purchase, or at least use open-source solutions to solve some of these problems rather than custom-build in-house. And we're finding that earlier and earlier on in a company's lifecycle, they're needing technology like this.Corey: Part of the problem I always ran into was—again, I come from the old world of grumpy Unix sysadmins—for me, using Nagios was my approach to monitoring. And that's great when you have a persistent stateful, single node or a couple of single nodes. And then you outgrow it because well, now everything's ephemeral and by the time you realize that there's an outage or an issue with a container, the container hasn't existed for 20 minutes. And you better have good telemetry into what's going on and how your application behaves, especially at scale because at that point, edge cases, one-in-a-million events happen multiple times a second, depending upon scale, and that's a different way of thinking. I've been somewhat fortunate in that, in my experience at least, I've not usually had to go through those transformative leaps.I've worked with Prometheus, I've worked with Nagios, but never in the same shop. That's the joy of being a consultant. You go into one environment, you see what they're doing and you take notes on what works and what doesn't, you move on to the next one. And it's clear that there's a definite defined benefit to approaching observability in a more modern way. But I despair the idea of trying to go from one to the other. 
And maybe that just speaks to a lack of vision for me.Martin: No, I don't think that's the case at all, Corey. I think we are seeing a lot of companies do this transition. I don't think a lot of companies go and ditch everything that they've done. And things that they put years of investment into, there's definitely a gradual migration process here. And what we're seeing is that a lot of the newer projects, newer environments, newer efforts that have been kicked off are being monitored and observed using modern technology like Prometheus.And then there's also a lot of legacy systems which are still going to be around and legacy processes which are still going to be around for a very long time. It's actually something we had to deal with that at Uber as well; we were actually using Nagios and a StatsD Graphite stack for a very long time before switching over to a more modern tag-like system like Prometheus. So—Corey: Oh, modern Nagios. What was it, uh… that's right, Icinga. That's what it was.Martin: Yes, yes. It was actually the system that we were using Uber. And I think for us, it's not just about ditching all of that investment; it's really about supporting this migration as well. And this is why both in the open-source technology M3, we actually support both the more legacy data types, like StatsD and the Graphite query language, as well as the more modern types like Prometheus and PromQL. And having support for both allows for a migration and a transition.And not even a complete transition; I'm sure there will always be StatsD, Graphite data in a lot of these companies because they're just legacy applications that nobody owns or touches anymore, and they're just going to be lying around for a long time. So, it's actually something that we proactively get ahead of and ensure that we can support both use cases even though we see a lot of companies and trending towards the modern technology solutions, for sure.Corey: The last point I want to raise has always been a personal, I guess, area of focus for me. I allude to it, sometimes; I've done a Twitter thread or two on it, but on your website, you say something that completely resonates with my entire philosophy, and to be blunt is why in many cases, I'm down on an awful lot of vendor tooling across a wide variety of disciplines. On the open-source page on your site, near the bottom, you say, and I quote, “We want our end-users to build transferable skills that are not vendor or product-specific.” And I don't think I've ever seen a vendor come out and say something like that. Where did that come from?Martin: Yeah. If you look at the core of the company, it is built on top of open-source technology. So, it is a very open core company here at Chronosphere, and we really believe in the power of the open-source community and in particular, perhaps not even individual projects, but industry standards and open standards. So, this is why we don't have a proprietary protocol, or proprietary agent, or proprietary query language in our product because we truly believe in allowing our end-users to build these transferable skills and industry-standard skills. And right now that is using Prometheus as the client library for monitoring and PromQL as the query language.And I think it's not just a transferable skill that you can bring with you across multiple companies, it is also the power of that broader community. So, you can imagine now that there is a lot more sharing of, “Hey, I am monitoring, for example, MongoDB. 
How should I best do that?” Those skills can be shared because the common language that they're all speaking, the queries that everybody is sharing with each other, the dashboards everybody is sharing with each other, are all, sort of, open-source standards now. And we really believe in the power that and we really do everything we can to promote that. And that is why in our product, there isn't any proprietary query language, or definitions of dashboarding, or [learning 00:35:39] or anything like that. So yeah, it is definitely just a core tenant of the company, I would say.Corey: It's really something that I think is admirable, I've known too many people who wind up, I guess, stuck in various environments where the thing that they work on is an internal application to the company, and nothing else like it exists anywhere else, so if they ever want to change jobs, they effectively have a black hole on their resume for a number of years. This speaks directly to the opposite. It seems like it's not built on a lock-in story; it's built around actually solving problems. And I'm a little ashamed to say how refreshing that is [laugh] just based upon what that says about our industry.Martin: Yeah, Corey. And I think what we're seeing is actually the power of these open-source standards, let's say. Prometheus is actually having effects on the broader industry, which I think is great for everybody. So, while a company like Chronosphere is supporting these from day one, you see how pervasive the Prometheus protocol and the query language are that actually all of these probably more traditional vendors providing proprietary protocols and proprietary query languages all actually have to have Prometheus—or not ‘have to have,' but we're seeing that more and more of them are having Prometheus compatibility as well. And I think that just speaks to the power of the industry, and it really benefits all of the end-users and the industry as a whole, as opposed to the vendors, which we are really happy to be supporters of.Corey: Thank you so much for taking the time to speak with me today. If people want to learn more about what you're up to, how you're thinking about these things, where can they find you? And I'm going to go out on a limb and assume you're also hiring.Martin: We're definitely hiring right now. And you can find us on our website at chronosphere.io or feel free to shoot me an email directly. My email is martin@chronosphere.io. Definitely massively hiring right now, and also, if you do have problems trying to monitor your cloud-native environment, please come check out our website and our product.Corey: And we will, of course, include links to that in the [show notes 00:37:41]. Thank you so much for taking the time to speak with me today. I really appreciate it.Martin: Thanks a lot for having me, Corey. I really enjoyed this.Corey: Martin Mao, CEO and co-founder of Chronosphere. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment speculating about how long it took to convince Martin not to name the company ‘Observability Manager Chronosphere Manager.'Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. 
The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
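Two of the technical threads in the conversation above, a collector scraping Prometheus-format endpoints and a single line of instrumentation quietly multiplying data volume, are easy to make concrete. The sketch below is a minimal illustration using the open-source Python prometheus_client library, not Chronosphere's actual agent or instrumentation; the port, metric name, and label names are illustrative assumptions.

import random
import time

from prometheus_client import Counter, start_http_server

# One "line of instrumentation": a counter for handled requests. Every distinct
# combination of label values becomes its own time series, so adding a label such
# as a per-customer ID can multiply the data this one counter produces.
REQUESTS = Counter(
    "app_requests_total",
    "Requests handled, partitioned by service and status code.",
    ["service", "status"],  # deliberately low-cardinality labels
)

if __name__ == "__main__":
    # Expose metrics at http://localhost:8000/metrics for any Prometheus-compatible
    # scraper: Prometheus itself, or an agent that discovers and scrapes the endpoint.
    start_http_server(8000)
    while True:
        REQUESTS.labels(service="checkout", status=random.choice(["200", "500"])).inc()
        time.sleep(1)

With a couple of services and a handful of status codes this yields only a few series; add a high-cardinality label such as a per-customer ID and the same single line can emit tens of thousands of series, which is exactly the kind of growth the cost accounting, rate limits, and server-side label dropping described in the episode are meant to surface.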

Screaming in the Cloud
Making Compliance Suck Less with AJ Yawn

Screaming in the Cloud

Play Episode Listen Later Jun 17, 2021 34:13


About AJ: AJ Yawn is a seasoned cloud security professional with over a decade of senior information security experience, including extensive experience managing a wide range of cybersecurity compliance assessments (SOC 2, ISO 27001, HIPAA, etc.) for a variety of SaaS, IaaS, and PaaS providers. AJ advises startups on cloud security and serves on the Board of Directors of the ISC2 Miami chapter as the Education Chair. He is also a founding board member of the National Association of Black Compliance and Risk Management Professionals, regularly speaks on information security podcasts and at events, and contributes blogs and articles to the information security community, including publications such as CISOMag, InfosecMag, HackerNoon, and ISC2. Before ByteChek, AJ served as a senior member of a national cybersecurity professional services firm's SOC, ISO, and healthcare compliance practice, helping grow the practice from a nine-person team to over 100 team members serving clients all over the world. AJ also spent over five years on active duty in the United States Army, earning the rank of Captain. AJ is relentlessly committed to learning and to encouraging others around him to improve themselves. He leads by example and has earned several industry-recognized certifications, including the AWS Certified Solutions Architect-Professional, CISSP, AWS Certified Security Specialty, AWS Certified Solutions Architect-Associate, and PMP. AJ is also involved with the AWS training and certification department, volunteering with the AWS Certification Examination Subject Matter Expert program. AJ graduated from Georgetown University with a Master of Science in Technology Management and from Florida State University with a Bachelor of Science in Social Science. While at Florida State, AJ played on the Florida State University men's basketball team, making back-to-back trips to the NCAA tournament under Coach Leonard Hamilton.Links: ByteChek: https://www.bytechek.com/ Blog post, Everything You Need to Know About SOC 2 Trust Service Criteria CC6.0 (Logical and Physical Access Controls): https://help.bytechek.com/en/articles/4567289-everything-you-need-to-know-about-soc-2-trust-service-criteria-cc6-0-logical-and-physical-access-controls LinkedIn: https://www.linkedin.com/in/ajyawn/ Twitter: https://twitter.com/AjYawn Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of Cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org, in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of canary.tools.
You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks.Corey: This episode is sponsored in part by our friends at Lumigo. If you've built anything from serverless, you know that if there's one thing that can be said universally about these applications, it's that it turns every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build applications. It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You've created more problems for yourself; make one of them go away. To learn more, visit lumigo.io.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by AJ Yawn, co-founder, and CEO of ByteChek. AJ, thanks for joining me.AJ: Thanks for having me on, Corey. Really excited about the conversation.Corey: So, what is ByteChek? It sounds like it's one of those things—‘byte' spelled as in computer term, not teeth, and ‘chek' without a second C in it because frugality looms everywhere, and we save money where we can by sometimes not buying the extra letter or vowel. So, what is ByteChek?AJ: Exactly. You get it. ByteChek is a cybersecurity compliance software company, built with one goal in mind: make compliance suck less. And the way that we do that is by automating the worst part of compliance, which is evidence collection and taking out a lot of the subjective nature of dealing with an audit by connecting directly where the evidence lives and focusing on security.Corey: That sound you hear is Pandora's Box creaking open because back before I started focusing on AWS bills, I spent a few months doing a deep dive PCI project for workloads going into AWS because previously I've worked in regulated industries a fair bit. I've been a SOC 2 control owner, I've gone through the PCI process multiple times, I've dabbled with HIPAA as a consultant. And I thought, “Huh, there might be a business need here.” And it turns out, yeah, there really is.The problem for me is that the work made me want to die. I found it depressing; it was dull; it was a whole lot of hurry up and wait. And that didn't align with how I approach the world, so I immediately got the hell out of there. You apparently have a better perspective on, you know, delivering things companies need and don't need to have constant novel entertainment every 30 seconds. So, how did you start down this path, and what set you on this road?AJ: Yeah, great question. I started in the army as a information security officer, worked in a variety of different capacities. And when I left the military—mainly because I didn't like sleeping outside anymore—I got into cybersecurity compliance consulting. 
And that's where I got first into compliance and seeing the backwards way that we would do things with old document requests and screenshots. And I enjoyed the process because there was a reason for it, like you said.There's a business value to this, going through this compliance assessments. So, I knew they were important, but I hated the way we were doing it. And while there, I just got exposed to so many companies that had to go through this, and I just thought there was a better way. Like, typical entrepreneur story, right? You see a problem and you're like, “There has to be a better way than grabbing screenshots of the EC2 console.” And set out to build a product to do that, to just solve that problem that I saw on a regular basis. And I tell people all the time, I was complicit in making compliance stuff before. I was in that role and doing the things that I think sucked and not focused on security. And that's what we're solving here at ByteChek.Corey: So, I've dabbled in it and sort of recoiled in horror. You've gone into this to the point where you are not only handling it for customers but in order to build software that goes in a positive direction, you have to be deeply steeped in this yourself. As you're going down this process, what was your build process like? Were you talking to auditors? Were you talking to companies who had to deal with auditors? What aspects of the problem did you approach this from?AJ: It's really both aspects. And that's where I think it's just a really unique perspective I have because I've talked with a lot of auditors; I was an auditor and worked with auditors' hand-in-hand and I understood the challenges of being an auditor, and the speed that you have to move when you're in the consulting industry. But I also talked to a lot of customers because those were the people I dealt with on a regular basis, both from a sales perspective and from, you know, sitting there with the CTOs trying to figure out how to design a secure solution in AWS. So, I took it from the approach of you can't automate compliance; you can't fix the audit problem by only focusing on one side of the table, which is what currently happens where one side of the table is the client, then you get to automate evidence collection. But if the auditors can't use that information that you've automated, then it's still a bad process for both people. So, I took the approach of thinking about this from both, “How do I make this easier for auditors but also make it easier for the clients that are forced to undergo these audits?”Corey: From a lot of perspectives, having compliance achieved, regardless of whether it's PCI, whether it's HIPAA, whether it's SOC 2, et cetera, et cetera, et cetera, the reason that a companies go through it is that it's an attestation that they are, for better or worse, doing the right things. In some cases, it's a requirement to operate in a regulated industry. In other cases, it's required to process credit card transactions, which is kind of every industry, and in still others, it's an easy shorthand way of saying that we're not complete rank amateurs at these things, so as a result, we're going to just pass over the result of our most recent SOC 2 audit to our prospective client, and suddenly, their security folks can relax and not send over weeks of questionnaires on the security front. That means that, for some folks, this is more or less a box-checking exercise rather than an actual good-faith effort to improve processes and posture.AJ: Correct. 
And I think that's actually the problem with compliance is it's looked at as a check-the-box exercise, and that's why there's no security value out of it. That's why you can pick up a SOC 2 report for someone that's hosted on AWS, and you don't see any mention of S3 buckets. You can do a ctrl+F, and you literally don't see anything in a security evaluation about S3 buckets, which is just insane if you know anything about security on AWS. And I think it's because of what you just described, Corey; they're often asked to do this by a regulator, or by a customer, or by a vendor, and the result is, “Hurry up and get this report so that we can close this deal,”—or we can get to the next level with this customer, or with this investor, whatever it may be—instead of, let's go through this, let's have an auditor come in and look at our environment to improve it, to improve this security, which is where I hope the industry can get to because audits aren't going anywhere; people are going to continue to do them and spend thousands of dollars on them, so there should be some security value out of them, in my opinion.Corey: I love using encrypting data at rest as an example of things that make varying amounts of sense because, sure, on your company laptops, if someone steals an employee's laptop from a coffee shop, or from the back of their car one night, yeah, you kind of want the exposure to the company to be limited to replacing the hardware. I mean, even here at The Duckbill Group, where we are not regulated, we've gone through no formal audits, we do have controls in place to ensure that all company laptops have disk encryption turned on. It makes sense from that perspective. And in the data center, it was also important because there were a few notable heists where someone either improperly disposed drives and corporate data wound up on eBay or someone in one notable instance drove a truck through the side of the data center wall, pulled a rack into the bed of the truck and took off, which is kind of impressive [laugh] no matter how you slice it. But in the context of a hyperscale cloud provider like AWS, you're not going to be able to break into their data centers, steal a drive—and of course, it has to be the right collection of drives and the right machines—and then find out how to wind up reassembling that data later.It's just not a viable attack strategy. Now, you can spend days arguing with auditors around something like that, or you can check the box ‘encrypt at rest' and move on. And very often, that is the better path. I'm not going to argue with auditors about that. I'm going to bend the knee, check the box, and get back to doing the business thing that I care about. That is a reasonable approach, is it not?AJ: It is, but I think that's the fault of the auditor because good security requires context. You can't just apply a standard set of controls to every organization, as you're describing, where I would much rather the auditor care about, “Are there any public S3 buckets? What are the security group situation like on that account? How are they managing their users? How are they storing credentials there in the cloud environment as well?Are they using multiple accounts?” So, many other things to care about other than protecting whether or not someone will be able to pull off the heist of the [laugh] 21st century. 
So, I think from a customer perspective, it's the right model: don't waste time arguing points with your auditors, but on the flip side, find an auditor that has more technical knowledge, that can understand context, because security work requires good context and audits require context. And that's the problem with audits now; we're using one framework or several frameworks to apply to every organization. And I've been in the consulting space, like you, Corey, for a while. I have not seen the same environment at any two customers. Every customer is different. Every customer has a different setup, so it doesn't make sense to say every control should apply to every company.
Corey: And it feels on some level like you wind up getting staff accustomed to treating it as a box-checking exercise. "Right, it's dumb that we wind up having to encrypt S3 buckets, but it's for the audit to just check the box and move on." So, people do it, then they move on to the next item, which is, "Okay, great. Are there any public S3 buckets?" And they treat it with the same, "Yeah, whatever. It's for the audit," box-checking approach? No, no, that one's actually serious. You should invest significant effort and time into making sure that it's right.
AJ: Exactly. Exactly. And that's where the value of a true compliance assessment that is focused on security comes into play, because it's no longer about checking the box, it's like, "Hey, there's a weakness here. A weakness that you probably should have identified. So, let's go fix the weakness, but let's talk about your process to find those weaknesses and then hopefully use some automation to remediate them." Because a lot of the issues in the cloud you can trace back to: why was there not a control in place to prevent this or detect this? And it's sad that compliance assessments are not the thing that can catch those, that they are not the other safeguard in place to identify those. And it's because we are treating the entire thing like a check-the-box exercise and not pulling out those items that really matter, and that's just focusing on security. Which is ultimately what these compliance reports are proving: customers are asking for these reports because they want to know if their data is going to be secure. And that's what the report is supposed to do, but on the flip side, everyone knows the organization may not be taking it that seriously, and they may be treating it like a check-the-box exercise.
Corey: So, while I have you here, we'll divert for a minute because I'm legitimately curious about this one. At a scale of legitimate security concern to, "This is a check-the-box exercise," where do things like rotating passwords every 60 days or rotating IAM credentials every 90 days fall?
AJ: I think it again depends on the organization. I don't think that you need to rotate passwords regularly, personally. I don't know how strong of a control that is if people are doing that, because they're just going to start to make things up that are easy—
Corey: Put the number at the end and increment by one every time. Great. Good work.
AJ: Yep. So, I think again, it just depends on your organization and what the organization is doing. If you're talking about managing IAM access keys and rotating those, are your engineers even using the CLI? Are they using their access keys? Because if they're not, what are you rotating? You're just rotating [laugh] stale keys that have never been used.
Or if you don't even have any IAM users, maybe you're using SSO and they're all using Okta or something else and they're using an IAM role to come in there. So, it's just—again, it's context. And I think the problem is, a lot of folks don't understand AWS or they don't understand the cloud. And when I say folks, I mean auditors. They don't understand that, so they're just going to ask for everything. "Did you rotate your passwords? Did you do this? Did you do that?" And it may not even make sense for you based off of your environment, but again, is it worth the fight with the auditor, or do you just give them whatever they want so you can go about your way, whether or not it's a legit security concern?
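(AJ's point about stale access keys is straightforward to verify before an auditor ever asks. A hedged boto3 sketch; the 90-day threshold is arbitrary and read-only IAM permissions are assumed.)

```python
import boto3
from datetime import datetime, timezone

iam = boto3.client("iam")

def report_stale_access_keys(max_age_days=90):
    """Print access keys that have never been used, or not used recently."""
    now = datetime.now(timezone.utc)
    paginator = iam.get_paginator("list_users")
    for page in paginator.paginate():
        for user in page["Users"]:
            username = user["UserName"]
            keys = iam.list_access_keys(UserName=username)["AccessKeyMetadata"]
            for key in keys:
                last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
                last_used = last["AccessKeyLastUsed"].get("LastUsedDate")
                if last_used is None:
                    print(f"{username}: {key['AccessKeyId']} has never been used")
                elif (now - last_used).days > max_age_days:
                    print(f"{username}: {key['AccessKeyId']} idle for {(now - last_used).days} days")

if __name__ == "__main__":
    report_stale_access_keys()
```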
“Great, how do we get access to your Active Directory?” “Yeah, we don't have one of those.” “Okay, how do we get on the internet here?” “Oh, here's the wireless password.” “Wait, there's not a separate guest network?” “That's right.” “Well, now I have privileged access because I'm on your network.”It's like, “Technically, that's true because if you weren't on this network, you wouldn't be able to print to that printer over there in the corner. But that's the only thing that it lets you do.” Everything else is identity-based, not IP address allow listing, so instead, it's purely just convenience to get the internet; you're about as privileged on this network as you would be at a Starbucks half a world away. And they look at you like you're an idiot. And that should have been the early warning sign that this was not going to be a typical audit conversation. Now, though in 2021, it feels like it's time to find a new auditor.AJ: Exactly. Yeah. Especially because organizations—unfortunately, last year security budgets were some of the things that were first cut when budgets were cut due to the global pandemic, S0—Corey: Well, I'm sure that'll have no lasting repercussions.AJ: Right. [laugh]. That's always a great decision. So compliance, that means compliance budgets have been significantly slashed because that's the first thing that gets cut is spending money on compliance activities. So, the cheaper option, oftentimes, is going to mean even less technical resources.Which is why I don't think manual audits, human audits are going to be a thing moving forward. I think companies are realizing that it doesn't make sense to go through a process, hire an auditor who's selling you on all this technical expertise, and then the staff that's showing up and assigned to your project has never seen inside the AWS console and truly doesn't even know what the cloud is. They think that iCloud on their phone is the only cloud that they're familiar with. And that's what happens; organizations are sold that they're going to get cybersecurity technical experts from these human auditors and then somebody shows up without that experience or expertise. So, you have to start to rely on tools, rely on technologies, and that can be native technologies in the cloud or third-party tools.But I don't think you can actually do a good audit in the cloud manually anyways, no matter how technical you are. I know a lot about AWS but I still couldn't do a great audit by myself in the cloud because auditing is time-based, you bill by the hour and it doesn't make sense for me to do all of those manual things that tools and technologies out there exist to do for us.Corey: So, you started a software company aimed at this problem, not a auditing firm and not a consulting company. How are you solving this via the magic of writing code?AJ: It's just connecting directly where the evidence lives. So, for AWS, I actually tried to do this in a non-software way prior, when I was just a typical auditor, and I was just asking our clients to provision us cross-account access to go in their environment with some security permissions to get evidence directly. And that didn't pass the sniff test at my consulting firm, even though some of the clients were open to it. But we built software to go out to the tools where the evidence directly lives and continuously assess the environment. 
So, that's AWS, that's GitHub, that's Jira, that's all of the different tools where you normally collect this evidence, and instead of having to prove to auditors in a very manual fashion, by grabbing screenshots, you just simply connect using APIs to get the evidence directly from the source, which is more technically accurate. The way that auditing has been done in the past is using sampling methodologies and all these other outdated things, but that doesn't really assess if all of your data stores are configured in the right way, or if you're actually backing up your data. It's me randomly picking one and saying, "Yes, you're good to go." So, we connect directly where the evidence lives and hopefully get to a point where, when you get a SOC 2 report, you know that a tool checked it. So, you know that the tool went out and looked at every single data store, or it went out and looked at every single EC2 instance, or security group, whatever it may be, and it wasn't dependent on how the auditor felt that day.
Corey: This episode is sponsored in part by ChaosSearch. As basically everyone knows, trying to do log analytics at scale with an ELK stack is expensive, unstable, time-sucking, demeaning, and just basically all-around horrible. So why are you still doing it—or even thinking about it—when there's ChaosSearch? ChaosSearch is a fully managed scalable log analysis service that lets you add new workloads in minutes, and easily retain weeks, months, or years of data. With ChaosSearch you store, connect, and analyze and you're done. The data lives and stays within your S3 buckets, which means no managing servers, no data movement, and you can save up to 80 percent versus running an ELK stack the old-fashioned way. It's why companies like Equifax, HubSpot, Klarna, Alert Logic, and many more have all turned to ChaosSearch. So if you're tired of your ELK stacks falling over before it suffers, or of having your log analytics data retention squeezed by the cost, then try ChaosSearch today and tell them I sent you. To learn more, visit chaossearch.io.
Corey: That sounds like it is almost too good to be true. And at first, my immediate response is, "This is amazing," followed immediately by that transitioning into anger: "Why isn't this a native thing that everyone offers?" I mean, to that end, AWS announced 'Audit Manager' recently, which I haven't had the opportunity to dive into in any deep sense yet, because it's still brand new, and they decided to release it alongside 15,000 other things, but does that start getting a little bit closer to something companies need? Or is it a typical day-one first release of an Amazon service where, "Well, at least we know the direction you're heading in. We'll check back in two years."
AJ: Exactly. It's the day-one Amazon service release where, "Okay. AWS is getting into the audit space. That's good to know." But right now, at its core, that AWS service, it's just not usable for audits, for several reasons. One, auditors cannot read the outputs of the information from Audit Manager. And it goes back to the earlier point where you can't automate compliance, you can't fix compliance if the auditors can't use the information, because then they're going to go back to asking dumb questions and dumb evidence requests if they don't understand the information coming out of it.
And it's just because the output right now is a dump of JSON, essentially, in a Word document, for some strange reason.
Corey: Okay, that is the perfect example right there of two worlds colliding. It's like, "Well, we're going to put JSON out of it because that's the language developers speak. Well, what do auditors prefer?" "I don't know, Microsoft Word?" "Okay, sounds good." Even Microsoft Excel is a better answer than [laugh] that. And that is just… okay, that is just Looney Tunes awful.
AJ: Yep. Yeah, exactly. And that's one problem. The other problem is, Audit Manager requires a compliance manager. If we think about that tool, a developer is not going to use Audit Manager; it's going to be somebody responsible for compliance. It requires them to go manually select every service that their company is using. A compliance manager, one, doesn't even know what the services are; they have no clue what some of these services are. Two, how are they going to know if you're using Lambda randomly somewhere, or Systems Manager randomly somewhere, or Elastic Beanstalk in one account or one region? Config here, config—they have to just go through and manually—and I'm like, "Well, that doesn't make any sense because AWS knows what services you're using. Why not just already have those selected and you pull those in scope?" So, the chances of something being excluded are extremely high because it's a really manual process for users to decide what they are actually assessing. And then lastly, the frameworks need a lot of work. Auditing is complex because there are standards and regulations and all of that, and there's just a gap between what AWS has listed as a service that addresses a particular control that—there were a few times where I looked at Audit Manager and I had no clue what they were mapping to and why they were mapping it. So, it's a typical day-one service; it has some gaps, but I like the direction it's going. I like the idea that an organization can go into their AWS console, hit a dashboard, and say, "Am I meeting SOC 2?" or "Am I meeting PCI?" I feel like this is a long time coming. I think you probably could have done it with Security Hub with less automation; you have to do some manual uploads there. But that's the long answer to say it has a long way to go there, Corey.
Corey: I heard a couple of horror stories of, "Oh, my god, it's charging me $300 a day and I can't turn it off," when it first launched. I assume that's been fixed by now because the screaming has stopped. I have to assume it was. But it was gnarly and surprising people with bills. And surprising people with things labeled 'audit' is never a great plan.
AJ: Right. Yeah, the pricing was a little ridiculous as well. And I didn't really understand the pricing model. But that's typical of a new AWS service; I never really understand. That's why I'm glad that you exist, because I'm always confused at first about why things cost so much, but then if you give it some time, it starts to make a little bit more sense.
Corey: Exactly. The first time you see a new pricing dimension, it's novel and exciting and more than a little scary, and you dive into it. But then it's just pattern recognition. It's, "Oh, it's one of these things again. Great." It's why it lends itself to a consulting story. So, you were in the army for a while. And as you mentioned, you got tired of sleeping on the ground, so you went into corporate life. And you were at a national cybersecurity professional services firm for a while.
What was it that finally made you, I guess, snap, for lack of a better term, and, "I'm going to start my own thing?" Because in my case, it was, "Well, okay. I get fired an awful lot. Maybe I should try setting out my own shingle because I really don't have another great option." I don't get the sense, given your resume and pedigree, that that was your situation?
AJ: Not quite. I, surprisingly, don't do well with authority. So, a little bit, I like to challenge things and question the norm often, which got me in trouble in the military, and definitely got me in trouble in corporate life. But for me it was, I wanted to change; I wanted to innovate. I just kept seeing that there was a problem with what we were doing and how we were doing it, and I didn't feel like I had the ability to innovate. Innovating in a professional services firm is updating a Google Sheet, or adding a new Google Form and sending that off to a client. That's not really the innovation that I was looking to do. And I realized that if I wanted to create something that was going to solve this problem, I could go join one of the many startups that are out there trying to solve this problem, or I could just try to go do it myself and leverage my experience. And two worlds collided as far as timing and opportunity, where I financially was in a position to take a chance like this, and I had the knowledge that I finally think I needed to feel comfortable going out on my own, and I just made the decision. I'm a pretty decisive person, and I decided that I was going to do it and just went with it. And that's despite going about this during the global pandemic, which presented its own challenges last year in getting this off the ground. But it was really—I collected a bunch of knowledge. I realized, maybe, two and a half years ago, actually, that I wanted to start my own business in this space, but I didn't know what I wanted to do just yet. I knew I wanted to do software, I didn't know how I wanted to do it, I didn't know how I was going to make it work. But I just decided to take my time and learn as much as I could. And once I felt like I had acquired enough knowledge and there was really nothing else I could gain from not doing this on my own, and I knew I wasn't going to go join a startup to join them on this journey, it was a no-brainer just to pull the trigger.
Corey: It seems to have worked out for you. I'm starting to see you folks crop up from time to time; things seem to be going well. How big are you?
AJ: Yeah, we're doing well. We have a team of seven of us now, which is crazy to think about because I remember when it was just me and my co-founder staring at each other on Zoom every day and wondering if there was ever going to be anybody else on these [laugh] calls talking to us. But it's going really well. We have early customers that are happy, and that's all that I can ask for, and they're not just happy silently; they're being really public about being happy about the platform, and about the process. And we're just working with people that get it, and we're building a lot of momentum. I'm having a lot of fun on LinkedIn and doing a lot of marketing efforts there as well. So, it's been going well; it's actually been going better than expected, surprisingly, which, I don't know, I'm a pretty optimistic entrepreneur and I thought things would go well, but it's much better than expected, which means I'm sleeping a lot less than I expected, as well.
That's right. My health is in the gutter, my relationships are starting to implode around me." Balance is key. And I think that that is something that we don't talk about enough in this world. There are periodically horrible tweets about how you should wind up focusing on your company, that it should be the all-consuming thing that drives you at all hours of the day. And you check and, "Oh, who made that observation on Twitter? Oh, it's a VC." And then you investigate the VC and, huh, "You should only have one serious bet, it should be your all-consuming passion," says someone who's invested in a wide variety of different companies all at the same time, in the hopes that one of them succeeds. Huh. Almost like this person isn't taking the advice they're giving themselves and is incentivized to give that advice to others. Huh, how about that? And I know that's a cynical take, but it continues to annoy me when I see it. Where do you stand on the balance side of the equation?
AJ: Yeah, I think balance is key. I work a lot, but I rest a lot too. And I spend—I really hold my mornings as my kind of sacred place, and I spend my mornings meditating, doing yoga, working out, and really just giving back to myself. And I encourage my team to do the same. And we don't just encourage it from just a, "Hey, you guys should do this," but I talk to my team a lot about not taking ourselves too seriously. It's our number one core value. It's why our slogan is 'make compliance suck less,' because it's really my military background. We're not being shot at; we're sleeping at home every night. And while compliance and cybersecurity are really important, and we're protecting really important things, it's not so serious that you have to go all-in and not have balance, and not take time off to relax. I mean, part of what we do at ByteChek is we have a 10% rule, which means that for 10% of the week, I encourage my team to spend it on themselves, whether that's doing meditation or going to take a nap. And these are work hours; you know, go out, play golf. I spent my 10% this morning playing golf during work hours. And I encourage all my team, every single week, to spend four hours dedicated to yourself, because there's nothing that we will be able to do as a company without the people here being correct and being mentally okay. And that's something that I learned a long time ago in the military. You spend a year away from home and you start to really realize what's important. And it's not your job. And that's the thing. We hire a lot of veterans here because of my veteran background, and I tell all the vets that come here: when you're in the military, your job, your rank, and your day-to-day work is your identity. It's who you are. You're a Marine or you're a Soldier, or you're a Sailor; you're an Airman if that's a bad choice that you made. Sorry for my Air Force guys.
Corey: Well, now there's a Spaceman story as well, I'm told. But I don't know if they call them spacemen or not, but remember, there's a new branch to consider. And we can't forget the Coast Guard either.
AJ: If they don't call themselves Spacemen, that is their name from now on. We just made it, today. If I ever meet somebody in the Space Force, [laugh] I'm calling them the Spacemen. That is amazing. But I tell our interns that we bring from the military, you have to strip that away. You have to become an individual, because ByteChek is not your identity. And it won't be your identity. And ByteChek's not my identity.
It's something that I'm doing, and I am optimistic that it's going to work out and I really hope that it does. But if it doesn't, I'm going to be all right; my team is going to be all right and we're going to all continue to go on. And we just try to live that out every day because there's so many more important things going on in this world other than cybersecurity compliance, so we really shouldn't take ourselves too seriously. And that advice of just grinding it out, and that should be your only focus, that's only a recipe for disaster, in my opinion.
Corey: AJ, thank you so much for taking the time to speak with me. If people want to hear more about what you have to say, where can they find you?
AJ: They can find me on LinkedIn. That's my one spot that I'm currently on. I am going to pop on Twitter here pretty soon. I don't know when, but probably in the next few weeks or so. I've been encouraged by a lot of folks to join the tech community on Twitter, so I'll be there soon. But right now they can find me on LinkedIn. I give four hours back a week to mentoring, so if you hear this and you want to reach out, you want to chat with me, send me a message and I will send you a link to find time on my calendar to meet. I spend four hours every Friday mentoring, so I'm open to chat and help anyone. And when you see me on LinkedIn, you'll see me talking about diversity in cybersecurity because I think really the only way you can solve a cybersecurity skills shortage is by hiring more diverse individuals. So, come find me there, engage with me, talk to me; I'm a very open person and I like to meet new people. And that's where you can find me.
Corey: Excellent. And we'll of course throw a link to your LinkedIn profile in the [show notes 00:29:44]. Thank you so much for taking the time to speak with me. It's really appreciated.
AJ: Yeah, definitely. Thank you, Corey. This is kind of like a dream come true to be on this podcast that I've listened to a lot and talk about something that I'm passionate about. So, thanks for the opportunity.
Corey: AJ Yawn, CEO and co-founder of ByteChek. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice along with a comment that's embedded inside of a Word document.
Announcer: This has been this week's episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com, or wherever fine snark is sold. This has been a HumblePod production. Stay humble.

Digging In
Episode 23 - MISS DIG 811 IT Systems Manager

Digging In

Play Episode Listen Later Jun 14, 2021 16:17


Digging In is joined by Katie Gruzwalski, MISS DIG 811 IT Systems Manager, to discuss the important role technology plays in the Notification Center being able to service all of its Stakeholders, as well as some important changes the IT department is working on.


WiFi Ninjas - Wireless Networking Podcast
WN Podcast 070 - Cisco Meraki Systems Manager MDM with Paul Fidler

WiFi Ninjas - Wireless Networking Podcast

Play Episode Listen Later Jun 11, 2021 59:44


MDM can vastly simplify onboarding of your devices while improving security. Learn about what's possible with Cisco Meraki Systems Manager MDM in an interview with Paul Fidler.

Opwall's Field Notes
Entry #12: The Secret Behind Transylvania's Biodiversity with Toby Farman

Opwall's Field Notes

Play Episode Listen Later May 14, 2021 36:01


Toby Farman is Opwall's Systems Manager and the Country Manager for our Romania expeditions. Toby originally studied biotechnology, but an Opwall trip in 2007 sparked his passion for travel. After graduating, Toby traveled the world for 2 years with nothing more than a backpack. Rather than settle down upon his return, Toby began working for Opwall and went on to manage projects in Mozambique, South Africa, and now Transylvania. In this episode, we discuss what makes Transylvania so special for wildlife, how bears, hay meadows, and traditional agriculture fit together within the mosaic of hills and valleys that define the Transylvanian region, and what the rest of the world can learn from this special place.

SuperYacht Radio
Fresh Approach to old issues with Voyager IP! Smart communications for yachts.

SuperYacht Radio

Play Episode Listen Later Apr 26, 2021 52:55


Catch up with Mark Elliott, Co-founder of Voyager IP, and Barry Egan, I.T. Systems Manager, as they explain how the need for high performance and signal strength even in the worst of conditions has driven demand for single-beam satellite solutions, why their choice was for Ka over Ku, how they provide technical support, and their plans for the future. #SatComms #VoyagerIP #vsat #sophos #celerway #comms #eto #xpcc #satellite #vsat #comms #superyachts #yachtengineer #yachtcaptain #maritime #marineindustry #cybersecurity #yachting

Aviation Careers Podcast
ACP325 Airport Safety Management Systems Manager Arielle Sewell

Aviation Careers Podcast

Play Episode Listen Later Mar 22, 2021 31:00


Welcome to the inspirational, informational, and transparent Aviation Careers Podcast.  Today I am speaking with Arielle Sewell, a Safety Management Systems Manager for a small-hub airport.  SMS at airports receives little attention, but as the discipline expands more specialists will be needed in this field. Sponsor: https://planeenglishsim.com   App-based aviation radio simulator is an easy way […] The post ACP325 Airport Safety Management Systems Manager Arielle Sewell appeared first on Aviation Careers Podcast.

Zertified Fresh
eComm + The Machine

Zertified Fresh

Play Episode Listen Later Feb 28, 2021 64:23


The Dog Days Are Over this week as we emerge from the cold and snow with a look inside our eComm department with Mike Hildebrandt and Eva Matt. They review the department, some recent changes, and their outlook on the future. We segue into a conversation with The Machine herself, covering her evolution at LineDrive and how she helps our clients navigate the complexities of Amazon. Finally we Shake It Off with a candid conversation with everyone's favorite Systems Manager, Pez. Keep It Fresh!

Meraki Unboxed
Episode 33: Device Management for Remote Working and Learning

Meraki Unboxed

Play Episode Listen Later Sep 2, 2020 31:16


As organizations contemplate a future with more remote working and learning, it's more important than ever to identify efficient ways to support dispersed employees and students at scale. For many years, Meraki Systems Manager has been supporting the management of up to tens of thousands of devices, regardless of their location, so it's more than up to the task at hand. In this episode we welcome back Amelie from our Product Marketing team to discuss some of the attributes and features of Systems Manager that can ease the burden of IT teams tasked with supporting large scale, and enduring, remote working and learning.

My Food Job Rocks!
Ep. 223 - From Coffee to Beyond Meat with Weber Stibolt, Quality Systems Manager at Beyond Meat

My Food Job Rocks!

Play Episode Listen Later Jun 15, 2020 50:18


I first interviewed Weber Stibolt in episode 92, when he was a Quality Assurance Specialist at Eight O'Clock Coffee. A couple of years later, he's now at one of the most talked-about food startups at the moment: Beyond Meat. The last interview we've had from Beyond Meat was episode 24 with one of their food engineers, so it's good to get an update on what's happening there. So I ask Weber about his transition over there: from applying to the job, moving to the new town, and progressing through the ranks. We talk a lot about one of our favorite programs in IFT, the Emerging Leaders Network, as we were both participants in it. Weber went a little bit farther and became a peer mentor. Probably the best part of this interview was that Weber made his role by presenting a need and making a case. This is a great example that if you're in the right company, and if you can identify a potential opportunity, you can actually carve out your own unique path. Show Notes A year ago, I was working in coffee and I got an opportunity to work at Beyond Meat Central Missouri Emerging Leaders Network – Weber and I were in it and Weber became a peer mentor for it Emotional Intelligence How do you learn Emotional Intelligence?: The first step is to recognize it and use it as a tool to help you move forward Did you seek this job or did you find it interesting?: More the former. Coffee is a bit boring because it lacks a challenge. I wanted a better problem-solving canvas. There wasn't enough growth in my abilities. Beyond Meat IPO Was there a change when Beyond Meat went IPO?: Not really. The mission was the same. What is the difference between measuring the quality of Beyond Meat versus the quality of coffee?: Surprisingly, sensory is still a huge part of my day What notes do you look for?: From a flavor perspective, it's fairly neutral. Nothing in the realm of pungent. Moisture and texture are important too. Oil is also important Small changes can have a fairly big impact. Adding an extra lb of flavoring, for example, will affect a lot. What about raw material?: We actually are very happy with our pea protein lots. However, two different manufacturers can be totally different PURIS – suppliers to Beyond Meat Quality Systems Manager: Making paperwork digital Kelly Wilson – VP of quality Gallup Personality Test SQF Conference guy about risk If I leave a Beyond Meat in the fridge, would it rot like regular meat?: Technically it's less risky Why Does Your Food Job Rock?: I'm doing so many cool things at Beyond Meat Trends and technology: We spend a significant amount of money on R+D Plant-based fried chicken at KFC is super convincing It's made of wheat and soy What is one thing in the industry that you're interested in?: Cell-cultured meat Clean Meat by Paul Shapiro My podcast case is politics and a few comedy podcasts like Marc Maron's podcast Any advice on switching roles?: Change Management is extremely important. Every single job has a change management component Where can we find you?: LinkedIn's the best

Automotive Insiders Presented By OESA
Automotive Insiders Welcomes Chad Walls, Information Systems Manager at Kamco Industries, Inc.

Automotive Insiders Presented By OESA

Play Episode Listen Later Jun 10, 2020 13:13


Automotive Insiders is presented by OESA, the Original Equipment Suppliers Association. Hear industry experts discuss today's Automotive hot topics, to keep the Automotive Supplier Community up to date on the fast-changing mobility landscape. From post-pandemic manufacturing restart planning and worker safety measures, to legal issues and supply chain disruptions, Automotive Insiders is your source of timely and relevant content. In this episode, Chad Walls, Information Systems Manager at Kamco Industries, Inc., discusses what their business was like before they began the path toward a Digital Transformation, and the defining moment when they realized they needed to move toward more digital operations. Chad also talks about compliance and standards, and how they have evolved over the years. Don't miss this insightful conversation.

Counterpoint
Economics and borders

Counterpoint

Play Episode Listen Later May 18, 2020 54:05


What might post COVID-19 China look like? Will they lead the global economic revival or will it struggle? Why are international borders causing so much anxiety over recent years? Are we at the Kindleberger moment and what is it anyway and was the collapse of Virgin Australia inevitable?

Counterpoint - ABC RN
Economics and borders

Counterpoint - ABC RN

Play Episode Listen Later May 18, 2020 54:05


What might post COVID-19 China look like? Will they lead the global economic revival or will it struggle? Why are international borders causing so much anxiety over recent years? Are we at the Kindleberger moment and what is it anyway and was the collapse of Virgin Australia inevitable?


Mobycast
Automate all the things - Updating container secrets using CloudWatch Events + Lambda

Mobycast

Play Episode Listen Later Mar 4, 2020 68:15


In this episode, we cover the following topics: Developing a system for automatically updating containers when secrets are updated is a two-part solution. First, we need to be notified when secrets are updated. Then, we need to trigger an action to update the ECS service. CloudWatch Events can be used to receive notifications when secrets are updated. We explain CloudWatch Events and its primary components: events, rules and targets. Event patterns are used to filter for the specific events that the rule cares about. We discuss how to write event patterns and the rules of matching events. The event data structure will be different for each type of emitter. We detail a handy tip for determining the event structure of an emitter. We discuss EventBridge and how it relates to CloudWatch Events. We explain how to create CloudWatch Event rules for capturing update events emitted by both Systems Manager Parameter Store and AWS Secrets Manager. AWS Lambda can be leveraged as a trigger of CloudWatch Events. We explain how to develop a Lambda function that invokes the ECS API to recycle all containers. We finish up by showing how this works for a common use case: using the automatic credential rotation feature of AWS Secrets Manager with a containerized app running on ECS that connects to an RDS database.
Detailed Show Notes
Want the complete episode outline with detailed notes? Sign up here: https://mobycast.fm/show-notes/
Support Mobycast
https://glow.fm/mobycast
End Song
Night Sea Journey by Derek Russo
More Info
For a full transcription of this episode, please visit the episode webpage.
We'd love to hear from you! You can reach us at: Web: https://mobycast.fm Voicemail: 844-818-0993 Email: ask@mobycast.fm Twitter: https://twitter.com/hashtag/mobycast Reddit: https://reddit.com/r/mobycast
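(For a feel of the pattern described here, a hedged sketch of the Lambda side: a function wired to a CloudWatch Events rule that matches Parameter Store change events and forces a new deployment of an ECS service so its containers pick up the updated secret. The cluster and service names are placeholders, not values from the episode.)

```python
import os
import boto3

ecs = boto3.client("ecs")

# Placeholder names; in practice these might come from tags or from the event itself.
CLUSTER = os.environ.get("ECS_CLUSTER", "my-cluster")
SERVICE = os.environ.get("ECS_SERVICE", "my-service")

def handler(event, context):
    """Triggered by a CloudWatch Events rule such as:
    {"source": ["aws.ssm"], "detail-type": ["Parameter Store Change"]}

    Forces ECS to replace running tasks so they re-read the updated secret.
    """
    changed = event.get("detail", {}).get("name", "<unknown parameter>")
    print(f"Parameter changed: {changed}; recycling service {SERVICE}")
    ecs.update_service(
        cluster=CLUSTER,
        service=SERVICE,
        forceNewDeployment=True,  # keeps the same task definition, replaces the tasks
    )
    return {"recycled": SERVICE, "parameter": changed}
```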

Illinois Agronomy Roundup
S2E1: Mark Dostal, Agronomic Systems Manager, Corn States

Illinois Agronomy Roundup

Play Episode Listen Later Mar 2, 2020 16:20


Mark Dostal joins us to provide some history and insight into the Corn States organization, and talks about what's on his mind as we look forward to the upcoming planting season.

Mobycast
Psst... Secrets Handling for Cloud-Native Apps - Part 2

Mobycast

Play Episode Listen Later Jan 8, 2020 46:42


In this episode, we cover the following topics: AWS offers not one, but two, managed services for secrets management. Systems Manager Parameter Store and AWS Secrets Manager have similar functionality, making it sometimes confusing to know which to use. We compare and contrast the two services to help guide your choice. The three types of sensitive data injection supported by Elastic Container Service (ECS). Understanding when sensitive data is injected into the container and how to handle updates to secrets (such as credential rotation). The required configuration changes and IAM permissions you need to enable ECS integration with Parameter Store and Secrets Manager. A walkthrough of the specific steps you need to take to update your ECS application to support secrets integration.
Detailed Show Notes
Want the complete episode outline with detailed notes? Sign up here: https://mobycast.fm/show-notes/
Support Mobycast
https://glow.fm/mobycast
End Song
Straddling by Derek Russo
More Info
For a full transcription of this episode, please visit the episode webpage.
We'd love to hear from you! You can reach us at: Web: https://mobycast.fm Voicemail: 844-818-0993 Email: ask@mobycast.fm Twitter: https://twitter.com/hashtag/mobycast Reddit: https://reddit.com/r/mobycast
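(The ECS integration discussed here boils down to referencing the secret by ARN in the container definition and granting the task execution role permission to read it. A minimal, hedged sketch of registering such a task definition with boto3; all names and ARNs are illustrative, not the episode's exact configuration.)

```python
import boto3

ecs = boto3.client("ecs")

# Illustrative ARNs; substitute your own account, region, and parameter names.
DB_PASSWORD_PARAM = "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/db_password"
EXECUTION_ROLE = "arn:aws:iam::123456789012:role/myapp-task-execution-role"

response = ecs.register_task_definition(
    family="myapp",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    executionRoleArn=EXECUTION_ROLE,  # must allow ssm:GetParameters on the parameter
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
            "essential": True,
            # ECS injects the decrypted value as an environment variable at task start.
            "secrets": [
                {"name": "DB_PASSWORD", "valueFrom": DB_PASSWORD_PARAM},
            ],
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```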

Take Me Through Your Day
Ep 023 – IT Network Systems Manager of Drug Rehab Facility (CapsLocked/Scrubs)

Take Me Through Your Day

Play Episode Listen Later Jun 24, 2019 41:26


On this week's episode, we chat with someone in the IT and medical world, which makes for an interesting combination! We'll definitely have to do a follow-up since we had some time constraints on this one, but enjoy the wonderfully stressful world of a network systems manager of a drug rehab facility. The post Ep 023 – IT Network Systems Manager of Drug Rehab Facility (CapsLocked/Scrubs) appeared first on Take Me Through Your Day.

AWS Morning Brief
Tom Clancy’s Systems Manager OpsCenter

AWS Morning Brief

Play Episode Listen Later Jun 10, 2019 14:23


AWS Morning Brief for the week of June 10th, 2019.

Develpreneur: Become a Better Developer and Entrepreneur

The AWS Management Tools are an excellent way to set up, configure, and monitor a cloud infrastructure. Many of these are free or have low fees that make them a no-brainer when considering whether to utilize them. There are several services in this group, so let's take each in turn.
CloudWatch
Monitor your AWS resources with this service. You can select metrics and set alarms related to variances in those values. The free tier allows for a few alerts and dashboards. This is a valuable resource for keeping up with how your production services are doing. You can even look for trends that imply an impending failure and make changes before it's too late.
CloudTrail
The prior service is for monitoring your resources. CloudTrail is very similar but for tracking your users. You can keep up with valid users and also look for potential hackers by analyzing the data in this service. It tracks activities through API calls, the user interface, and any other ways to access the systems. Think of it as an access log file for all of your AWS services in a single location.
Service Catalog
When you want to build a filter or layer on top of AWS services for your organization, this is the tool you are looking for. It allows you to provide a catalog for your users of specific AWS services they can access, as well as pre-configured resources. This is a perfect solution for maintaining compliance while still allowing for quick access and deployment of AWS resources.
Personal Health Dashboard
The number of AWS services can be overwhelming. This is a bit of an issue when you want to see how the current resource families are doing (up, down, or other) and focus on those that matter to you. The PHD provides precisely that information. You are shown a dashboard of how the services that impact your subscription are doing and any scheduled changes. Instead of going to a generic dashboard to see if your services are having any issues at the Amazon level, use this to get right to what matters to you.
Auto Scaling
This service allows you to adjust the capacity of your AWS services up and down on a dynamic basis. That makes it one of the most valuable resources for any cloud solution. It is here that you configure the rules for when to make adjustments to how you use resources.
Config
All of these Amazon services require configuration. That administrative work can quickly become a headache. Enter AWS Config. It provides you with a way to track all of those configuration steps and create an inventory of them. If change management is important to you (it should be), then AWS Config is going to reduce your headaches.
Systems Manager
Like all of these other management tools, Systems Manager helps you run your AWS resources. This is quite possibly the best way to view your entire infrastructure in one location. It includes a centralized approach to storing configuration and secrets and separating them from code. You can group resources, set up automation, and make changes directly to them, all from this tool.
CloudFormation
This service provides a common language to describe and provision your resources. Although tools like Config and Systems Manager help you manage your systems, this takes it to the next level. You can use CloudFormation to model, implement, and even version control your entire infrastructure.
OpsWorks
Of course, Amazon is up to speed with the tools the cool kids use. This coverage includes Chef and Puppet for automated configuration and management. That is where OpsWorks comes in. It provides managed instances for either of these, or the option of OpsWorks Stacks.
Trusted Advisor
The AWS family of services requires some know-how to do it right. The Trusted Advisor service provides help with that. It is a resource for reducing cost, improving security, and increasing performance.
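(As a small taste of what Systems Manager looks like in practice, here is a hedged boto3 sketch that runs a shell command across a couple of managed instances with Run Command and polls for the result. The instance IDs are placeholders, and the instances must already be running the SSM agent with an appropriate instance role.)

```python
import time
import boto3

ssm = boto3.client("ssm")

# Placeholder instance IDs; the instances must be SSM-managed (agent + IAM role).
INSTANCE_IDS = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

# Run a command on every target using the built-in AWS-RunShellScript document.
cmd = ssm.send_command(
    InstanceIds=INSTANCE_IDS,
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["uptime", "df -h /"]},
    Comment="ad-hoc health check",
)
command_id = cmd["Command"]["CommandId"]

# Poll each instance for its output (a sketch; production code would use waiters or SNS).
time.sleep(5)
for instance_id in INSTANCE_IDS:
    result = ssm.get_command_invocation(CommandId=command_id, InstanceId=instance_id)
    print(instance_id, result["Status"])
    print(result["StandardOutputContent"])
```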

MacroFab Engineering Podcast

System-In-Package Platforms
Gene Frantz - Chief Technology Officer for Octavo Systems
- One of the founders and the visionary behind Octavo Systems
- Professor in the Practice at Rice University in the Electrical and Computer Engineering Department
- Was the Principal Technology Fellow at Texas Instruments, where he built a career finding new opportunities and building new businesses to leverage TI's DSP technology
- Holds 48 patents and has written over 100 papers/articles and presents at conferences around the globe
- Has a BSEE from the University of Central Florida, an MSEE from Southern Methodist University, and an MBA from Texas Tech University
Erik Welsh - Applications and Systems Manager for Octavo Systems
- Has over 16 years of industry experience designing hardware and software systems, including 11 years at Texas Instruments
- Supported hundreds of developers bringing embedded systems quickly to market
- Simplifying complex systems is a passion, and he mentors engineers developing embedded Linux systems
- Developed platforms for cutting-edge wireless research to provide open access to startups
- Began his career as a SoC (System-On-Chip) designer, eventually leading SoC Security Architecture development
- Currently holds multiple patents in the area of System-in-Package technology
- Has a Bachelor of Science and a Masters in Electrical Engineering from Rice University
Octavo Systems
- For more background on Octavo Systems, check out MEP EP#17: System-in-Package (SiP) Platforms with Greg Sheridan
- How do System-In-Package devices make designers' and engineers' lives easier?
- Certification for FCC and CE? Some modules are pre FCC certified with an ID. Is there a benefit to using a System-in-Package to reduce product development risk here?
- OSD335x C-SIP
- The Future of SIP
Visit our Slack Channel and join the conversation in between episodes and please review us, wherever you listen (PodcastAddict, iTunes). It helps this show stay visible and helps new listeners find us.
Tags: electronics podcast, Erik Welsh, Gene Frantz, MacroFab, macrofab engineering podcast, MEP, Octavo Systems, OSD335X C-SIP, System on Chip, System-in-Package

AWS re:Invent 2017
GPSTEC307: GPS: Too Many Tools? Amazon EC2 Systems Manager Bridges Operational Models

AWS re:Invent 2017

Play Episode Listen Later Nov 30, 2017 49:00


Come see first-hand how Amazon EC2 Systems Manager can help you manage your servers at scale with the agility and security you need in today's dynamic cloud-enabled world. To be truly agile, you need a way to define and track system configurations, prevent drift, and maintain software compliance. At the same time, you need to collect software inventory, apply OS patches, automate your system image maintenance, and configure anything in the OSs of your EC2 instances and on-premises servers. Amazon EC2 Systems Manager does all of that and more for both Linux and Windows systems. In this session, learn about the seven services that make up Amazon EC2 Systems Manager and see them in action. No matter if you are managing 10 or 10,000 instances, see how you can manage your systems, increasing your agility and security with EC2 Systems Manager.
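(For readers who want a feel for the inventory side of the service this session covers, a hedged boto3 sketch that lists the instances registered with Systems Manager and pulls the installed-application inventory for each. It assumes the SSM agent and inventory collection are already enabled; nothing here is taken from the session itself.)

```python
import boto3

ssm = boto3.client("ssm")

# List instances that are registered with Systems Manager.
paginator = ssm.get_paginator("describe_instance_information")
for page in paginator.paginate():
    for instance in page["InstanceInformationList"]:
        instance_id = instance["InstanceId"]
        platform = instance.get("PlatformName", "?")
        agent = instance.get("AgentVersion", "?")
        print(f"{instance_id}: {platform} (agent {agent})")

        # Pull the software inventory collected for this instance (first page only).
        inventory = ssm.list_inventory_entries(
            InstanceId=instance_id,
            TypeName="AWS:Application",
        )
        for app in inventory.get("Entries", [])[:10]:
            print("   ", app.get("Name"), app.get("Version"))
```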

AWS re:Invent 2017
DEV338: Use Amazon EC2 Systems Manager to Perform Automated Resilience Testing in Your CI/CD Pipeline

AWS re:Invent 2017

Play Episode Listen Later Nov 30, 2017 56:20


Do you know how your applications will behave when things go wrong, either naturally or artificially? See how Expedia uses Amazon EC2 Systems Manager to perform automatic resilience tests as part of CI/CD pipelines, giving application owners confidence they are prepared for the worst.

AWS re:Invent 2017
DEV335: Manage Infrastructure Securely at Scale and Eliminate Operational Risks

AWS re:Invent 2017

Play Episode Listen Later Nov 30, 2017 54:43


Managing AWS and hybrid environments securely and safely while having actionable insights is an operational priority and business driver for all customers. Using SSH or RDP sessions could lead to unintended or malicious outcomes with no traceability. Learn to use Amazon EC2 Systems Manager to improve your security posture, automate at scale, and minimize application downtime for both Windows and Linux workloads. Easily author configurations to automate your infrastructure without SSH access, and control the blast radius of configuration changes. Get a cross-account and cross-region view of what's installed and running on your servers or instances. Learn to use Systems Manager to securely store, manage, and retrieve secrets. You can also run patch compliance checks on the fleet to react to malware and vulnerabilities within minutes, while still providing granular control to users with different privilege levels and full auditability. You will hear from FINRA, the Financial Industry Regulatory Authority, on how they use  Systems Manager to safely manage their Enterprise environment.
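(The "securely store, manage, and retrieve secrets" piece maps to Parameter Store SecureString parameters, which are part of Systems Manager. A hedged sketch; the parameter name and value are illustrative, and KMS permissions are assumed.)

```python
import boto3

ssm = boto3.client("ssm")

# Store a secret as an encrypted SecureString (uses the account's default SSM KMS key
# unless a KeyId is supplied).
ssm.put_parameter(
    Name="/myapp/prod/db_password",          # illustrative parameter name
    Value="correct-horse-battery-staple",    # illustrative value
    Type="SecureString",
    Overwrite=True,
)

# Retrieve and decrypt it later, e.g. at application start.
param = ssm.get_parameter(Name="/myapp/prod/db_password", WithDecryption=True)
print(param["Parameter"]["Value"])
```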

AWS re:Invent 2017
DEV306: Embrace DevOps and Learn How to Automate Operations

AWS re:Invent 2017

Play Episode Listen Later Nov 30, 2017 57:15


Managing large-scale production environments can be complex – things will go wrong and learning to operate and manage these environments is critical. From routine tasks such as building AMIs to managing the lifecycle of your instances, investing in automation and tooling can help you detect problems earlier, minimize downtime, and reduce manual work. In this session, you will learn how to use Amazon EC2 Systems Manager to troubleshoot common issues, detect and remediate configuration drift, and automate common actions. You will learn how to author common actions and about community driven features of Systems Manager. You can use the same tools across Linux and Windows, in AWS and in hybrid environments. You will also hear from a Systems Manager customer on how they are using Systems Manager to better manage and operate their infrastructure. Our customer, Ancestry, will talk about how they are using EC2 Systems Manager to manage their environment in an agile manner.
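(The "automate common actions" theme in this session maps to Systems Manager Automation documents. A hedged sketch that kicks off one of the AWS-managed runbooks against an instance; the instance ID is a placeholder and the calling role needs permission to run the document.)

```python
import boto3

ssm = boto3.client("ssm")

# Start an automation run using an AWS-managed document; many such runbooks exist
# (restarting instances, creating AMIs, patching, and so on).
execution = ssm.start_automation_execution(
    DocumentName="AWS-RestartEC2Instance",
    Parameters={"InstanceId": ["i-0123456789abcdef0"]},  # placeholder instance
)
execution_id = execution["AutomationExecutionId"]

# Check on the run; production code would poll or subscribe to events instead.
status = ssm.get_automation_execution(AutomationExecutionId=execution_id)
print(status["AutomationExecution"]["AutomationExecutionStatus"])
```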

Software Defined Talk
Episode 113: All the great AWS re:Invent news

Software Defined Talk

Play Episode Listen Later Nov 30, 2017 59:26


There's no clever title this week, just straight to the point of covering the highlights of AWS re:Invent this week. They got the kubernetes now! There's a passel of releases as well. We also discuss some other news like Meg Whitman leaving HPE (on good standing), net neutrality, WeWork buying Meetup, and Arby's. For reals! Pre-Roll SDT News SDT got a new logo! SDT got 1,000 logo stickers to give away! You can get a sticker by completing this survey (https://www.surveymonkey.com/r/SSCKN86) or sending us your address in Slack. US Addresses only until Matt can come and get some stickers. We'll be doing a live show - probably - on Jan 16 at the CloudAustin Meetup (https://www.meetup.com/CloudAustin/events/244102686/). Check out the Software Defined Talk Members Only White-Paper Exegesis (https://www.patreon.com/sdt) podcast Join us all in the SDT Slack (http://www.softwaredefinedtalk.com/slack). Upcoming SDT newsletter (http://eepurl.com/dbM2_X). Misc. news before re:Invent coverage Changing of the guard at HPE (https://www.theregister.co.uk/2017/11/21/hpe_meg_whitman/). WeWork buys MeetUp (https://www.wired.com/story/why-wework-is-buying-meetup/). Net Neutrality (https://www.wired.com/story/heres-how-the-end-of-net-neutrality-will-change-the-internet/) - I realize this is naive, but I feel like things already operate this way. EFF write-up (https://www.eff.org/deeplinks/2017/11/lump-coal-internets-stocking-fcc-poised-gut-net-neutrality-rules) Stratechery (https://stratechery.com/2017/light-touch-cable-and-dsl-the-broadband-tradeoff-the-importance-of-antitrust/) & follow-up (https://stratechery.com/2017/light-touch-cable-and-dsl-the-broadband-tradeoff-the-importance-of-antitrust/) This week in PE: OOOHH-OOOO! BARRACUDA (https://www.theregister.co.uk/2017/11/27/barracuda_private_equity/)! Also, Arby's: eat all you want, you'll die anyway (https://www.cnbc.com/2017/11/28/roark-capital-to-buy-buffalo-wild-wings-for-2-point-9-billion.html). Work in tech? Time to ask for a raise. (https://www.wsj.com/articles/tech-boom-creates-new-order-for-world-markets-1511260200) Good overview (http://www.computerweekly.com/feature/Redefining-OpenStack-Addressing-the-identity-and-integration-for-enterprise-readiness) of the end of OpenStack's big tent theory. AWS re:Invent AWS Business Update Amazon Web Services has an $18 billion revenue (https://www.channele2e.com/news/live-blog-amazon-web-services-ceo-andy-jassy/) run rate and the business is growing 42 percent year over year New AWS Services (100+ new total) Loosely break into themes of Containers, Databases, AI/ML, and IOT Amazon MQ (https://aws.amazon.com/blogs/aws/amazon-mq-managed-message-broker-service-for-activemq/) - Apache ActiveMQ as a Service (lunches eaten?) AppSync (https://aws.amazon.com/blogs/aws/introducing-amazon-appsync/) - GraphQL as a Service (lunches eaten?)
Aurora Serverless (https://aws.amazon.com/blogs/aws/in-the-works-amazon-aurora-serverless/) - burst database consumption Comprehend (https://aws.amazon.com/blogs/aws/amazon-comprehend-continuously-trained-natural-language-processing/) - Natural Language Processing across 98 languages DeepLens (https://aws.amazon.com/blogs/aws/deeplens/) - video camera with AI embedded DynamoDB Global (https://aws.amazon.com/blogs/aws/new-for-amazon-dynamodb-global-tables-and-on-demand-backup/) - similar to Azure/Google initiatives EC2 Bare Metal Instances (https://aws.amazon.com/blogs/aws/new-amazon-ec2-bare-metal-instances-with-direct-access-to-hardware/) - lots of competitors try to differentiate on this (lunches eaten?) came out of the VMware work i3.metal instance types c5 AMIs can work too (new KVM-based instance type) EC2 Instance types, up to 25Gbps networking H1 (https://aws.amazon.com/blogs/aws/new-h1-instances-fast-dense-storage-for-big-data-applications/) - higher throughput to storage, replacing D2 instances M5 (https://aws.amazon.com/blogs/aws/m5-the-next-generation-of-general-purpose-ec2-instances/) - 1.15Gbps write to storage, encrypted at rest, multipurpose instances, new Nitro hypervisor Deep dive on EC2 virtualization/bare metal (http://www.brendangregg.com/blog/2017-11-29/aws-ec2-virtualization-2017.html) T2 Unlimited (https://aws.amazon.com/blogs/aws/new-t2-unlimited-going-beyond-the-burst-with-high-performance/) - good for microservices, bursty workloads with a credit system Elastic Container Service for Kubernetes (https://aws.amazon.com/blogs/aws/amazon-elastic-container-service-for-kubernetes/) (EKS) - called it! upstream K8s automatically runs K8s with three masters across three AZs monitoring/healthchecks built in, managed service Fargate (https://aws.amazon.com/blogs/aws/aws-fargate/) - Containers on demand, no host/orchestrator needed similar to Azure Container Instances apparently Google has App Engine Flexible which is similar (thanks JP!) So, Matt: why would I use EKS instead of Fargate, etc.? Another write-up (https://www.enterprisetech.com/2017/11/30/kubernetes-momentum-builds-new-aws-tools/). FreeRTOS (https://aws.amazon.com/freertos/) - AWS bought(?) existing open source (https://www.freertos.org) IoT operating system vendor Glacier/S3 Select (https://aws.amazon.com/blogs/aws/s3-glacier-select/) - run SQL-like queries against your buckets and storage (CSV & JSON) GuardDuty (https://aws.amazon.com/blogs/aws/amazon-guardduty-continuous-security-monitoring-threat-detection/) - continuous security monitoring & threat detection (lunches eaten?) IoT Analytics (https://aws.amazon.com/blogs/aws/launch-presenting-aws-iot-analytics/) - MQTT processing, reporting & storage IoT Device Defender (https://aws.amazon.com/blogs/aws/in-the-works-aws-sepio-secure-your-iot-fleet/) - reporting, alerting & mitigation of existing IoT fleets IoT Device Management (https://aws.amazon.com/blogs/aws/aws-iot-device-management/) - lifecycle, management & monitoring of IoT devices Kinesis Video Streams (https://aws.amazon.com/blogs/aws/amazon-kinesis-video-streams-serverless-video-ingestion-and-storage-for-vision-enabled-apps/) - video ingestion/processing service Media Services (https://aws.amazon.com/blogs/aws/aws-media-services-process-store-and-monetize-cloud-based-video/) - YouTube as a Service, including monetization. Seems there should be an embeddable player somewhere. 
Neptune (https://aws.amazon.com/blogs/aws/amazon-neptune-a-fully-managed-graph-database-service/) - managed graph database service (lunches eaten?)
Rekognition Video (https://aws.amazon.com/blogs/aws/launch-welcoming-amazon-rekognition-video-service/) - Rekognition now does video
SageMaker (https://aws.amazon.com/blogs/aws/sagemaker/) - framework for building AI services
Sumerian (https://aws.amazon.com/blogs/aws/launch-presenting-amazon-sumerian/) - VR/AR/3D IDE and platform?
Systems Manager (https://aws.amazon.com/blogs/aws/aws-systems-manager/) - custom dashboards based off of tags; ties into AWS system management tools
Time Sync Service (https://aws.amazon.com/blogs/aws/keeping-time-with-amazon-time-sync-service/) - AWS NTP
Translate (https://aws.amazon.com/blogs/aws/introducing-amazon-translate-real-time-text-language-translation/) - Google & MS already have this
Transcribe (https://aws.amazon.com/blogs/aws/amazon-transcribe-scalable-and-accurate-automatic-speech-recognition/) - speech recognition; we should use this!
More: The New Stack (https://thenewstack.io/aws-takes-kubernetes-offers-serverless-database-service/), The Register (https://www.theregister.co.uk/2017/11/29/amazon_aws_kubernetes/). This kind of over-the-top analysis (https://blog.codeship.com/aws-reinvent-a-musical-review-of-the-2017-keynote/) is kinda our thing (https://www.patreon.com/sdt). BACK OFF, MAN!

AWS Strategy Update
On hybrid cloud: “In the fullness of time — I don’t know if it’s five, 10 or 15 years out — relatively few companies will own their own data centers. Those that do will have a much smaller footprint. It will be a transition and it won’t happen overnight.” Link (https://www.channele2e.com/news/live-blog-amazon-web-services-ceo-andy-jassy/)
On multi-cloud (“Is multi-cloud real?”): “We certainly get asked about it a lot. Most enterprises, when they think about a plan for moving to the cloud, think they will distribute workloads across a couple of cloud providers. But few actually make that decision, because you have to standardize on the lowest common denominator when you go multi-cloud. AWS is so far ahead and you don’t want to handicap developer teams. Asking developers to be fluent in multiple cloud platforms is a lot. And all the cloud providers have volume discounts. If you split workloads across multi-cloud, you’re diminishing those discounts. In practice, companies pick a predominant cloud provider for their workloads. And they may have a secondary cloud provider just in case they want to switch providers.”

AWS re:Invent Preview Review
✔ SaaS lunches will be eaten?
✔ Amazon Kubernetes Service?

This Week in Kubernetes
All about AWS this week! Well, GKE did get rid of billing for cluster management. Coté finished up this pile of crap (get a preview!) (https://docs.google.com/document/d/13JaEeN3Vww_Lu5FTgFgArl16HUQ2Oo4lIR1ua_7zZU4/edit?usp=sharing) and, right after emailing it in, was reminded that Ben wrote this up already (https://stratechery.com/2016/how-google-cloud-platform-is-challenging-aws/), plus an update based on re:Invent this week (https://stratechery.com/2017/aws-fargate-and-kubernetes-support-embrace-and-extend-awss-execution-advantage/).

End-roll
Conferences
Coté’s junk: NEXT WEEK, FOOLS! SpringOne Platform registration open (https://2017.springoneplatform.io/ehome/s1p/registration), Dec 4th to 5th. Use the code S1P200_Cote for $200 off registration (https://2017.springoneplatform.io/ehome/s1p/registration). Coté and many others speaking.
Coté will be doing a tiny talk at CloudAustin on December 19th (https://www.meetup.com/CloudAustin/events/244459662/).
Matt’s (not) on the road! Taking it off for the holidays.

Recommendations
Matt Ray: Art of War, backed by the Wu-Tang Clan (https://www.youtube.com/watch?v=qCk7ozsr428).
Brandon: Hindenburg audio editor (https://hindenburg.com/).
Coté: Programmed Inequality (http://amzn.to/2Aj4StV); a drink after the kids go to bed; Mindhunter (https://www.netflix.com/title/80114855); Jim and Andy (https://www.netflix.com/title/80209608).
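Since the show notes only gesture at what Glacier Select / S3 Select actually does, here is a minimal boto3 sketch of running a SQL-like query against a CSV object. The bucket name, key, and column names are hypothetical, and it assumes your SDK version exposes select_object_content; it is an illustration, not something from the episode.

```python
# Minimal S3 Select sketch: filter a CSV object server-side with a SQL-like query.
# The bucket, key, and column names below are hypothetical.
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="example-bucket",
    Key="logs/2017/requests.csv",
    ExpressionType="SQL",
    Expression="SELECT s.ip, s.status FROM S3Object s WHERE s.status = '500'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The response payload is an event stream; Records events carry the query results.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"))
```

The appeal is that the filtering happens inside S3, so only the matching rows come back over the wire instead of the whole object.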

Engineers & Coffee
larger than life

Engineers & Coffee

Play Episode Listen Later Jul 16, 2017 34:53


Topics: AWS re:Invent 2017; KotlinConf; processing Lambda cold start times by language; Lambda performance by language; rate-based rules for WAF; EC2 Systems Manager adds hierarchy and tagging; Postlight AWS Lambda migration / cost savings

AWS re:Invent 2016
WIN205: NEW LAUNCH! Amazon EC2 Systems Manager for Hybrid Cloud Management at Scale

AWS re:Invent 2016

Play Episode Listen Later Dec 24, 2016 51:00


Today, we are announcing EC2 Systems Manager. Amazon EC2 Systems Manager is a management service that helps you automatically collect software inventory, apply OS patches, create system images, and configure Windows and Linux operating systems. These capabilities help you define and track system configurations, prevent drift, and maintain software compliance of your EC2 and on-premises configurations. This session provides an overview of these newly announced services and how they work together within the larger AWS ecosystem to provide comprehensive management capabilities.
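As a rough illustration of the kind of automation the session describes, here is a minimal boto3 sketch that applies OS patches to instances selected by tag and then lists the software inventory Systems Manager has collected for one instance. The tag value and instance ID are hypothetical, and this is a sketch against the public SSM APIs rather than anything shown in the talk.

```python
# Sketch: patch tagged instances and list collected software inventory with SSM.
# The tag value and instance ID below are hypothetical.
import boto3

ssm = boto3.client("ssm")

# Apply OS patches to every instance tagged Patch Group = "web-servers".
ssm.send_command(
    Targets=[{"Key": "tag:Patch Group", "Values": ["web-servers"]}],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
)

# List the application inventory SSM has collected for one managed instance.
entries = ssm.list_inventory_entries(
    InstanceId="i-0123456789abcdef0",
    TypeName="AWS:Application",
)
for app in entries.get("Entries", []):
    print(app.get("Name"), app.get("Version"))
```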

Getting On Top
How Do Computers Work? A Non Technical Overview with Paul Morris

Getting On Top

Play Episode Listen Later Apr 5, 2016 31:00


HOW DO COMPUTERS WORK? A NON-TECHNICAL OVERVIEW with PAUL MORRIS
Today computers run our lives. From our laptops to our mobile phones, to our automobiles, to our appliances, our wristwatches, and on and on. But how does it all work? On this show I will attempt to give a brief overview.
Paul Morris' career spans more than thirty years in the IT industry. Starting his career as a computer programmer and systems analyst, Paul rapidly rose to the position of Programming and Systems Manager at ADP, serving the Wall Street community. After starting and running a corporate training consultancy, Paul experienced a spontaneous emotional healing which led him to explore the healing arts. Paul now practices as an Emotional Healer, helping clients heal their emotional trauma and deal with their depression. Find Paul Morris at www.depressivesanonymous.org.
Listener call-in number: (347) 215-9456

BDPA iRadio Show
BDPA iRadio: April 28, 2015

BDPA iRadio Show

Play Episode Listen Later Apr 28, 2015 61:00


The BDPA iRadio Show creates a vibrant communications platform that speaks to all BDPA stakeholders. We have an exciting line-up for our show on April 28th, 2015:
Coram Rimes, Vice President, National BDPA
Theonnie Shields, Systems Manager, State Farm
Jason Rashaad, Technical Program Manager, Amazon
The BDPA iRadio Show: Linking Business, Education and Technology. Sponsored by the BDPA Education and Technology Foundation and BETF Executive Director Wayne Hicks. Produced by Franne McNeal. Studio engineering by Everaldo Gallimore. Co-hosting by Timothy Butts, Jala Cruz and Ronald Story. The BDPA iRadio Show broadcasts the 2nd and 4th Tuesday of every month. Join us on blogtalkradio.com/BDPA.

More Podcast Less Process
The Video Word Made Flesh

More Podcast Less Process

Play Episode Listen Later Mar 17, 2014 63:36


Guests Nicole Martin (Multimedia Archivist and Systems Manager, Human Rights Watch), Erik Piil (Digital Archivist, Anthology Film Archives), and Peter Oleksik (Assistant Media Conservator, Museum of Modern Art) discuss the different organizational approaches and challenges of managing video collections for access and preservation. Whether dealing with video in an archival or production environment, there are a number of decision points around digitization, storage, description, and playback, the options for which are highly dependent on the mission and capabilities of the organization. Josh & Jefferson and their guests talk about these issues and all things video. It's a visual treat for your ears.   This podcast was funded in part by the New York State Archives Documentary Heritage Program and produced by METRO (www.metro.org) and AVPreserve (www.avpreserve.com).   Audio Engineer: Rebecca Chandler