Podcasts about cloud native

  • 396 PODCASTS
  • 1,626 EPISODES
  • 40m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Nov 26, 2022 LATEST


Latest podcast episodes about cloud native

Software Engineering Daily
Cloud-native WebAssembly with Matt Butcher

Software Engineering Daily

Play Episode Listen Later Nov 26, 2022 64:57


When WebAssembly was created, it was supposed to be a compile target, where you could compile your favorite programming language and then execute it inside of a web browser. This made it possible for developers to choose a programming language like C++ for compute-intensive applications. Fermyon is taking WebAssembly to the cloud. The post Cloud-native WebAssembly with Matt Butcher appeared first on Software Engineering Daily.

Kubernetes Podcast from Google
Kubernetes on Vessels, with Louis Bailleul

Kubernetes Podcast from Google

Play Episode Listen Later Nov 24, 2022 42:55


Louis Bailleul is a Chief Enterprise Architect at PGS. After years of running highly ranked supercomputers to process PGS' seismic data, Louis's team at PGS has led a transition to Google Cloud. Listen in to learn about HPC in Google Cloud with GKE, and to explore using Kubernetes to do processing on vessels at sea! Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod Chatter of the week Listen to the KubeCon NA 2022 recap episode News of the week Docker + Wasm Istio control plane vulnerability CVE-2022-39278 KubeFlow joins CNCF as an Incubating Project CNCF Backstage course CNCF Istio intro course Links from the interview PGS A picture of a PGS vessel PGS post from 2021 about their supercomputing rankings and transition to Google Cloud Top500 List Kubernetes Custom Resources (CRDs) Scaling Kubernetes to Thousands of CRDs Google Cloud Spot Instances Google Cloud Preemptible VM Instances Google Cloud - Manage capacity and quota KubeCon NA 2019: How the Department of Defense Moved to Kubernetes and Istio - Nicolas Chaillan Bare Metal K8s Clustering at Chick-fil-A Scale by Brian Chambers, Caleb Hurd, and Alex Crane

Cloud Security Podcast by Google
EP98 How to Cloud IR or Why Attackers Become Cloud Native Faster?

Cloud Security Podcast by Google

Play Episode Listen Later Nov 21, 2022 26:58


Guests: Matt Linton, Chaos Specialist @ Google John Stone, Chaos Coordinator @ Office of the CISO, Google Cloud Topics: Let's talk about security incident response in the cloud.  Back in 2014 when I [Anton] first touched on this, the #1 challenge was getting the data to investigate as cloud providers had few logs available. What are the top 2022 cloud incident response challenges? Does cloud change the definition of a security incident? Is “exposed storage bucket” an incident? Is vulnerability an incident in the cloud? What should I have in my incident response plans for the cloud? Should I have a separate cloud IR plan? What is our advice on running incident response jointly with a CSP like us? How would 3rd party firms (like, well, Mandiant) work with a client and a CSP during an investigation? We all read the Threat Horizons reports, but can you remind us of the common causes for cloud incidents we observed recently? What goals do the attackers typically pursue there? Resources: “Building Secure and Reliable Systems” book (especially ch 14-16, and ch17) Google Cybersecurity Action Team Threat Horizons Report #4 Is Out! (#3, #2, #1) “Incident Plan vs Incident Planning?” blog (2013)

Light Reading Podcasts
What's the Story? Leading Lights entries double down on cloud native

Light Reading Podcasts

Play Episode Listen Later Nov 15, 2022 21:33


Light Reading's Iain Morris and Kelsey Ziser review the Leading Lights Awards categories they judged and examine emerging trends that appeared in the submissions. Hosted on Acast. See acast.com/privacy for more information.

Cloud Crunch
S4E3: Modern Cloud Operations – Managing a Cloud Native Environment

Cloud Crunch

Play Episode Listen Later Nov 10, 2022 17:44 Transcription Available


Welcome back to Cloud Crunch. Today's topic, “Modern Cloud Operations – Managing a Cloud Native Environment.” Learn how managing the cloud has evolved, and what leadership needs to do now to adjust for the change. This conversation is led by our lead host and Director of Marketing, Michael Elliott. We are also joined by our honored guest Jeff McInnish – VP, Managed Cloud Services for 2nd Watch.

Kubernetes Podcast from Google

In this episode we bring you with us to KubeCon NA 2022 in Detroit, Michigan. We interviewed 15 attendees from various backgrounds and learned some cool insights. Featuring: Mo Khan, Software Engineer, Microsoft. Katrina Verey, Senior Staff Production Engineer, Shopify. Aishwarya Harpalem, Student, Rutgers University. Jeffery Sica, Principal Developer Experience Engineer, CNCF. Kirsten Schumy, Software Engineer, AWS. Jean-Paul Robinson, HPC Architect, University of Alabama at Birmingham. Madhav Jivrajani, Software Engineer, VMware. Leigh Capili, Developer Advocate, VMware Tanzu. Nim Jayawardena, Developer Programs Engineer, Google. Charlie Yu, Developer Programs Engineer, Google. Ahrar Monsur, Developer Programs Engineer, Google. Mickey Boxell, Product Manager, Oracle. Eddie Zaneski, Software Engineer, Chainguard. Andy Piggott, Chief Product Officer, Section. Logan Smith, Director of Business Development, Grafana Labs. Brian Dorsey, Developer Advocate, Google - Shoutout for recommending the microphones for interviews. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod News of the week CrowdStrike cryptojacking finding Skaffold v2 Generally Available GKE Security Posture Dashboard Blog Video Cdk8s+ from AWS Blog Project page CNCF Sandbox project application information Istio becomes a CNCF Incubating project Cert-manager becomes a CNCF Incubating project Cisco OpenClarity Kube-router bug Google Cloud Next Wrap-Up Microsoft Ignite highlights blog Cloud Native SecurityCon Linux Foundation partnership with Razom for Ukraine Links from the interview Kubernetes SIG Auth Kubernetes SIG API Machinery FluxCD Online Boutique Sample App Kubernetes SIG-CLI Cloud Native 101: Motor City Edition by Bob Killen and Jeffrey Sica Consumers to Contributors by Brendan O'Leary Kubernet-Bees: How Bees Solve the Problems of Distributed Systems SchedMD Slurm Kube-bind Contribute to etcd!
Cloud Native WASM Day Cloud Native SecurityCon Backstage (Incubating CNCF Project) eBPF Cilium (Incubating CNCF Project) Acorn Labs Vulcan Mind-Meld (Star Trek) Kids' Day at KubeCon NA 2022

Kubernetes Bytes
Part 1 - Live from Kubecon North America 2022 - Interviews with Percona, EDB, Dell, and Akamai

Kubernetes Bytes

Play Episode Listen Later Nov 1, 2022 41:26


In this part 1 episode of Kubernetes Bytes, live from Detroit during the Kubecon + CloudNativeCon North America 2022, Ryan Wallner and Bhavin Shah talk to guests on the show floor and learn more about what's new at Kubecon, what their thoughts are on Day 0 events, keynotes, etc., and also share some things to do in Detroit. They talk to Peter Zaitsev - Founder of Percona, Gabriele Bartolini - VP of Cloud Native at EDB, Tim Banks - Lead Developer Advocate at Dell Technologies, and Stephen Rust - Principal Software Engineer at Akamai. Show Notes: Percona Kubernetes Operators PostgreSQL 15 CloudNativePG Linode Kubernetes Engine Linode Careers

Cloud Security Podcast
How to become a Cloud Native Security Architect?

Cloud Security Podcast

Play Episode Listen Later Oct 30, 2022 50:39


In this episode of the Virtual Coffee with Ashish edition, we spoke with Christophe Parisel (Christophe's Linkedin) about how to transition from being an on-premises technical architect to a cloud security architect, and then a cloud native security architect. Episode ShowNotes, Links and Transcript on Cloud Security Podcast: www.cloudsecuritypodcast.tv Host Twitter: Ashish Rajan (@hashishrajan) Guest Twitter: Christophe Parisel (Christophe's Linkedin) Podcast Twitter - @CloudSecPod @CloudSecureNews If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels: - Cloud Security News - Cloud Security Academy Spotify TimeStamp for Interview Questions (00:00) Ashish's Intro to the Episode (02:21) https://snyk.io/csp (03:18) A little bit about Christophe (05:08) What is Cloud Native? (07:27) Why Cloud Native is important? (09:34) Responsibilities of Cloud Native Architect (13:15) Solution Architect vs Cloud Native Architect (15:32) Culture to move into Cloud Native Environment (18:09) Designing an application in Cloud (21:41) Designing an application using Kubernetes Cluster (24:39) Learning Kubernetes as an Architect (28:09) Common services people should standardise (31:50) Frameworks for Kubernetes Architecture (34:06) Logging with Kubernetes at Scale (38:24) Challenge with transitioning to Cloud Native Security Architect (39:43) Should we trust the cloud? (43:37) Bottlerocket in Kubernetes (46:00) Certifications for Cloud Native Security Architect

CISO Stories Podcast
CSP #93 - Approaching Cloud Security from a Cloud-Native Perspective - Josh Dreyfuss

CISO Stories Podcast

Play Episode Listen Later Oct 25, 2022 20:57


What is the best way to approach cloud security as the cloud environment evolves, and what should security leaders consider as they think about scaling their security? Join us to learn how the CISO of Wiz, Ryan Kazanciyan, thinks about cloud security from a cloud-native perspective, what makes securing your cloud infrastructure so challenging, and what makes your cloud security posture “good.” This segment is sponsored by Wiz. Visit https://securityweekly.com/wiz to learn more about them! Visit https://securityweekly.com/csp for all the latest episodes! Follow us on Twitter: https://www.twitter.com/cyberleaders Follow us on LinkedIn: https://www.linkedin.com/company/cybersecuritycollaborative/ Show Notes: https://securityweekly.com/csp93

Kubernetes Podcast from Google
Looking Forward and Back, with Adam Glick

Kubernetes Podcast from Google

Play Episode Listen Later Oct 13, 2022 48:52 Very Popular


After four and a half years hosting this podcast (and almost 9 years at Google) Craig Box is moving on from the latter, which unfortunately means leaving the former. But the show must go on. In this episode Craig introduces new hosts Abdel Sghiouar and Kaslin Fields. We take a small look forward, and then a big look back. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod Links from the show Adam’s last episode Abdelfettah Sghiouar Devoxx MA Cloud Careers Podcast You probably DON’T need a Service Mesh Kaslin Fields Containers as cookies Biscuits and gravy Contributor comms First-gen stickers Second-gen stickers Episode 60, with Mark Shuttleworth Episode 15, with Dan Ciruli and Jasmine Jaksic Dan on sticker duty Episode 30, with Joe Zou A rare team photo Music and musicians Kaossilator Episode 191, with DJ Fresh Episode 127, with David Pait Episode 83, with Guinevere Saenger Episode 120, with Melanie Cebula Episode 121, with Ed Huang Double guest trivia: Episodes 1 and 100 with Paris Pittman Episodes 62 and 180 with Ricardo Rocha (on a technicality) The Adam face Corey Quinn: separated at birth? One of many booth meetups Follow Craig Box on Twitter Follow Adam Glick on LinkedIn

DMRadio Podcast
Reality on the Ground - And in the Cloud!

DMRadio Podcast

Play Episode Listen Later Oct 13, 2022 53:32


The cloud continues to change everything, but the rumors of on-prem's demise have been exaggerated. Traditional data centers will have a very long tail, even as cloud computing giants like Amazon, Microsoft and Google bolster their offerings. Cloud Native is the new call to arms, as organizations look to optimize their mission critical workflows across an increasingly heterogeneous technology topography. But what, exactly, is Cloud Native? And how are today's innovators spanning multiple cloud environments while protecting their on-prem investments? Check out this episode of DM Radio to hear Host @eric_kavanagh interview Dr. Stefan Sigg, Software AG Chief Product Officer, and renowned IT Analyst Dion Hinchcliffe of Constellation Research. They will discuss strategies for protecting data sovereignty, avoiding vendor lock-in, and optimizing information architecture across cloud, on-prem and the edge.

Screaming in the Cloud
Raising Awareness on Cloud-Native Threats with Michael Clark

Screaming in the Cloud

Play Episode Listen Later Oct 13, 2022 38:44


About Michael

Michael is the Director of Threat Research at Sysdig, managing a team of experts tasked with discovering and defending against novel security threats. Michael has more than 20 years of industry experience in many different roles, including incident response, threat intelligence, offensive security research, and software development at companies like Rapid7, ThreatQuotient, and ManTech. Prior to joining Sysdig, Michael worked as a Gartner analyst, advising enterprise clients on security operations topics.

Links Referenced:

Sysdig: https://sysdig.com/
“2022 Sysdig Cloud-Native Threat Report”: https://sysdig.com/threatreport

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Something interesting about this particular promoted guest episode that is brought to us by our friends at Sysdig is that when they reached out to set this up, one of the first things out of their mouth was, “We don't want to sell anything,” which is novel. And I said, “Tell me more,” because I was also slightly skeptical. But based upon the conversations that I've had, and what I've seen, they were being honest. So, my guest today—surprising though it may be—is Mike Clark, Director of Threat Research at Sysdig. Mike, how are you doing?

Michael: I'm doing great. Thanks for having me. How are you doing?

Corey: Not dead yet. So, we take what we can get sometimes.
You folks have just come out with the “2022 Sysdig Cloud-Native Threat Report”, which on one hand feels like kind of a wordy title; on the other, it actually encompasses everything that it is, and you need every single word of that report. At a very high level, what is that thing?

Michael: Sure. So, this is our first threat report we've ever done, and it's kind of a rite of passage, I think, for any security company in the space; you have to have a threat report. And the cloud-native part: Sysdig specializes in cloud and containers, so we really wanted to focus in on those areas when we were making this threat report, which talks about, you know, some of the common threats and attacks we were seeing over the past year, and we just wanted to let people know what they are and how to protect themselves.

Corey: One thing that I've found about a variety of threat reports is that they tend to excel at living in the fear, uncertainty, and doubt space. And invariably, they paint a very dire picture of the internet about to come crashing down. And then at the end, there's always a, “But there is hope. Click here to set up a meeting with us.” It's basically a very thinly-veiled cover around what is fundamentally a fear, uncertainty, and doubt-driven marketing strategy, and then it tries to turn into a sales pitch.

This does absolutely none of that. So, I have to ask: did you set out to intentionally make something that added value in that way and contributed to the body of knowledge, or is it that, because it's your inaugural report, you didn't realize you were supposed to turn it into a terrible sales pitch?

Michael: We definitely went into that on purpose. There's a lot of ways to fix things, especially these days with all the different technologies, so we can easily talk about the solutions without going into specific products. And that's kind of the way we went about it. There's a lot of ways to fix each of the things we mentioned in the report.
And hopefully, the person reading it finds a good way to do it.

Corey: I'd like to unpack a fair bit of what's in the report. And let's be clear, I don't intend to read this report into a microphone; that is generally not a great way of conveying information, I have found. But I want to highlight a few things that leapt out to me that I find interesting. Before I do that, I'm curious to know: most people who write reports, especially ones of this quality, are not sitting there cogitating in their office by themselves, setting pen to paper and emerging four days later with the finished treatise. There's a team involved; there's more than one person that weighs in. Who was behind this?

Michael: Yeah, it was a pretty big team effort across several departments. But mostly, it came down to the Sysdig threat research team. It's about ten people right now. It's grown quite a bit through the past year. And, you know, it's made up of all sorts of backgrounds and expertise. So, we have machine learning people, data scientists, data engineers, former pen-testers and red team, a lot of blue team people, people from the NSA, people from other government agencies as well. And we're also a global research team, so we have people in Europe and North America working on all of this. So, we try to get perspectives on how these threats are viewed by multiple areas, not just Silicon Valley, and express fixes that appeal to them, too.

Corey: Your executive summary on this report starts off with a cloud adversary analysis of TeamTNT. And my initial throwaway joke on that was going to be, “Oh, when you start off talking about any entity that isn't you folks, they must have gotten the platinum sponsorship package.” But then I read the rest of that paragraph and I realized that, wait a minute, this is actually interesting and germane to something that I see an awful lot.
Specifically, they are—and please correct me if I'm wrong on any of this; you are definitionally the expert, whereas I am, obviously, the peanut gallery—but you talk about TeamTNT as being a threat actor that focuses on targeting the cloud via cryptojacking, which is a fanciful word for, “Okay, I've gotten access to your cloud environment; what am I going to do with it? Mine Bitcoin and other various cryptocurrencies.” Is that generally accurate, or have I missed the boat somewhere fierce on that? Which is entirely possible.

Michael: That's pretty accurate. We also think it's just one person, actually, and they are very prolific. So, it would be pretty hard for them to get that platinum sponsorship package because they are everywhere. And even though it's one person, they can do a lot of damage, especially with all the automation people can make now; one person can appear like a dozen.

Corey: There was an old t-shirt that basically encompassed everything that was wrong with the culture of the sysadmin world back in the naughts, that said, “Go away, or I will replace you with a very small shell script.” But, on some level, you can get a surprising amount of work done on computers, just with things like for loops and whatnot. What I found interesting was that you have put numbers and data behind something that I've always taken for granted and just implicitly assumed that everyone knew. This is a common failure mode that we all have. We all have blind spots where we assume the things that we spend our time on are easy, and the stuff that other people are good at and we're not good at, those are the hard things.

It has always been intuitively obvious to me as a cloud economist that when you wind up spending $10,000 in cloud resources to mine cryptocurrency, it does not generate $10,000 of cryptocurrency on the other end. In fact, the line I've been using for years is that it's totally economical to mine Bitcoin in the cloud; the only trick is you have to do it in someone else's account.
And you've taken that joke and turned it into data. Something that you found was that in one case, you were able to attribute $8,100 of cryptocurrency generated by stealing $430,000 of cloud resources to do it. And oh, my God, we now have a number and a ratio, and I can talk intelligently and sound four times smarter. So, ignoring anything else in this entire report, congratulations, you have successfully turned this into what is beginning to become a talking point of mine. Value unlocked. Good work. Tell me more.

Michael: Oh, thank you. Cryptomining is kind of like viruses in the old on-prem environment. Normally it's just cleaned up and never thought of again; the antivirus software does its thing, life goes on. And I think cryptominers are kind of treated like that. Oh, there's a miner; let's rebuild the instance or bring a new container online or something like that. So, it's often considered a nuisance rather than a serious threat. It also doesn't have the, you know, the dangerous ransomware connotation to it. So, a lot of people generally just think of it as a nuisance, as I said. So, what we wanted to show was, it's not really a nuisance, and it can cost you a lot of money if you don't take it seriously. And what we found was, for every dollar that they make, it costs you $53. And, you know, as you mentioned, it really puts into view what it could cost you by not taking it seriously. And that number can scale very quickly, just like your cloud environment can scale very quickly.

Corey: They say the cloud scales infinitely, and that is not true. First, tried it; didn't work. Secondly, it scales, but there is an inherent limit, which is your budget, on some level. I promise they can add hard drives to S3 faster than you can stuff data into it. I've checked.

One thing that I've seen recently was—speaking of S3—I had someone reach out in what I will charitably refer to as a blind panic because they were using AWS to do something.
Their bill was largely $4 a month in S3 charges. Very reasonable. That carries us surprisingly far. And then they had a credential leak, and they had a threat actor spin up Lambda functions in all of the regions, and it went from $4 a month to $60,000 a day, and it wasn't caught for six days.

And then AWS, as they tend to do, very straight-faced, says, “Yeah, we would like our $360,000, please.” At which point, people start panicking, because a lot of the people who experience this are not themselves sophisticated customers; they're students, they're learning how this stuff works. And when I'm paying $4 a month for something, it is logical and intuitive for me to think that, well, if I wind up being sloppy with my credentials, they could run that bill up to possibly $25 a month, and that wouldn't be great, so I should keep an eye on it. Yeah, you dropped a whole bunch of zeros off the end of that. Here you go. And as AWS spins up more and more regions, and as they spin up more and more services, the ability to exploit this becomes greater and greater. This problem is not getting better; it is only getting worse, by a lot.

Michael: Oh, yeah, absolutely. And I feel really bad for those students who do have that happen to them. I've heard on occasion that the cloud providers will forgive some debts from breaches, but there's no guarantee of that happening. And you know, the more that breaches happen, the less likely they are to forgive it, because they still have to pay for it; someone's paying for it in the end. And if you don't improve and fix your environment and it keeps happening, one day they're just going to stick you with the bill.
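The dollar figures quoted in this exchange are easy to sanity-check. The sketch below (plain Python, with the numbers taken straight from the conversation) verifies both the roughly 53:1 cost-to-revenue ratio from the report and the six-day Lambda bill from the anecdote.

```python
# Sanity-check the economics quoted in the conversation.

# From the threat report discussion: ~$8,100 of cryptocurrency mined
# at a cost of ~$430,000 in stolen cloud resources.
mined_usd = 8_100
cloud_cost_usd = 430_000
cost_per_dollar_mined = cloud_cost_usd / mined_usd
print(f"Victim cost per $1 mined: ${cost_per_dollar_mined:.0f}")  # ≈ $53

# From the leaked-credential anecdote: $60,000/day, undetected for six days.
daily_burn_usd = 60_000
days_undetected = 6
total_bill_usd = daily_burn_usd * days_undetected
print(f"Total surprise bill: ${total_bill_usd:,}")  # $360,000
```

Both numbers line up: 430,000 / 8,100 rounds to 53, and six days at $60,000/day is exactly the $360,000 AWS asked for.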
I don't have intimate visibility into it and of course, they have a threat model themselves of, okay, I'm going to spin up a bunch of stuff, mine cryptocurrency for a month—cry and scream and pretend I got hacked because fraud is very much a thing, there is a financial incentive attached to this—and they mostly seem to get it right. But the danger that I see for the cloud provider is not that they're going to stop being nice and giving money away, but assume you're a student who just winds up getting more than your entire college tuition as a surprise bill for this month from a cloud provider. Even assuming at the end of that everything gets wiped and you don't owe anything. I don't know about you, but I've never used that cloud provider again because I've just gotten a firsthand lesson in exactly what those risks are, it's bad for the brand.Michael: Yeah, it really does scare people off of that. Now, some cloud providers try to offer more proactive protections against this, try to shut down instances really quick. And you know, you can take advantage of limits and other things, but they don't make that really easy to do. And setting those up is critical for everybody.Corey: The one cloud provider that I've seen get this right, of all things, has been Oracle Cloud, where they have an always free tier. Until you affirmatively upgrade your account to chargeable, they will not charge you a penny. And I have experimented with this extensively, and they're right, they will not charge you a penny. They do have warnings plastered on the site, as they should, that until you upgrade your account, do understand that if you exceed a threshold, we will stop serving traffic, we will stop servicing your workload. And yeah, for a student learner, that's absolutely what I want. For a big enterprise gearing up for a giant Superbowl commercial or whatnot, it's, “Yeah, don't care what it costs, just make sure you continue serving traffic. 
We don't get a redo on this.” And without understanding exactly which profile a given customer falls into, whenever the cloud provider tries to make an assumption and a default in either direction, they're wrong.

Michael: Yeah, I'm surprised that it's Oracle Cloud, of all clouds. It's good to hear that they actually have a free tier. Now, we've seen attackers use free tiers quite a bit. It all depends on how people set it up. And it's actually a little outside the threat report, but in CI/CD pipelines and DevOps, anywhere there's free compute, attackers will try to get their miners in, because it's all about scale and not quality.

Corey: Well, that is something I'd be curious to know about. Because you talk about focusing specifically on cloud and containers as a company, which puts you in a position to be authoritative on this. That Lambda story that I mentioned, about the surprise $60,000 a day in cryptomining: what struck me about that and caught me by surprise was not what I think would catch most people who don't swim in this world by surprise, the “You can spend that much?” In my case, what I'm wondering about is, well, hang on a minute. I did an article a year or two ago, “17 Ways to Run Containers On AWS,” and listed 17 AWS services that you could use to run containers. And a few months later, I wrote another article called “17 More Ways to Run Containers On AWS.” And people thought I was belaboring the point and making a silly joke, and on some level, of course I was. But I was also highlighting very clearly that every one of those containers running in a service could be mining cryptocurrency.
So, if you get access to someone else's AWS account, when you see those breaches happen, are people using just the one or two services they have things ready to go for, or are they proliferating as many containers as they can through every service that borderline supports it?

Michael: From what we've seen, they usually just go after compute, like EC2, for example, as it's the most well understood, it gets the job done, it's very easy to use, and then they get their miner set up. So, that's if they happen to compromise your credentials; the other method that cryptominers or cryptojackers use is exploitation. Then they'll try to spread through all the EC2 they can and spin up as much as they can. But the other interesting thing is, if they get into your system, maybe via an exploit or some other misconfiguration, they'll look for the instance metadata service as soon as they get in, to try to get your IAM credentials and see if they can leverage them to also spin up things through the API. So, they'll spin up on the thing they compromised and then actively look for other ways to get even more.

Corey: Restricting the permissions that anything has in your cloud environment is important. I mean, from my perspective, if I were to have my account breached, yes, they're going to cost me a giant pile of money, but I know the magic incantations to say to AWS and, worst case, everyone has a pet or something they don't want to see unfortunate things happen to, so they'll waive my fee; that's fine. The bigger concern I've got—and in seriousness, I think most companies do—is the data. It is the access to things in the account. In my case, I have a number of my clients' AWS bills, given that that is what they pay me to work on.

And I'm not trying to undersell the value of security here, but on the plus side, that helps me sleep at night; that's only money. There are datasets that are far more damaging and valuable than that.
The worst sleep I ever had in my career came during a very brief stint I had about 12 years ago, when I was the director of TechOps at Grindr, the gay dating site. In that scenario, if that data had been breached, people could very well have died. They live in countries where that winds up not being something that is allowed, or their family winds up shunning them, and whatnot. And that's the stuff that keeps me up at night. Compared to that, it's, “Well, you cost us some money and embarrassed a company.” It doesn't really rank on the same scale to me.

Michael: Yeah. I guess the interesting part is, data requires a lot of work to do something with, for a lot of attackers. Like, they may be opportunistic and come across interesting data, but they need to do something with it. There's a lot more risk once they start trying to sell the data, or, like you said, if it turns into something very unfortunate, then there's a lot more risk from law enforcement coming after them. Whereas with cryptomining, there's very little risk of being chased down by the authorities. Like you said, people rebuild things and ask AWS for credit, or whoever, and move on with their lives. So, that's one reason I think cryptomining is so popular among threat actors right now. It's just the low risk compared to other ways of doing things.
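The metadata-service pivot Michael described a little earlier (landing on an instance, then harvesting IAM credentials to call the API) revolves around a small set of well-known EC2 instance metadata (IMDS) endpoints. The sketch below is illustrative only: the base address and paths are AWS's documented ones, but the role name "app-role" is a hypothetical example, and nothing here talks to a real instance.

```python
# Sketch of the EC2 instance metadata (IMDS) paths an attacker probes after
# landing on a compromised instance. The base address and paths are the
# documented AWS ones; the role name "app-role" is a hypothetical example.
IMDS = "http://169.254.169.254"

def credentials_url(role_name: str) -> str:
    # Listing this path with no role name enumerates the attached roles;
    # appending a role name returns temporary credentials (AccessKeyId,
    # SecretAccessKey, Token) as JSON -- exactly what the attacker wants.
    return f"{IMDS}/latest/meta-data/iam/security-credentials/{role_name}"

# IMDSv2 raises the bar for the SSRF variant of this pivot: every metadata
# read must first obtain a session token via a PUT request with this header.
TOKEN_URL = f"{IMDS}/latest/api/token"
TOKEN_TTL_HEADER = {"X-aws-ec2-metadata-token-ttl-seconds": "21600"}

print(credentials_url("app-role"))
```

This is one reason "restricting the permissions that anything has" matters so much: whatever role the compromised instance carries is exactly what the attacker inherits through these endpoints.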
I see that in the fear-driven marketing from companies that have a thing that they say will fix that, but in practice, when you hear about ransomware attacks, it's much more frequently that it is their corporate network, it is on-premises environments, it is servers, perhaps running in AWS, but they're being treated like servers would be on-prem, and that is what winds up getting encrypted. I just don't see the attacks that everyone is warning about. But again, I am not primarily in the security space. What do you see in that area?Michael: You're absolutely right. Like we don't see that at all, either. It's certainly theoretically possible and it may have happened, but there just doesn't seem to be that appetite to do that. Now, the reasoning? I'm not a hundred percent sure why, but I think it's easier to make money with cryptomining, even with the crypto markets the way they are. It's essentially free money, no expenses on your part.So, maybe they're not looking because again, that requires more effort to understand especially if it's not targeted—what data is important. And then it's not exactly the same method to do the attack. There's versioning, there's all this other hoops you have to jump through to do an extortion attack with buckets and things like that.Corey: Oh, it's high risk and feels dirty, too. Whereas if you're just, I guess, on some level, psychologically, if you're just going to spin up a bunch of coin mining somewhere and then some company finds it and turns it off, whatever. You're not, as in some cases, shaking down a children's hospital. Like that's one of those great, I can't imagine how you deal with that as a human being, but I guess it takes all types. This doesn't get us to sort of the second tentpole of the report that you've put together, specifically around the idea of supply chain attacks against containers. 
There have been such a tremendous number of think pieces—thought pieces, whatever they're called these days—talking about a software bill of materials and supply chain threats. Break it down for me. What are you seeing?

Michael: Sure. So, containers are very fun because, you know, you can define as code what gets put on them, and they've become so popular that sharing sites have popped up, like Docker Hub and other public registries, where you can easily share your container. It has everything built and set up, so other people can use it. But, you know, attackers have taken notice of this, too. Where anything's easy, an attacker will be. So, we've seen a lot of malicious containers be uploaded to these systems.

A lot of times, they're just hoping for a developer or user to come along and use them. Docker Hub does have the official designation, so while they can try to pretend to be like Ubuntu, they won't be the official one. But instead, they may try to promote theirs with links and things like that to entice people to use theirs instead. And then when they do, it's already pre-loaded with a miner or, you know, other malware. So, we see quite a bit of these containers in Docker Hub. And they're disguised as many different popular packages.

They don't stand up to too much scrutiny, but enough that a casual look, even at the Dockerfile, may not catch it. So yeah, embedded credentials are another big part of what we see in these containers. That could be an organizational issue, like just a leaked credential, but you can put malicious credentials into Dockerfiles, too. Like, say, an SSH private key, so that if they start this up, the attacker can now just SSH in. Or other API keys or AWS commands you can put in there. You can put really anything in there, and wherever you load it, it's going to run.
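The embedded-credentials pattern Michael describes is easy to check for mechanically. A minimal sketch, assuming a Dockerfile's text is already in hand; the regex patterns are illustrative and nowhere near a real scanner's rule set:

```python
import re

# Illustrative patterns only; real secret scanners use far larger rule
# sets plus entropy checks.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |OPENSSH |EC )?PRIVATE KEY-----"),  # private keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                                   # AWS access key IDs
    re.compile(r"(?i)^ENV\s+\w*(?:PASSWORD|SECRET|TOKEN)\w*"),         # baked-in env secrets
]

def scan_dockerfile(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like embedded credentials."""
    return [
        (lineno, line.strip())
        for lineno, line in enumerate(text.splitlines(), start=1)
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]
```

For example, `scan_dockerfile("FROM ubuntu:22.04\nENV DB_PASSWORD=hunter2")` flags line 2 and leaves the harmless line 1 alone.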
So, you have to be really careful.

[midroll 00:22:15]

Corey: Years ago, I gave a talk on the conference circuit called "Terrible Ideas in Git" that purported to teach people how to use Git through hilarious examples of misadventure. And the demos that I did for that were, well, this was fun and great, but it was really annoying resetting them every time I gave the talk, so I stuffed them all into a Docker image and then pushed that up to Docker Hub. Great. It was awesome. I didn't publicize it and talk about it, but I also just left it as an open repository there because what are you going to do? It's just a few directories in the root that have very specific contrived scenarios with Git, set up and ready to go.

There's nothing sensitive there. And the thing is called "Terrible Ideas." And I just kept watching the download numbers continue to increment week over week, and I took it down because I don't know what people are going to do with that. You see something on there and it says "Terrible Ideas." For all I know, some bank is like, "And that's what we're running in production now." So, who knows?

But the idea—not that there was necessarily anything wrong with that, but the fact that there's this theoretical possibility someone could use that, or put the wrong string in if I give an example, and then wind up running something that is fairly compromisable in a serious environment, was just something I didn't want to be a part of. And you see that again, and again, and again. This idea of what Docker unlocks is amazing, but there's such a tremendous risk to it. I mean, I never understood, 15 years ago, how you were going to go and spin up a Linux server on top of EC2 and just grab a community AMI and use that. I used to take provisioning hardware very seriously to make sure that I wasn't inadvertently using something compromised.
Here, it's like, "Oh, just grab whatever seems plausible from the catalog and go ahead and run that." But it feels like there's so much of that, turtles all the way down.

Michael: Yeah. And I mean, even if you've looked at the Dockerfile, with all the dependencies of the things you download, it really gets to be difficult. So, to protect yourself, you can do the static scanning of it, looking for bad strings in it or bad version numbers for vulnerabilities, but it really comes down to runtime analysis. When you start a Docker container, you really need the tools to have visibility into what's going on in the container. That's the only real way to know if it's safe or not in the end, because you can't eyeball it and really see all that, and there could be binaries in an assortment of layers, too, that'll get run, and things like that.

Corey: Hell is other people's workflows, as I'm sure everyone's experienced themselves, but one of mine has always been that if I'm doing something as a proof of concept to build it up on a developer box—and I do keep my developer environments for these sorts of things isolated—I will absolutely go and grab something plausible-looking from Docker Hub as I go down that process. But when it comes time to wind up putting it into a production environment, okay, now we're going to build our own resources. Yeah, I'm sure the Postgres container or whatever it is that you're using is probably fine, but just so I can sleep at night, I'm going to take the public Dockerfile they have, and I'm going to go ahead and build that myself. And I feel better about doing that rather than trusting some rando user out there and whatever it is that they've put up there.
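One mechanical complement to Corey's rebuild-it-yourself habit is pinning base images by digest rather than by a mutable tag. A toy check for that, assuming the Dockerfile text is available; the helper name is invented, and real policy linters handle edge cases this sketch ignores:

```python
import re

# FROM lines pinned by digest (FROM postgres@sha256:<64 hex chars>) cannot
# silently change upstream; FROM lines with a mutable tag (FROM postgres:15)
# can. This sketch ignores --platform flags and multi-stage aliases.
FROM_LINE = re.compile(r"^FROM\s+(\S+)", re.IGNORECASE | re.MULTILINE)

def unpinned_bases(dockerfile_text: str) -> list[str]:
    """Return base images referenced by mutable tag (or no tag) instead of digest."""
    return [
        image
        for image in FROM_LINE.findall(dockerfile_text)
        if "@sha256:" not in image and image.lower() != "scratch"
    ]
```

`unpinned_bases("FROM postgres:15")` returns `["postgres:15"]`, while a digest-pinned FROM line returns nothing to complain about.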
Which, on the one hand, feels like a somewhat responsible thing to do, but on the other, it feels like I'm only fooling myself, because some rando putting things up there is kind of what the entire open-source world is, to a point.

Michael: Yeah, that's very true. At some point, you have to trust some product or some foundation to have done the right thing. But what's also true about containers is that they're not just attacked; they're also used to conduct attacks quite a bit. And we saw a lot of that with the Russian-Ukrainian conflict this year. Containers were released that were preloaded with denial-of-service software that automatically collected target lists from, I think, GitHub, where they were hosted.

So, all a user had to do to get involved was really just get the container and run it. That's it. And now they're participating in this cyberwar kind of activity. And they could also use this to put on a botnet, or if they compromise an organization, they could spin up all these instances with that Docker container on it. And now that company is implicated in that cyber war. So, they can also be used for evil.

Corey: This gets to the third point of your report: "Geopolitical conflict influences attacker behaviors." Something that happened in the early days of the Russian invasion was that a bunch of open-source maintainers would wind up either disabling what their software did or subverting it into something actively harmful if it detected it was running in the Russian language and/or in a Russian timezone. And I understand the desire to do that, truly I do. I am no Russian apologist.
Let's be clear.

But the counterpoint to that as well is that, well, to make a reference I made earlier, Russia has children's hospitals, too, and you don't necessarily know the impact of fallout like that, not to mention that you have completely made it untenable to use anything you're doing for a regulated industry or anyone else who gets caught in that and discovers it is now in their production environment. It really sets a lot of stuff back. I've never been a believer in that particular form of vigilantism, for lack of a better term. I'm not sure that I have a better answer, let's be clear. I just always knew that, on some level, the risk of opening that Pandora's box was significant.

Michael: Yeah. Even if you're doing it for the right reasons, it still erodes trust.

Corey: Yeah.

Michael: It especially erodes trust throughout open source. Not just the one project, because you'll start thinking, "Oh, how many other projects might do this?" And—

Corey: Wait, maybe those dirty hippies did something in our—like, I don't know, they've let those people anywhere near this operating system Linux thing that we use? I don't think they would have done that. Red Hat seems trustworthy and reliable. And it's, yo, [laugh] someone needs to crack open a history book, on some level. It's a sticky situation.

I do want to call out something here, because it might be easy to get the wrong idea from the summary that we just gave. Very few things wind up raising my hackles quite like companies using tragedy to shill whatever it is they're trying to sell. And I'll admit, when I first got this report and I saw, "Oh, you're talking about geopolitical conflict, great," I'm not super proud of this, but I was prepared to read you the riot act, more or less, when I inevitably got to that. And I never did. Nothing in this entire report even hints in that direction.

Michael: Was it you never got to it, or, uh—

Corey: Oh, no. I've read the whole thing, let's be clear.
You're not using that to sell things in the way that I was afraid you were. And simultaneously, I want to say—I want to point that out because that is laudable. At the same time, I am deeply and bitterly resentful that that even is laudable. That should be the common state.

Capitalizing on tragedy is just not something that ever leaves any customer feeling good about one of their vendors, and you've stayed away from that. I just want to call that out as doing the right thing.

Michael: Thank you. Yeah, it was actually a big topic, how we should broach this. But we have a good data point from right after it started: there was a huge spike in denial-of-service installs. We have a bunch of data collection technology, honeypots and other things, and we saw, the day after, cryptomining starting to go down and denial-of-service installs starting to go up. So, it was just interesting how that community changed their behaviors, at least for a time, to participate in whatever you want to call it, the hacktivism.

Over time, though, it has kind of gone back to the norm, where maybe they've gotten bored or something, or run out of funds, but they're starting cryptomining again. But these events can cause big changes in the hacktivism community. And like I mentioned, it's very easy to get involved. We saw over 150,000 downloads of those pre-canned denial-of-service containers, so it's definitely something that a lot of people participated in.

Corey: It's a truism that war drives innovation and different ways of thinking about things. It's a driver of progress, which says something deeply troubling about us. But it's also clear that it serves as a driver for change, even in this space, where we start to see different applications of things, and we see different threat patterns start to emerge.
And one thing I do want to call out here that I think often gets overlooked in the larger ecosystem and industry as a whole is, "Well, no one's going to bother to hack my nonsense. I don't have anything interesting for them to look at."

And, on some level, an awful lot of people running tools like this aren't sophisticated enough themselves to determine that. And combined with your first point in the report as well: well, you have an AWS account, don't you? Congratulations. You suddenly have enormous piles of money—from their perspective—sitting there relatively unguarded. Yay. Security has now become everyone's problem, once again.

Michael: Right. And it's just easier now. I mean, it was always everyone's problem, but now it's even easier for attackers to leverage almost everybody. Before, you had to get something on your PC. You had to download something. Now, a search of GitHub can find API keys, and then that's it, you know? Things like that will make it game over: your account gets compromised and big bills get run up. And yeah, it's very easy for all that to happen.

Corey: Ugh. I do want to ask at some point, and I know you asked me not to do it, but I'm going to do it anyway because I have this sneaking suspicion that, given that you've spent this much time studying this problem space, you probably, as a company, have some answers around how to address the pain that lives in these problems. What exactly, at a high level, is it that Sysdig does? How would you describe that in an elevator, without sabotaging the elevator for 45 minutes to explain it in depth to someone?

Michael: So, I would describe it as threat detection and response for cloud, containers, and workloads in general. And all the other kinds of acronyms for cloud, like CSPM and CIEM.

Corey: They're inventing new and exciting acronyms all the time.
And I honestly, at this point, want to have almost an acronym challenge of, "Is this a cybersecurity acronym or is it an audio cable? Which is it?" Because it winds up going down that path super easily. I was at RSA walking the expo floor, and I counted, I think, 15 different companies pitching XDR, without a single one bothering to explain what that meant. Okay, I guess it's just the thing we've all decided we need. It feels like security people selling to security people, on some level.

Michael: I was a Gartner analyst.

Corey: Yeah. Oh… that would do it then. Terrific. So, it's partially your fault, then?

Michael: No. I was going to say, I don't know what it means either.

Corey: Yeah.

Michael: So, I have no idea [laugh]. I couldn't tell you.

Corey: I'm only half kidding when I say, in many cases, from the vendor perspective, it seems like what it means is whatever it is they're trying to shoehorn the thing that they built into. It's kind of like observability. Observability means what we've been doing for ten years already, just repurposed to catch the next hype wave.

Michael: Yeah. The only thing I really understand is detection and response. It's very clear: detect things and respond to things. So, that's a lot of what we do.

Corey: It's got to beat the default detection mechanism for an awful lot of companies, who in years past have found out that they have gotten breached from a headline in The New York Times. It's always fun when that happens: "Wait, what? What? That's—what? How did we not know this was coming?"

When a third party tells you that you've been breached, it's never as good—not that it's a positive experience anyway—as discovering it yourself internally.
And this stuff is complicated. The entire space is fraught, and it always feels like no matter how far you go, you could always go further, but followed to its inevitable conclusion, you'll burn through the entire company budget purely on security without advancing the other things that company does.

Michael: Yeah.

Corey: It's a balance.

Michael: It's tough, because there's a lot to know in the security discipline, so you have to balance how much you're spending and how much your people actually know and can use the things you've spent money on.

Corey: I really want to thank you for taking the time to go through the findings of the report with me. I had skimmed it before we spoke, but talking to you about this in significantly more depth, every time I start to cite something from it, I find myself coming away more impressed. This is now actively going on my calendar to see what the 2023 version looks like. Congratulations, you've gotten me hooked. If people want to download a copy of the report for themselves, where should they go to do that?

Michael: They can just go to sysdig.com/threatreport. There's no email blocking or gating, so you just download it.

Corey: I'm sure someone in your marketing team is twitching at that. Like, why can't we use this as a lead magnet? But ugh. I look at this, and my default is, oh, wow, you definitely understand your target market. Because we all hate that stuff. Every mandatory field you put on those things makes it less likely I'm going to download something. Here, you click it and have a copy. That's awesome.

Michael: Yep. And thank you for having me. It's been a lot of fun.

Corey: No, thank you for coming. Thanks for taking so much time to go through this, and thanks for keeping to the high road, which I did not expect to discover, because no one ever seems to. Thanks again for your time. I really appreciate it.

Michael: Thanks. Have a great day.

Corey: Mike Clark, Director of Threat Research at Sysdig.
I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment pointing out that I didn't disclose the biggest security risk of all to your AWS bill: an AWS Solutions Architect who is working on commission.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

Ship It! DevOps, Infra, Cloud Native
Vorsprung durch Technik

Ship It! DevOps, Infra, Cloud Native

Play Episode Listen Later Oct 12, 2022 72:30


I don't think that you can imagine just how excited Gerhard was to find out that Audi, his favourite car company, has a Kubernetes competence centre. We have Sebastian Kister joining us today to tell us why people, followed by tech, make the process work. The right thing to focus on is the genuine smiles that people give in response to something we do or say. That is an important SLI & SLO for reducing friction between silos. How does this impact the flow of artefacts into production systems that design & build cars?

Changelog Master Feed
Vorsprung durch Technik (Ship It! #74)

Changelog Master Feed

Play Episode Listen Later Oct 12, 2022 72:30


I don't think that you can imagine just how excited Gerhard was to find out that Audi, his favourite car company, has a Kubernetes competence centre. We have Sebastian Kister joining us today to tell us why people, followed by tech, make the process work. The right thing to focus on is the genuine smiles that people give in response to something we do or say. That is an important SLI & SLO for reducing friction between silos. How does this impact the flow of artefacts into production systems that design & build cars?

The Business of Open Source
The Relationship Between Open Source and Cloud Native with Randy Abernethy

The Business of Open Source

Play Episode Listen Later Oct 12, 2022 37:59


Randy Abernethy, Managing Director at RX-M, joins me for a chat about the relationship between open source and cloud native. In this episode, Randy and I discuss how the clients he works with at RX-M are looking to cloud native as part of their forward-thinking strategies. Tune into this episode to learn how Randy sees the C-Suite viewing open source, how he sees clients evaluate risk in open-source projects, and his views on the relationship between open source and cloud native. Highlights: Randy introduces himself and his company, RX-M (00:46) Why do companies come to RX-M for help when evaluating and implementing cloud native? (03:44) How do RX-M clients approach open source? (09:35) What are the C-Suite's views on open source? (16:19) How Randy's clients evaluate risk in open-source projects (23:34) The role CNCF plays in how companies evaluate and implement cloud-native solutions (30:31) Randy's view on the relationship between open source and cloud native (32:39) Links: LinkedIn: https://www.linkedin.com/in/randyabernethy/ Twitter: @randyabernethy Company: www.rx-m.com

Ship It! DevOps, Infra, Cloud Native
A modern bank infrastructure

Ship It! DevOps, Infra, Cloud Native

Play Episode Listen Later Oct 6, 2022 80:00 Transcription Available


Matias Pan is a Staff Software Engineer at Lemon Cash, a crypto startup based in Argentina. Lemon's infrastructure runs digital wallets & physical cards, which technically makes them a bank. How do Matias & his team think about enabling developers to get code from their workstations into production? Remember, we are talking about a bank - a bad deploy is a big deal. And when a bad database migration goes out, what happens then?
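The bad-migration question in the blurb has a classic partial answer: apply each migration atomically, so a mid-migration failure rolls everything back. A toy sketch with SQLite, where the schema and statements are invented for the example; production databases differ, and some (MySQL, for instance) cannot roll back DDL at all:

```python
import sqlite3

def apply_migration(conn: sqlite3.Connection, statements: list[str]) -> bool:
    """Apply all statements atomically; roll everything back if any fails."""
    conn.execute("BEGIN")  # explicit transaction: SQLite DDL is transactional
    try:
        for stmt in statements:
            conn.execute(stmt)
        conn.execute("COMMIT")
        return True
    except sqlite3.Error:
        conn.execute("ROLLBACK")
        return False

# isolation_level=None hands transaction control to the BEGIN/COMMIT above.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE wallets (id INTEGER PRIMARY KEY, balance INTEGER)")

applied = apply_migration(conn, [
    "ALTER TABLE wallets ADD COLUMN currency TEXT",
    "ALTER TABLE no_such_table ADD COLUMN oops TEXT",  # fails midway
])

# Both ALTERs are rolled back together, so the schema is unchanged and the
# old application code keeps working against the old shape.
columns = [row[1] for row in conn.execute("PRAGMA table_info(wallets)")]
```

With the failing statement removed, the same call returns True and the new column lands; either way the schema is never left half-migrated.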

Changelog Master Feed
A modern bank infrastructure (Ship It! #73)

Changelog Master Feed

Play Episode Listen Later Oct 6, 2022 80:00


Matias Pan is a Staff Software Engineer at Lemon Cash, a crypto startup based in Argentina. Lemon's infrastructure runs digital wallets & physical cards, which technically makes them a bank. How do Matias & his team think about enabling developers to get code from their workstations into production? Remember, we are talking about a bank - a bad deploy is a big deal. And when a bad database migration goes out, what happens then?

Kubernetes Podcast from Google
Fresh Pivot, with Dan Stein

Kubernetes Podcast from Google

Play Episode Listen Later Oct 5, 2022 49:28 Very Popular


Dan Stein is an engineering manager at General Bioinformatics. Dan Stein is also DJ Fresh, a multi-million selling artist with two UK number one records. Learn about the surprising overlap between these two careers. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod and @craigbox Chatter of the week Trevor Noah stepping down as host of Daily Show Follow @craigbox to learn what’s next News of the week Google Cloud adds GPU support to Autopilot Pricing CVE-2021-36782 in Rancher State of DevOps Report for 2022 Congratulations to the 27 Summer LFX Program CNCF interns Reviewing the 2019 Kubernetes security audit Links from the interview DJ Fresh Atari 800 and Atari ST Pong Atari BASIC Commodore Amiga OctaMED Fatboy Slim and the Atari ST Dogs on Acid music forum Taylor Hawkins Tribute Concerts Abolishing the high tax rate in the UK, or not Breakbeat Kaos Hold Your Colour by Pendulum Kryptonite by DJ Fresh Gold Dust Subsequent hits: Louder Hot Right Now Kyma (sound design language) and Max/MSP We Got Coders General Bioinformatics NGS gene sequencing Ensembl Hasura GraphQL Playground NCBI - National Center for Biotechnology Information Max Martin How Music Works by John Powell Learning: Treehouse Udemy 3Blue1Brown Codeacademy DJ Fresh’s new single, Higher DJ Fresh on Facebook Dan Stein on Twitter

The Business of Open Source
Finding Unexpected Product-Market Fit with Ian Tien

The Business of Open Source

Play Episode Listen Later Oct 5, 2022 31:06


Ian Tien, CEO and Co-Founder of Mattermost, joins me to talk about how Mattermost went from being a video game company to an open-source messaging platform that provides collaboration for developers and other mission-critical teams. In this episode, Ian and I discuss the reality of product-led growth in open-source companies, Ian's perspective on open source moving towards platform-based solutions, and the advice he would give to other open-source founders.  Highlights: Ian introduces himself and the Mattermost open-source project (00:45) The use cases Ian sees for Mattermost (05:17) Ian takes us through the origin story of Mattermost and how it went from being a video game company to an open-source messaging solution (08:59) The role open source played in the success of Mattermost (15:07) Ian's perspective on open source moving towards platform-based solutions (20:22) Does Ian think the product-led growth model of "If you build it, they will come" is realistic, and how can that mentality lead to success? (27:34) The advice Ian would give other open-source founders (28:33) Links: LinkedIn: https://www.linkedin.com/in/iantien/ Company: https://www.mattermost.com

Buluta Doğru

Lots of love from episode 32. In this episode, our guest was Alper Dedeoğlu, a senior software developer at SAP, who kindly accepted our invitation. With Alper, we had a great chat about the SAP ecosystem and the points where it intersects with cloud computing.

Application Security Weekly (Video)
Critical Requirements for Cloud Native Application Security - Dean Agron - ASW #214

Application Security Weekly (Video)

Play Episode Listen Later Oct 4, 2022 39:36


The core focus of this podcast is to provide the listeners with food for thought on what is required for releasing secure cloud-native applications: - Continuous, multi-layer, and multi-service analysis, focusing not only on the code but also on the runtime and the infrastructure. - Focus on the vulnerabilities that matter: the critical, exploitable ones. Use context. - Choose the right remediation forms; they may come in different shapes.   Segment Resources: Oxeye Website for videos and content - www.oxeye.io   Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw214
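The "focus on the vulnerabilities that matter" idea can be sketched as a ranking that combines severity with runtime context instead of severity alone. The field names and weights below are invented for illustration and are not any vendor's actual model:

```python
# Rank findings by CVSS plus context: a medium-severity bug with a public
# exploit that is actually loaded at runtime matters more than a critical
# one sitting in a library the application never touches.
findings = [
    {"cve": "CVE-A", "cvss": 9.8, "exploit_available": False, "loaded_at_runtime": False},
    {"cve": "CVE-B", "cvss": 7.5, "exploit_available": True,  "loaded_at_runtime": True},
]

def priority(finding: dict) -> float:
    """Boost findings that are actually exploitable and actually loaded."""
    score = finding["cvss"]
    if finding["exploit_available"]:
        score += 3.0
    if finding["loaded_at_runtime"]:
        score += 3.0
    return score

ranked = sorted(findings, key=priority, reverse=True)
# CVE-B (7.5, but exploitable and loaded) now outranks the inert CVE-A (9.8).
```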

Red Hat X Podcast Series
Build Your Cloud – Nutanix Cloud Platform is the ideal infrastructure for Modernization

Red Hat X Podcast Series

Play Episode Listen Later Oct 4, 2022 33:11


Infrastructure is one of the four pillars of IT Modernization, along with the Development Process, Application Architecture, and Deployment Methodology. Leveraging the promise of Cloud Native means using the best of each of these. However, most organizations have a spectrum of applications in their portfolio to manage, from traditional VM-based apps to fully Cloud Native, microservices-based apps deployed in containers. The challenge today's CIO has is to ensure the Dev teams have access to the modern tooling, processes, and infrastructure they need, while simultaneously providing a modern platform for the traditional parts of their application portfolio. All while ensuring security and compliance, reducing complexity, and being cost effective.

Paul's Security Weekly TV
Critical Requirements for Cloud Native Application Security - Dean Agron - ASW #214

Paul's Security Weekly TV

Play Episode Listen Later Oct 4, 2022 39:36


The core focus of this podcast is to provide the listeners with food for thought on what is required for releasing secure cloud-native applications: - Continuous, multi-layer, and multi-service analysis, focusing not only on the code but also on the runtime and the infrastructure. - Focus on the vulnerabilities that matter: the critical, exploitable ones. Use context. - Choose the right remediation forms; they may come in different shapes.   Segment Resources: Oxeye Website for videos and content - www.oxeye.io   Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw214

Kubernetes Bytes
Kubernetes Security 101 - 4C's of Cloud Native Security

Kubernetes Bytes

Play Episode Listen Later Oct 1, 2022 59:44


In this episode of Kubernetes Bytes, Ryan and Bhavin talk about upcoming conferences and dig into the world of Kubernetes security. Bhavin and Ryan dig into the various aspects of the 4C's of Cloud Native Security (Code, Container, Cluster and Cloud), going a foot deep on everything from encryption at rest, network policies, Linux seccomp, and software SBOMs to ransomware. This episode had so many good resources in the show notes, we decided to create a community resource for everyone. Please see the below public google doc with all show notes, links and more. Feel free to comment and engage! Cloud Native Security 101 Resource Community Document
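Of the topics listed, the SBOM one reduces nicely to code: once an image ships with a bill of materials, matching it against advisories is a lookup. A toy illustration, where the SBOM shape is a simplified CycloneDX-style dict and the advisory data is invented for the example:

```python
# Simplified: real SBOMs (CycloneDX, SPDX) carry package URLs, hashes, and
# license data, and real matching handles version ranges, not exact strings.
sbom = {
    "components": [
        {"name": "openssl", "version": "1.1.1k"},
        {"name": "zlib", "version": "1.2.13"},
    ]
}

advisories = {"openssl": {"1.1.1k", "1.1.1l"}}  # name -> affected versions

def vulnerable_components(sbom: dict, advisories: dict) -> list[str]:
    """Return 'name version' for every component with a matching advisory."""
    return [
        f'{c["name"]} {c["version"]}'
        for c in sbom["components"]
        if c["version"] in advisories.get(c["name"], set())
    ]
```

Here `vulnerable_components(sbom, advisories)` flags only the openssl component; the zlib entry passes through clean.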

Ship It! DevOps, Infra, Cloud Native
Klustered & Rawkode Academy

Ship It! DevOps, Infra, Cloud Native

Play Episode Listen Later Sep 29, 2022 67:12


One of our listeners, Andrew Welker, suggested that we talk about Klustered, so a few hours before David Flanagan was about to do his workshop at Container Days, we recorded this episode. We talked about all the weird and wonderful Kubernetes debugging sessions on Klustered, a YouTube playlist with 43 videos and counting. We then talked about Rawkode Academy, and we finished with conferences. Good thing we did, because David almost forgot about KubeHuddle, the conference that he is co-organising next week. Gerhard is looking forward to talking at it! No, seriously, check it out at kubehuddle.com.

Changelog Master Feed
Klustered & Rawkode Academy (Ship It! #72)

Changelog Master Feed

Play Episode Listen Later Sep 29, 2022 67:12


One of our listeners, Andrew Welker, suggested that we talk about Klustered, so a few hours before David Flanagan was about to do his workshop at Container Days, we recorded this episode. We talked about all the weird and wonderful Kubernetes debugging sessions on Klustered, a YouTube playlist with 43 videos and counting. We then talked about Rawkode Academy, and we finished with conferences. Good thing we did, because David almost forgot about KubeHuddle, the conference that he is co-organising next week. Gerhard is looking forward to talking at it! No, seriously, check it out at kubehuddle.com.

Kubernetes Podcast from Google
VMware Tanzu, with Betty Junod

Kubernetes Podcast from Google

Play Episode Listen Later Sep 28, 2022 37:51 Very Popular


Betty Junod, VP of Product Marketing at VMware Tanzu, kindly took up Craig’s challenge to explain the various parts of the Tanzu ecosystem, and how the traditional IT buyer and the modern cloud native really aren’t that different. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod and @craigbox Chatter of the week NASA DART mission Deep Impact Armageddon Apparent retrograde motion Planets beyond Neptune News of the week Istio sails into the CNCF SPIFFE and SPIRE graduate Episode 45, with Andrew Jessup Brigade archived Sysdig 2022 Cloud Native threat report The nice TeamTNT Episode 188, with Kateryna Ivashchenko Episode 169, with Anna Belak Chainguard introduces Wolfi workerd, from Cloudflare Introducing Palaemon Custom org policy for GKE in preview Leveraging Kubernetes for an elastic platform at Blablacar by Sebastien Doido Links from the interview VMware History Docker Solo.io VMware Tanzu introduction blog VMware acquires Heptio VMware acquires Pivotal Tanzu Mission Control Tanzu for Kubernetes Operations Tanzu Application Platform Tanzu Kubernetes Grid Bring your own host to TKG Project Pacific introduction TKG 2.0 VMware Aria Operations for Applications Tanzu Application Service Cloud Foundry Open source projects: Velero Antrea Carvel Cartographer Michigan cider Detroit-style pizza Betty Junod on Twitter

The Business of Open Source
Best Practices for Founding an Open-Source Company with Amanda Brock

The Business of Open Source

Play Episode Listen Later Sep 28, 2022 33:56


Amanda Brock, CEO of Open UK, joins me for an engaging conversation on best practices in founding an open-source company. In this episode, Amanda and I chat about the various business models available for building a company around open-source technology, the common pitfalls and crossroads open-source founders find themselves facing, and how to do open-source in a way that leads to long-term success and profitability. Highlights: What is Open UK? (00:40) The various business models for building a company around open-source technology (04:09) Which business models Amanda feels work best and why (08:07) The importance of founders prioritizing open-source communities (14:07) How and why to do open-source the right way (17:04) What is the true cost of founding an open-source company compared to traditional business models? (26:44) Who are you building for, and how do you get to profitability? (30:35) Links: LinkedIn: https://www.linkedin.com/in/amandabrocktech/ Twitter: www.twitter.com/amandabrocktech Open UK www.twitter.com/openuk_uk Company: www.openuk.uk

Buluta Doğru
Thundra

Buluta Doğru

Play Episode Listen Later Sep 26, 2022 58:23


Hello everyone, from episode 31. In this episode our guests were Barış Kaya and Burak Kantarcı from Thundra, an Ankara-based company with observability products such as Sidekick and Foresight. We had a chat with them about the company, its products, and the problems those products solve. https://www.thundra.io/

Light Reading Podcasts
Executive Spotlight Q&A: Flexible, Cloud Native Networking is Here

Light Reading Podcasts

Play Episode Listen Later Sep 21, 2022 13:32


Enterprises have fully moved their IT stack into the cloud, and network infrastructure is next. Dave Ward, PacketFabric's CEO, joins Light Reading to discuss the need for serious positive disruption in telecom, the flexibility of a cloud-native network, and how PacketFabric delivers real-time connectivity anywhere you want to go. #sponsored

Ship It! DevOps, Infra, Cloud Native
Modern Software Engineering

Ship It! DevOps, Infra, Cloud Native

Play Episode Listen Later Sep 21, 2022 82:31


Dave Farley, co-author of Continuous Delivery, is back to talk about his latest book, Modern Software Engineering, a Top 3 Software Engineering best seller on Amazon UK this September. Shipping good software starts with you giving yourself permission to do a good job. It continues with a healthy curiosity, admitting that you don't know, and running many experiments, safely, without blowing everything up. And then there is scope creep…

Changelog Master Feed
Modern Software Engineering (Ship It! #71)

Changelog Master Feed

Play Episode Listen Later Sep 21, 2022 82:31


Dave Farley, co-author of Continuous Delivery, is back to talk about his latest book, Modern Software Engineering, a Top 3 Software Engineering best seller on Amazon UK this September. Shipping good software starts with you giving yourself permission to do a good job. It continues with a healthy curiosity, admitting that you don't know, and running many experiments, safely, without blowing everything up. And then there is scope creep…

Oracle Groundbreakers
Josh Long on Fast, Scalable, Cloud Native Services in Java

Oracle Groundbreakers

Play Episode Listen Later Sep 21, 2022 19:36


JavaOne 2022 Speaker Preview. In this conversation Oracle's Jim Grisanzio talks with JavaOne 2022 speaker Josh Long from San Francisco. Josh is a Java Champion and a Spring Developer Advocate. He previews his upcoming session on Kubernetes Native Java, and talks about his experiences becoming a developer and working with the Java community around the world.

JavaOne 2022 October 17-20 in Las Vegas
JavaOne 2022: Registration and Sessions
JavaOne 2022: News Updates at Inside Java
Josh Long, Java Champion & Spring Developer Advocate @starbuxman
Java Development and Community
OpenJDK
Inside Java
Dev.Java
@java on Twitter
Java on YouTube
Duke's Corner Podcast Host Jim Grisanzio, Oracle Java Developer Relations, @jimgris

The Business of Open Source
The Open-Source Evolution of Python with Wes McKinney

The Business of Open Source

Play Episode Listen Later Sep 21, 2022 33:20


Wes McKinney, CTO & Co-Founder of Voltron Data, joins me for an in-depth conversation on how his quest to develop Python as an open-source programming language led him to creating the pandas project and founding four companies. In this episode, Wes and I dive into his unique background as the founder of the pandas project and he describes his perspective on the early days of Python, his journey into the world of open-source start-ups, and the risks and benefits of paying developers to work on open-source projects.

Highlights:
Wes introduces himself and describes his role (00:46)
Wes' role in elevating Python to a mainstream programming language (02:15)
How working with Python led Wes to co-founding his first two companies (09:01)
Apache Arrow's critical role at Voltron Data and their focus on accelerating Arrow adoption (12:52)
How did the team at Voltron Data decide on an open-source business model? (18:54)
Wes speaks to the risk that can come from having developers work on an open-source project (22:31)
Wes' perspective on the real-world applications and benefits of paying developers to work on open-source projects (27:44)

Links:
LinkedIn: https://www.linkedin.com/in/wesmckinn/
Twitter: https://twitter.com/wesmckinn
Company: https://voltrondata.com/

Kubernetes Podcast from Google
Ambient Mesh, with Justin Pettit and Ethan Jackson

Kubernetes Podcast from Google

Play Episode Listen Later Sep 20, 2022 55:47


When you think of a service mesh, you probably think of "sidecar containers running with each pod". The Istio team has come up with a new approach, introduced recently as an experimental preview. Google Cloud software engineers Justin Pettit and Ethan Jackson join Craig to explore ambient mesh. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod

Chatter of the week
Listening immediately and listening on a 1 year delay
Death and state funeral of Queen Elizabeth II
The Queue
What the queue says about our relationship with royalty

News of the week
Cloud Custodian becomes an incubating project
Anthos VM support
GKE control plane metrics
CVE-2022-3172: Aggregated API server can cause clients to be redirected
CVE-2021-25749: runAsNonRoot logic bypass for Windows containers
Akuity Platform
Episode 172, with Jesse Suen
Weave GitOps 2022.09
Coroot Community Edition
Constellation, by Edgeless Systems
Register for Google Cloud Next
Dell and Red Hat expand strategic collaboration

Links from the interview
Nicira
Open vSwitch
Introducing Ambient Mesh
Service mesh
First mention of Ambient in 2018
No first class support for sidecars in Kubernetes
Istio working group meeting, August 2021
Remote proxy proposal
HBONE: HTTP/2-based overlay network environment
mTLS
HTTP Connect GIF
MASQUE and QUIC
Get started with Ambient Mesh
Ambient Mesh Security Deep Dive
Justin Pettit and Ethan Jackson on Twitter

Cloud Security Podcast
SecDataOps Explained - Modern Security Stack

Cloud Security Podcast

Play Episode Listen Later Sep 16, 2022 47:28


Data lakes as an asset for collecting and tracking threat actors, or hiring data scientists and analysts, are not typical things in cloud security, unless the organisation is dealing with petabytes of data. At a large-scale company these are a data problem, not a security problem, even if the problem sits within the security team. In this episode with Jonathan Rau, CISO of Lightspin, we spoke about his previous experience creating and growing a SecDataOps team with cloud security and ops at IHS Markit. We spoke about what SecDataOps is, what a security data lake is, and whether cloud-native tools are enough for these problems.

This episode is better on video - YouTube Link
Cloud Security Meetup Amsterdam - Tech Fashion Theme - Sep 2022
Cloud Security Meetup New York - Tech Fashion Theme - Sep 2022
Host Twitter: Ashish Rajan (@hashishrajan)
Guest Linkedin: Jonathan Rau
Podcast Twitter - @CloudSecPod @CloudSecureNews
If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels: - Cloud Security News - Cloud Security Academy

Ship It! DevOps, Infra, Cloud Native
Kaizen! Four PRs, one big feature

Ship It! DevOps, Infra, Cloud Native

Play Episode Listen Later Sep 14, 2022 67:09 Transcription Available


In today's Kaizen episode, we talk about shipping Adam's Christmas present: chapter support for all Changelog episodes that we now publish. This feature was hard because there are many subtle differences in how the ID3 spec is implemented. Of course, once the PR shipped, there were other issues to solve, including an "upgrade the world" kind of scenario. Since Lars Wikman did all the heavy ID3 lifting, he is here with us too.

Changelog Master Feed
Kaizen! Four PRs, one big feature (Ship It! #70)

Changelog Master Feed

Play Episode Listen Later Sep 14, 2022 67:09 Transcription Available


In today's Kaizen episode, we talk about shipping Adam's Christmas present: chapter support for all Changelog episodes that we now publish. This feature was hard because there are many subtle differences in how the ID3 spec is implemented. Of course, once the PR shipped, there were other issues to solve, including an "upgrade the world" kind of scenario. Since Lars Wikman did all the heavy ID3 lifting, he is here with us too.

Oracle Groundbreakers
Building Cloud Native Applications with Rustam Mehmandarov

Oracle Groundbreakers

Play Episode Listen Later Sep 13, 2022 24:42


JavaOne 2022 Speaker Preview. In this conversation Oracle's Jim Grisanzio talks with JavaOne 2022 speaker Rustam Mehmandarov from Oslo, Norway. Rustam is a Java Champion and also Chief Engineer at Computas AS in Oslo. He previews his three upcoming sessions at JavaOne, which explore building cloud native apps in Java. The discussion also covers Rustam's experiences in the Java community and at various conferences around the world.

JavaOne 2022 October 17-20 in Las Vegas
JavaOne 2022: Registration and Sessions
JavaOne 2022: News Updates at Inside Java
Rustam Mehmandarov, Java Champion, Chief Engineer at Computas AS @rmehmandarov
Java Development and Community
OpenJDK
Inside Java
Dev.Java
@java on Twitter
Java on YouTube
Duke's Corner Podcast Host Jim Grisanzio, Oracle Java Developer Relations, @jimgris

Screaming in the Cloud
The Ever-Changing World of Cloud Native Observability with Ian Smith

Screaming in the Cloud

Play Episode Listen Later Sep 13, 2022 41:58


About Ian
Ian Smith is Field CTO at Chronosphere where he works across sales, marketing, engineering and product to deliver better insights and outcomes to observability teams supporting high-scale cloud-native environments. Previously, he worked with observability teams across the software industry in pre-sales roles at New Relic, Wavefront, PagerDuty and Lightstep.

Links Referenced:
Chronosphere: https://chronosphere.io
Last Tweet in AWS: lasttweetinaws.com

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while, I find that something I'm working on aligns perfectly with a person that I wind up basically convincing to appear on this show. Today's promoted guest is Ian Smith, who's Field CTO at Chronosphere. Ian, thank you for joining me.

Ian: Thanks, Corey. Great to be here.

Corey: So, the coincidental aspect of what I'm referring to is that Chronosphere is, despite the name, not something that works on bending time, but rather an observability company. Is that directionally accurate?

Ian: That's true. Although you could argue it probably bends a little bit of engineering time. But we can talk about that later.

Corey: [laugh]. So, observability is one of those areas that I think is suffering from too many definitions, if that makes sense.
And at first, I couldn't make sense of what it was that people actually meant when they said observability. It sort of clarified, to me at least, when I realized that there were an awful lot of, well, let's be direct and call them 'legacy monitoring companies' that just chose to take what they were already doing and define that as, "Oh, this is observability." I don't know that I necessarily agree with that. I know a lot of folks in the industry vehemently disagree.

You've been in a lot of places that have positioned you reasonably well to have opinions on this sort of question. To my understanding, you were at interesting places, such as LightStep, New Relic, Wavefront, and PagerDuty, which I guess technically might count as observability in a very strange way. How do you view observability and what it is?

Ian: Yeah. Well, a lot of definitions, as you said, common ones, they talk about the three pillars, they talk really about data types. For me, it's about outcomes. I think observability is really this transition from the yesteryear of monitoring where things were much simpler and you, sort of, knew all of the questions, you were able to define your dashboards, you were able to define your alerts and that was really the gist of it. And going into this brave new world where there's a lot of unknown things, you're having to ask a lot of sort of unique questions, particularly during a particular incident, and so being able to ask those questions in an ad hoc fashion layers on top of what we've traditionally done with monitoring. So, observability is sort of that more flexible, more dynamic kind of environment that you have to deal with.

Corey: This has always been something that, for me, has been relatively academic. Back when I was running production environments, things tended to be a lot more static, where, "Oh, there's a problem with the database. I will SSH into the database server." Or, "Hmm, we're having a weird problem with the web tier.
Well, there are ten or 20 or 200 web servers. Great, I can aggregate all of their logs to Syslog, and worst case, I can log in and poke around."

Now, with a more ephemeral style of environment where you have Kubernetes or whatnot scheduling containers into place that have problems you can't attach to a running container very easily, and by the time you see an error, that container hasn't existed for three hours. And that becomes a problem. Then you've got the Lambda universe, which is a whole 'nother world of pain, where it becomes very challenging, at least for me, in order to reason using the old style approaches about what's actually going on in your environment.

Ian: Yeah, I think there's that and there's also the added complexity of oftentimes you'll see performance or behavioral changes based on even more narrow pathways, right? One particular user is having a problem and the traffic is spread across many containers. Is it making all of these containers perform badly? Not necessarily, but their user experience is being affected. It's very common in say, like, B2B scenarios for you to want to understand the experience of one particular user or the aggregate experience of users at a particular company, particular customer, for example.

There's just more complexity. There's more complexity of the infrastructure and just the technical layer that you're talking about, but there's also more complexity in just the way that we're handling use cases and trying to provide value with all of this software to the myriad of customers in different industries that software now serves.

Corey: For where I sit, I tend to have a little bit of trouble disambiguating, I guess, the three baseline data types that I see talked about again and again in observability. You have logs, which I think I can mostly wrap my head around. That seems to be the baseline story of, "Oh, great. Your application puts out logs. Of course, it's in its own unique, beautiful format.
Why wouldn't it be?" In an ideal scenario, they're structured. Things are never ideal, so great. You're basically tailing log files in some cases. Great. I can reason about those.

Metrics always seem to be a little bit of a step beyond that. It's okay, I have a whole bunch of log lines that are spitting out every 500 error that my app is throwing—and given my terrible code, it throws a lot—but I can then ideally count the number of times that appears and then that winds up incrementing a counter, similar to the way that we used to see with StatsD, for example, and Collectd. Is that directionally correct? As far as the way I reason about, well so far, logs and metrics?

Ian: I think at a really basic level, yes. I think that, as we've been talking about, sort of greater complexity starts coming in when you have—particularly metrics in today's world of containers—Prometheus—you mentioned StatsD—Prometheus has become sort of like the standard for expressing those things, so you get situations where you have incredibly high cardinality, so cardinality being the interplay between all the different dimensions. So, you might have, my container is a label, but also the type of endpoint is running on that container as a label, then maybe I want to track my customer organizations and maybe I have 5000 of those. I have 3000 containers, and so on and so forth. And you get this massive explosion, almost multiplicatively.

For those in the audience who really live and breathe cardinality, there's probably someone screaming about well, it's not truly multiplicative in every sense of the word, but, you know, it's close enough from an approximation standpoint. As you get this massive explosion of data, which obviously has a cost implication but also has, I think, a really big implication on the core reason why you have metrics in the first place you alluded to, which is, so a human being can reason about it, right?
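The multiplicative label explosion Ian describes can be sketched in a few lines of plain Python. The label names and counts below are invented for illustration, loosely following the numbers in the conversation; they are not taken from any real system:

```python
from itertools import islice, product

# Hypothetical label values for a single metric such as http_requests_total.
containers = [f"container-{i}" for i in range(3000)]
endpoints = [f"/endpoint/{i}" for i in range(10)]
customers = [f"org-{i}" for i in range(5000)]

# Prometheus-style backends store one time series per unique label
# combination, so the series count is (approximately) the product.
series_count = len(containers) * len(endpoints) * len(customers)
print(series_count)  # 150000000 distinct series from just three labels

# A peek at the first few combinations, generated lazily:
for labels in islice(product(containers, endpoints, customers), 3):
    print(labels)
```

This is why adding a single high-cardinality label, such as a customer ID, multiplies rather than adds to the series count.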
You don't want to go and look at 5000 log lines; you want to know that, out of those 5000 log lines, 4000 are errors and 1000 are OKs. It's very easy for human beings to reason about that from a numbers perspective. When your metrics start to re-explode out into thousands, millions of data points, and unique sort of time series, more numbers for you to track, then you're sort of losing that original goal of metrics.

Corey: I think I mostly have wrapped my head around the concept. But then that brings us to traces, and that tends to be I think one of the hardest things for me to grasp, just because most of the apps I build, for obvious reasons—namely, I'm bad at programming and most of these are proof of concept type of things rather than anything that's large scale running in production—the difference between a trace and logs tends to get very muddled for me. But the idea being that as you have a customer session or a request that talks to different microservices, how do you collate across different systems all of the outputs of that request into a single place so you can see timing information, understand the flow that user took through your application? Is that again, directionally correct? Have I completely missed the plot here? Which is again, eminently possible. You are the expert.

Ian: No, I think that's sort of the fundamental premise or expected value of tracing, for sure. We have something that's akin to a set of logs; they have a common identifier, a trace ID, that tells us that all of these logs essentially belong to the same request. But importantly, there's relationship information. And this is the difference between just having traces—sorry, logs—with just a trace ID attached to them.
So, for example, if you have Service A calling Service B and Service C, the relatively simple thing, you could use time to try to figure this out.

But what if there are things happening in Service B at the same time there are things happening in Service C and D, and so on and so forth? So, one of the things that tracing brings to the table is it tells you what is currently happening, what called that. So oh, I know that I'm Service D. I was actually called by Service B and I'm not just relying on timestamps to try and figure out that connection. So, you have that information and ultimately, the data model allows you to fully sort of reflect what's happening with the request, particularly in complex environments.

And I think this is where, you know, tracing needs to be sort of looked at as not a tool for "just because I'm operating in a modern environment, I'm using some Kubernetes, or I'm using Lambda"; it needs to be used in a scenario where you really have troubles grasping, from a conceptual standpoint, what is happening with the request because you need to actually fully document it. As opposed to, I have a few—let's say three Lambda functions. I maybe have some key metrics about them; I have a little bit of logging. You probably do not need to use tracing to solve, sort of, basic performance problems with those.
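The structural "who called me" relationship Ian describes, as opposed to guessing from timestamps, can be sketched with plain data structures. The field names here mirror common tracing data models (trace ID, span ID, parent ID) but are illustrative, not any particular SDK's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Span:
    trace_id: str             # shared by every span in one request
    span_id: str              # unique per operation
    parent_id: Optional[str]  # explicit causality, not inferred from time
    service: str

# Service A calls B and C; B then calls D. All four share one trace ID.
spans = [
    Span("trace-1", "a", None, "service-a"),
    Span("trace-1", "b", "a", "service-b"),
    Span("trace-1", "c", "a", "service-c"),
    Span("trace-1", "d", "b", "service-d"),
]

by_id = {s.span_id: s for s in spans}

def caller_of(span: Span) -> Optional[str]:
    """Who called this span? Answered structurally, no timestamps needed."""
    parent = by_id.get(span.parent_id)
    return parent.service if parent else None

print(caller_of(by_id["d"]))  # service-d knows it was called by service-b
```

Because each span records its parent explicitly, the "D was called by B, not C" question is answered from the data model itself, even when B, C, and D overlap in time.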
So, you can get yourself into a place where you're over-engineering, you're spending a lot of time with tracing instrumentation and tracing tooling, and I think that's the core of observability is, like, using the right tool, the right data for the job.

But that's also what makes it really difficult because you essentially need to have this, you know, huge set of experience or knowledge about the different data, the different tooling, and what influential architecture and the data you have available to be able to reason about that and make confident decisions, particularly when you're under a time crunch which everyone is familiar with: a, sort of like, you know, PagerDuty-style experience of my phone is going off and I have a customer-facing incident. Where is my problem? What do I need to do? Which dashboard do I need to look at? Which tool do I need to investigate? And that's where I think the observability industry has stopped serving the outcomes of its customers.

Corey: I had a, well, I wouldn't say it's a genius plan, but it was a passing fancy that I've built this online, freely available Twitter client for authoring Twitter threads—because that's what I do instead of having a social life—and it's available at lasttweetinaws.com. I've used that as a testbed for a few things. It's now deployed to roughly 20 AWS regions simultaneously, and this means that I have a bit of a problem as far as how to figure out not even what's wrong or what's broken with this, but who's even using it?

Because I know people are. I see invocations all over the planet that are not me. And sometimes it appears to just be random things crawling the internet—fine, whatever—but then I see people logging in and doing stuff with it. I'd kind of like to log and see who's using it just so I can get information like, is there anyone I should talk to about what it could be doing differently?
I love getting user experience reports on this stuff.

And I figured, ah, this is a perfect little toy application. It runs in a single Lambda function so it's not that complicated. I could instrument this with OpenTelemetry, which then, at least according to the instructions on the tin, I could then send different types of data to different observability tools without having to re-instrument this thing every time I want to kick the tires on something else. That was the promise.

And this led to three weeks of pain because it appears that for all of the promise that it has, OpenTelemetry, particularly in a Lambda environment, is nowhere near ready for being able to carry a workload like this. Am I just foolish on this? Am I stating an unfortunate reality that you've noticed in the OpenTelemetry space? Or, let's be clear here, you do work for a company with opinions on these things. Is OpenTelemetry the wrong approach?

Ian: I think OpenTelemetry is absolutely the right approach. To me, the promise of OpenTelemetry for the individual is, "Hey, I can go and instrument this thing, as you said, and I can go and send the data wherever I want." The sort of larger view of that is, "Well, I'm no longer beholden to a vendor,"—including the ones that I've worked for, including the one that I work for now—"For the definition of the data. I am able to control that, I'm able to choose that, I'm able to enhance that, and any effort I put into it, it's mine. I own that."

Whereas previously, if you picked, say, for example, an APM vendor, you said, "Oh, I want to have some additional aspects of my information provided, I want to track my customer, or I want to track a particular new metric of how much dollars am I transacting," that effort was really going to support the value of that individual solution, it's not going to support your outcomes. Which is I want to be able to use this data wherever I want, wherever it's most valuable. So, the core premise of OpenTelemetry, I think, is great.
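The "instrument once, choose the destination later" premise can be sketched without any real SDK. This toy dispatcher is purely illustrative: the class and method names are invented, and the actual OpenTelemetry API is different and far richer, but the shape of the idea is the same:

```python
from typing import Callable, Dict, List

# An "exporter" is just a callable that receives a finished measurement.
Exporter = Callable[[Dict], None]

class Telemetry:
    """Instrument once; route the same data to any configured backends."""

    def __init__(self) -> None:
        self.exporters: List[Exporter] = []

    def add_exporter(self, exporter: Exporter) -> None:
        self.exporters.append(exporter)

    def record(self, name: str, value: float, **attributes: str) -> None:
        event = {"name": name, "value": value, "attributes": attributes}
        for export in self.exporters:  # fan out; no re-instrumentation
            export(event)

# Two hypothetical destinations: a vendor backend and a local data lake.
sent_to_vendor: List[Dict] = []
sent_to_lake: List[Dict] = []

telemetry = Telemetry()
telemetry.add_exporter(sent_to_vendor.append)
telemetry.add_exporter(sent_to_lake.append)

# Application code carries the instrumentation, including custom attributes
# like the Twitter handle mentioned later in the conversation, exactly once.
telemetry.record("request.duration_ms", 42.0, handle="@example")

print(len(sent_to_vendor), len(sent_to_lake))  # 1 1: same data, two backends
```

Swapping where the data goes is then a configuration change (which exporters are registered), not a re-instrumentation of the application code.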
I think it's a massive undertaking to be able to do this for at least three different data types, right? Defining an API across a whole bunch of different languages, across three different data types, and then creating implementations for those.

Because the implementations are the thing that people want, right? You are hoping for the ability to, say, drop in something. Maybe one line of code or preferably just, like, attach a dependency, let's say in Java-land at runtime, and be able to have the information flow through and have it complete. And this is the premise of, you know, vendors I've worked with in the past, like New Relic. That was what New Relic built on: the ability to drop in an agent and get visibility immediately.

So, having that out-of-the-box visibility is obviously a goal of OpenTelemetry where it makes sense—Go, it's very difficult to attach things at runtime, for example—but then saying, well, whatever is provided—let's say your gRPC connections, database, all these things—well, now I want to go and instrument; I want to add some additional value. As you said, maybe you want to track something like I want to have in my traces the email address of whoever it is or the Twitter handle of whoever it is so I can then go and analyze that stuff later. You want to be able to inject that piece of information or that instrumentation and then decide, well, where is it best utilized? Is it best utilized in some tooling from AWS? Is it best utilized in something that you've built yourself? Is it best utilized in an open-source project?
Is it best utilized in one of the many observability vendors? Or, as is even becoming more common, I want to shove everything in a data lake and run, sort of, analysis asynchronously, overlay observability data for essentially business purposes.

All of those things are served by having a very robust, open-source standard, and simple-to-implement way of collecting a really good baseline of data and then make it easy for you to then enhance that while still owning—essentially, it's your IP, right? It's like, the instrumentation is your IP, whereas in the old world of proprietary agents, proprietary APIs, that IP was basically building it, but it was tied to that other vendor that you were investing in.

Corey: One thing that I was consistently annoyed by in my days of running production infrastructures at places, like, you know, large banks, for example, one of the problems I kept running into is that there's this idea that, "Oh, you want to use our tool. Just instrument your applications with our libraries or our instrumentation standards." And it felt like I was constantly doing and redoing a lot of instrumentation for different aspects. It's not that we were replacing one vendor with another; it's that in an observability toolchain, there are remarkably few one-size-fits-all stories. It feels increasingly like everyone's trying to sell me a multifunction printer, which does one thing well, and a few other things just well enough to technically say they do them, but badly enough that I get irritated every single time.

And having 15 different instrumentation packages in an application, that's either got security ramifications, for one, see large bank, and for another it became this increasingly irritating and obnoxious process where it felt like I was spending more time seeing to the care and feeding of the instrumentation than I was the application itself. That's the gold—that's I guess the ideal light at the end of the tunnel for me in what OpenTelemetry is promising.
Instrument once, and then you're just adjusting configuration as far as where to send it.

Ian: That's correct. The organizations, and you know, I keep in touch with a lot of companies that I've worked with, companies that have in the last two years really invested heavily in OpenTelemetry, they're definitely getting to the point now where they're generating the data once, they're using, say, pieces of the OpenTelemetry pipeline, they're extending it themselves, and then they're able to shove that data in a bunch of different places. Maybe they're putting it in a data lake for, as I said, business analysis purposes or forecasting. They may be putting the data into two different systems, even for incident and analysis purposes, but you're not having that duplication effort. Also, potentially that performance impact, right, of having two different instrumentation packages lined up with each other.

Corey: There is a recurring theme that I've noticed in the observability space that annoys me to no end. And that is—I don't know if it's coming from investor pressure, from folks never being satisfied with what they have, or what it is, but there are so many startups that I have seen and worked with in varying aspects of the observability space that I think, "This is awesome. I love the thing that they do." And invariably, every time they start getting more and more features bolted onto them, where, hey, you love this whole thing that winds up just basically doing a tail -F on a log file, so it just streams your logs in the application and you can look for certain patterns. I love this thing. It's great.
And it feels like they keep bolting things on and bolting things on, where everything is more or less trying to evolve into becoming its own version of Datadog. What's up with that?

Ian: Yeah, the sort of, dreaded platform play. I—[laugh] I was at New Relic when there were essentially two products that they sold. And then by the time I left, I think there were seven different products that were being sold, which is kind of a crazy, crazy thing when you think about it. And I think Datadog has definitely exceeded that now. And I definitely see many, many vendors in the market—and even open-source solutions—sort of presenting themselves as, like, this integrated experience.

But to your point, even before about your experience of these banks, it oftentimes becomes sort of a tick-a-box feature approach of, "Hey, I can do this thing, so buy more. And here's a shared navigation panel." But are they really integrated? Like, are you getting real value out of it? One of the things that I do in my role is I get to work with our internal product teams very closely, particularly around new initiatives like tracing functionality, and the constant sort of conversation is like, "What is the outcome? What is the value?"

It's not about the feature; it's not about having a list of 19 different features. It's like, "What is the user able to do with this?" And so, for example, there are lots of platforms that have metrics, logs, and tracing. The new one-upmanship is saying, "Well, we have events as well. And we have incident response. And we have security. And all these things sort of tie together, so it's one invoice."

And constantly I talk to customers, and I ask them, like, "Hey, what are the outcomes that you're getting when you've invested so heavily in one vendor?" And oftentimes, the response is, "Well, I only need to deal with one vendor." Okay, but that's not an outcome. [laugh].
And it's like the business having a single invoice.

Corey: Yeah, that is something that's already attainable today. If you want to just have one vendor with a whole bunch of crappy offerings, that's what AWS is for. They have AmazonBasics versions of everything you might want to use in production. Oh, you want to go ahead and use MongoDB? Well, use AmazonBasics MongoDB, but they call it DocumentDB because of course they do. And so on and so forth.

There are a bunch of examples of this, but those companies are still in business and doing very well because people often want the genuine article. If everyone was trying to do just everything to check a box for procurement, great. AWS has already beaten you at that game, it seems.
One of the goals in the way that we're developing product at Chronosphere is that if you are alerted to an incident—you as an engineer; doesn't matter whether you are massively sophisticated, you're a lead architect who has been with the company forever and you know everything, or you're someone who's just come out of onboarding and it's your first time on call—you should not have to think, “Is this a tracing problem, or a metrics problem, or a logging problem?”

And this is one of those things that I mentioned before of requiring that really heavy level of knowledge and understanding about the observability space and your data and your architecture to be effective. And so, with the, you know, particularly observability teams and all of the engineers that I speak with on a regular basis, you get this sort of circumstance where—well, I guess, let's talk about a real outcome and a real pain point, because people are like, okay, yeah, this is all fine; it's all coming from a vendor who has a particular agenda. But the thing that constantly resonates is, for large organizations that are moving fast—you know, big startups, unicorns, or even more traditional enterprises that are trying to undergo, like, a rapid transformation and go really cloud-native and make sure their engineers are moving quickly—a common question I will talk about with them is, who are the three people in your organization who always get escalated to? And it's usually, you know, between two and five people—

Corey: And you can almost pick those perso—you say that and you can—at least anyone who's worked in environments or through incidents like this more than a few times, already have thought of specific people in specific companies. And they almost always fall into some very predictable archetypes. But please, continue.

Ian: Yeah. And when people think about these people, they always jump to mind.
And one of the things I asked about is, “Okay, so when you did your last innovation around observability”—it's not necessarily buying a new thing, but maybe it was introducing a new data type, or you were doing some big investment in improving instrumentation—“what changed about their experience?” And oftentimes, the most that can come out is, “Oh, they have access to more data.” Okay, that's not great.

It's like, “What changed about their experience? Are they still getting woken up at 3 a.m.? Are they constantly getting pinged all the time?” One of the vendors that I worked at, when they would go down, there were three engineers in the company who were capable of generating a list of customers who were actually impacted by the damage. And so, every single incident, one of those three engineers got paged into the incident.

And it became borderline intolerable for them because nothing changed. And it got worse, you know? The platform got bigger and more complicated, and so there were more incidents, and they were the ones having to generate that. But from a business level, from an observability outcomes perspective, if you zoom all the way up, it's like, “Oh, were we able to generate the list of customers?” “Yes.”

And this is where I think the observability industry has sort of gotten stuck—you know, at least one of the ways—is that, “Oh, can you do it?” “Yes.” “But is it effective?” “No.” And by effective, I mean those three engineers become the focal point for an organization.

And when I say three—you know, two to five—it doesn't matter whether you're talking about a team of a hundred or you're talking about a team of a thousand. It's always the same number of people. And as you get bigger and bigger, it becomes more and more of a problem. So, does the tooling actually make a difference to them? And you might ask, “Well, what do you expect from the tooling? What do you expect it to do for them?” Is it you give them deeper analysis tools? Is it, you know, you do AIOps?
No. The answer is, how do you take the capabilities that those people have and how do you spread them across a larger population of engineers? And that, I think, is one of those key outcomes of observability that no one, whether it be on the open-source or the vendor side, is really paying a lot of attention to. It's always about, like, “Oh, we can just shove more data in. By the way, we've got petabyte scale and we can deal with, you know, 2 billion active time series,” and all these other sorts of vanity measures. But we've gotten really far away from the outcomes. It's like, “Am I getting a return on investment from my observability tooling?”

And I think tracing is this—as you've said, it can be difficult to reason about, right? And people are not sure. They're feeling, “Well, I'm in a microservices environment; I'm in cloud-native; I need tracing because my older APM tools appear to be failing me. I'm just going to go and wriggle my way through implementing OpenTelemetry.” Which has significant engineering costs. I'm not saying it's not worth it, but there is a significant engineering cost—and then I don't know what to expect, so I'm going to go and put my data somewhere and see whether we can achieve those outcomes.

And I do a pilot, and my most sophisticated engineers are in the pilot. And they're able to solve the problems. Okay, I'm going to go buy that thing. But I've just transferred my problems. My engineers have gone from solving problems in maybe logs and grepping through petabytes worth of logs to using some sort of complex proprietary query language to go through your tens of petabytes of trace data, but actually haven't solved any problem.
I've just moved it around and probably just cost myself a lot, both in terms of engineering time and real dollars spent as well.

Corey: One of the challenges that I'm seeing across the board is that with observability, for certain use cases, once you start to see what it is and its potential for certain applications—certainly not all; I want to hedge that a little bit—it's clear that there is definite and distinct value versus other ways of doing things. The problem is that value often becomes apparent only after you've already done it and can see what that other side looks like. But let's be honest here. Instrumenting an application is going to take some significant level of investment, in many cases. How do you wind up viewing any return on investment against the very real cost, if only in people's time, of going ahead and instrumenting for observability in complex environments?

Ian: So, I think that you have to look at the fundamentals, right? You have to look at—pretend we knew nothing about tracing. Pretend that we had just invented logging, and you needed to start small. It's like, I'm not going to go and log everything about every application that I've had forever. What I need to do is find the points where that logging is going to be the most useful, most impactful, across the broadest audience possible.

And one of the useful things about tracing is, because it's built in distributed environments—primarily for distributed environments—you can look at, for example, the biggest intersection of requests. A lot of people have things like API gateways, or they have parts of a monolith which are still handling a lot of request routing; those tend to be areas to start digging into. And I would say that, just like for anyone who's used Prometheus or decided to move away from Prometheus, no one's ever gone and evaluated a Prometheus solution without having some sort of Prometheus data, right?
You don't go, “Hey, I'm going to evaluate a replacement for Prometheus or my StatsD without having any data, and I'm simultaneously going to generate my data and evaluate the solution at the same time.” It doesn't make any sense.

With tracing, you have decent open-source projects out there that allow you to visualize individual traces and understand sort of the basic value you should be getting out of this data. So, it's a good starting point to go, “Okay, can I reason about a single request? Can I go and look at my request end-to-end, even in a relatively small slice of my environment, and can I see the potential for this? And can I think about the things that I need to be able to solve with many traces?” Once you start developing these ideas, then you can have a better idea of, “Well, where do I go and invest more in instrumentation? Look, databases never appear to be a problem, so I'm not going to focus on database instrumentation. The real problem is my external dependencies. The Facebook API is the one that everyone loves to use. I need to go instrument that.”

And then you start to get more clarity. Tracing has this interesting network effect. You can basically just follow the breadcrumbs. Where is my biggest problem here? Where are my errors coming from? Is there anything else further down the call chain? And you can sort of take that exploratory approach rather than doing everything up front.

But it is important to do something before you start trying to evaluate what your end state is. End state obviously being a sort of nebulous term in today's world, but where do I want to be in two years' time? I would like to have a solution. Maybe it's an open-source solution, maybe it's a vendor solution, maybe it's one of those platform solutions we talked about, but how do I get there?
It's really going to be: I need to take an iterative approach, and I need to be very clear about the value and outcomes.

There's no point in doing a whole bunch of instrumentation effort on things that are just working fine, right? You want to go and focus your time and attention. And also, you don't want to go and burn out singular engineers. The observability team's purpose in life is probably not to just write instrumentation or just deploy OpenTelemetry. Because then we get back into the land where engineers themselves know nothing about the monitoring or observability they're doing, and it just becomes a checkbox of, “I dropped in an agent. Oh, when it comes time for me to actually deal with an incident, I don't know anything about the data, and the data is insufficient.”

So, a level of ownership supported by the observability team is really important. On that return on investment, though, it's not just the instrumentation effort. There's product training, and there are some very hard costs. People think oftentimes, “Well, I have the ability to pay a vendor; that's really the only cost that I have.” There's things like egress costs, particularly at volume. There's the infrastructure costs. A lot of the time there will be elements you need to run in your own environment; those can be very costly as well, and ultimately, they're sort of icebergs in this overall ROI conversation.

The other side of it—you know, return and investment—on the return side, there's a lot of difficulty in reasoning about, as you said, what the value of this is going to be if I go through all this effort. Everyone knows the sort of, you know, meme or archetype of, “Hey, here are three options; pick two,” because there's always going to be a trade-off. Particularly for observability, it's become an element of: I need to pick between performance, data fidelity, or cost. Pick two.
And when I say data fidelity—particularly in tracing—I'm talking about the ability to not sample, right?

If you have edge cases, if you have narrow use cases and ways you need to look at your data, and you heavily sample, you lose data fidelity. But oftentimes, cost is a reason why you do that. And then obviously, performance, as you start to get bigger and bigger datasets. So, there's a lot of different things you need to balance on that return. As you said, oftentimes you don't get to understand the magnitude of those until you've got the full data set in and you're trying to do this, sort of, for real. But being prepared and iterative as you go through this effort, and not saying, “Okay, well, I'm just going to buy everything from one vendor because I'm going to assume that's going to solve my problem,” is probably the undercurrent there.

Corey: As I take a look across the entire ecosystem, I can't shake the feeling—and my apologies in advance if this is an observation, I guess, that winds up throwing a stone directly at you folks—

Ian: Oh, please.

Corey: But I see that there's a strong observability community out there that is absolutely aligned with the things I care about and things I want to do, and then there's a bunch of SaaS vendors, where it seems that they are, in many cases, yes, advancing the state of the art—I am not suggesting for a second that money is making observability worse—but I do think that when the tool you sell is a hammer, then every problem starts to look like a nail—or in my case, like my thumb. Do you think that there's a chance that SaaS vendors are in some ways making this entire space worse?

Ian: As we've sort of gone into more cloud-native scenarios and people are building things specifically to take advantage of cloud, from a complexity standpoint, from a scaling standpoint, you start to get, like, vertical issues happening.
So, you have things like: we're going to charge on a per-container basis; we're going to charge on a per-host basis; we're going to charge based off the amount of gigabytes that you send us. These are sort of more horizontal pricing models, and the way the SaaS vendors have delivered this is they've made it pretty opaque, right? Everyone has experienced, or has heard horror stories about, massive overage spikes from observability vendors. I've worked with customers who have accidentally used some features and been billed a quarter million dollars on a monthly basis for accidental overages from a SaaS vendor.

And these are all terrible things. But we've gotten used to this. Like, we've just accepted it, right, because everyone is operating this way. And I really do believe that the move to SaaS was one of those things. Like, “Oh, well, you're throwing us more data, and we're charging you more for it.” As a vendor—

Corey: Which sort of erodes your own value proposition that you're bringing to the table. I mean, I don't mean to be sitting over here shaking my fist yelling, “Oh, I could build a better version in a weekend,” except that I absolutely know how to build a highly available Rsyslog cluster. I've done it a handful of times already, and the technology is still there. Compare and contrast that with, at scale, the fact that I'm paying 50 cents per gigabyte ingested into CloudWatch Logs, or a multiple of that for a lot of other vendors; it's not that much harder for me to scale that fleet out and pay a much smaller marginal cost.

Ian: And so, I think the reaction that we're seeing in the market is that we're starting to see the rise of, sort of, a secondary class of vendor. And by secondary, I don't mean that they're lesser; I mean that they're, sort of, specifically trying to address problems of the primary vendors, right? Everyone's aware of vendors who are attempting to reduce—well, let's take the example you gave on logs, right?
There are vendors out there whose express purpose is to reduce the cost of your logging observability. They just sit in the middle; they are a middleman, right?

Essentially: hey, use our tool, and even though you're going to pay us a whole bunch of money, it's going to generate an overall return that is greater than if you had just continued pumping all of your logs over to your existing vendor. So, that's great. What we think really needs to happen, and one of the things we're doing at Chronosphere—unfortunate plug—is we're actually building those capabilities into the solution so it's actually end-to-end. And by end-to-end, I mean a solution where I can ingest my data, I can preprocess my data, I can store it, query it, visualize it, all those things, aligned with open-source standards, but I have control over that data, and I understand what's going on, particularly with my cost and my usage. I don't just get a bill at the end of the month going, “Hey, guess what? You've spent an additional $200,000.”

Instead, I can know in real time what is happening with my usage. And I can attribute it: it's this team over here, and it's because they added this particular label. And here's a way for you, right now, to address that and cap it so it doesn't cost you anything and it doesn't have a blast radius of, you know, maybe degraded performance or degraded fidelity of the data.

That, though, is diametrically opposed to the way that most vendors are set up. And unfortunately, the open-source projects tend to take a lot of their cues, at least recently, from what's happening in the vendor space. One of the ways that you can think about it is as a sort of speed-of-light problem. Everyone knows that, you know, there's basic fundamental latency; everyone knows how fast disk is; everyone knows that you can't just make your computations happen magically—there's a cost to running things horizontally.
But a lot of the way that the vendors have presented efficiency to the market is, “Oh, we're just going to incrementally get faster as AWS gets faster. We're going to incrementally get better as compression gets better.”

And of course, you can't go and fit a petabyte worth of data into a kilobyte, unless you're really just doing some sort of weird dictionary stuff, so you feel—you're dealing with some fundamental constraints. And the vendors just go, “I'm sorry, you know, we can't violate the speed of light.” But what you can do is start taking a look at, well, how is the data valuable, and start giving people controls on how to make it more valuable. So, one of the things that we do with Chronosphere is we allow you to reshape Prometheus metrics, right? You go and express Prometheus metrics—let's say it's a business metric about how many transactions you're doing as a business—you don't need that on a per-container basis, particularly if you're running 100,000 containers globally.

When you go and take a look at that number on a dashboard, or you alert on it, what is it? It's one number, one time series. Maybe you break it out per region. You have five regions; you don't need 100,000 data points every minute behind that. It's very expensive, it's not very performant, and as we talked about earlier, it's very hard to reason about as a human being.

So, giving the tools to be able to go and condense that data down and make it more actionable and more valuable—you get performance, you get cost reduction, and you get the value that you ultimately need out of the data. And it's one of the reasons why, I guess, I work at Chronosphere. Which I'm hoping is the last observability [laugh] venture I ever work for.

Corey: Yeah, for me a lot of the data that I see in my logs, which is where a lot of this stuff starts and how I still contextualize these things, is nonsense that I don't care about and will never care about.
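The per-container rollup Ian describes maps onto an ordinary Prometheus recording rule; the sketch below uses invented metric and label names for illustration and is not Chronosphere's actual rollup syntax:

```yaml
# Hypothetical recording rule: collapse a per-container business metric
# down to one series per region, so a five-region dashboard or alert
# queries five time series instead of 100,000.
groups:
  - name: business_rollups
    rules:
      - record: region:transactions_total:rate5m
        expr: sum by (region) (rate(transactions_total[5m]))
```

Dashboards and alerts then query `region:transactions_total:rate5m`, and the per-container series can be aged out or dropped entirely.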
I don't care about load balancer health checks. I don't particularly care about 200 results for the favicon when people visit the site. I care about other things, but just weed out the crap, especially when I'm paying by the pound—or at least by the gigabyte—in order to get that data into something. Yeah. It becomes obnoxious and difficult to filter out.

Ian: Yeah. And the vendors just haven't done any of that because why would they, right? If you went and reduced the amount of log—

Corey: Put engineering effort into something that reduces how much I can charge you? That sounds like lunacy. Yeah.

Ian: Exactly. Their business models are entirely based off it. So, if you went and reduced everyone's logging bill by 30%—or everyone's logging volume by 30% and reduced the bills by 30%—it's not going to be a great time if you're a publicly traded company that has built your entire business model on essentially a very SaaS volume-driven—and in my eyes—relatively exploitative pricing and billing model.

Corey: Ian, I want to thank you for taking so much time out of your day to talk to me about this. If people want to learn more, where can they find you? I mean, you are a Field CTO, so clearly you're outstanding in your field. But assuming that people don't want to go to farm country, where's the best place to find you?

Ian: Yeah. Well, it'll be a bunch of different conferences. I'll be at KubeCon this year. But chronosphere.io is the company website.
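A pre-ingest filter for exactly the noise Corey lists is a few lines in any log shipper; here is a hedged sketch in plain Python, with the log format and patterns invented for illustration:

```python
import re

# Hypothetical pre-ingest filter: drop the lines nobody will ever read
# (load balancer health checks, favicon 200s) before paying to ship them.
NOISE = [
    re.compile(r'"GET /healthz?[^"]*" 200'),
    re.compile(r'"GET /favicon\.ico[^"]*" 200'),
]

def keep(line: str) -> bool:
    """Return True if the log line is worth shipping to the vendor."""
    return not any(pattern.search(line) for pattern in NOISE)

lines = [
    '10.0.0.1 - - "GET /healthz HTTP/1.1" 200 2',
    '10.0.0.2 - - "GET /favicon.ico HTTP/1.1" 200 512',
    '10.0.0.3 - - "POST /api/orders HTTP/1.1" 500 73',
]
shipped = [line for line in lines if keep(line)]
```

Only the third line survives; at 50 cents per ingested gigabyte, dropping the first two classes of traffic before the pipeline is where the savings come from.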
I've had the opportunity to talk to a lot of different customers—not from a hard-sell perspective, but, you know, conversations like this about what are the real problems you're having and what are the things that you sort of wish that you could do.

One of the favorite things that I get to ask people is, “If you could wave a magic wand, what would you love to be able to do with your observability solution?” That's, A, a really great part, but, B, being able to say, “Well, actually, that thing you want to do—I think I have a way to accomplish that,” is a really rewarding part of this particular role.

Corey: And we will, of course, put links to that in the show notes. Thank you so much for being so generous with your time. I appreciate it.

Ian: Thanks, Corey. It's great to be here.

Corey: Ian Smith, Field CTO at Chronosphere, on this promoted guest episode. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment, which is going to be super easy in your case, because it's just one of the things that the omnibus observability platform that your company sells offers as part of its full suite of things you've never used.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

Kubernetes Podcast from Google
Security, Access and War, with Kateryna Ivashchenko

Play Episode Listen Later Sep 9, 2022 39:00 Very Popular


Kateryna Ivashchenko is a Senior Demand Generation Manager at Teleport, an organizer of community events, and a supporter of the developer community in her home country of Ukraine. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod News of the week Introducing Ambient Mesh in Istio Istio 1.15 Linkerd 2.12 Linkerd and the Gateway API Symbiosis Cuber nay-tace Reddit discussion VMware Tanzu announcements from VMware Explore Isovalent raises $40m Series B Kubernetes Blog: PodSecurityPolicy: The Historical Context Pod Security Admission Controller in Stable CSI Inline Volumes have graduated to GA cgroup v2 graduates to GA Kubernetes was never designed for batch jobs by Kurt Schelfthout 7 years of GKE General Availability Links from the interview Portworx Teleport 24 February 2022: Russia invades Ukraine BeyondCorp Teleport open source hunter2 Okta breach Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin’s Most Dangerous Hackers by Andy Greenberg War in Ukraine Kateryna’s sister’s T-shirt Independence Day Chris Lentricchia and Operation Dvoretskyi CNCF crowdfunding DevOpsDays Kyiv International Snack Exchange Kateryna Ivashchenko on Twitter

Changelog Master Feed
The cloud native ecosystem (Ship It! #69)

Play Episode Listen Later Sep 8, 2022 76:18 Transcription Available


Maybe it's the Californian sun. Or perhaps it's the time spent at Disney Studios, the home of the best stories. One thing is for sure: Taylor Dolezal is one of the happiest cloud native people that Gerhard knows. As a former Lead SRE for Disney Studios, Taylor has significant hands-on experience running cloud native technologies in a large company. After a few years as a HashiCorp Developer Advocate, Taylor is now Head of End User Ecosystem at CNCF. In his current role, he is helping enable cloud native success for end-users like Boeing, Mercedes Benz & many others.

Cloud Security Podcast
Cloud Security Monitoring in a Modern Security Stack

Play Episode Listen Later Sep 8, 2022 36:53


In this episode of the Virtual Coffee with Ashish edition, we spoke with Jack Naglieri (Jack's Twitter) about what Security Monitoring can look like for a Cloud Native Company Episode ShowNotes, Links and Transcript on Cloud Security Podcast: www.cloudsecuritypodcast.tv Host Twitter: Ashish Rajan (@hashishrajan) Guest Twitter: Jack Naglieri (Jack's Twitter) Podcast Twitter - @CloudSecPod @CloudSecureNews If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels: - Cloud Security News - Cloud Security Academy Spotify TimeStamp for Interview Questions (00:00) Ashish's Intro to the Episode (02:40) https://snyk.io/csp (02:51) Corey's professional background (03:34) Jack's introduction (06:15) What is Cloud Native? (07:41) What is a modern security stack? (09:50) Why Cloud Native Security Monitoring? (12:36) The current market for security monitoring (15:45) Cloud Native monitoring for on-prem (18:10) How to start with Cloud Native Security Monitoring? (21:01) Security monitoring in cloud vs traditional (22:51) Challenges with Cloud Native Security Monitoring (25:25) How can SMBs tackle Cloud Native Security Monitoring? (26:52) Are cloud native tools more cost effective than traditional ones? (28:30) Heterogeneous log correlation (30:09) What is a security data lake? (35:25) Does the modern security team need data skills?

DataCentric Podcast
Exploring Edge Computing, with Stratus's Jason Andersen.

Play Episode Listen Later Sep 6, 2022 50:27


It's all about Edge Computing as hosts Matt Kimball and Steve McDowell, principal analysts at Moor Insights & Strategy, welcome Jason Andersen, VP of Strategy and Product Management at Stratus. Jason brings Matt & Steve up-to-speed on what's happening in the world of edge computing in a wide-ranging conversation about standards (or the lack of), the evolution of edge architectures, the fragmented edge market, the colliding worlds of IT & OT, and how Stratus helps its customers navigate through this confusing world. There is a whole lot packed into this hour-long discussion, but it's invaluable for anyone working in edge today. 00:00 Kick-it Off 01:25 Catching up with Stratus: Product updates, SGH Acquisition 03:57 Pivoting from Fault-Tolerance to the Edge 08:48 It's all about Resilience! 11:00 How edge is evolving 13:00 Fragmentation at the edge 16:21 Containerization & Cloud-Native at the edge 20:48 Simplifying the edge control-plane while delivering QoS 23:20 Revisiting 2 Different Worlds as IT & OT Converge 30:10 Security at the Edge 37:22 General attitudes impacting edge deployments 41:14 Where's Stratus focused in the near-term? 45:14 Importance of the overall solution stack 50:11 Wrapping-Up 50:26 Done Special Guest: Jason Andersen.

Resilient Cyber
S3E16: Greg Thomas - Secure Service Mesh & Cloud-native Networking

Play Episode Listen Later Sep 1, 2022 32:50


Nikki: In one of your recent posts you speak about how more organizations are looking to leverage service mesh in their own environments. Can you talk a little bit about why a team may be interested in moving to a more service mesh architecture?

Nikki: What do you think may impede or stop an organization from adopting updated networking practices and technologies, like service mesh, and how can they get started adopting it?

Chris: What role do you think Service Mesh plays in the push for Zero Trust and maturing security in cloud-native environments?

Chris: I've heard you use the term Secure Service Networking. What exactly is this, and is it different than Service Mesh? We know there are the four pillars of Service Networking: Service Discovery, Secure Network, Automate Network, Access Service. What are these exactly?

Chris: In the context of micro-services and Kubernetes, how does networking change?

Nikki: The field of engineering is growing more and more; we have Infrastructure Engineers, Application Engineers, versus the traditional job roles of Systems or Software Engineers. Do you see an industry trend moving to expanding the engineering field into different disciplines, like Platform Engineers? Or do you think some of these roles are similar but are getting updated titles?

Chris: HashiCorp has some excellent offerings such as Terraform, Vault, Consul and so on. What resources can folks use to upskill in these technologies?

Nikki: I saw you recently did a talk on securing service level networking for the DoD. Do you feel like a lot of those principles apply outside of the DoD or federal space? Or do you see the private sector using more of these technologies?

Software Engineering Daily
Cloud-native Observability with Martin Mao

Play Episode Listen Later Aug 31, 2022 48:18


Maintaining availability in a modern digital application is critical to keeping your application operating and available and to keep meeting your customers' growing demands. There are many observability platforms out there, and certainly Prometheus is a popular open-source solution for cloud-native companies, yet operating an observability platform costs money, and all of the The post Cloud-native Observability with Martin Mao appeared first on Software Engineering Daily.

Cisco Champion Radio
S9|E34 An Innovative Approach to Cloud-Native Application Architecture

Play Episode Listen Later Aug 30, 2022 50:01


Businesses are shifting to microservice-based application architectures, which support rapid development with their flexibility, stability, security, and scale. At the same time, these architectures present new connectivity and security challenges that require us to move away from traditional approaches. In this session, we'll show how Cisco is applying its 30 years of networking leadership to cloud-native application architectures with Calisti, Cisco's new Service Mesh Manager. You'll also hear about Panoptica, the Cisco Secure Application Cloud, which applies Cisco security leadership to cloud-native architectures. Finally, we will have an open discussion about additional technology development initiatives led by Cisco's Emerging Technologies & Incubation team. Learn more Cisco Emerging Technologies & Incubation: https://eti.cisco.com/ Calisti (Cisco's Service Mesh Manager): https://calisti.app/ Panoptica (Cisco's Secure Application Cloud): https://panoptica.app/ Follow us https://twitter.com/CiscoChampion Hosts Dan Kelcher (twitter.com/ipswitch), Merdian IT, Enterprise Network and Cybersecurity Solutions Architect Richard Atkin (twitter.com/UKRichA), ITGL, Solution Architect Guest Tim Szigeti, Cisco, Principal Technical Marketing Engineer, ET&I Moderator Amilee San Juan (twitter.com/amileesan1), Cisco, Customer Voices and Cisco Champion Program

The CTO Advisor
Migrating a Monolithic Application to Cloud-Native using Google Cloud Platform Tools

Play Episode Listen Later Aug 24, 2022


In this sponsored proof of concept project, the CTO Advisor Team takes a monolithic application and modernizes the app using the Google Cloud Platform assessment and automation tools. Keith Townsend, Principal of the CTO Advisor, interviews AWS application developer trainer Alastair Cooke on his experience assessing and modernizing a monolithic application to Google Cloud. Can [...]

Kubernetes Podcast from Google
Kubernetes 1.25, with Cici Huang

Play Episode Listen Later Aug 23, 2022 26:51 Very Popular


It’s release day! We discuss today’s Kubernetes 1.25 with release team lead Cici Huang, Software Engineer at Google Cloud. What’s in, what’s out, and what is it like to lead a release you are also promoting a feature in? Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod Chatter of the week Nelson underwater England underwater A picture of a sheep Follow Craig on Twitter for more like that News of the week Kubernetes 1.25 release Introducing Acorn Acorn Labs: Rancher Co-Founders’ New Kubernetes Startup by Christine Hall Episode 57, with Darren Shepherd GKE updates: New observability metrics GKE Autopilot now default 256 pods per node KubeCon schedule published Cloud Native Rejekts Scaling Kubernetes to thousands of CRDs by Nic Cope Links from the interview IBM Watson Kubernetes Community Awards SIG API Machinery Chair & Cici’s hiring manager: Fede Bongiovanni Kubernetes 1.25 release team Release blog Highlights: PodSecurityPolicy is removed; Pod Security Admission is stable cgroups v2 KMS v2alpha1 CRD validation expression language Registry change Kubernetes 1.24 delay Theme and logo Envelopes: 1.24 lead: Episode 178, with James Laverack 1.26 lead: Leonard Pahlke Cici Huang on GitHub

Software Engineering Daily
Cloud-native Authorization with Tim Hinrichs

Play Episode Listen Later Aug 9, 2022 57:09 Very Popular


Enabling authorization policies across disparate cloud-native environments such as containers, microservices and modern application delivery infrastructure is complex and can be a roadblock for software engineering teams. Open Policy Agent, or OPA, is an open, declarative, policy-as-code approach to authorization that reduces security and compliance burden for engineering teams.  Business context is translated into declarative The post Cloud-native Authorization with Tim Hinrichs appeared first on Software Engineering Daily.