Podcasts about Elastic Beanstalk

An orchestration service offered by Amazon Web Services for deploying and scaling web applications

  • 30 podcasts
  • 47 episodes
  • 45m average episode duration
  • Infrequent episodes
  • Latest episode: Aug 22, 2023
Popularity chart for elastic beanstalk, 2017–2024


Best podcasts about Elastic Beanstalk

Latest podcast episodes about Elastic Beanstalk

The Tech Trek
Driving Business Value through Continuous Deployment

The Tech Trek

Play Episode Listen Later Aug 22, 2023 34:03


In this episode, Amir interviews Ryan Fox, VP of Engineering at Super, about their journey with internal developer tooling and CI/CD. Ryan talks about the growth of the engineering team and the current focus on internal tooling. He discusses how the team views these internal tools as a competitive advantage.

Highlights:
[00:00:00] Super's journey and growth.
[00:04:27] Mission-aligned teams.
[00:08:33] Elastic Beanstalk's limitations.
[00:13:26] Canarying support.
[00:17:00] Deploy time and frequency.
[00:19:27] Engineering team competitive advantage.
[00:24:37] How to measure team productivity.
[00:26:55] Continuous deployment and lessons learned.
[00:31:09] Maintaining end-to-end testing.

Guest: Ryan Fox is the VP of Engineering at Super.com, a mobile commerce and fintech platform focused on saving money for people who need to save, not just want to save. Super.com is one of the fastest-growing tech companies in North America and has raised more than USD 100 million to date. Previously, Ryan worked as both a SWE and SRE at Google, where he worked on systems handling millions of requests per second.
LinkedIn: https://www.linkedin.com/in/ryan-m-fox/
---
Thank you so much for checking out this episode of The Tech Trek, and we would appreciate it if you would take a minute to rate and review us on your favorite podcast player. Want to learn more about us? Head over to https://www.elevano.com
Have questions or want to cover specific topics with our future guests? Please message me at https://www.linkedin.com/in/amirbormand (Amir Bormand)

Um Inventor Qualquer
Is Elastic Beanstalk worth it? Is it easy to learn? | AWS

Um Inventor Qualquer

Play Episode Listen Later Jan 16, 2023 11:41 Transcription Available


Learn how easy it is to pick up and use AWS Elastic Beanstalk to simplify deploying your applications to the cloud, while saving money on cloud spend and on your company's operations. Save time on complicated deploy processes and on configuring servers to run your web applications.
Sign up for the AWS course pre-launch: https://www.uminventorqualquer.com.br/curso-aws/
Wesley Milan's channel: https://bit.ly/3LqiYwg
Instagram: https://bit.ly/3tfzAj0
LinkedIn: https://www.linkedin.com/in/wesleymilan/
Podcast: https://bit.ly/3qa5JH1
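For readers who want a concrete sense of what the episode is describing, here is a minimal, illustrative sketch (not taken from the episode; the application, environment, bucket, and file names are placeholders) of pushing a new version to an existing Elastic Beanstalk environment with boto3:

```python
# Minimal sketch: deploy a new application version to an existing
# Elastic Beanstalk environment. All names below are placeholders.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

app, env = "my-app", "my-app-prod"
bucket, key = "my-deploy-bucket", "builds/my-app-v42.zip"

# 1. Upload the packaged source bundle (a zip of the app) to S3.
s3.upload_file("my-app-v42.zip", bucket, key)

# 2. Register it as a new application version.
eb.create_application_version(
    ApplicationName=app,
    VersionLabel="v42",
    SourceBundle={"S3Bucket": bucket, "S3Key": key},
    Process=True,  # validate the bundle before it can be deployed
)

# 3. Point the environment at the new version; Beanstalk handles the
#    provisioning, load balancing, and rolling update of instances.
eb.update_environment(EnvironmentName=env, VersionLabel="v42")
```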

52 Weeks of Cloud
52 weeks AWS: Episode 22 Solutions Architect: Planning for Disaster

52 Weeks of Cloud

Play Episode Listen Later May 22, 2022 26:08


Episode 22 covers preparing for disasters.

00:00 Intro
02:37 Planning for failures
03:34 Avoiding and planning for disasters
05:37 Using the Well-Architected Framework design principles
05:51 Recovery point objective (RPO)
06:34 Recovery time objective (RTO)
07:12 Plan for disaster recovery
08:03 Storage and backup building blocks
09:50 S3 Cross-Region Replication
10:41 EBS volume snapshots
11:41 File system replication
12:50 Compute capacity recovery
13:44 Strategies for disaster recovery
15:01 Networking design for resilience
15:34 Databases and recovery
16:45 Automation services: CloudFormation, Elastic Beanstalk, and AWS OpsWorks
17:38 Four disaster recovery strategies: backup and restore, pilot light, warm standby, and multi-site
18:23 AWS Storage Gateway
24:00 Summary of common DR patterns

If you enjoyed this video, here are additional resources to look at:
Coursera + Duke Specialization: Building Cloud Computing Solutions at Scale Specialization: https://www.coursera.org/specializations/building-cloud-computing-solutions-at-scale
Python, Bash, and SQL Essentials for Data Engineering Specialization: https://www.coursera.org/specializations/python-bash-sql-data-engineering-duke
AWS Certified Solutions Architect - Professional (SAP-C01) Cert Prep: 1 Design for Organizational Complexity: https://www.linkedin.com/learning/aws-certified-solutions-architect-professional-sap-c01-cert-prep-1-design-for-organizational-complexity/design-for-organizational-complexity?autoplay=true
O'Reilly Book: Practical MLOps: https://www.amazon.com/Practical-MLOps-Operationalizing-Machine-Learning/dp/1098103017
O'Reilly Book: Python for DevOps: https://www.amazon.com/gp/product/B082P97LDW/
Pragmatic AI: An Introduction to Cloud-based Machine Learning: https://www.amazon.com/gp/product/B07FB8F8QP/
Pragmatic AI Labs Book: Python Command-Line Tools: https://www.amazon.com/gp/product/B0855FSFYZ
Pragmatic AI Labs Book: Cloud Computing for Data Analysis: https://www.amazon.com/gp/product/B0992BN7W8
Pragmatic AI Book: Minimal Python: https://www.amazon.com/gp/product/B0855NSRR7
Pragmatic AI Book: Testing in Python: https://www.amazon.com/gp/product/B0855NSRR7
Subscribe to Pragmatic AI Labs YouTube Channel: https://www.youtube.com/channel/UCNDfiL0D1LUeKWAkRE1xO5Q
Subscribe to 52 Weeks of AWS Podcast: https://52-weeks-of-cloud.simplecast.com
View content on noahgift.com: https://noahgift.com/
View content on Pragmatic AI Labs Website: https://paiml.com/
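As a rough illustration of one of the storage building blocks listed above, S3 Cross-Region Replication comes down to a single bucket-level configuration call. The sketch below is not from the episode: the bucket names and role ARN are placeholders, and it assumes versioning is already enabled on both buckets and that the replication IAM role already exists.

```python
# Minimal sketch: replicate objects from a primary bucket to a bucket in
# another region for disaster recovery. Names and ARNs are placeholders;
# versioning must already be enabled on both buckets.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.put_bucket_replication(
    Bucket="my-primary-bucket",
    ReplicationConfiguration={
        # IAM role that S3 assumes to copy objects to the destination bucket.
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "dr-replication",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # empty prefix = replicate everything
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::my-dr-bucket-us-west-2",
                    "StorageClass": "STANDARD_IA",
                },
            }
        ],
    },
)
```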

Screaming in the Cloud
Creating “Quinntainers” with Casey Lee

Screaming in the Cloud

Play Episode Listen Later Apr 20, 2022 46:16


About CaseyCasey spends his days leveraging AWS to help organizations improve the speed at which they deliver software. With a background in software development, he has spent the past 20 years architecting, building, and supporting software systems for organizations ranging from startups to Fortune 500 enterprises.Links Referenced: “17 Ways to Run Containers in AWS”: https://www.lastweekinaws.com/blog/the-17-ways-to-run-containers-on-aws/ “17 More Ways to Run Containers on AWS”: https://www.lastweekinaws.com/blog/17-more-ways-to-run-containers-on-aws/ kubernetestheeasyway.com: https://kubernetestheeasyway.com snark.cloud/quinntainers: https://snark.cloud/quinntainers ECS Chargeback: https://github.com/gaggle-net/ecs-chargeback  twitter.com/nektos: https://twitter.com/nektos TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored by our friends at Revelo. Revelo is the Spanish word of the day, and its spelled R-E-V-E-L-O. It means “I reveal.” Now, have you tried to hire an engineer lately? I assure you it is significantly harder than it sounds. One of the things that Revelo has recognized is something I've been talking about for a while, specifically that while talent is evenly distributed, opportunity is absolutely not. They're exposing a new talent pool to, basically, those of us without a presence in Latin America via their platform. It's the largest tech talent marketplace in Latin America with over a million engineers in their network, which includes—but isn't limited to—talent in Mexico, Costa Rica, Brazil, and Argentina. Now, not only do they wind up spreading all of their talent on English ability, as well as you know, their engineering skills, but they go significantly beyond that. Some of the folks on their platform are hands down the most talented engineers that I've ever spoken to. Let's also not forget that Latin America has high time zone overlap with what we have here in the United States, so you can hire full-time remote engineers who share most of the workday as your team. It's an end-to-end talent service, so you can find and hire engineers in Central and South America without having to worry about, frankly, the colossal pain of cross-border payroll and benefits and compliance because Revelo handles all of it. If you're hiring engineers, check out revelo.io/screaming to get 20% off your first three months. That's R-E-V-E-L-O dot I-O slash screaming.Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured and fully managed with built in access via key-value, SQL, and full-text search. Flexible JSON documents aligned to your applications and workloads. Build faster with blazing fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today is someone that I had the pleasure of meeting at re:Invent last year, but we'll get to that story in a minute. 
Casey Lee is the CTO with a company called Gaggle, which is—as they frame it—saving lives. Now, that seems to be a relatively common position that an awful lot of different tech companies take. “We're saving lives here.” It's, “You show banner ads and some of them are attack platforms for JavaScript malware. Let's be serious here.” Casey, thank you for joining me, and what makes the statement that Gaggle saves lives not patently ridiculous?Casey: Sure. Thanks, Corey. Thanks for having me on the show. So Gaggle, we're ed-tech company. We sell software to school districts, and school districts use our software to help protect their students while the students use the school-issued Google or Microsoft accounts.So, we're looking for signs of bullying, harassment, self-harm, and potentially suicide from K-12 students while they're using these platforms. They will take the thoughts, concerns, emotions they're struggling with and write them in their school-issued accounts. We detect that and then we notify the school districts, and they get the students the help they need before they can do any permanent damage to themselves. We protect about 6 million students throughout the US. We ingest a lot of content.Last school year, over 6 billion files, about the equal number of emails ingested. We're looking for concerning content and then we have humans review the stuff that our machine learning algorithms detect and flag. About 40 million items had to go in front of humans last year, resulted in about 20,000 what we call PSSes. These are Possible Student Situations where students are talking about harming themselves or harming others. And that resulted in what we like to track as lives saved. 1400 incidents last school year where a student was dealing with suicide ideation, they were planning to take their own lives. We detect that and get them help within minutes before they can act on that. That's what Gaggle has been doing. We're using tech, solving tech problems, and also saving lives as we do it.Corey: It's easy to lob a criticism at some of the things you're alluding to, the idea of oh, you're using machine learning on student data for young kids, yadda, yadda, yadda. Look at the outcome, look at the privacy controls you have in place, and look at the outcomes you're driving to. Now, I don't necessarily trust the number of school administrations not to become heavy-handed and overbearing with it, but let's be clear, that's not the intent. That is not what the success stories you have alluded to. I've got to say I'm a fan, so thanks for doing what you're doing. I don't say that very often to people who work in tech companies.Casey: Cool. Thanks, Corey.Corey: But let's rewind a bit because you and I had passed like ships in the night on Twitter for a while, but last year at re:Invent something odd happened. First, my business partner procrastinated at getting his ticket—that's not the odd part; he does that a lot—but then suddenly ticket sales slammed shut and none were to be had anywhere. You reached out with a, “Hey, I have a spare ticket because someone can't go. Let me get it to you.” And I said, “Terrific. Let me pay you for the ticket and take you to dinner.”You said, “Yes on the dinner, but I'd rather you just look at my AWS bill and don't worry about the cost of the ticket.” “All right,” said I. I know a deal when I see one. We grabbed dinner at the Venetian. I said, “Bust out your laptop.” And you said, “Oh, I was kidding.” And I said, “Great. I wasn't. 
Bust it out.”And you went from laughing to taking notes in about the usual time that happens when I start looking at these things. But how was your recollection of that? I always tend to romanticize some of these things. Like, “And then everyone's restaurant just turned, stopped, and clapped the entire time.” Maybe that part didn't happen.Casey: Everything was right up until the clapping part. That was a really cool experience. I appreciate you walking through that with me. Yeah, we've got lots of opportunity to save on our AWS bill here at Gaggle, and in that little bit of time that we had together, I think I walked away with no more than a dozen ideas for where to shave some costs. The most obvious one, the first thing that you keyed in on, is we had RIs coming due that weren't really well-optimized and you steered me towards savings plans. We put that in place and we're able to apply those savings plans not just to our EC2 instances but also to our serverless spend as well.So, that was a very worthwhile and cost-effective dinner for us. The thing that was most surprising though, Corey, was your approach. Your approach to how to review our bill was not what I thought at all.Corey: Well, what did you expect my approach was going to be? Because this always is of interest to me. Like, do you expect me to, like, whip a portable machine learning rig out of my backpack full of GPUs or something?Casey: I didn't know if you had, like, some secret tool you were going to hit, or if nothing else, I thought you were going to go for the Cost Explorer. I spend a lot of time in Cost Explorer, that's my go-to tool, and you wanted nothing to do with Cost Exp—I think I was actually pulling up Cost Explorer for you and you said, “I'm not interested. Take me to the bills.” So, we went right to the billing dashboard, you started opening up the invoices, and I thought to myself, “I don't remember the last time I looked at an AWS invoice.” I just, it's noise; it's not something that I pay attention to.And I learned something, that you get a real quick view of both the cost and the usage. And that's what you were keyed in on, right? And you were looking at things relative to each other. “Okay, I have no idea about Gaggle or what they do, but normally, for a company that's spending x amount of dollars in EC2, why is your data transfer cost the way it is? Is that high or low?” So, you're looking for kind of relative numbers, but it was really cool watching you slice and dice that bill through the dashboard there.Corey: There are a few things I tie together there. Part of it is that this is sort of a surprising thing that people don't think about but start with big numbers first, rather than going alphabetically because I don't really care about your $6 Alexa for Business spend. I care a bit more about the $6 million, or whatever it happens to be at EC2—I'm pulling numbers completely out of the ether, let's be clear; I don't recall what the exact magnitude of your bill is and it's not relevant to the conversation.And then you see that and it's like, “Huh. Okay, you're spending $6 million on EC2. Why are you spending 400 bucks on S3? Seems to me that those two should be a little closer aligned. What's the deal here? Oh, God, you're using eight petabytes of EBS volumes. Oh, dear.”And just, it tends to lead to interesting stuff. Break it down by region, service, and use case—or usage type, rather—is what shows up on those exploded bills, and that's where I tend to start. 
It also is one of the easiest things to wind up having someone throw into a PDF and email my way if I'm not doing it in a restaurant with, you know, people clapping standing around.Casey: [laugh]. Right.Corey: I also want to highlight that you've been using AWS for a long time. You're a Container Hero; you are not bad at understanding the nuances and depths of AWS, so I take praise from you around this stuff as valuing it very highly. This stuff is not intuitive, it is deeply nuanced, and you have a business outcome you are working towards that invariably is not oriented day in day out around, “How do I get these services for less money than I'm currently paying?” But that is how I see the world and I tend to live in a very different space just based on the nature of what I do. It's sort of a case study and the advantage of specialization. But I know remarkably little about containers, which is how we wound up reconnecting about a week or so before we did this recording.Casey: Yeah. I saw your tweet; you were trying to run some workload—container workload—and I could hear the frustration on the other end of Twitter when you were shaking your fist at—Corey: I should not tweet angrily, and I did in this case. And, eh, every time I do I regret it. But it played well with the people, so that does help. I believe my exact comment was, “‘me: I've got this container. Run it, please.' ‘Google Cloud: Run. You got it, boss.' AWS has 17 ways to run containers and they all suck.”And that's painting with an overly broad brush, let's be clear, but that was at the tail end of two or three days of work trying to solve a very specific, very common, business problem, that I was just beating my head off of a wall again and again and again. And it took less than half an hour from start to finish with Google Cloud Run and I didn't have to think about it anymore. And it's one of those moments where you look at this and realize that the future is here, we just don't see it in certain ways. And you took exception to this. So please, let's dive in because 280 characters of text after half a bottle of wine is not the best context to have a nuanced discussion that leaves friendships intact the following morning.Casey: Nice. Well, I just want to make sure I understand the use case first because I was trying to read between the lines on what you needed, but let me take a guess. My guess is you got your source code in GitHub, you have a Docker file, and you want to be able to take that repo from GitHub and just have it continuously deployed somewhere in Run. And you don't want to have headaches with it; you just want to push more changes up to GitHub, Docker Build runs and updates some service somewhere. Am I right so far?Corey: Ish, but think a little further up the stack. It was in service of this show. So, this show, as people who are listening to this are probably aware by this point, periodically has sponsors, which we love: We thank them for participating in the ongoing support of this show, which empowers conversations like this. Sometimes a sponsor will come to us with, “Oh, and here's the URL we want to give people.” And it's, “First, you misspelled your company name from the common English word; there are three sublevels within the domain, and then you have a complex UTM tagging tracking co—yeah, you realize people are driving to work when they're listening to this?”So, I've built a while back a link shortener, snark.cloud because is it the shortest thing in the world? 
Not really, but it's easily understandable when I say that, and people hear it for what it is. And that's been running for a long time as an S3 bucket with full of redirects, behind CloudFront. So, I wind up adding a zero-byte object with a redirect parameter on it, and it just works.Now, the challenge that I have here as a business is that I am increasingly prolific these days. So, anything that I am not directly required to be doing, I probably shouldn't necessarily be the one to do it. And care and feeding of those redirect links is a prime example of this. So, I went hunting, and the things that I was looking for were, obviously, do the redirect. Now, if you pull up GitHub, there are hundreds of solutions here.There are AWS blog posts. One that I really liked and almost got working was Eric Johnson's three-part blog post on how to do it serverlessly, with API Gateway, and DynamoDB, no Lambdas required. I really liked aspects of what that was, but it was complex, I kept smacking into weird challenges as I went, and front end is just baffling to me. Because I needed a front end app for people to be able to use here; I need to be able to secure that because it turns out that if you just have a, anyone who stumbles across the URL can redirect things to other places, well, you've just empowered a whole bunch of spam email, and you're going to find that service abused, and everyone starts blocking it, and then you have trouble. Nothing lasts the first encounter with jerks.And I was getting more and more frustrated, and then I found something by a Twitter engineer on GitHub, with a few creative search terms, who used to work at Google Cloud. And what it uses as a client is it doesn't build any kind of custom web app. Instead, as a database, it uses not S3 objects, not Route 53—the ideal database—but a Google sheet, which sounds ridiculous, but every business user here knows how to use that.Casey: Sure.Corey: And it looks for the two columns. The first one is the slug after the snark.cloud, and the second is the long URL. And it has a TTL of five seconds on cache, so make a change to that spreadsheet, five seconds later, it's live. Everyone gets it, I don't have to build anything new, I just put it somewhere around the relevant people can access it, I gave him a tutorial and a giant warning on it, and everyone gets that. And it just works well. It was, “Click here to deploy. Follow the steps.”And the documentation was a little, eh, okay, I had to undo it once and redo it again. Getting the domain registered was getting—ported over took a bit of time, and there were some weird SSL errors as the certificates were set up, but once all of that was done, it just worked. And I tested the heck out of it, and cold starts are relatively low, and the entire thing fits within the free tier. And it is reminiscent of the magic that I first saw when I started working with some of the cloud providers services, years ago. It's been a long time since I had that level of delight with something, especially after three days of frustration. It's one of the, “This is a great service. Why are people not shouting about this from the rooftops?” That was my perspective. And I put it out on Twitter and oh, Lord, did I get comments. What was your take on it?Casey: Well, so my take was, when you're evaluating a platform to use for running your applications, how fast it can get you to Hello World is not necessarily the best way to go. I just assumed you're wrong. 
I assumed of the 17 ways AWS has to run containers, Corey just doesn't understand. And so I went after it. And I said, “Okay, let me see if I can find a way that solves his use case, as I understand it, through a quick tweet.”And so I tried to App Runner; I saw that App Runner does not meet your needs because you have to somehow get your Docker image pushed up to a repo. App Runner can take an image that's already been pushed up and deployed for you or it can build from source but neither of those were the way I understood your use case.Corey: Having used App Runner before via the Copilot CLI, it is the closest as best I can tell to achieving what I want. But also let's be clear that I don't believe there's a free tier; there needs to be a load balancer in front of it, so you're starting with 15 bucks a month for this thing. Which is not the end of the world. Had I known at the beginning that all of this was going to be there, I would have just signed up for a bit.ly account and called it good. But here we are.Casey: Yeah. I tried Copilot. Copilot is a great developer experience, but it also is just pulling together tons of—I mean just trying to do a Copilot service deploy, VPCs are being created and tons IAM roles are being created, code pipelines, there's just so much going on. I was like 20 minutes into it, and I said, “Yeah, this is not fitting the bill for what Corey was looking for.” Plus, it doesn't solve my the way I understood your use case, which is you don't want to worry about builds, you just want to push code and have new Docker images get built for you.Corey: Well, honestly, let's be clear here, once it's up and running, I don't want to ever have to touch the silly thing again.Casey: Right.Corey: And that's so far has been the case, after I forked the repo and made a couple of changes to it that I wanted to see. One of them was to render the entire thing case insensitive because I get that one wrong a lot, and the other is I wanted to change the permanent 301 redirect to a temporary 302 redirect because occasionally, sponsors will want to change where it goes in the fullness of time. And that is just fine, but I want to be able to support that and not have to deal with old cached data. So, getting that up and running was a bit of a challenge. But the way that it worked, was following the instructions in the GitHub repo.The developer environment had spun up in the Google's Cloud Shell was just spectacular. It prompted me for a few things and it told me step by step what to do. This is the sort of thing I could have given a basically non-technical user, and they would have had success with it.Casey: So, I tried it as well. I said, “Well, okay, if I'm going to respond to Corey here and challenge him on this, I need to try Cloud Run.” I had no experience with Cloud Run. I had a small example repo that loosely mapped what I understood you were trying to do. Within five minutes, I had Cloud Run working.And I was surprised anytime I pushed a new change, within 45 seconds the change was built and deployed. So, here's my conclusion, Corey. Google Cloud Run is great for your use case, and AWS doesn't have the perfect answer. But here's my challenge to you. 
I think that you just proved why there's 17 different ways to run containers on AWS, is because there's that many different types of users that have different needs and you just happen to be number 18 that hasn't gotten the right attention yet from AWS.Corey: Well, let's be clear, like, my gag about 17 ways to run containers on AWS was largely a joke, and it went around the internet three times. So, I wrote a list of them on the blog post of “17 Ways to Run Containers in AWS” and people liked it. And then a few months later, I wrote “17 More Ways to Run Containers on AWS” listing 17 additional services that all run containers.And my favorite email that I think I've ever received in feedback was from a salty AWS employee, saying that one of them didn't really count because of some esoteric reason. And it turns out that when I'm trying to make a point of you have a sarcastic number of ways to run containers, pointing out that well, one of them isn't quite valid, doesn't really shatter the argument, let's be very clear here. So, I appreciate the feedback, I always do. And it's partially snark, but there is an element of truth to it in that customers don't want to run containers, by and large. That is what they do in service of a business goal.And they want their application to run which is in turn to serve as the business goal that continues to abstract out into, “Remain a going concern via the current position the company stakes out.” In your case, it is saving lives; in my case, it is fixing horrifying AWS bills and making fun of Amazon at the same time, and in most other places, there are somewhat more prosaic answers to that. But containers are simply an implementation detail, to some extent—to my way of thinking—of getting to that point. An important one [unintelligible 00:18:20], let's be clear, I was very anti-container for a long time. I wrote a talk, “Heresy in the Church of Docker” that then was accepted at ContainerCon. It's like, “Oh, boy, I'm not going to leave here alive.”And the honest answer is many years later, that Kubernetes solves almost all the criticisms that I had with the downside of well, first, you have to learn Kubernetes, and that continues to be mind-bogglingly complex from where I sit. There's a reason that I've registered kubernetestheeasyway.com and repointed it to ECS, Amazon's container service that is not requiring you to cosplay as a cloud provider yourself. But even ECS has a number of challenges to it, I want to be very clear here. There are no silver bullets in this.And you're completely correct in that I have a large, complex environment, and the application is nuanced, and I'm willing to invest a few weeks in setting up the baseline underlying infrastructure on AWS with some of these services, ideally not all of them at once because that's something a lunatic would do, but getting them up and running. The other side of it, though, is that if I am trying to evaluate a cloud provider's handling of containers and how this stuff works, the reason that everyone starts with a Hello World-style example is that it delivers ideally, the meantime to dopamine. There's a reason that Hello World doesn't have 18 different dependencies across a bunch of different databases and message queues and all the other complicated parts of running a modern application. Because you just want to see how it works out of the gate. 
And if getting that baseline empty container that just returns the string ‘Hello World' is that complicated and requires that much work, my takeaway is not that this user experience is going to get better once I'd make the application itself more complicated.So, I find that off-putting. My approach has always been find something that I can get the easy, minimum viable thing up and running on, and then as I expand know that you'll be there to catch me as my needs intensify and become ever more complex. But if I can't get the baseline thing up and running, I'm unlikely to be super enthused about continuing to beat my head against the wall like, “Well, I'll just make it more complex. That'll solve the problem.” Because it often does not. That's my position.Casey: Yeah, I agree that dopamine hit is valuable in getting attached to want to invest into whatever tech stack you're using. The challenge is your second part of that. Your second part is will it grow with me and scale with me and support the complex edge cases that I have? And the problem I've seen is a lot of organizations will start with something that's very easy to get started with and then quickly outgrow it, and then come up with all sorts of weird Rube Goldberg-type solutions. Because they jumped all in before seeing—I've got kind of an example of that.I'm happy to announce that there's now 18 ways to run containers on AWS. Because in your use case, in the spirit of AWS customer obsession, I hear your use case, I've created an open-source project that I want to share called Quinntainers—Corey: Oh, no.Casey: —and it solves—yes. Quinntainers is live and is ready for the world. So, now we've got 18 ways to run containers. And if you have Corey's use case of, “Hey, here's my container. Run it for me,” now we've got a one command that you can run to get things going for you. I can share a link for you and you could check it out. This is a [unintelligible 00:21:38]—Corey: Oh, we're putting that in the [show notes 00:21:37], for sure. In fact, if you go to snark.cloud/quinntainers, you'll find it.Casey: You'll find it. There you go. The idea here was this: There is a real use case that you had, and I looked at AWS does not have an out-of-the-box simple solution for you. I agree with that. And Google Cloud Run does.Well, the answer would have been from AWS, “Well, then here, we need to make that solution.” And so that's what this was, was a way to demonstrate that it is a solvable problem. AWS has all the right primitives, just that use case hadn't been covered. So, how does Quinntainers work? Real straightforward: It's a command-line—it's an NPM tool.You just run a [MPX 00:22:17] Quinntainer, it sets up a GitHub action role in your AWS account, it then creates a GitHub action workflow in your repo, and then uses the Quinntainer GitHub action—reusable action—that creates the image for you; every time you push to the branch, pushes it up to ECR, and then automatically pushes up that new version of the image to App Runner for you. So, now it's using App Runner under the covers, but it's providing that nice developer experience that you are getting out of Cloud Run. Look, is container really the right way to go with running containers? No, I'm not making that point at all. But the point is it is a—Corey: It might very well be.Casey: Well, if you want to show a good Hello World experience, Quinntainer's the best because within 30 seconds, your app is now set up to continuously deliver containers into AWS for your very specific use case. 
The problem is, it's not going to grow for you. I mean that it was something I did over the weekend just for fun; it's not something that would ever be worthy of hitching up a real production workload to. So, the point there is, you can build frameworks and tools that are very good at getting that initial dopamine hit, but then are not going to be there for you unnecessarily as you mature and get more complex.Corey: And yet, I've tilted a couple of times at the windmill of integrating GitHub actions in anything remotely resembling a programmatic way with AWS services, as far as instance roles go. Are you using permanent credentials for this as stored secrets or are you doing the [OICD 00:23:50][00:23:50] handoff?Casey: OIDC. So, what happens is the tool creates the IAM role for you with the trust policy on GitHub's OIDC provider, sets all that up for you in your account, locks it down so that just your repo and your main branch is able to push or is able to assume the role, the role is set up just to allow deployments to App Runner and ECR repository. And then that's it. At that point, it's out of your way. And you're just git push, and couple minutes later, your updates are now running an App Runner for you.Corey: This episode is sponsored in part by our friends at Vultr. Optimized cloud compute plans have landed at Vultr to deliver lightning fast processing power, courtesy of third gen AMD EPYC processors without the IO, or hardware limitations, of a traditional multi-tenant cloud server. Starting at just 28 bucks a month, users can deploy general purpose, CPU, memory, or storage optimized cloud instances in more than 20 locations across five continents. Without looking, I know that once again, Antarctica has gotten the short end of the stick. Launch your Vultr optimized compute instance in 60 seconds or less on your choice of included operating systems, or bring your own. It's time to ditch convoluted and unpredictable giant tech company billing practices, and say goodbye to noisy neighbors and egregious egress forever.Vultr delivers the power of the cloud with none of the bloat. "Screaming in the Cloud" listeners can try Vultr for free today with a $150 in credit when they visit getvultr.com/screaming. That's G E T V U L T R.com/screaming. My thanks to them for sponsoring this ridiculous podcast.Corey: Don't undersell what you've just built. This is something that—is this what I would use for a large-scale production deployment, obviously not, but it has streamlined and made incredibly accessible things that previously have been very complex for folks to get up and running. One of the most disturbing themes behind some of the feedback I got was, at one point I said, “Well, have you tried running a Docker container on Lambda?” Because now it supports containers as a packaging format. And I said no because I spent a few weeks getting Lambda up and running back when it first came out and I've basically been copying and pasting what I got working ever since the way most of us do.And response is, “Oh, that explains a lot.” With the implication being that I'm just a fool. Maybe, but let's be clear, I am never the only person in the room who doesn't know how to do something; I'm just loud about what I don't know. And the failure mode of a bad user experience is that a customer feels dumb. And that's not okay because this stuff is complicated, and when a user has a bad time, it's a bug.I learned that in 2012. From Jordan Sissel the creator of LogStash. 
He has been an inspiration to me for the last ten years. And that's something I try to live by that if a user has a bad time, something needs to get fixed. Maybe it's the tool itself, maybe it's the documentation, maybe it's the way that GitHub repo's readme is structured in a way that just makes it accessible.Because I am not a trailblazer in most things, nor do I intend to be. I'm not the world's best engineer by a landslide. Just look at my code and you'd argue the fact that I'm an engineer at all. But if it's bad and it works, how bad is it? Is sort of the other side of it.So, my problem is that there needs to be a couple of things. Ignore for a second the aspect of making it the right answer to get something out of the door. The fact that I want to take this container and just run it, and you and I both reach for App Runner as the default AWS service that does this because I've been swimming in the AWS waters a while and you're a frickin AWS Container Hero, where it is expected that you know what most of these things do. For someone who shows up on the containers webpage—which by the way lists, I believe 15 ways to run containers on mobile and 19 ways to run containers on non-mobile, which is just fascinating in its own right—and it's overwhelming, it's confusing, and it's not something that makes it is abundantly clear what the golden path is. First, get it up and working, get it running, then you can add nuance and flavor and the rest, and I think that's something that's gotten overlooked in our mad rush to pretend that we're all Google engineers, circa 2012.Casey: Mmm. I think people get stressed out when they tried to run containers in AWS because they think, “What is that golden path?” You said golden path. And my advice to people is there is no golden path. And the great thing about AWS is they do continue to invest in the solutions they come up with. I'm still bitter about Google Reader.Corey: As am I.Casey: Yeah. I built so much time getting my perfect set of RSS feeds and then I had to find somewhere else to—with AWS, the different offerings that are available for running containers, those are there intentionally, it's not by accident. They're there to solve specific problems, so the trick is finding what works best for you and don't feel like one is better than the other is going to get more attention than others. And they each have different use cases.And I approach it this way. I've seen a couple of different people do some great flowcharts—I think Forrest did one, Vlad did one—on ways to make the decision on how to run your containers. And I break it down to three questions. I ask people first of all, where are you going to run these workloads? If someone says, “It has to be in the data center,” okay, cool, then ECS Anywhere or EKS Anywhere and we'll figure out if Kubernetes is needed.If they need specific requirements, so if they say, “No, we can run in the cloud, but we need privileged mode for containers,” or, “We need EBS volumes,” or, “We want really small container sizes,” like, less than a quarter-VCP or less than half a gig of RAM—or if you have custom log requirements, Fargate is not going to work for you, so you're going to run on EC2. Otherwise, run it on Fargate. But that's the first question. Figure out where are you going to run your containers. That leads to the second question: What's your control plane?But those are different, sort of related but different questions. And I only see six options there. 
That's App Runner for your control plane, LightSail for your control plane, Rosa if you're invested in OpenShift already, EKS either if you have Momentum and Kubernetes or you have a bunch of engineers that have a bunch of experience with Kubernetes—if you don't have either, don't choose it—or ECS. The last option Elastic Beanstalk, but let's leave that as a—if you're not currently invested in Elastic Beanstalk don't start today. But I look at those as okay, so I—first question, where am I going to run my containers? Second question, what do I want to use for my control plane? And there's different pros and cons of each of those.And then the third question, how do I want to manage them? What tools do I want to use for managing deployment? All those other tools like Copilot or App2Container or Proton, those aren't my control plane; those aren't where I run my containers; that's how I manage, deploy, and orchestrate all the different containers. So, I look at it as those three questions. But I don't know, what do you think of that, Corey?Corey: I think you're onto something. I think that is a terrific way of exploring that question. I would argue that setting up a framework like that—one or very similar—is what the AWS containers page should be, just coming from the perspective of what is the neophyte customer experience. On some level, you almost need a slide of have choose your level of experience ranging from, “What's a container?” To, “I named my kid Kubernetes because I make terrible life decisions,” and anywhere in between.Casey: Sure. Yeah, well, and I think that really dictates the control plane level. So, for example, LightSail, where does LightSail fit? To me, the value of LightSail is the simplicity. I'm looking at a monthly pricing: Seven bucks a month for a container.I don't know how [unintelligible 00:30:23] works, but I can think in terms of monthly pricing. And it's tailored towards a console user, someone just wants to click in, point to an image. That's a very specific user, there's thousands of customers that are very happy with that experience, and they use it. App Runner presents that scale to zero. That's one of the big selling points I see with App Runner. Likewise, with Google Cloud Run. I've got that scale to zero. I can't do that with ECS, or EKS, or any of the other platforms. So, if you've got something that has a ton of idle time, I'd really be looking at those. I would argue that I think I did the math, Google Cloud Run is about 30% more expensive than App Runner.Corey: Yeah, if you disregard the free tier, I think that's have it—running persistently at all times throughout the month, the drop-out cold starts would cost something like 40 some odd bucks a month or something like that. Don't quote me on it. Again and to be clear, I wound up doing this very congratulatory and complimentary tweet about them on I think it was Thursday, and then they immediately apparently took one look at this and said, “Holy shit. Corey's saying nice things about us. What do we do? What do we do?” Panic.And the next morning, they raised prices on a bunch of cloud offerings. Whew, that'll fix it. Like—Casey: [laugh].Corey: Di-, did you miss the direction you're going on here? No, that's the exact opposite of what you should be doing. But here we are. Interestingly enough, to tie our two conversation threads together, when I look at an AWS bill, unless you're using Fargate, I can't tell whether you're using Kubernetes or not because EKS is a small charge. 
And almost every case for the control plane, or Fargate under it.Everything else just manifests as EC2 spend. From the perspective of the cloud provider. If you're running a Kubernetes cluster, it is a single-tenant application that can have some very funky behaviors like cross-AZ chatter back and fourth because there's no internal mechanism to say talk to the free thing, rather than the two cents a gigabyte thing. It winds up spinning up and down in a bunch of different ways, and the behavior patterns, because of how placement works are not necessarily deterministic, depending upon workload. And that becomes something that people find odd when, “Okay, we look at our bill for a week, what can you say?”“Well, first question. Are you running Kubernetes at all?” And they're like, “Who invited these clowns?” Understand, we're not prying into your workloads for a variety of excellent legal and contractual reasons, here. We are looking at how they behave, and for specific workloads, once we have a conversation engineering team, yeah, we're going to dive in, but it is not at all intuitive from the outside to make any determination whether you're running containers, or whether you're running VMs that you just haven't done anything with in 20 years, or what exactly is going on. And that's just an artifact of the billing system.Casey: We ran into this challenge in Gaggle. We don't use EKS, we use ECS, but we have some shared clusters, lots of EC2 spend, hard to figure out which team is creating the services that's running that up. We actually ended up creating a tool—we open-sourced it—ECS Chargeback, and what it does is it looks at the CPU memory reservations for each task definition, and then prorates the overall charge of the ECS cluster, and then creates metrics in Datadog to give us a breakdown of cost per ECS service. And it also measures what we like to refer to as waste, right? Because if you're reserving four gigs of memory, but your utilization never goes over two gigs, we're paying for that reservation, but you're underutilizing.So, we're able to also show which services have the highest degree of waste, not just utilization, so it helps us go after it. But this is a hard problem. I'd be curious, how do you approach these shared ECS resources and slicing and dicing those bills?Corey: Everyone has a different approach, too. This there is no unifiable, correct answer. A previous show guest, Peter Hamilton, over at Remind had done something very similar, open-sourced a bunch of these things. Understanding what your spend is important on this, and it comes down to getting at the actual business concern because in some cases, effectively dead reckoning is enough. You take a look at the cluster that is really hard to attribute because it's a shared service. Great. It is 5% of your bill.First pass, why don't we just agree that it is a third for Service A, two-thirds for Service B, and we'll call it mostly good at that point? That can be enough in a lot of cases. With scale [laugh] you're just sort of hand-waving over many millions of dollars a year there. How about we get into some more depth? And then you start instrumenting and reporting to something, be it CloudWatch, be a Datadog, be it something else, and understanding what the use case is.In some cases, customers have broken apart shared clusters for that specific reason. I don't think that's necessarily the best approach from an engineering perspective, but again, this is not purely an engineering decision. 
It comes down to serving the business need. And if you're taking up partial credits on that cluster, for a tax credit for R&D for example, you want that position to be extraordinarily defensible, and spending a few extra dollars to ensure that it is the right business decision. I mean, again, we're pure advisory; we advise customers on what we would do in their position, but people often mistake that to be we're going to go for the lowest possible price—bad idea, or that we're going to wind up doing this from a purely engineering-centric point of view.It's, be aware of that in almost every case, with some very notable weird exceptions, the AWS Bill costs significantly less than the payroll expense that you have of people working on the AWS environment in various ways. People are more expensive, so the idea of, well, you can save a whole bunch of engineering effort by spending a bit more on your cloud, yeah, let's go ahead and do that.Casey: Yeah, good point.Corey: The real mark of someone who's senior enough is their answer to almost any question is, “It depends.” And I feel I've fallen into that trap as well. Much as I'd love to sit here and say, “Oh, it's really simple. You do X, Y, and Z.” Yeah… honestly, my answer, the simple answer, is I think that we orchestrate a cyber-bullying campaign against AWS through the AWS wishlist hashtag, we get people to harass their account managers with repeated requests for, “Hey, could you go ahead and [dip 00:36:19] that thing in—they give that a plus-one for me, whatever internal system you're using?”Just because this is a problem we're seeing more and more. Given that it's an unbounded growth problem, we're going to see it more and more for the foreseeable future. So, I wish I had a better answer for you, but yeah, that's stuff's super hard is honest, but it's also not the most useful answer for most of us.Casey: I'd love feedback from anyone from you or your team on that tool that we created. I can share link after the fact. ECS Chargeback is what we call it.Corey: Excellent. I will follow up with you separately on that. That is always worth diving into. I'm curious to see new and exciting approaches to this. Just be aware that we have an obnoxious talent sometimes for seeing these things and, “Well, what about”—and asking about some weird corner edge case that either invalidates the entire thing, or you're like, “Who on earth would ever have a problem like that?” And the answer is always, “The next customer.”Casey: Yeah.Corey: For a bounded problem space of the AWS bill. Every time I think I've seen it all, I just have to talk to one more customer.Casey: Mmm. Cool.Corey: In fact, the way that we approached your teardown in the restaurant is how we launched our first pass approach. Because there's value in something like that is different than the value of a six to eight-week-long, deep-dive engagement to every nook and cranny. And—Casey: Yeah, for sure. 
It was valuable to us.Corey: Yeah, having someone come in to just spend a day with your team, diving into it up one side and down the other, it seems like a weird thing, like, “How much good could you possibly do in a day?” And the answer in some cases is—we had a Honeycomb saying that in a couple of days of something like this, we wound up blowing 10% off their entire operating budget for the company, it led to an increased valuation, Liz Fong-Jones says that—on multiple occasions—that the company would not be what it was without our efforts on their bill, which is just incredibly gratifying to hear. It's easy to get lost in the idea of well, it's the AWS bill. It's just making big companies spend a little bit less to another big company. And that's not exactly, you know, saving the lives of K through 12 students here.Casey: It's opening up opportunities.Corey: Yeah. It's about optimizing for the win for everyone. Because now AWS gets a lot more money from Honeycomb than they would if Honeycomb had not continued on their trajectory. It's, you can charge customers a lot right now, or you can charge them a little bit over time and grow with them in a partnership context. I've always opted for the second model rather than the first.Casey: Right on.Corey: But here we are. I want to thank you for taking so much time out of well, several days now to argue with me on Twitter, which is always appreciated, particularly when it's, you know, constructive—thanks for that—Casey: Yeah.Corey: For helping me get my business partner to re:Invent, although then he got me that horrible puzzle of 1000 pieces for the Cloud-Native Computing Foundation landscape and now I don't ever want to see him again—so you know, that happens—and of course, spending the time to write Quinntainers, which is going to be at snark.cloud/quinntainers as soon as we're done with this recording. Then I'm going to kick the tires and send some pull requests.Casey: Right on. Yeah, thanks for having me. I appreciate you starting the conversation. I would just conclude with I think that yes, there are a lot of ways to run containers in AWS; don't let it stress you out. They're there for intention, they're there by design. Understand them.I would also encourage people to go a little deeper, especially if you got a significantly large workload. You got to get your hands dirty. As a matter of fact, there's a hands-on lab that a company called Liatrio does. They call it their Night Lab; it's a one-day free, hands-on, you run legacy monolithic job applications on Kubernetes, gives you first-hand experience on how to—gets all the way up into observability and doing things like Canary deployments. It's a great, great lab.But you got to do something like that to really get your hands dirty and understand how these things work. So, don't sweat it; there's not one right way. There's a way that will probably work best for each user, and just take the time and understand the ways to make sure you're applying the one that's going to give you the most runway for your workload.Corey: I will definitely dig into that myself. But I think you're right, I think you have nailed a point that is, again, a nuanced one and challenging to put in a rage tweet. But the services don't exist in a vacuum. They're not there because, despite the joke, someone wants to get promoted. 
It's because there are customer needs that are going on that, and this is another way of meeting those needs.I think there could be better guidance, but I also understand that there are a lot of nuanced perspectives here and that… hell is someone else's workflow—Casey: [laugh].Corey: —and there's always value in broadening your perspective a bit on those things. If people want to learn more about you and how you see the world, where's the best place to find you?Casey: Probably on Twitter: twitter.com/nektos, N-E-K-T-O-S.Corey: That might be the first time Twitter has been described as a best place for anything. But—Casey: [laugh].Corey: Thank you once again, for your time. It is always appreciated.Casey: Thanks, Corey.Corey: Casey Lee, CTO at Gaggle and AWS Container Hero. And apparently writing code in anger to invalidate my points, which is always appreciated. Please do more of that, folks. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, or the YouTube comments, which is always a great place to go reading, whereas if you've hated this podcast, please leave a five-star review in the usual places and an angry comment telling me that I'm completely wrong, and then launching your own open-source tool to point out exactly what I've gotten wrong this time.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
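In the conversation above, Casey describes how Quinntainers wires GitHub Actions to AWS through OIDC rather than long-lived credentials: an IAM role whose trust policy only lets a specific repository and branch assume it. The sketch below shows the general shape of such a trust policy; the account ID, repository, and role name are placeholders, and this is not the actual Quinntainers implementation.

```python
# Minimal sketch: an IAM role that GitHub Actions can assume via OIDC,
# restricted to one repository's main branch. Account ID and repo are
# placeholders; the GitHub OIDC provider must already exist in the account.
import json
import boto3

iam = boto3.client("iam")

account_id = "123456789012"
repo = "example-org/example-repo"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/token.actions.githubusercontent.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                    # Only workflows on this repo's main branch may assume the role.
                    "token.actions.githubusercontent.com:sub": f"repo:{repo}:ref:refs/heads/main",
                }
            },
        }
    ],
}

iam.create_role(
    RoleName="github-actions-deploy",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Deploy-only role assumed by GitHub Actions via OIDC",
)
```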
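Casey also describes ECS Chargeback, which prorates a shared cluster's bill by each service's CPU and memory reservations and flags "waste" where reservations far exceed observed utilization. The toy sketch below illustrates that proration idea only; the numbers, service names, and the equal CPU/memory weighting are assumptions for illustration, not the real tool's logic.

```python
# Toy sketch of reservation-based chargeback for a shared ECS cluster.
# Service data and the 50/50 CPU/memory weighting are illustrative only.
monthly_cluster_cost = 10_000.00  # total EC2 spend for the cluster, USD

# Per service: reserved CPU units / memory MiB (task definition x task count)
# and peak observed utilization (e.g. from CloudWatch or Datadog).
services = {
    "api":    {"cpu_res": 4096, "mem_res": 16384, "cpu_peak": 3100, "mem_peak": 9000},
    "worker": {"cpu_res": 2048, "mem_res": 8192,  "cpu_peak": 400,  "mem_peak": 2000},
    "cron":   {"cpu_res": 512,  "mem_res": 2048,  "cpu_peak": 100,  "mem_peak": 512},
}

total_cpu = sum(s["cpu_res"] for s in services.values())
total_mem = sum(s["mem_res"] for s in services.values())

for name, s in services.items():
    # Prorate the cluster bill by the service's share of reservations,
    # weighting CPU and memory equally.
    share = 0.5 * s["cpu_res"] / total_cpu + 0.5 * s["mem_res"] / total_mem
    cost = share * monthly_cluster_cost

    # "Waste": the fraction of the reservation that peak usage never touched.
    cpu_waste = 1 - s["cpu_peak"] / s["cpu_res"]
    mem_waste = 1 - s["mem_peak"] / s["mem_res"]

    print(f"{name:6s} cost=${cost:8.2f}  cpu_waste={cpu_waste:.0%}  mem_waste={mem_waste:.0%}")
```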

yancanfm
28. 2021 Year-in-Review Special: Happy 3rd Anniversary!

yancanfm

Play Episode Listen Later Dec 13, 2021 55:53


Ranking of Kings / Sky Castle / Getting asked which kind of milk I want at Starbucks! / What does "organic" actually mean? / This year's tech retrospective / This year's life retrospective / This year's hobby retrospective / New York bagels are delicious /

Le Podcast AWS en Français
What's new?

Le Podcast AWS en Français

Play Episode Listen Later Oct 15, 2021 11:49


Every other episode of the podcast is devoted to a brief review of the main AWS announcements. This week it's the calm before the storm: the AWS re:Invent conference, which will be held in person in Las Vegas. This week we talk about VMware Cloud and Outposts, Elastic Beanstalk and its databases, the Cloud Development Kit (CDK), a price reduction (again), and the availability of Amazon EC2 instances for macOS.

airhacks.fm podcast with adam bien
Modules Are Needed, But Not Easy

airhacks.fm podcast with adam bien

Play Episode Listen Later Oct 8, 2021 48:22


An airhacks.fm conversation with Ondrej Mihályi (@OndroMih) about: last episode with Ondrej: Productive Clouds 2.0 with Serverless Jakarta EE, "Modularization, Monoliths, Micro Services, Clouds, Functions and Kubernetes" #151 episode with Matjaz Juric, modules are useful, but the tooling is not easy, using OSGi for User Interfaces, hybrid Java / JavaScript UI, build time and development time modularity, frontend and backend separation is important, business and presentation separation, Boundary Control Entity (BCE) pattern is permissive, strict modularization with WARs and JARs, logical over physical modules, JPMS for hiding internal implementation, modules are more important in teams as contracts, WARs as simple as AWS Lambdas, kubernetes and readiness probes, Elastic Beanstalk is similar to Payara Cloud, Payara Micro optimizations for Payara Cloud, redeployment without restarting the instances, Payara Micro Arquillian Container, hollow JAR approach and Payara Micro, Payara Micro could support native compilation in the future, Jakarta EE core profile and CDI lite, native compilation for resource reduction, Payara implements MicroProfile as early as soon, Ondrej Mihályi on twitter: @OndroMih

Running in Production
Politico Europe Is a Business to Business News and Data Service

Running in Production

Play Episode Listen Later Jul 26, 2021 72:10


Karl Roos talks about building a B2B news platform with Rails, Node, and Python. It's hosted on AWS using Elastic Beanstalk.

Screaming in the Cloud
Hacking AWS in Good Faith with Nick Frichette

Screaming in the Cloud

Play Episode Listen Later Jul 1, 2021 35:31


About NickNick Frichette is a Penetration Tester and Team Lead for State Farm. Outside of work he does vulnerability research. His current primary focus is developing techniques for AWS exploitation. Additionally he is the founder of hackingthe.cloud which is an open source encyclopedia of the attacks and techniques you can perform in cloud environments.Links: Hacking the Cloud: https://hackingthe.cloud/ Determine the account ID that owned an S3 bucket vulnerability: https://hackingthe.cloud/aws/enumeration/account_id_from_s3_bucket/ Twitter: https://twitter.com/frichette_n Personal website:https://frichetten.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks.Corey: This episode is sponsored in part by our friends at Lumigo. If you've built anything from serverless, you know that if there's one thing that can be said universally about these applications, it's that it turns every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build applications. It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You've created more problems for yourself; make one of them go away. To learn more, visit lumigo.io.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I spend a lot of time throwing things at AWS in varying capacities. One area I don't spend a lot of time giving them grief is in the InfoSec world because as it turns out, they—and almost everyone else—doesn't have much of a sense of humor around things like security. My guest today is Nick Frechette, who's a penetration tester and team lead for State Farm. 
Nick, thanks for joining me.Nick: Hey, thank you for inviting me on.Corey: So, like most folks in InfoSec, you tend to have a bunch of different, I guess, titles or roles that hang on signs around someone's neck. And it all sort of distills down, on some level—in your case, at least, and please correct me if I'm wrong—to ‘cloud security researcher.' Is that roughly correct? Or am I missing something fundamental?Nick: Yeah. So, for my day job, I do penetration testing, and that kind of puts me up against a variety of things, from web applications, to client-side applications, to sometimes the cloud. In my free time, though, I like to spend a lot of time on security research, and most recently been focusing pretty heavily on AWS.Corey: So, let's start at the very beginning. What is a cloud security researcher? “What is it you'd say it is you do here?” For lack of a better phrasing?Nick: Well, to be honest, the phrase ‘security researcher' or ‘cloud security researcher' has been, kind of… I guess watered down in recent years; everybody likes to call themselves a researcher in some way or another. You have some folks who participate in the bug bounty programs. So, for example, GCP, and Azure have their own bug bounties. AWS does not, and too sure why. And so they want to find vulnerabilities with the intention of getting cash compensation for it.You have other folks who are interested in doing security research to try and better improve defenses and alerting and monitoring so that when the next major breach happens, they're prepared or they'll be able to stop it ahead of time. From what I do, I'm very interested in offensive security research. So, how can I as, a penetration tester, or red teamer or, I guess, an actual criminal, [laugh] how can I take advantage of AWS, or try to avoid detection from services like GuardDuty and CloudTrail?Corey: So, let's break that down a little bit further. I've heard the term of ‘red team versus blue team' used before. Red team—presumably—is the offensive security folks—and yes, some of those people are, in fact, quite offensive—and blue team is the defense side. In other words, keeping folks out. Is that a reasonable summation of the state of the world?Nick: It can be, yeah, especially when it comes to security. One of the nice parts about the whole InfoSec field—I know a lot of folks tend to kind of just say, “Oh, they're there to prevent the next breach,” but in reality, InfoSec has a ton of different niches and different job specialties. “Blue teamers,” quote-unquote, tend to be the defense side working on ensuring that we can alert and monitor potential attacks, whereas red teamers—or penetration testers—tend to be the folks who are trying to do the actual exploitation or develop techniques to do that in the future.Corey: So, you talk a bit about what you do for work, obviously, but what really drew my notice was stuff you do that isn't part of your core job, as best I understand it. You're focused on vulnerability research, specifically with a strong emphasis on cloud exploitation, as you said—AWS in particular—and you're the founder of Hacking the Cloud, which is an open-source encyclopedia of various attacks and techniques you can perform in cloud environments. Tell me about that.Nick: Yeah, so Hacking the Cloud came out of a frustration I had when I was first getting into AWS, that there didn't seem to be a ton of good resources for offensive security professionals to get engaged in the cloud. 
By comparison, if you wanted to learn about web application hacking, or attacking Active Directory, or reverse engineering, if you have a credit card, I can point you in the right direction. But there just didn't seem to be a good course or introduction to how you, as a penetration tester, should attack AWS. There's things like, you know, open S3 buckets are a nightmare, or that server-side request forgery on an EC2 instance can result in your organization being fined very, very heavily. I kind of wanted to go deeper with that.And with Hacking the Cloud, I've tried to gather a bunch of offensive security research from various blog posts and conference talks into a single location, so that both the offense side and the defense side can kind of learn from it and leverage that to either improve defenses or look for things that they can attack.Corey: It seems to me that doing things like that is not likely to wind up making a whole heck of a lot of friends over on the cloud provider side. Can you talk a little bit about how what you do is perceived by the companies you're focusing on?Nick: Yeah. So, in terms of relationship, I don't really have too much of an idea of what they think. I have done some research and written on my blog, as well as published to Hacking the Cloud, some techniques for doing things like abusing the SSM agent, as well as abusing the AWS API to enumerate permissions without logging into CloudTrail. And ironically, through the power of IP addresses, I can see when folks from the Amazon corporate IP address space look at my blog, and that's always fun, especially when there's, like, four in the course of a couple of minutes, or five or six. But I don't really know too much about what they—or how they view it, or if they think it's valuable at all. I hope they do, but really not too sure.Corey: I would imagine that they do, on some level, but I guess the big question is, you know that someone doesn't like what you're doing when they send, you know, cease and desist notices, or have the police knock on your door. I feel like at most levels, we're past that in an InfoSec level, at least I'd like to believe we are. We don't hear about that happening all too often anymore. But what's your take on it?Nick: Yeah, I definitely agree. I definitely think we are beyond that. Most companies these days know that vulnerabilities are going to happen, no matter how hard you try and how much money you spend, and so it's better to be accepting of that and open to it. And especially because the InfoSec community can be so, say, noisy at times, it's definitely worth it to pay attention, definitely be appreciative of the information that may come out. AWS is pretty awesome to work with, having disclosed to them a couple times, now.They have a safe harbor provision, which essentially says that so long as you're operating in good faith, you are allowed to do security testing. They do have some rules around that, but they are pretty clear in terms of if you were operating in good faith, you wouldn't be doing anything like that. It tends to be pretty obviously malicious things that they'll ask you to stop.Corey: So, talk to me a little bit about what you've found lately, and been public about. There have been a number of examples that have come up whenever people start googling your name or looking at things you've done. But what's happening lately? What have you found that's interesting?Nick: Yeah. 
So, I think most recently, the thing that's kind of gotten the most attention has been a really interesting bug I found in the AWS API. Essentially, kind of the core of it is that when you are interacting with the API, obviously that gets logged to CloudTrail, so long as it's compatible. So, if you are successful, say you want to do, like, Secrets Manager, ListSecrets, that shows up in CloudTrail. And similarly, if you do not have that permission on a role or user and you try to do it, that access denied also gets logged to CloudTrail.Something kind of interesting that I found is that by manually modifying a request, or mal-forming them, what we can do is we can modify the content-type header, and as a result when you do that—and you can provide literally gibberish. I think I have VS Code window here somewhere with a content-type of ‘meow'—when you do that, the AWS API knows the action that you're trying to call because of that messed up content type, it doesn't know exactly what you're trying to do and as a result, it doesn't get logged to CloudTrail. Now, while that may seem kind of weirdly specific and not really, like, a concern, the nice part of it though is that for some API actions—somewhere in the neighborhood of 600. I say ‘in the neighborhood of' just because it fluctuates over time—as a result of that, you can tell if you have that permission, or if you don't without that being logged to CloudTrail. And so we can do this enumeration of permissions without somebody in the defense side seeing us do it. Which is pretty awesome from a offensive security perspective.Corey: On some level, it would be easy to say, “Well, just not showing up in the logs isn't really a security problem at all.” I guess that you disagree?Nick: I do, yeah. So, let's sort of look at it from a real-world perspective. Let's say, Corey, you're tired of saving people money on their AWS bill, you'd instead maybe want to make a little money on the side and you're okay with perhaps, you know, committing some crimes to do it. Through some means you get access to a company's AWS credentials for some particular role, whether that's through remote code execution on an EC2 instance, or maybe find them in an open location like an S3 bucket or a Git repository, or maybe you phish a developer, through some means, you have an access key and a secret access key. The new problem that you have is that you don't know what those credentials are associated with, or what permissions they have.They could be the root account keys, or they could be literally locked down to a single S3 bucket to read from. It all just kind of depends. Now, historically, your options for figuring that out are kind of limited. Your best bet would be to brute-force the AWS API using a tool like Pacu, or my personal favorite, which is enumerate-iam by Andres Riancho. And what that does is it just tries a bunch of API calls and sees which one works and which one doesn't.And if it works, you clearly know that you have that permission. Now, the problem with that, though, is that if you were to do that, that's going to light up CloudTrail like a Christmas tree. It's going to start showing all these access denieds for these various API calls that you've tried. And obviously, any defender who's paying attention is going to look at that and go, “Okay. 
That's, uh, that's suspicious,” and you're going to get shut down pretty quickly.What's nice about this bug that I found is that instead of having to litter CloudTrail with all these logs, we can just do this enumeration for roughly 600-ish API actions across roughly 40 AWS services, and nobody is the wiser. You can enumerate those permissions, and if they work fantastic, and you can then use them, and if you come to find you don't have any of those 600 permissions, okay, then you can decide on where to go from there, or maybe try to risk things showing up in CloudTrail.Corey: CloudTrail is one of those services that I find incredibly useful, or at least I do in theory. In practice, it seems that things don't show up there, and you don't realize that those types of activities are not being recorded until one day there's an announcement of, “Hey, that type of activity is now recorded.” As of the time of this recording, the most recent example that in memory is data plane requests to DynamoDB. It's, “Wait a minute. You mean that wasn't being recorded previously? Huh. I guess it makes sense, but oh, dear.”And that causes a reevaluation of what's happening in the—from a security policy and posture perspective for some clients. There's also, of course, the challenge of CloudTrail logs take a significant amount of time to show up. It used to be over 20 minutes, I believe now it's closer to 15—but don't quote me on that, obviously. Run your own tests—which seems awfully slow for anything that's going to be looking at those in an automated fashion and taking a reactive or remediation approach to things that show up there. Am I missing something key?Nick: No, I think that is pretty spot on. And believe me, [laugh] I am fully aware at how long CloudTrail takes to populate, especially with doing a bunch of research on what is and what is not logged to CloudTrail. I know that there are some operations that can be logged more quickly than the 15-minute average. Off the top of my head, though, I actually don't quite remember what those are. But you're right, in general, the majority at least do take quite a while.And that's definitely time in which an adversary or someone like me, could maybe take advantage of that 15-minute window to try and brute force those permissions, see what we have access to, and then try to operate and get out with whatever goodies we've managed to steal.Corey: Let's say that you're doing the thing that you do, however that comes to be—and I am curious—actually, we'll start there. I am curious; how do you discover these things? Is it looking at what is presented and then figuring out, “Huh, how can I wind up subverting the system it's based on?” And, similar to the way that I take a look at any random AWS services and try and figure out how to use it as a database? How do you find these things?Nick: Yeah, so to be honest, it all kind of depends. Sometimes it's completely by accident. So, for example, the API bug I described about not logging to CloudTrail, I actually found that due to [laugh] copy and pasting code from AWS's website, and I didn't change the content-type header. And as a result, I happened to notice this weird behavior, and kind of took advantage of it. 
Other times, it's thinking a little bit about how something is implemented and the security ramifications of it.So, for example, the SSM agent—which is a phenomenal tool in order to do remote access on your EC2 instances—I was sitting there one day and just kind of thought, “Hey, how does that authenticate exactly? And what can I do with it?” Sure enough, it authenticates the exact same way that the AWS API does, that being the metadata service on the EC2 instance. And so what I figured out pretty quickly is if you can get access to an EC2 instance, even as a low-privilege user or you can do server-side request forgery to get the keys, or if you just have sufficient permissions within the account, you can potentially intercept SSM messages from, like, a session and provide your own results. And so in effect, if you've compromised an EC2 instance, and the only way, say, incident response has into that box is SSM, you can effectively lock them out of it and, kind of, do whatever you want in the meantime.Corey: That seems like it's something of a problem.Nick: It definitely can be. But it is a lot of fun to play keep-away with incident response. [laugh].Corey: I'd like to reiterate that this is all in environments you control and have permissions to be operating within. It is not recommended that people pursue things like this in other people's cloud environments without permissions. I don't want to find us sued for giving crap advice, and I don't want to find listeners getting arrested because they didn't understand the nuances of what we're talking about.Nick: Yes, absolutely. Getting legal approval is really important for any kind of penetration testing or red teaming. I know some folks sometimes might get carried away, but definitely be sure to get approval before you do any kind of testing.Corey: So, how does someone report a vulnerability to a company like AWS?Nick: So AWS, at least publicly, doesn't have any kind of bug bounty program. But what they do have is a vulnerability disclosure program. And that is essentially an email address that you can contact and send information to, and that'll act as your point of contact with AWS while they investigate the issue. And at the end of their investigation, they can report back with their findings, whether they agree with you and they are working to get that patched or fixed immediately, or if they disagree with you and think that everything is hunky-dory, or if you may be mistaken.Corey: I saw a tweet the other day that I would love to get your thoughts on, which said effectively, that if you don't have a public bug bounty program, then any way that a researcher chooses to disclose the vulnerability is definitionally responsible on their part because they don't owe you any particular duty of care. Responsible disclosure, of course, is also referred to as, “Coordinated vulnerability disclosure” because we're always trying to reinvent terminology in this space. What do you think about that? Is there a duty of care from security researchers to responsibly disclose the vulnerabilities they find, or coordinate those vulnerabilities with vendors in the absence of a public bounty program on turning those things in?Nick: Yeah, you know, I think that's a really difficult question to answer. From my own personal perspective, I always think it's best to contact the developers, or the company, or whoever maintains whatever you found a vulnerability in, give them the best shot to have it fixed or repaired. 
Obviously, sometimes that works great, and the company is super receptive, and they're willing to patch it immediately. And other times, they just don't respond, or sometimes they respond harshly, and so depending on the situation, it may be better for you to release it publicly with the intention that you're informing folks that this particular company or this particular project may have an issue. On the flip side, I can kind of understand—although I don't necessarily condone it—why folks pursue things like exploit brokers, for example.So, if a company doesn't have a bug bounty program, and the researcher isn't expecting any kind of, like, cash compensation, I can understand why they may spend tens of hours, maybe hundreds of hours chasing down a particularly impactful vulnerability, only to maybe write a blog post about it or get a little head pat and say, “Thanks, nice work.” And so I can see why they may pursue things like selling to an exploit broker who may pay them hefty sum, if it is a—Corey: Orders of magnitude more. It's, “Oh, good. You found a way to remotely execute code across all of EC2 in every region”—that is a hypothetical; don't email me—have a t-shirt. It seems like you could basically buy all the t-shirts for [laugh] what that is worth on the export market.Nick: Yes, absolutely. And I do know from some experience that folks will reach out to you and are interested in, particularly, some cloud exploits. Nothing, like, minor, like some of the things that I've found, but more thinking more of, like, accessing resources without anybody knowing or accessing resources cross-account; that could go for quite a hefty sum.Corey: This episode is sponsored by ExtraHop. ExtraHop provides threat detection and response for the Enterprise (not the starship). On-prem security doesn't translate well to cloud or multi-cloud environments, and that's not even counting IoT. ExtraHop automatically discovers everything inside the perimeter, including your cloud workloads and IoT devices, detects these threats up to 35 percent faster, and helps you act immediately. Ask for a free trial of detection and response for AWS today at extrahop.com/trial.Corey: It always feels squicky, on some level, to discover something like this that's kind of neat, and wind up selling it to basically some arguably terrible people. Maybe. We don't know who's buying these things from the exploit broker. Counterpoint, having reported a few security problems myself to various providers, you get an autoresponder, then you get a thank you email that goes into a bit more detail—for the well-run programs, at least—and invariably, the company's position is, is whatever you found is not as big of a deal as you think it is, and therefore they see no reason to publish it or go loud with it. Wouldn't you agree?Because, on some level, their entire position is, please don't talk about any security shortcomings that you may have discovered in our system. And I get why they don't want that going loud, but by the same token, security researchers need a reputation to continue operating on some level in the market as security researchers, especially independents, especially people who are trying to make names for themselves in the first place.Nick: Yeah.Corey: How do you resolve that dichotomy yourself?Nick: Yeah, so, from my perspective, I totally understand why a company or project wouldn't want you to publicly disclose an issue. 
Everybody wants to look good, and nobody wants to be called out for any kind of issue that may have been unintentionally introduced. I think the thing at the end of the day, though, from my perspective, if I, as some random guy in the middle of nowhere Illinois finds a bug, or to be frank, if anybody out there finds a vulnerability in something, then a much more sophisticated adversary is equally capable of finding such a thing. And so it's better to have these things out in the open and discussed, rather than hidden away, so that we have the best chance of anybody being able to defend against it or develop detections for it, rather than just kind of being like, “Okay, the vendor didn't like what I had to say, I guess I'll go back to doing whatever [laugh] things I normally do.”Corey: You've obviously been doing this for a while. And I'm going to guess that your entire security researcher career has not been focused on cloud environments in general and AWS in particular.Nick: Yes, I've done some other stuff in relation to abusing GitLab Runners. I also happen to find a pretty neat RCE and privilege escalation in the very popular open-source project. Pi-hole. Not sure if you have any experience with that.Corey: Oh, I run it myself all the time for various DNS blocking purposes and other sundry bits of nonsense. Oh, yes, good. But what I'm trying to establish here is that this is not just one or two companies that you've worked with. You've done this across the board, which means I can ask a question without naming and shaming anyone, even implicitly. What differentiates good vulnerability disclosure programs from terrible ones?Nick: Yeah, I think the major differentiator is the reactivity of the project, as in how quickly they respond to you. There are some programs I've worked with where you disclose something, maybe even that might be of a high severity, and you might not hear back four weeks at a time, whereas there are other programs, particularly the MSRC—which is a part of Microsoft—or with AWS's disclosure program, where within the hour, I had a receipt of, “Hey, we received this, we're looking into it.” And then within a couple hours after that, “Yep, we verified it. We see what you're seeing, and we're going to look at it right away.” I think that's definitely one of the major differentiators for programs.Corey: Are there any companies you'd like to call out in either direction—and, “No,” is a perfectly valid [laugh] answer to this one—for having excellent disclosure programs versus terrible ones?Nick: I don't know if I'd like to call anybody out negatively. But in support, I have definitely appreciated working with both AWS's and the MSRC—Microsoft's—I think both of them have done a pretty fantastic job. And they definitely know what they're doing at this point.Corey: Yeah, I must say that I primarily focus on AWS and have for a while, which should be blindingly obvious to anyone who's listened to me talk about computers for more than three and a half minutes. But my experiences with the security folks at AWS have been uniformly positive, even when I find things that they don't want me talking about, that I will be talking about regardless, they've always been extremely respectful, and I have never walked away from the conversation thinking that I was somehow cheated by the experience. 
In fact, a couple of years ago at the last in-person re:Invent, I got to give a talk around something I reported specifically about how AWS runs its vulnerability disclosure program with one of their security engineers, Zach Glick, and he was phenomenally transparent around how a lot of these things work, and what they care about, and how they view these things, and what their incentives are. And obviously being empathetic to people reporting things in with the understanding that there is no duty of care that when security researchers discover something, they then must immediately go and report it in return for a pat on the head and a thank you. It was really neat being able to see both sides simultaneously around a particular issue. I'd recommend it to other folks, except I don't know how you make that lightning strike twice.Nick: It's very, very wise. Yes.Corey: Thank you. I do my best. So, what's next for you? You've obviously found a number of interesting vulnerabilities around information disclosure. One of the more recent things that I found that was sort of neat as I trolled the internet—I don't believe it was yours, but there was a ability to determine the account ID that owned an S3 bucket by enumerating by a binary search. Did you catch that at all?Nick: I did. That was by Ben Bridts, which is—it's pretty awesome technique, and that's been something I've been kind of interested in for a while. There is an ability to enumerate users' roles and service-linked roles inside an account, so long as the account ID. The problem, of course, is getting the account ID. So, when Ben put that out there I was super stoked about being able to leverage that now for enumeration and maybe some fun phishing tricks with that.Corey: I love the idea. I love seeing that sort of thing being conducted. And AWS's official policy as best I remember when I looked at this once, account IDs are not considered confidential. Do you agree with that?Nick: Yep. That is my understanding of how AWS views it. From my perspective, having an account ID can be beneficial. I mentioned that you can enumerate users' roles and service-linked roles with it, and that can be super useful from a phishing perspective. The average phishing email looks like, “Oh, you won an iPad,” or, “Oh, you're the 100th visitor of some website,” or something like that.But imagine getting an email that looks like it's from something like AWS developer support, or from some research program that they're doing, and they can say to you, like, “Hey, we see that you have these roles in your account with account ID such-and-such, and we know that you're using EKS, and you're using ECS,” that phishing email becomes a lot more believable when suddenly this outside party seemingly knows so much about your account. And that might be something that you would think, “Oh, well only a real AWS employee or AWS would know that.” So, from my perspective, I think it's best to try and keep your account ID secret. I actually redact it from every screenshot that I publish, or at the very least, I try to. At the same time, though, it's not the kind of thing that's going to get somebody in your account in a single step, so I can totally see why some folks aren't too concerned about it.Corey: I feel like we also got a bit of a red herring coming from AWS blog posts themselves, where they always will give screenshots explaining what they do, and redact the account ID in every case. 
And the reason that I was told at one point was, “Oh, we have an internal provisioning system that's different. It looks different, and I don't want to confuse people whenever I wind up doing a screenshot.” And that's great, and I appreciate that. And part of me wonders on one level how accurate is that?Because sure, I understand that you don't necessarily want to distract people with something that looks different, but then I found out that the system is called Isengard and, yeah, it's great. They've mentioned it periodically in blog posts, and talks, and the rest. And part of me now wonders, oh, wait a minute. Is it actually because they don't want to disclose the differences between those systems, or is it because they don't have license rights publicly to use the word Isengard and don't want to get sued by whoever owns the rights to the Lord of the Rings trilogy. So, one wonders what the real incentives are in different cases. But I've always viewed account IDs as being the sort of thing that eh, you probably want to share them around all the time, but it also doesn't necessarily hurt.Nick: Exactly, yeah. It's not the kind of thing you want to share with the world immediately, but it doesn't really hurt in the end.Corey: There was an early time when the partner network was effectively determining tiers of partner by how much spend they influenced, and the way that you've demonstrated that was by giving account IDs for your client accounts. The only verification at the time, to my understanding was that, “Yep, that mapped to the client you said it did.” And that was it. So, I can understand back in those days not wanting to muddy those waters. But those days are also long passed.So, I get it. I'm not going to be the first person to advertise mine, but if you can discover my account ID by looking at a bucket, it doesn't really keep me up at night.So, all of those things considered, we've had a pretty wide-ranging conversation here about a variety of things. What's next? What interests you as far as where you're going to start looking and exploring—and exploiting as the case may be—various cloud services? hackthe.cloud—which there is the dot in there, which also turns it into a domain; excellent choice—is absolutely going to be a great collection for a lot of what you find and for other people to contribute and learn from one another. But where are you aimed at? What's next?Nick: Yeah, so one thing I've been really interested in has been fuzzing the AWS API. As anyone who's ever used AWS before knows, there are hundreds of services with thousands of potential API endpoints. And so from a fuzzing perspective, there is a wide variety of things for us to potentially affect or potentially find vulnerabilities in. I'm currently working on a library that will allow me to make that fuzzing a lot easier. You could use things like botocore, Boto3, like, some of the AWS SDKs.The problem though, is that those are designed for, sort of like, the happy path where you can format your request the way Amazon wants. As a security researcher or as someone doing fuzzing, I kind of want to send random gibberish sometimes, or I want to malform my requests. And so that library is still in production, but it has already resulted in a bug. 
While I was fuzzing part of the AWS API, I happened to notice that I broke Elastic Beanstalk—quite literally—when [laugh] when I was going through the AWS console, I got the big red error message of, “[unintelligible 00:29:35] that request parameter is null.” And I was like, “Huh. Well, why is it null?”And come to find out as a result of that, there is a HTML injection vulnerability in the Elastic—well, there was a HTML injection vulnerability in the Elastic Beanstalk, for the AWS console. Pivoting from there, the Elastic Beanstalk uses Angular 1.8.1, or at least it did when I found it. As a result of that, we can modify that HTML injection to do template injection. And for the AngularJS crowd, template injection is basically cross-site scripting [laugh] because there is no sandbox anymore, at least in that version. And so as a result of that, I was able to get cross-site scripting in the AWS console, which is pretty exciting. That doesn't tend to happen too frequently.Corey: No that is not a typical issue that winds up getting disclosed very often.Nick: Definitely, yeah. And so I was excited about it, and considering the fact that my library for fuzzing is literally, like, not even halfway done, or is barely halfway done, I'm looking forward to what other things I can find with it.Corey: I look forward to reading more. And at the time of this recording, I should point out that this has not been finalized or made public, so I'll be keeping my eyes open to see what happens with this. And hopefully, this will be old news by the time this episode drops. If not, well, [laugh] this might be an interesting episode once it goes out.Nick: Yeah. I hope they'd have it fixed by then. They haven't responded to it yet other than the, “Hi, we've received your email. Thanks for checking in.” But we'll see how that goes.Corey: Watching news as it breaks is always exciting. If people want to learn more about what you're up to, and how you go about things, where can they find you?Nick: Yeah, so you can find me at a couple different places. On Twitter I'm @frichette_n. I also write a blog where I contribute a lot of my research at frechetten.com as well as Hacking the Cloud. I contribute a lot of the AWS stuff that gets thrown on there. And it's also open-source, so if anyone else would like to contribute or share their knowledge, you're absolutely welcome to do so. Pull requests are open and excited for anyone to contribute.Corey: Excellent. And we will of course include links to that in the [show notes 00:31:42]. Thank you so much for taking the time to speak with me. I really appreciate it.Nick: Yeah, thank you so much for inviting me on. I had a great time.Corey: Nick Frechette, penetration tester and team lead for State Farm. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with a comment telling me why none of these things are actually vulnerabilities, but simultaneously should not be discussed in public, ever.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. 
Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
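
A note on the detection side of Nick's CloudTrail enumeration research: he points out that ordinary brute-forcing of permissions "lights up CloudTrail like a Christmas tree" with access-denied errors, which is exactly what a defender can alert on. Below is a minimal Python/boto3 sketch of that idea (the function name and the one-hour window are illustrative choices, not anything from the episode) that tallies recent access-denied management events per principal via the CloudTrail LookupEvents API:

import json
from collections import Counter
from datetime import datetime, timedelta, timezone

import boto3


def access_denied_counts(hours=1, region="us-east-1"):
    # Tally recent access-denied errors per IAM principal using CloudTrail's
    # LookupEvents API. A sudden spike from one principal is the noisy
    # enumeration pattern described in the episode.
    client = boto3.client("cloudtrail", region_name=region)
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    counts = Counter()
    for page in client.get_paginator("lookup_events").paginate(StartTime=start):
        for event in page.get("Events", []):
            detail = json.loads(event["CloudTrailEvent"])
            err = detail.get("errorCode", "")
            if "AccessDenied" in err or "UnauthorizedOperation" in err:
                counts[detail.get("userIdentity", {}).get("arn", "unknown")] += 1
    return counts


if __name__ == "__main__":
    for arn, n in access_denied_counts().most_common(10):
        print(f"{n:5d}  {arn}")

Because LookupEvents only returns management events and CloudTrail delivery lags by several minutes, this is a coarse, after-the-fact check rather than real-time detection, which is part of the window Nick describes attackers taking advantage of.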

AWS Morning Brief
Listener Questions 6

AWS Morning Brief

Play Episode Listen Later Jun 18, 2021 21:13


TranscriptCorey: This episode is sponsored in part by LaunchDarkly. Take a look at what it takes to get your code into production. I'm going to just guess that it's awful because it's always awful. No one loves their deployment process. What if launching new features didn't require you to do a full-on code and possibly infrastructure deploy? What if you could test on a small subset of users and then roll it back immediately if results aren't what you expect? LaunchDarkly does exactly this. To learn more, visit launchdarkly.com and tell them Corey sent you, and watch for the wince.Jesse: Hello, and welcome to AWS Morning Brief: Fridays From the Field. I'm Jesse DeRose.Amy: I'm Amy Negrette.Tim: And I'm Tim Banks.Jesse: This is the podcast within a podcast where we talk about all the ways we've seen AWS used and abused in the wild, with a healthy dose of complaining about AWS for good measure. Today is a very special episode for two reasons. First, we're going to be talking about all the things that you want to talk about. That's right, it's time for another Q&A session. Get hyped.Amy: And second as is Duckbill's customary hazing ritual, we're putting a new Duckbill Group Cloud Economist Tim Banks through the wringer to answer some of your pressing questions about cloud costs and AWS. And he has pretty much the best hobbies.Tim: [laugh].Jesse: Absolutely.Tim: You know, I choke people for fun.Jesse: [laugh]. I don't even know where to begin with that. I—you know—Amy: It's the best LinkedIn bio, that's [laugh] where you begin with that.Tim: Yeah, I will change it right after this, I promise. But no, I think it's funny, we were talking about Jiu-Jitsu as a hobby, but my other hobby is I like to cook a lot, and I'm an avid, avid chili purist. And we were in a meeting earlier and Amy mentioned something about a bowl of sweet chili. And, dear listeners, let me tell you, I was aghast.Amy: It's more of a sweet stewed meat than it is, like, some kind of, like, meat candy. It is not a meat candy. Filipinos make very sweet stews because we cannot handle chili, and honestly, we shouldn't be able to handle anything that's caramelized or has sugar in it, but we try to anyway. [laugh].Tim: But this sounds interesting, but I don't know that I would categorize it as chili, especially if it has beans in it.Jesse: It has beans. We put beans in everything.Tim: Oh, then it can't be chili.Jesse: Are you a purist that your chili cannot have beans in it?Tim: Well, no. Chili doesn't have beans in it.Amy: Filipino food has beans in it. Our desserts have beans in it. [laugh].Jesse: We are going to pivot, we're going to hard pivot this episode to just talk about the basis of what a chili recipe consists of. Sorry, listeners, no cost discussions today.Tim: Well, I mean, it's a short list: a chili contains meat and it contains heat.Jesse: [laugh].Tim: That's it. No tomatoes, no beans, no corn, or spaghetti, or whatever people put in it.Amy: Okay, obviously the solution is that we do some kind of cook-off where Tim and Pete cook for everybody, and we pull in Pete as a special quote-unquote, outside consultant, and I just eat a lot of food, and I'm cool with that. [laugh].Jesse: I agree to this.Tim: Pete is afraid of me, so I'm pretty sure he's going to pick my chili.Jesse: [laugh].Amy: I could see him doing that. But also, I just like eating food.Tim: No, no, it's great. We should definitely do a chili cook-off. 
But yeah, I am willing to entertain any questions about, you know, chili, and I'm willing to defend my stance with facts and the truth. So…Amy: If you have some meat—or [sheet 00:03:19]—related questions, please get into our DMs on Twitter.Jesse: [laugh]. All right. Well, thank you to everyone who submitted their listener questions. We've picked a few that we would like to talk about here today. I will kick us off with the first question.This first question says, “Long-time listener first-time caller. As a solo developer, I'm really interested in using some of AWS's services. Recently, I came across AWS's Copilot, and it looks like a potentially great solution for deployment of a basic architecture for a SaaS-type product that I'm developing. I'm concerned that messing around with Copilot might lead to an accidental large bill that I can't afford as a solo dev. So, I was wondering, do you have a particular [bizing 00:04:04] availability approach when dealing with a new AWS service, ideally, specific steps or places to start with tracking billing? And then specifically for Copilot, how could I set it up so it can trip off billing alarms if my setup goes over a certain threshold? Is there a way to keep track of cost from the beginning?”Tim: AWS has some basic billing alerts in there. They are always going to be kind of reactive.Jesse: Yes.Amy: They can detect some trends, but as a solo developer, what you're going to get is notification that the previous day's spending was pretty high. And then you'll be able to trend it out over that way. As far as asking if there's a proactive way to predict what the cost of your particular architecture is going to be, the easy answer is going to be no. Not one that's not going to be cost-prohibitive to purchase a sole developer.Jesse: Yeah, I definitely recommend setting up those reactive billing alerts. They're not going to solve all of your use cases here, but they're definitely better than nothing. And the one that I definitely am thinking of that I would recommend turning on is the Cost Explorer Cost Anomaly Detector because that actually looks at your spend based on a specific service, a specific AWS cost category, a specific user-defined cost allocation tag. And it'll tell you if there is a spike in spend. Now, if your spend is just continuing to grow steadily, Cost Anomaly Detector isn't going to give you all the information you want.It's only going to look for those anomalous spikes where all of a sudden, you turned something on that you meant to turn off, and left it on. But it's still something that's going to start giving you some feedback and information over time that may help you keep an eye on your billing usage and your spend.Amy: Another thing we highly recommend is to have a thorough tagging strategy, especially if you're using a service to deploy resources. Because you want to make sure that all of your resources, you know what they do and you know who they get charged to. And Copilot does allow you to do resource tagging within it, and then from there should be able to convert them to cost allocation tags so you can see them in your console.Jesse: Awesome. Well, our next question is from Rob. Rob asks, “How do I stay HIPAA compliant, but keep my savings down? Do I really need VPC Flow Logs on? Could we talk in general about the security options in AWS and their cost impact? My security team wants everything on but it would cost us ten times our actual AWS bill.”Rob, we have actually seen this from a number of clients. 
It is a tough conversation to have because the person in charge of the bill wants to make sure that spend is down, but security may need certain security measures in place, product may need certain measures in place for service level agreements or service level objectives, and there's absolutely a need to find that balance between cost optimization and all of these compliance needs.Tim: Yeah, I think it's also really important to thoroughly understand what the compliance requirements are. Fairly certain for HIPAA that you may not have to have VPC Flow Logs specifically enabled. The language is something like, ‘logging of visitors to the site' or something like that. So, you need to be very clear and concise about what you actually need, and remember, for compliance, typically it's just a box check. It's not going to be a how much or what percent; it's going to be, “Do you have this or do you not?”And so if the HIPAA compliance changes where you absolutely have to have VPC Flow Logging turned on, then there's not going to be a way around that in order to maintain your compliance. But if the language is not specifically requiring that, then you don't have to, and that's going to become something you have to square with your security team. There are ways to do those kinds of logging on other things depending on what your application stack looks like, but that's definitely a conversation you're going to want to have, either with your security team, with your product architects, or maybe even outside or third-party consultant.Jesse: Another thing to think about here is, how much is each of these features in AWS costing you? How much are these security regulations, the SLA architecture choices, how much are each of those things costing you in AWS? Because that is ultimately part of the conversation, too. You can go back to security, or product, or whoever and say, “I understand that this is a business requirement. This is how much it's costing the business.”And that doesn't mean that they have to change it, but that is now additional information that everybody has to collaboratively decide, “Okay, is it worthwhile for us to have this restriction, have this compliance component at this cost?” And again, as Tim was mentioning, if it is something that needs to be set up for compliance purposes, for audit purposes, then there's not really a lot you can do. It's kind of a, I don't want to say sunk cost, but it is a cost that you need to understand that is required for that feature. But if it's not something that is required for audit purposes, if it's not something that just needs to be, like, a checkbox, maybe there's an opportunity here if the cost is so high that you can change the feature in a way that brings the cost down a little bit but still gives security, or product, or whoever else the reassurances that they need.Tim: I think the other very important thing to remember is that you are not required to run your application in AWS.Jesse: Yeah.Tim: You can run it on-premise, you can run at a different cloud provider. If it's going to be cost-prohibitive to run at AWS and you can't get the cost down to a manageable level, through, kind of, normal cost reduction methods of EDPs, or your pricing agreement, remember you can always put that on bare metal somewhere and then you will be able to have the logging for free. Now, mind you, you're going to have to spend money elsewhere to get that done, but you're going to have to look and see what the overall cost is going to be. 
It may, in fact, be much less expensive to host that on metal, or at a different provider than it would be at AWS.Corey: This episode is sponsored by ExtraHop. ExtraHop provides threat detection and response for the Enterprise (not the starship). On-prem security doesn't translate well to cloud or multi-cloud environments, and that's not even counting IoT. ExtraHop automatically discovers everything inside the perimeter, including your cloud workloads and IoT devices, detects these threats up to 35 percent faster, and helps you act immediately. Ask for a free trial of detection and response for AWS today at extrahop.com/trial.Jesse: Our next question is from Trevor Shaffer. He says, “Loving these Friday from the field episodes and the costing”—thank you—“I'm in that world right now, so all of this hits home for me. One topic not covered with the cost categorization, which I'm tasked with, is how to separate base costs versus usage costs. Case in point, we're driving towards cost metrics based on users and prices go up as users go up. All of that makes sense, but there's always that base load required to serve quote-unquote, ‘no users.'“The ALP instance hours, versus the LCU hour, minimum number of EC2 instances for high availability, things like that. Currently, you can't tag those differently, so I think I'm just doomed here and my hopes will be dashed. For us, our base costs are about 25% of our bill. Looking for tricks on how to do this one well. You can get close with a lot of scripting and time, teasing out each item manually.” Trevor, you can, and I also think that is definitely going to be a pain point if you start scripting some of these things. That sounds like a lot of effort that may give you some useful information, but I don't know if it's going to give you all of the information that you want.Tim: Well, it's also a lot of effort, and it's also room for error. It won't take but a simple error in anything that you write where these costs can then be calculated incorrectly. So, that's something to consider as well: is it worth the overall costs of engineering time, and maintenance, and everything like that, to write these scripts? These are decisions that engineers groups have to make all the time. That said, I do think that this is, for me I think, one of the larger problems that you see with AWS billing is that it is difficult to differentiate something that should be reasonably difficult to differentiate.If I get my cell phone bill, I know exactly how much it's going to cost us to have the line, and then I can see exactly how much it's going to cost me for the minutes. The usage cost is very easily separated from—I'm sorry, the base cost is very easily separated from the usage cost. It's not always that way with AWS, I do think that's something that they could fix.Jesse: Yeah, one thing that I've been thinking of is, I don't want to just recommend turning things on and measuring, but I'm thinking about this from the same perspective that you would think about getting a baseline for any kind of monitoring service: as you turn on a metric or as you start introducing a new metric before you start building alerts for that metric, you need to let that metric run for a certain amount of time to see what the baseline number, usage amount, whatever, looks like before you can start setting alerts. I'm thinking about that same thing here. 
I know that's a tougher thing to do when this is actually cost involved when it's actually costing you money to leave something on and just watch what usage looks like over time, but that is something that will give you the closest idea of what base costs look like. And one of the things to think about, again, is if the base costs are unwieldy for you or not worthwhile for you in terms of the way the architecture is built, is there either a different way that you can build the architecture that is maybe more ephemeral that will make it cost less when there are no users active? Is there a different cloud provider that you can deploy these resources to that is going to ultimately cost you less when you have no users active?Tim: I think too, though, that when you have these discussions with engineering teams and they're looking at what their priorities are going to be and what the engineering cost is going to be, oftentimes, they're going to want metrics on how much is this costing us—how much would it cost otherwise? What is our base cost, what's our usage cost?—so that you can make a case and justify it with numbers. So, you may think that it is better to run this somewhere else or to re-architect your infrastructure around this, but you're going to have to have some data to back it up. And if this is what you need to gather that data, then yeah, it is definitely a pain point.Amy: I agree. I think this is one of those cases where—and I am also loath to just leave things on for the sake of it, but especially as you onboard new architectures and new applications, this should be done at that stage when you start standing things up and finalizing that architecture. Once you know the kind of architecture you want and you're pushing things to production, find out what that baseline is, have it be part of that process, and have it be a cost of that process. And finally, “As someone new to AWS and wanting to become a software DevOps insert-buzzword-here engineer”—I'm a buzzword engineer—“We've been creating projects in Amplify, Elastic Beanstalk, and other services. I keep the good ones alive and have done a pretty good job of killing things off when I don't need it. What are your thoughts on free managed services in general when it comes to cost transparencies with less than five months left on my free year? Is it a bad idea to use them as someone who is just job hunting? I'm willing to spend a little per month, but don't want to be here with a giant bill.”So, chances are if you're learning a new technology or a new service, unless you run into that pitfall where you're going to get a big bill as a surprise and you've been pretty diligent about turning your services off, your bill is not going to rise that much higher. That said, there have been a lot of instances, on Twitter especially, popping up where they are getting very large bills. If you're not using them and you're not actively learning on them, I would just turn them off so you don't forget later. We've also talked about this in our build versus buy, where that is the good thing about having as a managed service is if you don't need it anymore and you're not learning or using them, you can just turn them off. 
And if you have less than half a year on your first free year, there are plenty of services that have a relatively free tier or a really cheap tier at the start, so if you want to go back and learn on them later, you still could.Tim: I think too, Amy, it's also important to reflect, at least for this person, that if they're in an environment where they're trying to learn something if maintaining infrastructure is not the main core of what they're trying to learn, then I wouldn't do it. The reason that they have these managed services is to allow engineering teams to be more focused on the things that they want to do as far as development versus the things they have to do around infrastructure management. If you don't have an operations team or an infrastructure team, then maintaining the infrastructure on your own sometimes can become unwieldy to the point that you're not really even learning the thing you wanted to learn; now you're learning how to manage Elasticsearch.Amy: Yeah.Jesse: Absolutely. I think that's one of the most critical things to think about here. These managed services give you the opportunity to use all these services without managing the infrastructure overhead. And to me, there may be a little bit extra costs involved for that, but to me that cost is worth the freedom to not worry about managing the infrastructure, to be able to just spin up a cluster of something and play with it. And then when you're done, obviously, make sure you turn it off, but you don't have to worry about the infrastructure unless you're specifically going to be looking for work where you do need to manage that infrastructure, and that's a separate question entirely.Amy: Yeah. I'm not an infrastructure engineer, so anytime I'm not using infrastructure, and I'm not using a service, I just—I make sure everything's turned off. Deleting stacks is very cathartic for me, just letting everything—just watching it all float away into the sunset does a lot for me, just knowing that it's not one more thing I'm going to have to watch over because it's not a thing I like doing or want to do. So yeah, if that's not what you want to do, then don't leave them on and just clean up after yourself, I suppose. [laugh].Tim: I'll even say that even if you're an infrastructure engineer, which is my background, that you can test your automation of building and all this, you know, building a cluster, deploying things like that, and then tear it down and get rid of it. You don't have to leave it up forever. If you're load testing an application, that's a whole different thing, but that's probably not what you're doing if you're concerned about the free tier costs. So yeah, if you're learning Terraform, you can absolutely deploy a cluster or something and just tear it back out as soon as you're done. If you're learning how to manage whatever it is, build it, test it, make sure it runs, and then tear it back down.Jesse: All righty, folks, that's going to do it for us this week. If you've got questions you would like us to answer, please go to lastweekinaws.com/QA, fill out the form and we'd be happy to answer those on a future episode of the show. If you've enjoyed this podcast, please go to lastweekinaws.com/review and give it a five-star review on your podcast platform of choice, whereas if you hated this podcast, please go to lastweekinaws.com/review, give it a five-star rating on your podcast platform of choice and tell us whether you prefer sweet chili or spicy chili.Announcer: This has been a HumblePod production. 
Stay humble.
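A practical footnote to the free-tier question above: the advice comes down to knowing what is still running before it keeps billing you. The snippet below is a loose sketch of that idea only (it is not something walked through in the episode); it assumes boto3 is installed, AWS credentials are configured, and the region and the choice of services to check are placeholders you would adjust.

# Hypothetical cleanup check: list EC2 instances and Elastic Beanstalk
# environments that are still up, so nothing is left on by accident.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

REGION = "us-east-1"  # assumption: replace with the region you actually use

ec2 = boto3.client("ec2", region_name=REGION)
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        print("EC2 still running:", instance["InstanceId"], instance["InstanceType"])

eb = boto3.client("elasticbeanstalk", region_name=REGION)
for env in eb.describe_environments(IncludeDeleted=False)["Environments"]:
    print("Elastic Beanstalk environment still up:", env["EnvironmentName"], env["Status"])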

Screaming in the Cloud
Making Compliance Suck Less with AJ Yawn

Screaming in the Cloud

Play Episode Listen Later Jun 17, 2021 34:13


About AJAJ Yawn is a seasoned cloud security professional that possesses over a decade of senior information security experience with extensive experience managing a wide range of cybersecurity compliance assessments (SOC 2, ISO 27001, HIPAA, etc.) for a variety of SaaS, IaaS, and PaaS providers.AJ advises startups on cloud security and serves on the Board of Directors of the ISC2 Miami chapter as the Education Chair, he is also a Founding Board member of the National Association of Black Compliance and Risk Management professions, regularly speaks on information security podcasts, events, and he contributes blogs and articles to the information security community including publications such as CISOMag, InfosecMag, HackerNoon, and ISC2.Before Bytechek, AJ served as a senior member of national cybersecurity professional services firm SOC-ISO-Healthcare compliance practice. AJ helped grow the practice from a 9 person team to over 100 team members serving clients all over the world. AJ also spent over five years on active duty in the United States Army, earning the rank of Captain.AJ is relentlessly committed to learning and encouraging others around him to improve themselves. He leads by example and has earned several industry-recognized certifications, including the AWS Certified Solutions Architect-Professional, CISSP, AWS Certified Security Specialty, AWS Certified Solutions Architect-Associate, and PMP. AJ is also involved with the AWS training and certification department, volunteering with the AWS Certification Examination Subject Matter Expert program.AJ graduated from Georgetown University with a Master of Science in Technology Management and from Florida State University with a Bachelor of Science in Social Science. While at Florida State, AJ played on the Florida State University Men's basketball team participating in back to back trips to the NCAA tournament playing under Coach Leonard Hamilton.Links: ByteChek: https://www.bytechek.com/ Blog post, Everything You Need to Know About SOC 2 Trust Service Criteria CC6.0 (Logical and Physical Access Controls): https://help.bytechek.com/en/articles/4567289-everything-you-need-to-know-about-soc-2-trust-service-criteria-cc6-0-logical-and-physical-access-controls LinkedIn: https://www.linkedin.com/in/ajyawn/ Twitter: https://twitter.com/AjYawn TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of Cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of canary.tools. 
You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks.Corey: This episode is sponsored in part by our friends at Lumigo. If you've built anything from serverless, you know that if there's one thing that can be said universally about these applications, it's that it turns every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build applications. It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You've created more problems for yourself; make one of them go away. To learn more, visit lumigo.io.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by AJ Yawn, co-founder, and CEO of ByteChek. AJ, thanks for joining me.AJ: Thanks for having me on, Corey. Really excited about the conversation.Corey: So, what is ByteChek? It sounds like it's one of those things—‘byte' spelled as in computer term, not teeth, and ‘chek' without a second C in it because frugality looms everywhere, and we save money where we can by sometimes not buying the extra letter or vowel. So, what is ByteChek?AJ: Exactly. You get it. ByteChek is a cybersecurity compliance software company, built with one goal in mind: make compliance suck less. And the way that we do that is by automating the worst part of compliance, which is evidence collection and taking out a lot of the subjective nature of dealing with an audit by connecting directly where the evidence lives and focusing on security.Corey: That sound you hear is Pandora's Box creaking open because back before I started focusing on AWS bills, I spent a few months doing a deep dive PCI project for workloads going into AWS because previously I've worked in regulated industries a fair bit. I've been a SOC 2 control owner, I've gone through the PCI process multiple times, I've dabbled with HIPAA as a consultant. And I thought, “Huh, there might be a business need here.” And it turns out, yeah, there really is.The problem for me is that the work made me want to die. I found it depressing; it was dull; it was a whole lot of hurry up and wait. And that didn't align with how I approach the world, so I immediately got the hell out of there. You apparently have a better perspective on, you know, delivering things companies need and don't need to have constant novel entertainment every 30 seconds. So, how did you start down this path, and what set you on this road?AJ: Yeah, great question. I started in the army as a information security officer, worked in a variety of different capacities. And when I left the military—mainly because I didn't like sleeping outside anymore—I got into cybersecurity compliance consulting. 
And that's where I got first into compliance and seeing the backwards way that we would do things with old document requests and screenshots. And I enjoyed the process because there was a reason for it, like you said.There's a business value to this, going through this compliance assessments. So, I knew they were important, but I hated the way we were doing it. And while there, I just got exposed to so many companies that had to go through this, and I just thought there was a better way. Like, typical entrepreneur story, right? You see a problem and you're like, “There has to be a better way than grabbing screenshots of the EC2 console.” And set out to build a product to do that, to just solve that problem that I saw on a regular basis. And I tell people all the time, I was complicit in making compliance stuff before. I was in that role and doing the things that I think sucked and not focused on security. And that's what we're solving here at ByteChek.Corey: So, I've dabbled in it and sort of recoiled in horror. You've gone into this to the point where you are not only handling it for customers but in order to build software that goes in a positive direction, you have to be deeply steeped in this yourself. As you're going down this process, what was your build process like? Were you talking to auditors? Were you talking to companies who had to deal with auditors? What aspects of the problem did you approach this from?AJ: It's really both aspects. And that's where I think it's just a really unique perspective I have because I've talked with a lot of auditors; I was an auditor and worked with auditors' hand-in-hand and I understood the challenges of being an auditor, and the speed that you have to move when you're in the consulting industry. But I also talked to a lot of customers because those were the people I dealt with on a regular basis, both from a sales perspective and from, you know, sitting there with the CTOs trying to figure out how to design a secure solution in AWS. So, I took it from the approach of you can't automate compliance; you can't fix the audit problem by only focusing on one side of the table, which is what currently happens where one side of the table is the client, then you get to automate evidence collection. But if the auditors can't use that information that you've automated, then it's still a bad process for both people. So, I took the approach of thinking about this from both, “How do I make this easier for auditors but also make it easier for the clients that are forced to undergo these audits?”Corey: From a lot of perspectives, having compliance achieved, regardless of whether it's PCI, whether it's HIPAA, whether it's SOC 2, et cetera, et cetera, et cetera, the reason that a companies go through it is that it's an attestation that they are, for better or worse, doing the right things. In some cases, it's a requirement to operate in a regulated industry. In other cases, it's required to process credit card transactions, which is kind of every industry, and in still others, it's an easy shorthand way of saying that we're not complete rank amateurs at these things, so as a result, we're going to just pass over the result of our most recent SOC 2 audit to our prospective client, and suddenly, their security folks can relax and not send over weeks of questionnaires on the security front. That means that, for some folks, this is more or less a box-checking exercise rather than an actual good-faith effort to improve processes and posture.AJ: Correct. 
And I think that's actually the problem with compliance is it's looked at as a check-the-box exercise, and that's why there's no security value out of it. That's why you can pick up a SOC 2 report for someone that's hosted on AWS, and you don't see any mention of S3 buckets. You can do a ctrl+F, and you literally don't see anything in a security evaluation about S3 buckets, which is just insane if you know anything about security on AWS. And I think it's because of what you just described, Corey; they're often asked to do this by a regulator, or by a customer, or by a vendor, and the result is, “Hurry up and get this report so that we can close this deal,”—or we can get to the next level with this customer, or with this investor, whatever it may be—instead of, let's go through this, let's have an auditor come in and look at our environment to improve it, to improve this security, which is where I hope the industry can get to because audits aren't going anywhere; people are going to continue to do them and spend thousands of dollars on them, so there should be some security value out of them, in my opinion.Corey: I love using encrypting data at rest as an example of things that make varying amounts of sense because, sure, on your company laptops, if someone steals an employee's laptop from a coffee shop, or from the back of their car one night, yeah, you kind of want the exposure to the company to be limited to replacing the hardware. I mean, even here at The Duckbill Group, where we are not regulated, we've gone through no formal audits, we do have controls in place to ensure that all company laptops have disk encryption turned on. It makes sense from that perspective. And in the data center, it was also important because there were a few notable heists where someone either improperly disposed drives and corporate data wound up on eBay or someone in one notable instance drove a truck through the side of the data center wall, pulled a rack into the bed of the truck and took off, which is kind of impressive [laugh] no matter how you slice it. But in the context of a hyperscale cloud provider like AWS, you're not going to be able to break into their data centers, steal a drive—and of course, it has to be the right collection of drives and the right machines—and then find out how to wind up reassembling that data later.It's just not a viable attack strategy. Now, you can spend days arguing with auditors around something like that, or you can check the box ‘encrypt at rest' and move on. And very often, that is the better path. I'm not going to argue with auditors about that. I'm going to bend the knee, check the box, and get back to doing the business thing that I care about. That is a reasonable approach, is it not?AJ: It is, but I think that's the fault of the auditor because good security requires context. You can't just apply a standard set of controls to every organization, as you're describing, where I would much rather the auditor care about, “Are there any public S3 buckets? What are the security group situation like on that account? How are they managing their users? How are they storing credentials there in the cloud environment as well?Are they using multiple accounts?” So, many other things to care about other than protecting whether or not someone will be able to pull off the heist of the [laugh] 21st century. 
So, I think from a customer perspective, it's the right model: don't waste time arguing points with your auditors, but on the flip side, find an auditor that has more technical knowledge that can understand context, because security work requires good context and audits require context. And that's the problem with audits now; we're using one framework or several frameworks to apply to every organization. And I've been in the consulting space, like you, Corey, for a while. I have not seen the same environment in any customers. Every customer is different. Every customer has a different setup, so it doesn't make sense to say every control should apply to every company.Corey: And it feels on some level like you wind up getting staff accustomed to treating it as a box-checking exercise. “Right, it's dumb that we wind up having to encrypt S3 buckets, but it's for the audit to just check the box and move on.” So, people do it, then they move on to the next item, which is, “Okay, great. Are there any public S3 buckets?” And they treat it with the same, “Yeah, whatever. It's for the audit,” box-checking approach? No, no, that one's actually serious. You should invest significant effort and time into making sure that it's right.AJ: Exactly. Exactly. And that's where the value of a true compliance assessment that is focused on security comes into play because it's no longer about checking the box, it's like, “Hey, there's a weakness here. A weakness that you probably should have identified. So, let's go fix the weakness, but let's talk about your process to find those weaknesses and then hopefully use some automation to remediate them.”Because a lot of the issues in the cloud you can trace back to why was there not a control in place to prevent this or detect this? And it's sad that compliance assessments are not the thing that can catch those, that are not the other safeguard in place to identify those. And it's because we are treating the entire thing like a check-the-box exercise and not pulling out those items that really matter, and that's just focusing on security. Which is ultimately what these compliance reports are proving: customers are asking for these reports because they want to know if their data is going to be secure. And that's what the report is supposed to do, but on the flip side, everyone knows the organization may not be taking it that serious, and they may be treating it like a check-the-box exercise.Corey: So, while I have you here, we'll divert for a minute because I'm legitimately curious about this one. At a scale of legitimate security concern to, “This is a check-the-box exercise,” where do things like rotating passwords every 60 days or rotating IAM credentials every 90 days fall?AJ: I think it again depends on the organization. I don't think that you need to rotate passwords regularly, personally. I don't know how strong of a control that is if people are doing that, because they're just going to start to make things up that are easy—Corey: Put the number at the end and increment by one every time. Great. Good work.AJ: Yep. So, I think again, it just depends on your organization and what the organization is doing. If you're talking about managing IAM access keys and rotating those, are your engineers even using the CLI? Are they using their access keys? Because if they're not, what are you rotating?You're just rotating [laugh] stale keys that have never been used. 
Or if you don't even have any IAM users, maybe you're using SSO and they're all using Okta or something else and they're using an IAM role to come in there. So, it's just—again, it's context. And I think the problem is, a lot of folks don't understand AWS or they don't understand the cloud. And when I say, folks, I mean auditors.They don't understand that, so they're just going to ask for everything. “Did you rotate your passwords? Did you do this? Did you do that?” And it may not even make sense for you based off of your environment, but again, is it worth the fight with the auditor, or do you just give them whatever they want and so you can go about your way, whether or not it's a legit security concern?Corey: Yeah. At some point, it's not worth fighting with auditors, but if you find yourself wanting to fight the auditor all the time, at some level, you start to really resent the auditor that you have. To put that slightly more succinctly, how do you deal with non-technical auditors who don't understand your environment—what they're looking at—without strangling them?AJ: Great question. I think it goes back to before you hire your auditor. Oftentimes, in the sales process, there's questions around, “Who's come from the Big Four on your staff?” Or, “What control frameworks do you all specialize in?” Or, “How long will this take? How much will it cost?” But there's very rarely any questions of, “Who on your staff knows AWS?”And it's similar to going to the doctor: you wouldn't go to an eye doctor to get foot surgery. So, you shouldn't go to an auditor who has never seen AWS, that doesn't know what EC2 is, to evaluate your AWS environment. So, I think organizations have to start asking the right questions during the sales process. And it's not about price or time or anything like that when you're assessing who you're going to work with from an auditing firm. It's, are they qualified to actually evaluate the threats facing your organization so that you don't get asked the stupid question.If you're hosted on AWS, you shouldn't be getting asked where are your firewall configurations. They should understand what security groups are and how they work. So, there's just a level of knowledge that should be expected from the organization side. And I would say, if you're working with a current auditor that you're having those issues with, continue to ask the hard questions. Auditors that are not technical—I have a blog post on our website, and it says this is the section your auditors are the most scared of, and it's the logical access section of your SOC 2 report.And auditors that are not technical run away from that section. So, just keep asking the hard questions, and they'll either have to get the knowledge or they realize they're not qualified to do the assessment and the marriage will split up kind of naturally from there. But I think it goes back to the initial process of getting your auditor. Don't worry about cost or time, worry about their technical skills and if they're qualified to assess your environment.Corey: And in 2021, that's a very different story than it was the first few times I encountered auditors discovering the new era. At a startup, the auditor shows up. 
“Great, how do we get access to your Active Directory?” “Yeah, we don't have one of those.” “Okay, how do we get on the internet here?” “Oh, here's the wireless password.” “Wait, there's not a separate guest network?” “That's right.” “Well, now I have privileged access because I'm on your network.”It's like, “Technically, that's true because if you weren't on this network, you wouldn't be able to print to that printer over there in the corner. But that's the only thing that it lets you do.” Everything else is identity-based, not IP address allow listing, so instead, it's purely just convenience to get the internet; you're about as privileged on this network as you would be at a Starbucks half a world away. And they look at you like you're an idiot. And that should have been the early warning sign that this was not going to be a typical audit conversation. Now, though in 2021, it feels like it's time to find a new auditor.AJ: Exactly. Yeah. Especially because organizations—unfortunately, last year security budgets were some of the things that were first cut when budgets were cut due to the global pandemic, S0—Corey: Well, I'm sure that'll have no lasting repercussions.AJ: Right. [laugh]. That's always a great decision. So compliance, that means compliance budgets have been significantly slashed because that's the first thing that gets cut is spending money on compliance activities. So, the cheaper option, oftentimes, is going to mean even less technical resources.Which is why I don't think manual audits, human audits are going to be a thing moving forward. I think companies are realizing that it doesn't make sense to go through a process, hire an auditor who's selling you on all this technical expertise, and then the staff that's showing up and assigned to your project has never seen inside the AWS console and truly doesn't even know what the cloud is. They think that iCloud on their phone is the only cloud that they're familiar with. And that's what happens; organizations are sold that they're going to get cybersecurity technical experts from these human auditors and then somebody shows up without that experience or expertise. So, you have to start to rely on tools, rely on technologies, and that can be native technologies in the cloud or third-party tools.But I don't think you can actually do a good audit in the cloud manually anyways, no matter how technical you are. I know a lot about AWS but I still couldn't do a great audit by myself in the cloud because auditing is time-based, you bill by the hour and it doesn't make sense for me to do all of those manual things that tools and technologies out there exist to do for us.Corey: So, you started a software company aimed at this problem, not a auditing firm and not a consulting company. How are you solving this via the magic of writing code?AJ: It's just connecting directly where the evidence lives. So, for AWS, I actually tried to do this in a non-software way prior, when I was just a typical auditor, and I was just asking our clients to provision us cross-account access to go in their environment with some security permissions to get evidence directly. And that didn't pass the sniff test at my consulting firm, even though some of the clients were open to it. But we built software to go out to the tools where the evidence directly lives and continuously assess the environment. 
So, that's AWS, that's GitHub, that Jira, that's all of the different tools where you normally collect this evidence, and instead of having to prove to auditors in a very manual fashion, by grabbing screenshots, you just simply connect using APIs to get the evidence directly from the source, which is more technically accurate.The way that auditing has been done in the past is using sampling methodologies and all these other outdated things, but that doesn't really assess if all of your data stores are configured in the right way; if you're actually backing up your data. It's me randomly picking one and saying, “Yes, you're good to go.” So, we connect directly where the evidence lives and hopefully get to a point where when you get a SOC 2 report, you know that a tool checked it. So, you know that the tool went out and looked at every single data store, or they went out and looked at every single EC2 instance, or security group, whatever it may be, and it wasn't dependent on how the auditor felt that day.Corey: This episode is sponsored in part by ChaosSearch. As basically everyone knows, trying to do log analytics at scale with an ELK stack is expensive, unstable, time-sucking, demeaning, and just basically all-around horrible. So why are you still doing it—or even thinking about it—when there's ChaosSearch? ChaosSearch is a fully managed scalable log analysis service that lets you add new workloads in minutes, and easily retain weeks, months, or years of data. With ChaosSearch you store, connect, and analyze and you're done. The data lives and stays within your S3 buckets, which means no managing servers, no data movement, and you can save up to 80 percent versus running an ELK stack the old-fashioned way. It's why companies like Equifax, HubSpot, Klarna, Alert Logic, and many more have all turned to ChaosSearch. So if you're tired of your ELK stacks falling over before it suffers, or of having your log analytics data retention squeezed by the cost, then try ChaosSearch today and tell them I sent you. To learn more, visit chaossearch.io.Corey: That sounds like it is almost too good to be true. And at first, my immediate response is, “This is amazing,” followed immediately by that's transitioning into anger, that, “Why isn't this a native thing that everyone offers?” I mean, to that end, AWS announced ‘Audit Manager' recently, which I haven't had the opportunity to dive into in any deep sense yet, because it's still brand new, and they decided to release it alongside 15,000 other things, but does that start getting a little bit closer to something companies need? Or is it a typical day-one first release of an Amazon service where, “Well, at least we know the direction you're heading in. We'll check back in two years.”AJ: Exactly. It's the day-one Amazon service release where, “Okay. AWS is getting into the audit space. That's good to know.” But right now, at its core, that AWS service, it's just not usable for audits, for several reasons.One, auditors cannot read the outputs of the information from Audit Manager. And it goes back to the earlier point where you can't automate compliance, you can't fix compliance if the auditors can't use the information because then they're going to go back to asking dumb questions and dumb evidence requests if they don't understand the information coming out of it. 
And it's just because of the output right now is a dump of JSON, essentially, in a Word document, for some strange reason.Corey: Okay, that is the perfect example right there of two worlds colliding. It's like, “Well, we're going to put JSON out of it because that's the language developers speak. Well, what do auditors prefer?” “I don't know, Microsoft Word?” “Okay, sounds good.” Even Microsoft Excel is a better answer than [laugh] that. And that is just… okay, that is just Looney Tunes awful.AJ: Yep. Yeah, exactly. And that's one problem. The other problem is, Audit Manager requires a compliance manager. If we think about that tool, a developer is not going to use Audit Manager; it's going to be somebody responsible for compliance.It requires them to go manually select every service that their company is using. A compliance manager, one, doesn't even know what the services are; they have no clue what some of these services are, two, how are they going to know if you're using Lambda randomly somewhere or, or a Systems Manager randomly somewhere, or Elastic Beanstalk's in one account or one region. Config here, config—they have to just go through and manually—and I'm like, “Well, that doesn't make any sense because AWS knows what services you're using. Why not just already have those selected and you pull those in scope?” So, the chances of something being excluded are extremely high because it's a really manual process for users to decide what are they actually assessing.And then lastly, the frameworks need a lot of work. Auditing is complex because their standards or regulations and all of that, and there's just a gap between what AWS has listed as a service that addresses a particular control that—there was a few times where I looked at Audit Manager and I had no clue what they were mapping to and why they're mapping. So, it's a typical day-one service; it has some gaps, but I like the direction it's going. I like the idea that an organization can go into their AWS console, hit to a dashboard, and say, “Am I meeting SOC 2?” Or“ am I meeting PCI?” I feel like this is a long time coming. I think you probably could have done it with Security Hub with less automation; you have to do some manual uploads there, but the long answer to say it has a long way to go there, Corey.Corey: I heard a couple of horror stories of, “Oh, my god, it's charging me $300 a day and I can't turn it off,” when it first launched. I assume that's been fixed by now because the screaming has stopped. I have to assume it was. But it was gnarly and surprising people with bills. And surprising people with things labeled ‘audit' is never a great plan.AJ: Right. Yeah, the pricing was a little ridiculous as well. And I didn't really understand the pricing model. But that's typical of a new AWS service, I never really understand. That's why I'm glad that you exist because I'm always confused at first about why things cost so much, but then if you give it some time, it starts to make a little bit more sense.Corey: Exactly. The first time you see a new pricing dimension, it's novel and exciting and more than a little scary, and you dive into it. But then it's just pattern recognition. It's, “Oh, it's one of these things again. Great.” It's why it lends itself to a consulting story.So, you were in the army for a while. And as you mentioned, you got tired of sleeping on the ground, so you went into corporate life. And you were at a national cybersecurity professional services firm for a while. 
What was it that finally made you, I guess, snap for lack of a better term and, “I'm going to start my own thing?” Because in my case, it was, “Well, okay. I get fired an awful lot. Maybe I should try setting out my own shingle because I really don't have another great option.” I don't get the sense, given your resume and pedigree, that that was your situation?AJ: Not quite. I surprisingly, don't do well with authority. So, a little bit I like to challenge things and question the norm often, which got me in trouble in the military, definitely got me in trouble in corporate life. But for me it was, I wanted to change; I wanted to innovate. I just kept seeing that there was a problem with what we were doing and how we were doing it, and I didn't feel like I had the ability to innovate.Innovating in a professional services firm is updating a Google Sheet, or adding a new Google Form and sending that off to a client. That's not really the innovation that I was looking to do. And I realized that if I wanted to create something that was going to solve this problem, I could go join one of the many startups out there that are out there trying to solve this problem, or I could just try to go do it myself and leverage my experience. And two worlds collided as far as timing and opportunity where I financially was in a position to take a chance like this, and I had the knowledge that I finally think I needed to feel comfortable going out on my own and just made the decision. I'm a pretty decisive person, and I decided that I was going to do it and just went with it.And despite going about this during the global pandemic, which presented its own challenges last year, getting this off the ground. But it was really—I collected a bunch of knowledge. I realized, maybe, two and a half years ago, actually, that I wanted to start my own business in this space, but I didn't know what I wanted to do just yet. I knew I wanted to do software, I didn't know how I wanted to do it, I didn't know how I was going to make it work. But I just decided to take my time and learn as much as I can.And once I felt like I acquired enough knowledge and there was really nothing else I could gain from not doing this on my own, and I knew I wasn't going to go join a startup to join them on this journey, it was a no-brainer just to pull the trigger.Corey: It seems to have worked out for you. I'm starting to see you folks crop up from time-to-time, things seem to be going well. How big are you?AJ: Yeah, we're doing well. We have a team of seven of us now, which is crazy to think about because I remember when it was just me and my co-founder staring at each other on Zoom every day and wondering if they're ever going to be anybody else on these [laugh] calls and talking to us. But it's going really well. We have early customers that are happy and that's all that I can ask for and they're not just happy silently; they're being really public about being happy about the platform, and about the process. And just working with people that get it and we're building a lot of momentum.I'm having a lot of fun on LinkedIn and doing a lot of marketing efforts there as well. So, it's been going well; it's been actually going better than expected, surprisingly, which I don't know, I'm a pretty optimistic entrepreneur and I thought things will go well, but it's much better than expected, which means I'm sleeping a lot less than I expected, as well.Corey: Yeah, at some point, when you find yourself on the startup train, it's one of those, “Oh, yeah. 
That's right. My health is in the gutter, my relationships are starting to implode around me.” Balance is key. And I think that that is something that we don't talk about enough in this world.There are periodically horrible tweets about how you should wind up focusing on your company, it should be the all-consuming thing that drives you at all hours of the day. And you check and, “Oh, who made that observation on Twitter? Oh, it's a VC.” And then you investigate the VC and huh, “You should only have one serious bet, it should be your all-consuming passion” says someone who's invested in a wide variety of different companies all at the same time, in the hopes that one of them succeeds. Huh.Almost like this person isn't taking the advice they're giving themselves and is incentivized to give that advice to others. Huh, how about that? And I know that's a cynical take, but it continues to annoy me when I see it. Where do you stand on the balance side of the equation?AJ: Yeah, I think balance is key. I work a lot, but I rest a lot too. And I spend—I really hold my mornings as my kind of sacred place, and I spend my mornings meditating, doing yoga, working out, and really just giving back to myself. And I encourage my team to do the same. And we don't just encourage it from just a, “Hey, you guys should do this,” but I talk to my team a lot about not taking ourselves too seriously.It's our number one core value. It's why our slogan is ‘make compliance suck less' because it's really my military background. We're not being shot at; we're sleeping at home every night. And while compliance and cybersecurity, it's really important, and we're protecting really important things, it's not that serious to go all-in and to not have balance, and not to take time off not to relax. I mean, a part of what we do at ByteChek is we have a 10% rule, which means 10% of the week, I encourage my team to spend it on themselves, whether that's doing meditation, going to take a nap.And these are work hours; you know, go out, play golf. I spent my 10% this morning playing golf during work hours. And I encourage all my team, every single week, spend four hours dedicated to yourself because there's nothing that we will be able to do as a company without the people here being correct and being mentally okay. And that's something that I learned a long time ago in the military. You spend a year away from home and you start to really realize what's important.And it's not your job. And that's the thing. We hire a lot of veterans here because of my veteran background, and I tell all the vets that come here when you're in the military, your job, your rank, and your day-to-day work is your identity. It's who you are. You're a Marine or you're a Soldier, or you're a Sailor; you're an Airman if that's a bad choice that you made. Sorry for my Air Force guys.Corey: Well, now there's a Spaceman story as well, I'm told. But I don't know if they call them spacemen or not, but remember, there's a new branch to consider. And we can't forget the Coast Guard either.AJ: If they don't call themselves Spacemen, that is their name from now on. We just made it, today. If I ever meet somebody in the Space Force, [laugh] I'm calling them the Spacemen. That is amazing. But I tell our interns that we bring from the military, you have to strip that away.You have to become an individual because ByteChek is not your identity. And it won't be your identity. And ByteChek's not my identity. 
It's something that I'm doing, and I am optimistic that it's going to work out and I really hope that it does. But if it doesn't, I'm going to be all right; my team is going to be all right and we're going to all continue to go on.And we just try to live that out every day because there's so many more important things going on in this world other than cybersecurity compliance, so we really shouldn't take ourselves too seriously. And that advice of just grinding it out, and that should be your only focus, that's only a recipe for disaster, in my opinion.Corey: AJ, thank you so much for taking the time to speak with me. If people want to hear more about what you have to say, where can they find you?AJ: They can find me on LinkedIn. That's my one spot that I'm currently on. I am going to pop on Twitter here pretty soon. I don't know when, but probably in the next few weeks or so. I've been encouraged by a lot of folks to join the tech community on Twitter, so I'll be there soon.But right now they can find me on LinkedIn. I give four hours back a week to mentoring, so if you hear this and you want to reach out, you want to chat with me, send me a message and I will send you a link to find time on my calendar to meet. I spend four hours every Friday mentoring, so I'm open to chat and help anyone. And when you see me on LinkedIn, you'll see me talking about diversity in cybersecurity because I think really the only way you can solve a cybersecurity skills shortage is by hiring more diverse individuals. So, come find me there, engage with me, talk to me; I'm a very open person and I like to meet new people. And that's where you can find me.Corey: Excellent. And we'll of course throw a link to your LinkedIn profile in the [show notes 00:29:44]. Thank you so much for taking the time to speak with me. It's really appreciated.AJ: Yeah, definitely. Thank you, Corey. This is kind of like a dream come true to be on this podcast that I've listened to a lot and talk about something that I'm passionate about. So, thanks for the opportunity.Corey: AJ Yawn, CEO and co-founder of ByteChek. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice along with a comment that's embedded inside of a Word document.Announcer: This has been this week's episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com, or wherever fine snark is sold.This has been a HumblePod production. Stay humble.
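A practical footnote to the evidence-collection idea AJ describes: the point of connecting "directly where the evidence lives" is that checks such as public S3 access or stale IAM access keys can be read straight from the AWS APIs instead of gathered as screenshots. The sketch below illustrates that general idea only; it is not ByteChek's implementation, it assumes boto3 with read-only credentials, and pagination is omitted for brevity.

# Illustrative only: pull two pieces of "evidence" straight from AWS APIs,
# the way an automated compliance check might, instead of screenshots.
# Assumes boto3 and configured credentials with read-only permissions.
from datetime import datetime, timezone
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    try:
        config = s3.get_public_access_block(Bucket=bucket["Name"])["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError:
        fully_blocked = False  # no public access block configured for this bucket
    print(bucket["Name"], "public access fully blocked:", fully_blocked)

iam = boto3.client("iam")
for user in iam.list_users()["Users"]:
    for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
        age_days = (datetime.now(timezone.utc) - key["CreateDate"]).days
        print(user["UserName"], key["AccessKeyId"], "access key age in days:", age_days)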

The Cloudify Tech Talk Podcast
Episode Five : Case Study: Next Insurance (ft. Special guest Haim Yadid)

The Cloudify Tech Talk Podcast

Play Episode Listen Later Sep 6, 2020 70:54


In this special edition of the Cloudify Tech Talk Podcast, we delve into a unique case study with special guest Haim Yadid, Director of Platform Engineering at Next Insurance. We cover all things DevOps, from Terraform and Kubernetes to Elastic Beanstalk, and how they relate to a very specific and successful use case.

.NET Bytes
Episode 21: News from June 19th, 2020 through July 1st, 2020

.NET Bytes

Play Episode Listen Later Jul 6, 2020 50:01


THE NEWS FROM REDMOND
What is Project Reunion?
.NET Conf - “Focus on Microservices” - July 30, 2020
.NET Conf 2020 - November 10-12
Windows Terminal Preview 1.1 Release
Terminal 2.0 Roadmap
Introducing dotnet-monitor, an experimental tool
Visual Studio 2019 version 16.6.3
Visual Studio 2019 version 16.7 Preview 3
Visual Studio 2019 version 16.7 Preview 3.1
Visual Studio 2019 for Mac 8.6.5 Release Notes
Announcing .NET 5.0 Preview 6
ASP.NET Core updates in .NET 5 Preview 6
Announcing Entity Framework Core EFCore 5.0 Preview 6
Announcing TypeScript 4.0 Beta
Architecting Cloud Native .NET Applications for Azure
F# 5 and F# tools update for June

AROUND THE WORLD
UnoConf 2020 (Virtual & Free) – Aug 12, 2020 – Save the date
Introducing Sdkbin - The Marketplace for Software Developers
Introducing GitHub Super Linter: one linter to rule them all
AWS Elastic Beanstalk adds .NET Core on Linux platform
Environment Variables with .NET Core and Elastic Beanstalk (see the sketch after these notes)
.NET Foundation June/July 2020 Update

PROJECTS OF THE WEEK
Orchard
Also, be sure and check out the Project of the Week archives!

SHOUT-OUTS / PLUGS
.NET Bytes on Twitter
Matt Groves is: Tweeting on Twitter, Live Streaming on Twitch, A Notable Person
Calvin Allen is: Tweeting on Twitter, Live Streaming on Twitch
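One of the items above covers environment variables with .NET Core and Elastic Beanstalk. Whatever language the app is written in, environment variables on an Elastic Beanstalk environment can be set through the API; the boto3 sketch below is a rough illustration, and the application, environment, region, and variable names are placeholders rather than anything from the episode.

# Hypothetical example: set an environment variable on an existing
# Elastic Beanstalk environment; the running app (e.g. .NET Core on Linux)
# then reads it like any other process environment variable.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")  # region is an assumption

eb.update_environment(
    EnvironmentName="my-dotnet-env",  # placeholder environment name
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:application:environment",
            "OptionName": "ConnectionStrings__Default",  # placeholder variable name
            "Value": "Server=db.example.com;Database=app;",
        }
    ],
)
print("Environment update started; Elastic Beanstalk will roll out the change.")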

Rails with Jason
044 - Cameron Gray, Co-Founder of Convox

Rails with Jason

Play Episode Listen Later May 12, 2020 35:56


In this episode I talk with Cameron Gray about Convox, a free, open-source tool that assists with deploying applications to various cloud platforms. Cameron and I talk about how Convox works under the hood and how to get started with Convox for deploying an application. Technologies we touch on include AWS, Elastic Beanstalk, ECS, Docker, and Kubernetes.

CastOverflow
Ep. 24 - AWS Elastic Beanstalk

CastOverflow

Play Episode Listen Later Apr 15, 2020 4:04


Today we'll talk about Elastic Beanstalk. Have you heard of it?

AWS Morning Brief
Goldilocks and the Three Elastic Beanstalk Consoles

AWS Morning Brief

Play Episode Listen Later Apr 13, 2020 11:43


AWS Morning Brief for the week of April 13, 2020.

Running in Production
Smart Music Helps Musicians Practice More Efficiently

Running in Production

Play Episode Listen Later Feb 9, 2020 55:33


Julien Blanchard talks about building a service with Rails, Phoenix and .NET Core. It's deployed on AWS EKS and Elastic Beanstalk.

Rails with Jason
029 - AWS Deployment with Andreas Wittig

Rails with Jason

Play Episode Listen Later Feb 4, 2020 48:48


Andreas and I talk about various AWS deployment options, including EC2, Elastic Beanstalk, Heroku (which uses AWS under the hood), ECS, Packer, Fargate, Ansible, Chef, and more!
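For a sense of what the most hands-on of those options looks like, here is a minimal, hypothetical boto3 sketch that launches a single EC2 instance you would then configure and deploy to yourself; Elastic Beanstalk, ECS, and Fargate wrap this kind of provisioning for you. The AMI ID, key pair, region, and tag values are placeholders.

# Minimal illustration of the "raw EC2" deployment option: launch one
# instance to configure and deploy to by hand (everything Elastic Beanstalk
# or ECS would otherwise manage for you).
# Assumes boto3 and configured credentials; all identifiers are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    KeyName="my-deploy-key",           # placeholder key pair
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "rails-app-server"}],
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])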

AWS re:Invent 2019
DOP326: Deploy your code, scale, and lower cloud costs using Elastic Beanstalk

AWS re:Invent 2019

Play Episode Listen Later Dec 7, 2019 56:25


You can effortlessly convert your code into web applications without having to worry about provisioning and managing AWS infrastructure, applying patches and updates to your platform, or using a variety of tools to monitor your application's health. In this session, we show you how anyone, not just professional developers, can use AWS Elastic Beanstalk in various scenarios without the need to understand or manage the infrastructure details. These scenarios include an administrator moving a Windows .NET workload into the cloud, a developer building a containerized enterprise app as a Docker image, and a data scientist deploying a machine learning model.
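To make the session description concrete: deploying a new version with Elastic Beanstalk boils down to registering an application version (a bundle already uploaded to S3) and pointing an environment at it. The boto3 sketch below is a hedged illustration of that flow; the application, environment, version label, and S3 location are all placeholders.

# Rough sketch of an Elastic Beanstalk deployment: register a new application
# version from a source bundle in S3, then roll an environment onto it.
# Assumes boto3, configured credentials, and a bundle already uploaded to S3.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")  # region is an assumption

eb.create_application_version(
    ApplicationName="my-app",                       # placeholder application name
    VersionLabel="v42",                             # placeholder version label
    SourceBundle={"S3Bucket": "my-deploy-bucket",   # placeholder bucket and key
                  "S3Key": "my-app/v42.zip"},
    Process=True,  # validate the bundle before it can be deployed
)

eb.update_environment(
    EnvironmentName="my-app-prod",  # placeholder environment name
    VersionLabel="v42",
)
print("Deployment of v42 started.")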

CODEMOTION 2019
Cómo gestionar 20 millones de peticiones en 24 horas y no morir en el intento - Chema Roldán

CODEMOTION 2019

Play Episode Listen Later Oct 14, 2019 43:48


Other Codemotion 2019 talks are also available as podcasts: https://lk.autentia.com/Codemotion-Podcast How do you scale a product that signs up more than 5,000 new users every day and handles more than 20 million new requests? What if I told you that our users come from more than 100 countries? And what if I added that we don't have a single DevOps engineer on the team? My name is Chema Roldán and I am the CTO and co-founder of Genially. I would like to share with you how we have managed to scale our infrastructure efficiently and in a real-world way. We will talk about how we use AWS services such as Route53, Elastic Beanstalk, Lambda, API Gateway, SQS, and Elastic Container Service. We're going to have fun!
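One of the patterns behind absorbing traffic spikes like the ones described here is pushing work onto a queue (SQS in this talk) so the web tier answers quickly and workers scale independently. The boto3 sketch below is a small, hypothetical illustration of that producer/consumer split; the queue name, region, and message payload are placeholders, not details from the talk.

# Hypothetical producer/consumer split with SQS, the decoupling pattern used
# to absorb traffic spikes: the web tier enqueues work, the workers drain it.
# Assumes boto3 and configured credentials; the queue name is a placeholder.
import json
import boto3

sqs = boto3.client("sqs", region_name="eu-west-1")  # region is an assumption
queue_url = sqs.get_queue_url(QueueName="app-tasks")["QueueUrl"]  # placeholder queue

# Web tier: enqueue the work and return immediately.
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"task": "render", "id": 123}))

# Worker tier: poll, process, delete.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=10)
for message in messages.get("Messages", []):
    print("processing", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])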

Ruby Rogues
RR 433: ShipLane with John Epperson

Ruby Rogues

Play Episode Listen Later Oct 8, 2019 64:41


John Epperson has been doing Ruby for 12 years and is a friend of Andrew Mason. He got into Docker a couple of years ago and felt like something was missing, so he wrote ShipLane. He liked Docker because it was a promise that he could delegate a lot of the manual DevOps work to something else, and that something else was able to automate all of it. What he noticed was that if you have a Docker setup in development and want to transfer it into production, there was no clear path to get a Docker item from development to production. The process wasn't truly automated, so he created ShipLane in an attempt to automate it. ShipLane solves this problem by assuming that you have a box out there, whether it's a VM or an actual physical box, and you have SSH access to it. It logs in, makes sure you have Docker installed, and gives you the ability to actually take your development Docker Compose file and convert it to a productionized version. It also hooks into Capistrano and replaces that with ShipLane commands. Right now ShipLane uses raw Docker and creates a network for your stuff to work within, deploys your containers, and then your service is up and running. There are different tools you can use to run Docker in production, but John didn't want to run containers by using docker run with a bunch of stuff after it, and didn't want to maintain a custom script, so he automated it. John is currently working on a version that will translate your Docker Compose files into Kubernetes YAML files. There are a lot of choices to be made in Rails, none of which are wrong, but one choice begets many more, and there are so many to make and so many consequences that it's difficult to know your path; ShipLane provides a clearer path. John talks about how to get started with ShipLane. First, you need to gem install ShipLane and Capistrano, put it in your Gemfile, and require it in Capistrano (there's a generator). It has Capistrano listed as a dependency requirement. Andrew has used ShipLane and found it very quick and effective; it only took him about 30 minutes. John asks for feedback from users on ShipLane, since many people are using it but no one has given any. John talks about the versatility of ShipLane as a general solution. He addresses some concerns that people have about putting stuff into containers, and assures them that ShipLane is intended to work right out of the box. It is important, when containerizing services available on the platform of your choice, to step back and question whether you are creating this infrastructure correctly. They discuss some methods for deciding what goes into containers. The panel discusses some of the advantages of Docker, particularly deployment time. Everything is already bundled and the assets are precompiled, so it cuts your deployments down a lot. They talk about different frameworks for Ruby that they like for their scaling abilities, such as Docker, Kubernetes, and Elastic Beanstalk. While Elastic Beanstalk is not one of the primary targets of ShipLane, it is designed as a generalized path to go from development to production, so it shouldn't matter what your production target is in the end. If you're going to pick a provider that isn't one of the big three, then ShipLane is a great option. If you're picking a SaaS provider, there's always a possibility that it isn't compatible with the generalized version, but if you're targeting Kubernetes it should generally work. The panel discusses the general advice not to use Docker in development and whether or not it has merit.
John finds that he flips back and forth between projects, and those projects all have different dependencies, so Docker makes it easier to switch between projects because he doesn't have to think about the dependencies. They talk about how John manages his Docker and Docker Compose versions with these various projects. They all agree that Kubernetes should not be run locally. Finally, they discuss whether tools like ShipLane are the next step with Docker. They believe that containerization is here to stay, but the only thing that might remotely threaten Docker is going back to bare metal development or going serverless. They discuss whether going serverless kills Docker. Ultimately, the most important thing is that the problem gets solved.

Panelists
Charles Max Wood, Andrew Mason, David Kimura
With special guest: John Epperson

Sponsors
Sentry (use the code “devchat” for 2 months free on Sentry's small plan)
Cloud 66 - Pain Free Rails Deployments (Try Cloud 66 Rails for FREE & get $66 free credits with promo code RubyRogues)
React Round Up

Links
ShipLane, Mountain West Ruby 2016 - Surviving the Framework Hype Cycle by Brandon Hays, Docker, Capistrano, Docker Swarm, Kubernetes, Docker Compose, Chef, Puppet, Digital Ocean, Postgres, Sinatra, Elastic Beanstalk
Follow DevChatTV on Facebook and Twitter

Picks
Charles Max Wood: VESA adjustment for your monitors, Velcro strips, For Love of Mother Not
David Kimura: Grapes.js, Mario Kart Tour
Andrew Mason: Hacktoberfest
Chuck
John Epperson: Glenscotia 15 year scotch, Immortals book
Follow John on GitHub, on rockagile.io, and Twitter

Devchat.tv Master Feed
RR 433: ShipLane with John Epperson

Devchat.tv Master Feed

Play Episode Listen Later Oct 8, 2019 64:41


John Epperson has been doing ruby for 12 years and is a friend of Andrew Mason. He got into Docker a couple years ago and felt like something was missing, so he wrote Shiplane. He liked Docker because it was a promise that he could delegate a lot of the manual devops work to something else, and that something else was able to automate all of it. What he noticed was if you have a Docker thing in development and want to transfer it into production, there was no clear path to get a Docker item from development to production. The process wasn’t truly automated, so he created ShipLane in an attempt to automate it. ShipLane solves this problem by assuming that you have a box out there, whether it’s a VM or an actual physical box, and you have SSH access to it. It logs in, it makes sure you have Docker installed, and gives you the ability to actually take your development docker compose, and convert it to a productionized version. It also hooks in to Capistrano and replaces that with ShipLane commands. Right now ShipLane is using Docker Raw and creates a network for your stuff to work within, deploys your containers, and then your service is up and running. There are different tools you can use to run Docker in production, but John didn’t want to run containers by using Docker Run with a bunch of stuff after it, didn’t want to maintain a custom script, so he automated it. John is currently working on a version that will translate your Docker Compose files into Kubernetes YAML files. There’s a lot of choices to be made in Rails, none of which are wrong, but one choice begets many more, and there’s so many to make and so many consequences it’s difficult to know your path, and ShipLane provides clearer a path. John talks about how to get started with ShipLane. First, you need to gem install ShipLane and Capestrano, put it in your bundle file, and require it in Capestrano (there’s a generator). It has Capestrano listed as a dependency requirement. Andrew has used ShipLane and found it very quick and effective, only took them about 30 minutes. John asks for feedback from users on ShipLane, since many people are using it but no one has given any.  John talks about the versatility of ShipLane as a general solution. He addresses some concerns that people have about putting stuff into containers, and assures them that ShipLane is intended to work right out of the box. It is important that when containerizing services available on the platform of our choice to step back and question if you creating this infrastructure correctly. They discuss some methods for deciding what goes into containers. The panel discusses some of the advantages of Docker, particularly deployment time. Everything is already bundled, the assets are precompiled, so it cuts your deployments down a lot. They talk about different frameworks for Ruby that they like for their scaling abilities, such as Docker, Kubernetes, and Elastic Beanstalk. While Elastic Beanstalk is not one of the primary targets of ShipLane, it is designed as a generalized path to go from development to production, so it shouldn’t matter what your production target is in the end. If you’re gonna pick a provider that isn’t one of the big 3, then ShipLane is a great option If you’re picking a SASS provider, there’s always a possibility that it isn’t compatible with the generalized version, but if we’re targeting Kubernetes it should generally work. The panel discusses the general advice not to use Docker in development and whether or not it has merit. 
John finds that flips back and forth between projects, and those projects all have different dependencies, so Docker makes it easier to switch between projects because he doesn’t have to think about the dependencies. They talk about how John manages his Docker /compose version with these various projects. They all agree that Kubernetes should not be run locally.  Finally they discuss whether tools like ShipLane are the next step with Docker. They believe that containerization is here to stay, but the only thing that might remotely threaten Docker is going back to bare metal development or going serverless. They discuss whether going serverless kills Docker. Ultimately, the most important thing is that the problem gets solved.  Panelists Charles Max Wood Andrew Mason David Kimura With special guest: John Epperson Sponsors Sentry use the code “devchat” for 2 months free on Sentry’s small plan Cloud 66 - Pain Free Rails Deployments Try Cloud 66 Rails for FREE & get $66 free credits with promo code RubyRogues React Round Up Links ShipLane Mountain West Ruby 2016 - Surviving the Framework Hype Cycle by Brandon Hays Docker Capistrano Docker Swarm Kubernetes Docker Compose Chef Puppet Digital Ocean Postgress Sinatra Elastic Beanstalk Follow DevChatTV on Facebook and Twitter Picks Charles Max Wood: VESA adjustment for your monitors Velcro strips For Love of Mother Not David Kimura: Grapes.js Mario Kart Tour Andrew Mason: Hacktoberfest Chuck John Epperson: Glenscotia 15 year scotch Immortals book Follow John on Github, on rockagile.io, and Twitter

All Ruby Podcasts by Devchat.tv
RR 433: ShipLane with John Epperson

All Ruby Podcasts by Devchat.tv

Play Episode Listen Later Oct 8, 2019 64:41


John Epperson has been doing Ruby for 12 years and is a friend of Andrew Mason. He got into Docker a couple of years ago and felt like something was missing, so he wrote Shiplane. He liked Docker because it promised that he could delegate a lot of the manual devops work to something else, and that something else was able to automate all of it. What he noticed was that if you have a Docker setup in development and want to move it into production, there was no clear path to get a Docker item from development to production. The process wasn't truly automated, so he created Shiplane in an attempt to automate it. Shiplane solves this problem by assuming that you have a box out there, whether it's a VM or an actual physical box, and you have SSH access to it. It logs in, makes sure you have Docker installed, and gives you the ability to take your development Docker Compose file and convert it to a productionized version. It also hooks into Capistrano and replaces its commands with Shiplane commands. Right now Shiplane uses raw Docker: it creates a network for your containers to work within, deploys your containers, and then your service is up and running. There are different tools you can use to run Docker in production, but John didn't want to run containers by typing docker run with a bunch of flags after it, and he didn't want to maintain a custom script, so he automated it. John is currently working on a version that will translate your Docker Compose files into Kubernetes YAML files. There are a lot of choices to be made in Rails, none of which are wrong, but one choice begets many more, and with so many to make and so many consequences it's difficult to know your path; Shiplane provides a clearer one. John talks about how to get started with Shiplane. First, you gem install Shiplane and Capistrano, put them in your Gemfile, and require Shiplane in your Capistrano setup (there's a generator). Capistrano is listed as a dependency. Andrew has used Shiplane and found it very quick and effective; it only took about 30 minutes. John asks for feedback from Shiplane users, since many people are using it but no one has given any yet. John talks about the versatility of Shiplane as a general solution. He addresses some concerns that people have about putting stuff into containers, and assures them that Shiplane is intended to work right out of the box. It is important, when containerizing services on the platform of your choice, to step back and question whether you are creating the infrastructure correctly. They discuss some methods for deciding what goes into containers. The panel discusses some of the advantages of Docker, particularly deployment time: everything is already bundled and the assets are precompiled, so it cuts your deployments down a lot. They talk about different deployment targets they like for their scaling abilities, such as Docker, Kubernetes, and Elastic Beanstalk. While Elastic Beanstalk is not one of the primary targets of Shiplane, Shiplane is designed as a generalized path to go from development to production, so it shouldn't matter what your production target is in the end. If you're going to pick a provider that isn't one of the big three, then Shiplane is a great option. If you're picking a SaaS provider, there's always a possibility that it isn't compatible with the generalized version, but if you're targeting Kubernetes it should generally work. The panel discusses the general advice not to use Docker in development and whether or not it has merit.
John finds that he flips back and forth between projects, and those projects all have different dependencies, so Docker makes it easier to switch between projects because he doesn't have to think about the dependencies. They talk about how John manages his Docker and Compose versions across these various projects. They all agree that Kubernetes should not be run locally. Finally they discuss whether tools like Shiplane are the next step with Docker. They believe that containerization is here to stay, but the only thing that might remotely threaten Docker is going back to bare metal development or going serverless. They discuss whether going serverless kills Docker. Ultimately, the most important thing is that the problem gets solved. Panelists Charles Max Wood Andrew Mason David Kimura With special guest: John Epperson Sponsors Sentry use the code "devchat" for 2 months free on Sentry's small plan Cloud 66 - Pain Free Rails Deployments Try Cloud 66 Rails for FREE & get $66 free credits with promo code RubyRogues React Round Up Links Shiplane Mountain West Ruby 2016 - Surviving the Framework Hype Cycle by Brandon Hays Docker Capistrano Docker Swarm Kubernetes Docker Compose Chef Puppet Digital Ocean Postgres Sinatra Elastic Beanstalk Follow DevChatTV on Facebook and Twitter Picks Charles Max Wood: VESA adjustment for your monitors Velcro strips For Love of Mother Not David Kimura: Grapes.js Mario Kart Tour Andrew Mason: Hacktoberfest Chuck John Epperson: Glen Scotia 15 year scotch Immortals book Follow John on Github, on rockagile.io, and Twitter
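For readers who want a concrete picture of the workflow described in this episode, here is a minimal Python sketch of the general pattern Shiplane automates: SSH into a box you control, confirm Docker is installed, and bring a productionized Compose file up. Shiplane itself is a Ruby gem that hooks into Capistrano, so this is not its implementation; the host name and file name below are hypothetical placeholders.

```python
"""A rough sketch of the 'SSH in, check Docker, run Compose' pattern, not Shiplane itself."""
import subprocess

HOST = "deploy@example-production-box"          # hypothetical box reachable over SSH
COMPOSE_FILE = "docker-compose.production.yml"  # hypothetical productionized Compose file

def ssh(command: str) -> None:
    """Run one command on the remote host and fail loudly if it errors."""
    subprocess.run(["ssh", HOST, command], check=True)

# 1. Confirm Docker is present on the target box.
ssh("docker --version")

# 2. Copy the productionized Compose file over.
subprocess.run(["scp", COMPOSE_FILE, f"{HOST}:~/{COMPOSE_FILE}"], check=True)

# 3. Pull images and start (or restart) the services in the background.
ssh(f"docker compose -f ~/{COMPOSE_FILE} pull")
ssh(f"docker compose -f ~/{COMPOSE_FILE} up -d")
```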

Full Stack Podcast
AWS – Elastic Beanstalk

Full Stack Podcast

Play Episode Listen Later Aug 27, 2019 55:08


AWS - Elastic Beanstalk

Devchat.tv Master Feed
RR 414: Docker Talk

Devchat.tv Master Feed

Play Episode Listen Later May 28, 2019 54:30


Sponsors Sentry use code “devchat” for $100 credit Triplebyte offers $1000 signing bonus Cloud 66 - Pain Free Rails Deployments Try Cloud 66 Rails for FREE & get $66 free credits with promo code RubyRogues Panel Charles Max Wood Andrew Mason Dave Kimura David Richards Episode Summary Today the panel is talking about the many applications of Docker. They talk about where Docker fits into the development lifecycle and what kind of applications Docker can help with. Dave goes over some of the Docker terminology, how to set up some basic scenarios, and some of the difficulties often encountered by first-time users. They talk about how to make sure you’re putting together a Dockerfile correctly. The panel agrees that Docker has a different workflow from other systems, and they discuss some of the tradeoffs of using Docker. They mention some specific use cases for Docker and what it’s like to migrate to Docker. Dave cautions listeners that databases need to exist outside of Docker or Kubernetes. Dave and Andrew argue whether or not Docker belongs in the developer environment. The panel discusses ways to maintain productivity when introducing Docker and gives some advice to programmers who are new to using Docker. They talk about cases where using Docker can be very helpful. They wrap up by talking about how to get started with Docker in your CI/CD and how to run tests with Docker. Links Docker Microservices Kubernetes ISO file Docker images Bundler Ubuntu Red Hat Alpine Linux Sinatra Podwrench Sidekiq Foreman CI/CD AWS Azure DigitalOcean Elastic Beanstalk Google Cloud Redis Cloud Native Development Follow DevChat on Facebook and Twitter Picks Andrew Mason: Rails Flip Flop Dave Kimura: Cloud Native Development Dewalt Flexvolt circular saw Charles Max Wood: Everywhere RB David Richards: Warren Buffett's letters to his shareholders
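As a companion to the "run tests with Docker" idea the panel wraps up with, here is a minimal sketch using the Docker SDK for Python: run a test suite in a throwaway container so the environment matches what ships. The image tag, mount path, and test command are assumptions for illustration, not anything prescribed in the episode.

```python
"""Run a project's test suite inside a disposable Docker container (illustrative sketch)."""
import docker

client = docker.from_env()

# Mount the checked-out project and run the tests inside a clean container.
logs = client.containers.run(
    image="ruby:3.2",                                   # hypothetical base image
    command="bash -c 'bundle install && bundle exec rspec'",
    volumes={"/ci/workspace/app": {"bind": "/app", "mode": "rw"}},  # hypothetical CI checkout path
    working_dir="/app",
    environment={"RAILS_ENV": "test"},
    remove=True,                                        # delete the container when it finishes
)
print(logs.decode())
```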

Coder Radio
354: A Life of Learning

Coder Radio

Play Episode Listen Later Apr 25, 2019 45:34


We celebrate the life of Erlang author Dr Joe Armstrong by remembering his many contributions to computer science and unique approach to lifelong learning. Plus some code to read, your feedback, and more!

AWS re:Invent 2018
DEV323: PaaS - From Code to Running Application using AWS Elastic Beanstalk

AWS re:Invent 2018

Play Episode Listen Later Nov 30, 2018 62:00


Come learn how Elastic Beanstalk can help you go from code to running application in a matter of minutes, without the need to provision or manage any of the underlying Amazon Web Services (AWS) resources. Hear how Qualcomm is able to migrate applications to AWS faster than before through Forge, an internally built application platform that leverages Elastic Beanstalk to simplify the development and deployment of applications to AWS with security and organizational best practices out of the box.
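To make the "code to running application" flow concrete, here is a minimal boto3 sketch: register an application version from a bundle already uploaded to S3, then point an existing environment at it. The application, environment, bucket, and key names are hypothetical placeholders, not anything from the talk.

```python
"""Deploy a new Elastic Beanstalk application version with boto3 (illustrative sketch)."""
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Register the uploaded zip as a new application version.
eb.create_application_version(
    ApplicationName="my-app",                                   # hypothetical application
    VersionLabel="v42",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "my-app/v42.zip"},
    Process=True,
)

# Roll the environment forward; Elastic Beanstalk handles the underlying
# instances, load balancing, and health checks.
eb.update_environment(EnvironmentName="my-app-prod", VersionLabel="v42")
```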

Screaming in the Cloud
Episode 37: Hiring in the Cloud “I assume CrowdStrike makes drones”

Screaming in the Cloud

Play Episode Listen Later Nov 21, 2018 35:14


What’s hiring in the world of Cloud like? What are companies looking for in possible employees? What kind of career trajectory should applicants display? Today, we’re talking to Don O’Neill, who has had an interesting career path and the archetype of who most companies want to hire. He’s been an independent contributor, platform leader, and Cloud consultant. Currently, Don is platform engineer manager at Articulate, an eLearning software solution for course authoring and eLearning development. He works with platform engineers to automate Blue Ocean pipelines with Docker, Terraform, and various Amazon Web Services (AWS) technologies, such as Elastic Beanstalk. Some of the highlights of the show include: Don reached out to his network to ask people that he had a professional relationship with about who was hiring and what challenges they faced Don’s “Therapy”: Go to meet-ups to talk about DevOps topics; serves as a “I’ve-got-to-get-my-hiney-out-of-the-house-and-get-some-social-time” Don’s journey from being a “wee lad in the industry” to a senior member/leader and giving back as a way to recognize those who helped him along the way Hiring Horror Stories: People going through borderline ridiculous levels of hiring games and terrible interview paradigms Companies sometimes look for something too specific - exact match instead of fuzzy match; they never have time to train, but time to look for a perfect unicorn Articulate’s Hiring Process: Day 1 - Slack interview; Day 2 - Technical pieces; and Day 3 - Pairing with others Articulate looks for people enthusiastic about technology, able to learn, and with emotional intelligence; company values independence, autonomy, and respect Companies that spend several hours to make a hiring decision tend to have less success with those they hire Cloud Certificates/Certifications: Can be valuable for applicants with no real-world experience; they don’t indicate how they’re going to work or learn Applicants need to demonstrate a base level of knowledge; if they don’t have a skill set, they should start a project to learn about something - learning is fun If you’re established in your career, reach out to someone just starting out to guide them If you’re starting out in your career, reach out to people to talk about the next steps to take in your career (contact Corey or Don) Links: Don O’Neill on Twitter Articulate Hangops.slack.com CoffeeOps AWS Azure Docker Terraform Elastic Beanstalk Autoscan Marchex Apex Learning Dice Monster Indeed Switch App (Tinder for Jobs) Kubernetes Spotify in Stockholm CrowdStrike re:Invent AWS Summits Digital Ocean

Develpreneur: Become a Better Developer and Entrepreneur
AWS - The Compute Family of Services

Develpreneur: Become a Better Developer and Entrepreneur

Play Episode Listen Later Sep 3, 2018 24:36


This season (season 4) will cover the groups of services Amazon provides in their AWS offerings. Each episode will focus on a particular group and review the included services from a high level. We have created posts over the last year to go a little deeper into each service. However, this season will give you some great ideas on where to start and what they currently offer. The Virtual Machine We start our season with a focus on the "compute" family of services. These cover a few different ways to handle your cloud-based applications. The first we look at is the traditional virtual machine. This service starts with the Elastic Compute Cloud (EC2). We use this service as part of our Launch Your Internet Business class and embrace the free tier to keep your startup costs down. The pricing around EC2 can be a little confusing, so Amazon has added Lightsail as a more natural way to manage your VM. Flexibility The power of a cloud solution and a VM is the ability to be flexible in how many resources it uses. Amazon has provided Elastic Beanstalk to automate the grow-and-shrink needs of your VMs. It is not a trivial service to understand. However, it is very powerful and well worth the time spent mastering that service. You may find this too much for your needs if you stick with a single server and a small number of users. On the other hand, it is nice to know your site can handle a flood of interest. Containers Developers have progressed from VMs to containers as the latest hot platform. Amazon is right there with us and added compute services to run containers on top of EC2 instances. They have Fargate as an over-arching management service that will help you spin up a container instance without worrying about the underlying VM details. This is all we want from a container service, including removing the need for a systems specialist to craft an adequately sized VM. The "compute" services are the core of so many of the AWS offerings that this is a perfect place to get started. Go ahead and take advantage of the free tier to spin up your VM and play around with it for the next year (until the free period expires). This is as effective a "try before you buy" deal as I have found.
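For the "spin up your VM and play around with it" suggestion above, here is a minimal boto3 sketch that launches a single free-tier-eligible EC2 instance. The AMI ID and key pair name are placeholders; look up a current Amazon Linux AMI for your region before running anything like this.

```python
"""Launch one free-tier-eligible EC2 instance with boto3 (illustrative sketch)."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical Amazon Linux AMI ID
    InstanceType="t2.micro",           # free-tier-eligible instance size
    KeyName="my-keypair",              # hypothetical key pair for SSH access
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```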

Datacenter Technical Deep Dives
#vBrownBag US AWS Certified Solutions Architect - Professional, Deployment Management with Konrad Clapa

Datacenter Technical Deep Dives

Play Episode Listen Later May 15, 2018 56:57


CloudFormation, OpsWorks and Elastic Beanstalk. In the fourth video in our series on studying for the AWS Certified Solutions Architect - Professional, Konrad Clapa (@clapa_konrad) talks about AWS CloudFormation, OpsWorks and Elastic Beanstalk.
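As a small companion to the CloudFormation portion of this session, here is a minimal boto3 sketch that launches a stack from a trivial inline template (one S3 bucket) and waits for it to finish. The stack name and template are made up for illustration.

```python
"""Create a CloudFormation stack from an inline template with boto3 (illustrative sketch)."""
import json
import boto3

# A deliberately tiny template: one S3 bucket with a generated name.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {"DemoBucket": {"Type": "AWS::S3::Bucket"}},
}

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(StackName="certprep-demo", TemplateBody=json.dumps(template))

# Block until the stack is fully created (raises if creation fails or rolls back).
cfn.get_waiter("stack_create_complete").wait(StackName="certprep-demo")
```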

AWS re:Invent 2017
DEV305: Manage Your Applications with AWS Elastic Beanstalk

AWS re:Invent 2017

Play Episode Listen Later Nov 30, 2017 61:20


AWS Elastic Beanstalk provides an easy way for you to quickly deploy, manage, and scale applications in the AWS Cloud. Through interactive demos and code samples, this session will teach you how to deploy your code using Elastic Beanstalk, provision and use other AWS services (Amazon SNS, Amazon SQS, Amazon DynamoDB, and AWS CodeCommit), use your application's health metrics to tune performance, scale your application to handle millions of requests, perform zero-downtime deployments with traffic routing, and keep the underlying application platform up-to-date with managed updates.
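For the "use your application's health metrics" point, here is a minimal boto3 sketch that reads an environment's enhanced health data. It assumes enhanced health reporting is enabled on the environment; the environment name is a placeholder.

```python
"""Read Elastic Beanstalk enhanced health metrics with boto3 (illustrative sketch)."""
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

health = eb.describe_environment_health(
    EnvironmentName="my-app-prod",     # hypothetical environment name
    AttributeNames=["All"],            # status, color, causes, request/latency metrics
)
print(health["HealthStatus"], health["Color"])
print(health.get("ApplicationMetrics", {}))   # request counts and latency percentiles
```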

PurePerformance
045 101 Series: AWS

PurePerformance

Play Episode Listen Later Sep 25, 2017 58:58


If you thought EC2 was the first service offered by Amazon Web Services, and if you think the 53 in “Route 53” is just a random number, then you should listen to this 101 on AWS podcast. This time we got to chat with Wayne Segar ( https://www.linkedin.com/in/wayne-segar-6222ba57/ ) who has been helping companies move to new cloud technologies and services such as AWS. Wayne gave us a great overview of the key services in Compute, Database, Storage, Management, and Development, as well as how Monitoring works with AWS. If you want to take your first steps with AWS, such as deploying your first EC2 instance or an application on Elastic Beanstalk, feel free to follow our 101 AWS Monitoring Tutorial: https://github.com/Dynatrace/AWSMonitoringTutorials
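Since the episode closes on monitoring, here is a minimal boto3 sketch of a first monitoring step: pull the last hour of average CPU utilization for one EC2 instance from CloudWatch. The instance ID is a placeholder.

```python
"""Fetch EC2 CPU utilization from CloudWatch with boto3 (illustrative sketch)."""
from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                       # one datapoint per five minutes
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))
```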

Coders Campus Podcast
EP17 - Deploying your code to Elastic Beanstalk

Coders Campus Podcast

Play Episode Listen Later Feb 24, 2017 25:07


Show notes for this episode can be found via http://coderscampus.com/17 and the main blog post can be found via coderscampus.com/ultimate-guide-hosting-java-web-app-amazon-web-services-aws  

AWS re:Invent 2016
DEV315: How Bleacher Report Gains Competitive Edge with AWS Elastic Beanstalk and Docker

AWS re:Invent 2016

Play Episode Listen Later Dec 24, 2016 50:00


When you're one of the top sports media sites on the internet, you deal with scale like no other. Bleacher Report and its Team Stream app enable millions of users to access their own personalized view of sports. In this session, we'll talk about how we broke up the monolith into microservices and how Elastic Beanstalk empowered us to move quickly. Learn how a small Ops team provided a world-class build/release pipeline by standing on the shoulders of giants (AWS Elastic Beanstalk, Jenkins, and Docker). This session is designed for those who want to get up and running as quickly as possible, but are uncompromising in their ownership of infrastructure. We'll discuss the reasoning behind our switch from fully managing our own infrastructure to a managed service, including some advanced customizations made possible through AWS CloudFormation and AWS Elastic Beanstalk configuration files (.ebextensions).

AWS re:Invent 2016
DEV206: Scaling Your Web Applications with AWS Elastic Beanstalk

AWS re:Invent 2016

Play Episode Listen Later Dec 24, 2016 52:00


AWS Elastic Beanstalk provides an easy way for you to quickly deploy, manage, and scale applications in the AWS Cloud. Through interactive demos and code samples, this session will teach you how to deploy your code using Elastic Beanstalk, provision and use other AWS services (Amazon SNS, Amazon SQS, and Amazon DynamoDB), use your application’s health metrics to tune performance, scale your application to handle millions of requests, perform zero-downtime deployments with traffic routing, and keep the underlying application platform up-to-date with managed updates. Code samples for demos will be available to all session attendees.
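For the scaling side of this session, here is a minimal boto3 sketch that widens an Elastic Beanstalk environment's Auto Scaling group so it can absorb more traffic. The environment name and the min/max values are placeholders, not recommendations.

```python
"""Adjust an Elastic Beanstalk environment's Auto Scaling bounds with boto3 (illustrative sketch)."""
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

eb.update_environment(
    EnvironmentName="my-app-prod",   # hypothetical environment name
    OptionSettings=[
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MinSize", "Value": "2"},
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MaxSize", "Value": "10"},
    ],
)
```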

AWS re:Invent 2016
DAT313: 6 Million New Registrations in 30 Days: How the Chick-fil-A One App Scaled with AWS

AWS re:Invent 2016

Play Episode Listen Later Dec 24, 2016 45:00


Chris leads the team providing back-end services for the massively popular Chick-fil-A One mobile app that launched in June 2016. Chick-fil-A follows AWS best practices for web services and leverages numerous AWS services, including Elastic Beanstalk, DynamoDB, Lambda, and Amazon S3. This was the largest technology-dependent promotion in Chick-fil-A history. To ensure their architecture would perform at unknown and massive scale, Chris worked with AWS Support through an AWS Infrastructure Event Management (IEM) engagement and leaned on automated operations to enable load testing before launch.

AWS re:Invent 2016
ARC205: Born in the Cloud; Built Like a Startup

AWS re:Invent 2016

Play Episode Listen Later Dec 24, 2016 50:00


This presentation provides a comparison of three modern architecture patterns that startups are building their businesses around. It includes a realistic analysis of cost, team management, and security implications of each approach. It covers Elastic Beanstalk, Amazon ECS, Docker, Amazon API Gateway, AWS Lambda, Amazon DynamoDB, and Amazon CloudFront.
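To give the serverless pattern in this comparison a concrete shape, here is a minimal Python Lambda handler that returns an API Gateway proxy-style response. The greeting logic is invented purely for illustration.

```python
"""A minimal AWS Lambda handler behind API Gateway (illustrative sketch)."""
import json

def handler(event, context):
    # With API Gateway's proxy integration, query parameters arrive in the event.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```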

Software Defined Talk
Episode 82: Attack of the two-pizza teams

Software Defined Talk

Play Episode Listen Later Dec 8, 2016 57:10


...Eventually, someone has to clean up the leftover pizza. ...That sweet OpEx. ..."Easy to stay." Amazon came out with a slew of features last week. This week we discuss them and take some cracks at the broad, portfolio approach at AWS compared to historic (like .Net) platform approaches. We also discuss footwear and what to eat and where to stay in Las Vegas. Footware Kenneth Cole slip on shoes (http://amzn.to/2gH6OzD). Keen Austin shoes, slip-on (http://amzn.to/2h2gveX) and lace (http://amzn.to/2ggll4y). The Doc Martin's Coté used to wear, Hickmire (http://amzn.to/2hlPnIJ). Mid-roll Coté: the Cloud Native roadshows are over, but check out the cloud native WIP I have at cote.io/cloud2 (http://cote.io/cloud2) or, just check out some excerpts on working with auditors (https://medium.com/@cote/auditors-your-new-bffs-918c8671897a#.et5tv7p7l), selecting initial projects (https://medium.com/@cote/getting-started-picking-your-first-cloud-native-projects-or-every-digital-transformation-starts-d0b1295f3712#.v7jpyjvro), and dealing with legacy (https://medium.com/built-to-adapt/deal-with-legacy-before-it-deals-with-you-cc907c800845#.ixtz1kqdz). Matt: Presenting at the CC Dojo #3, talking DevOps in Tokyo (https://connpass.com/event/46308/) AWS re:Invent Matt Ray heroically summarizes all here. Richard has a write-up as well (https://www.infoq.com/news/2016/12/aws-reinvent-recap). RedMonk re:Cap (http://redmonk.com/sogrady/2016/12/07/the-redmonk-reinvent-recap/) Global Partner Summit Don't hedge your bets, "AWS has no time for uncommitted partners" (http://www.zdnet.com/article/andy-jassy-warns-aws-has-no-time-for-uncommitted-partners/) "10,000 new Partners have joined the APN in the past 12 months" (https://aws.amazon.com/blogs/aws/aws-global-partner-summit-report-from-reinvent-2016/) Day 1 - "I'd like to tell you about…" Amazon Lightsail (https://aws.amazon.com/blogs/aws/amazon-lightsail-the-power-of-aws-the-simplicity-of-a-vps/) Monthly instances with memory, cpu, storage & static IP Bitnami! Hello Digital Ocean & Linode Amazon Athena (https://aws.amazon.com/blogs/aws/amazon-athena-interactive-sql-queries-for-data-in-amazon-s3/) S3 SQL queries, based on Presto distributed SQL engine JSON, CSV, log files, delimited text, others Coté: this seems pretty amazing. Amazon Rekognition (https://aws.amazon.com/blogs/aws/amazon-rekognition-image-detection-and-recognition-powered-by-deep-learning/) Image detection & recognition Amazon Polly (https://aws.amazon.com/blogs/aws/polly-text-to-speech-in-47-voices-and-24-languages/) Text to Speech in 47 Voices and 24 Languages Coté: Makes transcripts? Amazon Lex (https://aws.amazon.com/blogs/aws/amazon-lex-build-conversational-voice-text-interfaces/) Conversational voice & text interface builder (ie. chatbots) Coté: make chat-bots and such. AWS Greengrass (https://aws.amazon.com/blogs/aws/aws-greengrass-ubiquitous-real-world-computing/) Local Lambda processing for IoT Coté: is this supposed to be, like, for running Lambda things on disconnected devices? Like fPaaS in my car? AWS Snowball Edge & Snowmobile (https://aws.amazon.com/blogs/aws/aws-snowball-edge-more-storage-local-endpoints-lambda-functions/) Local processing of data? S3/NFS and local Lambda processing? 
I'm thinking easy hybrid on-ramp Not just me (https://twitter.com/CTOAdvisor/status/806320423881162753) More on it (http://www.techrepublic.com/article/how-amazon-is-moving-closer-to-on-premises-compute-with-snowball-edge/) Move exabytes in weeks (https://aws.amazon.com/blogs/aws/aws-snowmobile-move-exabytes-of-data-to-the-cloud-in-weeks/) "Snowmobile is a ruggedized, tamper-resistant shipping container 45 feet long, 9.6 feet high, and 8 feet wide. It is waterproof, climate-controlled, and can be parked in a covered or uncovered area adjacent to your existing data center." Coté: LEGOS! More instance types, Elastic GPUs, F1 Instances, PostgreSQL for Aurora High I/O (I3 3.3 million IOPs 16GB/s), compute (C5 72 vCPUs, 144 GiB), memory (R4 488 Gib), burstable (T2 shared) (https://aws.amazon.com/blogs/aws/ec2-instance-type-update-t2-r4-f1-elastic-gpus-i3-c5/) Mix EC2 instance type with a 1-8 GiB GPU (https://aws.amazon.com/blogs/aws/in-the-work-amazon-ec2-elastic-gpus/) More! (https://aws.amazon.com/blogs/aws/developer-preview-ec2-instances-f1-with-programmable-hardware/) F1: FPGA EC2 instances, also available for use in the AWS Marketplace (https://aws.amazon.com/blogs/aws/amazon-aurora-update-postgresql-compatibility/) RDS vs. Aurora Postgres? Aurora is more fault tolerant apparently? Day 2 AWS OpsWorks for Chef Automate (https://aws.amazon.com/opsworks/chefautomate/) Chef blog (https://blog.chef.io/2016/12/01/chef-automate-now-available-fully-managed-service-aws/) Fully managed Chef Server & Automate Previous OpsWorks now called "OpsWorks Stacks" Cloud Opinion approves the Chef strategy (https://twitter.com/cloud_opinion/status/804374597449584640) EC2 Systems Manager Tools for managing EC2 & on-premises systems (https://aws.amazon.com/ec2/systems-manager/) AWS Codebuild Managed elastic build service with testing (https://aws.amazon.com/blogs/aws/aws-codebuild-fully-managed-build-service/) AWS X-Ray (https://aws.amazon.com/blogs/aws/aws-x-ray-see-inside-of-your-distributed-application/) Distributed debugging service for EC2/ECS/Lambda? 
"easy way for developers to "follow-the-thread" as execution traverses EC2 instances, ECS containers, microservices, AWS database and messaging services" AWS Personal Health Dashboard (https://aws.amazon.com/blogs/aws/new-aws-personal-health-dashboard-status-you-can-relate-to/) Personalized AWS monitoring & CloudWatch Events auto-remediation Disruptive to PAAS monitoring & APM (New Relic, DataDog, App Dynamics) AWS Shield (https://aws.amazon.com/blogs/aws/aws-shield-protect-your-applications-from-ddos-attacks/) DDoS protection Amazon Pinpoint Mobile notification & analytics service (https://aws.amazon.com/blogs/aws/amazon-pinpoint-hit-your-targets-with-aws/) AWS Glue Managed data catalog & ETL (extract, transform & load) service for data analysis AWS Batch Automated AWS provisioning for batch jobs (https://aws.amazon.com/blogs/aws/aws-batch-run-batch-computing-jobs-on-aws/) C# in Lamba, Lambda Edge, AWS Step Functions Werner Vogels: "serverless, there is no cattle, only the herd" Lambda Edge (https://aws.amazon.com/blogs/aws/coming-soon-lambda-at-the-edge/) for running in response to CloudFront events, ""intelligent" processing of HTTP requests at a location that is close" More (https://aws.amazon.com/blogs/aws/new-aws-step-functions-build-distributed-applications-using-visual-workflows/) Step Functions a visual workflow "state machine" for Lambda functions More (https://serverless.zone/faas-is-stateless-and-aws-step-functions-provides-state-as-a-service-2499d4a6e412) BLOX (https://aws.amazon.com/blogs/compute/introducing-blox-from-amazon-ec2-container-service/): EC2 Container Service Scheduler Open source scheduler, watches CloudWatch events for managing ECS deployments Blox.github.io Analysis discussion for all the AWS stuff Jesus! I couldn't read it all! So, what's the role of Lambda here? It seems like the universal process thingy - like AppleScript, bash scripts, etc. for each part: if you need/want to add some customization to each thing, put a Lambda on it. What's the argument against just going full Amazon, in the same way you'd go full .Net, etc.? Is it cost? Lockin? Performance (people always talk about Amazon being kind of flakey at times - but what isn't flakey, your in-house run IT? Come on.) BONUS LINKS! Not covered in episode. Docker for AWS "EC2 Container Service, Elastic Beanstalk, and Docker for AWS all cost nothing; the only costs are those incurred by using AWS resources like EC2 or EBS." (http://www.infoworld.com/article/3145696/application-development/docker-for-aws-whos-it-really-for.html) Docker gets paid on usage? Apparently an easier learning curve than ECS + AWS services, but whither Blox? Time to Break up Amazon? Someone has an opinion (http://www.geekwire.com/2016/new-study-compares-amazon-19th-century-robber-barons-urges-policymakers-break-online-retail-giant/) HPE Discover, all about the "Hybrid Cloud" Hybrid it up! (http://www.zdnet.com/article/hpe-updates-its-converged-infrastructure-hybrid-cloud-software-lineup/) Killed "The Machine" (http://www.theregister.co.uk/2016/11/29/hp_labs_delivered_machine_proof_of_concept_prototype_but_machine_product_is_no_more/) HPE's Synergy software, based on OpenStack (is this just Helion rebranded?) 
Not great timing for a conference Sold OpenStack & CloudFoundry bits to SUSE (http://thenewstack.io/suse-add-hpes-openstack-cloud-foundry-portfolio-boost-kubernetes-investment/), the new "preferred Linux partner": How Google is Challenging AWS Ben on public cloud (https://stratechery.com/2016/how-google-cloud-platform-is-challenging-aws/) "open-sourcing Kubernetes was Google's attempt to effectively build a browser on top of cloud infrastructure and thus decrease switching costs; the company's equivalent of Google Search will be machine learning." Exponent.fm episode 097 — Google vs AWS (http://exponent.fm/episode-097-google-versus-aws/) Recommendations Brandon: Apple Wifi Calling (https://support.apple.com/en-us/HT203032) & Airplane mode (https://support.apple.com/en-us/HT204234). Westworld worth watching (http://www.hbo.com/westworld). Matt: Backyard Kookaburras (https://www.youtube.com/watch?v=DmNn7P59HcQ). Magpies too! (http://www.musicalsoupeaters.com/swooping-season/) This gif (https://media.giphy.com/media/wik7sKOl86OFq/giphy.gif). Coté: W Hotel in Las Vegas (http://www.wlasvegas.com/) and lobster eggs benedict (https://www.instagram.com/p/BNxAyQbjKCQ/) at Payard's in Ceasers' Outro: "I need my minutes," Soul Position (http://genius.com/Soul-position-i-need-my-minutes-lyrics).
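One of the announcements recapped in these show notes, Amazon Athena, runs SQL queries directly against data sitting in S3. Here is a minimal boto3 sketch of kicking off such a query; the database, table, and bucket names are placeholders.

```python
"""Start an Amazon Athena query against data in S3 with boto3 (illustrative sketch)."""
import boto3

athena = boto3.client("athena", region_name="us-east-1")

run = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},               # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
)
# Athena runs asynchronously; poll get_query_execution() with this ID until it finishes.
print(run["QueryExecutionId"])
```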

Engineers & Coffee
apparently i'm competing directly with google now

Engineers & Coffee

Play Episode Listen Later Apr 26, 2016 58:17


jukedeck retrospective episode give us feedback! engineers and coffee in google play Chicago AWS Summit EBS cold storage / throughput Managed platform update for Elastic Beanstalk S3 transfer acceleration and larger snowball Acceleration speed comparison tool "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." Andrew S. Tanenbaum, Computer Networks, 4th ed., p. 91 Kinesis updates Cognito User Pools Facebook Account Kit Device Farm Interactive Testing Octocast (Larry's podcast hosting project) deploying lambda from sbt Titanic: Machine Learning from Disaster | Kaggle Stanford Machine Learning Course
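One of the items in these show notes, S3 transfer acceleration, can be tried in a few lines. Here is a minimal boto3 sketch that enables it on a bucket and uploads through the accelerated endpoint; the bucket and file names are placeholders.

```python
"""Enable and use S3 Transfer Acceleration with boto3 (illustrative sketch)."""
import boto3
from botocore.config import Config

s3 = boto3.client("s3", region_name="us-east-1")

# Turn acceleration on for the bucket (a one-time setting).
s3.put_bucket_accelerate_configuration(
    Bucket="my-upload-bucket",                       # hypothetical bucket (no dots in the name)
    AccelerateConfiguration={"Status": "Enabled"},
)

# A client configured to route uploads through the accelerated edge endpoints.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("backup.tar.gz", "my-upload-bucket", "backups/backup.tar.gz")
```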

Ruby Rogues
218 RR AWS Deployments with Alex Wood and Trevor Rowe

Ruby Rogues

Play Episode Listen Later Jul 29, 2015 58:44


Check out RailsClips!   02:44 - Alex Wood Introduction Twitter GitHub 03:09 - Trevor Rowe Introduction Twitter GitHub 03:26 - What is offered by Amazon Web Services (AWS)? Elastic Beanstalk OpsWorks Alex's RailsConf 2015 Workshop 06:48 - Setup and Taking Incremental Steps (The Cloud as a Paradigm) Identity and Access Management “Make sure everything works” 12:19 - CloudFormation Tooling aws-sdk-ruby 15:19 - Data-Centric Services (Monitoring, Traceability, Visibility) CloudFormation S3 CloudFront Simple Email Service (SES) Simple Queuing Service (SQS) Simple Notification Service (SNS) DynamoDB AWS Lambda Amazon EC2 Container Service Logging CloudTrail CloudWatch CloudWatch Logs 23:48 - When to Use What (Getting Started) Simplicity vs Control 26:07 - Making Apps Run Better, General Optimizations Route 53 33:43 - Implementing AWS “Eat the elephant one bite at a time” 37:15 - Security Creating Visibility Without Opening an SSH Port     CloudWatch CloudWatch Logs Running Inside a Virtual Private Cloud (VPC) Why doesn’t security happen? 47:51 - Maintaining and Continually Improving Within Teams (Scalability) 56:33 - AWS Resources AWS Official Blog AWS Ruby Development Blog [GitHub] AWS   Picks Interview with Laurent Bossavit of the 10X Programmer and other Myths in Software Engineering (Jessica) Paracord (Chuck) Alex's RailsConf 2015 Workshop (Alex) Stranger in a Strange Land by Robert A. Heinlein (Alex) Kalzumeus Podcast (Alex) Gitter (Trevor) AWS Ruby Development Blog (Trevor)
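The episode's "visibility without opening an SSH port" segment centers on CloudWatch Logs. The show itself leans on aws-sdk-ruby, but here is a minimal boto3 sketch of the same idea in Python: ship an application event to CloudWatch Logs and read it back, never touching the box. Group and stream names are placeholders; in practice an agent or logging library usually does the shipping.

```python
"""Write to and read from CloudWatch Logs with boto3 (illustrative sketch)."""
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")
group, stream = "my-app/production", "web-1"      # hypothetical log group and stream

# Create the group and stream once; ignore the error if they already exist.
for call, kwargs in [
    (logs.create_log_group, {"logGroupName": group}),
    (logs.create_log_stream, {"logGroupName": group, "logStreamName": stream}),
]:
    try:
        call(**kwargs)
    except logs.exceptions.ResourceAlreadyExistsException:
        pass

# Push one application event.
logs.put_log_events(
    logGroupName=group,
    logStreamName=stream,
    logEvents=[{"timestamp": int(time.time() * 1000), "message": "deploy finished"}],
)

# Read recent events back without SSHing anywhere.
for event in logs.get_log_events(logGroupName=group, logStreamName=stream)["events"]:
    print(event["message"])
```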

CenturyLink Labs Podcast
CTL 003 - Real Docker Case-Study with Matt Butcher from Revolv

CenturyLink Labs Podcast

Play Episode Listen Later Jun 11, 2014 34:26


The #1 request from CenturyLink Labs readers is to hear about real-life case studies about Docker in production. This week, we are honored to have a fantastic use-case example of Docker in the real world. Matt Butcher is currently head of Cloud Services at Revolv… a crazy-cool home automation hub (internet of things) startup. Think Nest for everything else in the house. Works with your existing devices (Belkin, Hue, Honeywell, Sonos, etc). Matt Butcher has written 6 books on topics like Drupal, CMS, and LDAP. He also exclusively announced on our podcast that he is working now on a 7th (!!!) called Go in the Cloud. We have been super curious how a hot startup like Revolv, which has raised $7.3M in VC money, uses Docker. Here is just the audio podcast for those who are interested in listening on iTunes (subscribe):   How do you use Docker at Revolv? We are still running many core services on Virtual Machines. We have played with a half-dozen Docker technologies and haven't committed to any one just yet. But we have replaced our entire CI/CD solution with Drone (a Docker-based on-prem open-source CI/CD solution). It took about a week and a half. We had been using Jenkins and it was a nightmare. We are actively looking for more ways to incorporate Docker into production. We are seriously looking into using Amazon's Elastic Beanstalk with Docker, but haven't made commitments on it yet. You wrote "Why Containers Won't Beat VMs" a year ago. Do you still think that way? At the time of writing that article, Docker was just 3 months old and not well understood, and Virtual Machines were gang-busters. Containers looked like a faddy kind of toy. But I did not foresee the cool things that came out of the Docker community like CoreOS and Deis and the other micro-PaaSes. Containers are becoming a very elegant and compelling model for building applications. From a DevOps perspective, it is starting to look like Docker containers are the right way of doing things. What do you not like about Docker? A week ago, it would have been the perpetual putting off of the 1.0. But now that is out. My biggest concern right now is that the tools around Docker are immature, but this problem is being solved by the community right now. What is the biggest problem in real-life Docker adoption today? The biggest thing is that right now if I want to deploy Docker, I still have to use Virtual Machines and then put Docker on them. It would be great to have pure Docker hosting from one of the larger hosting providers out there. As a Docker user, are you interested/excited about libswarm or libchan? I am most excited about libcontainer. Seeing libchan, which gives Go channels at the network level, is very exciting too. I am still not sure what to make of libswarm. It appears to be something more for the ecosystem than for end-users. Are you using any orchestration or PaaS with Docker? Like CoreOS? Deis? Dokku? I first started playing with the Dokku PaaS a year ago, and I like the idea of a minimalist build-your-own PaaS. I think it is very promising, but it still takes hours to set up all the dependencies. We check into these projects every 2-3 months to see how they look. So far they are not robust and mature enough, but we think they will be within 2-3 months from now. However, we have backed off from PaaS and are going a little lower on the stack, closer to CoreOS. You have blogged about using Drone for CI/CD… how has your experience been with using CI/CD with Docker?
Drone works by pulling stuff out of your git repository, building a custom Docker image with whatever dependencies you need (binaries and otherwise), and then executing any arbitrary command you want. In Jenkins, even if you could wire up the code just the way you needed it, you were still running on the slave's OS, which may or may not match up with production. From the moment the Drone container finishes building, we know that the production environment will match exactly the same state as dev/test. With Drone you can also spin up database containers that match production database containers. This creates a much more robust workflow for testing things than what has been available before. How did you get into Go? What do you like about Go? What do you not like about Go? I started out doing Java for 10 years. Then I did PHP/Drupal for a while. When I joined Revolv, I joined as a Java developer. However, recently it felt like Java was nesting library upon library. With Go I was impressed that I was able to build a remarkably robust application with just the core libraries. On the other hand, the fact that Go compiles to a small size with low memory use meant that I could use dramatically fewer resources. In Go, not everything may be easy, but everything you need should be in the language. That is the sweet spot that I wanted in a language. PHP had too much built in and Java had too little, requiring you to use too many nested libraries. Go was a great middle ground.