Podcasts about Snowball Edge

  • 9 podcasts
  • 15 episodes
  • 35m average episode duration
  • Infrequent episodes
  • Latest episode: Jun 8, 2023

POPULARITY

Episode activity charted by year, 2017–2024.


Best podcasts about Snowball Edge

Latest podcast episodes about Snowball Edge

Screaming in the Cloud
Centralizing Cloud Security Breach Information with Chris Farris

Jun 8, 2023 · 35:06


Chris Farris, Cloud Security Nerd at PrimeHarbor Technologies, LLC, joins Corey on Screaming in the Cloud to discuss his new project, breaches.cloud, and why he feels having a centralized location for cloud security breach information is so important. Corey and Chris also discuss what it means to dive into entrepreneurship, including both the benefits of not having to work within a corporate structure and the challenges that come with running your own business. Chris also reveals what led him to start breaches.cloud, and what he's learned about some of the biggest cloud security breaches so far. About ChrisChris Farris is a highly experienced IT professional with a career spanning over 25 years. During this time, he has focused on various areas, including Linux, networking, and security. For the past eight years, he has been deeply involved in public-cloud and public-cloud security in media and entertainment, leveraging his expertise to build and evolve multiple cloud security programs.Chris is passionate about enabling the broader security team's objectives of secure design, incident response, and vulnerability management. He has developed cloud security standards and baselines to provide risk-based guidance to development and operations teams. As a practitioner, he has architected and implemented numerous serverless and traditional cloud applications, focusing on deployment, security, operations, and financial modeling.He is one of the organizers of the fwd:cloudsec conference and presented at various AWS conferences and BSides events. Chris shares his insights on security and technology on social media platforms like Twitter, Mastodon and his website https://www.chrisfarris.com.Links Referenced: fwd:cloudsec: https://fwdcloudsec.org/ breaches.cloud: https://breaches.cloud Twitter: https://twitter.com/jcfarris Company Site: https://www.primeharbor.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. My returning guest today is Chris Farris, now at PrimeHarbor, which is his own consultancy. Chris, welcome back. Last time we spoke, you were a Turbot, and now you've decided to go independent because you don't like sleep anymore.Chris: Yeah, I don't like sleep.Corey: [laugh]. It's one of those things where when I went independent, at least in my case, everyone thought that it was, oh, I have this grand vision of what the world could be and how I could look at these things, and that's going to just be great and awesome and everyone's going to just be a better world for it. In my case, it was, no, just there was quite literally nothing else for me to do that didn't feel like an exact reframing of what I'd already been doing for years. I'm a terrible employee and setting out on my own was important. It was the only way I found that I could wind up getting to a place of not worrying about getting fired all the time because that was my particular skill set. And I look back at it now, almost seven years in, and it's one of those things where if I had known then what I know now, I never would have started.Chris: Well, that was encouraging. Thank you [laugh].Corey: Oh, of course. 
And in sincerity, it's not one of those things where there's any one thing that stops you, but it's the, a lot of people get into the independent consulting dance because they want to do a thing and they're very good at that thing and they love that thing. The problem is, when you're independent, and at least starting out, I was spending over 70% of my time on things that were not billable, which included things like go and find new clients, go and talk to existing clients, the freaking accounting. One of the first hires I made was a fractional CFO, which changed my life. Up until that, my business partner and I were more or less dead reckoning of looking at the bank account and how much money is in there to determine if we could afford things. That's a very unsophisticated way of navigating. It's like driving by braille.Chris: Yeah, I think I went into it mostly as a way to define my professional identity outside of my W-2 employer. I had built cloud security programs for two major media companies and felt like that was my identity: I was the cloud security person for these companies. And so, I was like, ehh, why don't I just define myself as myself, rather than define myself as being part of a company that, in the media space, they are getting overwhelmed by change, and job security, job satisfaction, wasn't really something that I could count on.Corey: One of the weird things that I found—it's counterintuitive—is that when you're independent, you have gotten to a point where you have hit a point of sustainability, where you're not doing the oh, I'm just going to go work for 40 billable hours a week for a client. It's just like being an employee without a bunch of protections and extra steps. That doesn't work super well. But now, at the point where I'm at where the largest client we have is a single-digit percentage of revenue, I can't get fired anymore, without having a whole bunch of people suddenly turn on me because I've done something monstrous, in which case, I probably deserve not to have business anymore, or there's something systemic in the macro environment, which given that I do the media side and I do the cost-cutting side, I work on the way up, I work on the way down, I'm questioning what that looks like in a scenario that doesn't involve me hunting for food. But it's counterintuitive to people who have been employees their whole life, like I was, where, oh, it's risky and dangerous to go out on your own.Chris: It's risky and dangerous to be, you know, tied to a single, yeah, W-2 paycheck. So.Corey: Yeah. The question I'd like to ask is, how many people need to be really pissed off before you have one of those conversations with HR that doesn't involve giving you a cup of coffee? That's the tell: when you don't get coffee, it's a bad conversation.Chris: Actually, that you haven't seen [unintelligible 00:04:25] coffee these days. You don't want the cup of coffee, you know. That's—Corey: Even when they don't give you the crappy percolator navy coffee, like, midnight hobo diner style, it's still going to be a bad meeting because [unintelligible 00:04:37] pretend the coffee's palatable.Chris: Perhaps, yes. I like not having to deal with my own HR department. 
And I do agree that yeah, getting out of the W-2 space allows me to work on side projects that interests me or, you know, volunteer to do things like continuing the fwd:cloudsec, developing breaches.cloud, et cetera.Corey: I'll never forget, one of my last jobs I had a boss who walked past and saw me looking at Reddit and asked me if that was really the best use of my time. At first—it was in, I think, the sysadmin forum at the time, so yes, it was very much the best use of my time for the problem I was focusing on, but also, even if it wasn't, I spent an inordinate amount of time on social media, just telling stories and building audiences, on some level. That's the weird thing is that what counts as work versus what doesn't count as work gets very squishy when you're doing your own marketing.Chris: True. And even when I was a W-2 employee, I spent a lot of time on Twitter because Twitter was an intel source for us. It was like, “Hey, who's talking about the latest cloud security misconfigurations? Who's talking about the latest data breach? What is Mandiant tweeting about?” It was, you know—I consider it part of my job to be on Twitter and watching things.Corey: Oh, people ask me that. “So, you're on Twitter an awful lot. Don't you have a newsletter to write?” Like, yeah, where do you think that content comes from, buddy?Chris: Exactly. Twitter and Mastodon. And Reddit now.Corey: There's a whole argument to be had about where to find various things. For me at least, because I'm only security adjacent, I was always trying to report the news that other people had, not make the news myself.Chris: You don't want to be the one making the news in security.Corey: Speaking of, I'd like to talk a bit about what you just alluded to breaches.cloud. I don't think I've seen that come across my desk yet, which tells me that it has not been making a big splash just yet.Chris: I haven't been really announcing it; it got published the other night and so basically, yeah, is this is sort of a inaugural marketing push for breaches.cloud. So, what we're looking to do is document all the public cloud security breaches, what happened, why, and more importantly, what the companies did or didn't do that led to the security incident or the security breach.Corey: How are you slicing the difference between broad versus deep? And what I mean by that is, there are some companies where there are indictments and massive deep dives into everything that happens with timelines and blows-by-blows, and other times you wind up with the email that shows up one day of, “Security is very important to us. Now, listen to how we completely dropped the ball on it.” And it just makes the biggest description that they can get away with of what happened. Occasionally, you find out oh, it was an open S3 buckets, or they'll allude to something that sounds like it. Does that count for inclusion? Does it not? How do you make those editorial decisions?Chris: So, we haven't yet built a page around just all of the recipients of the Bucket Negligence Award. We're looking at the specific ones where there's been something that's happened that's usually involving IAM credentials—oftentimes involving IAM credentials found in GitHub—and what led to that. So, in a lot of cases, if there's a detailed company postmortem that they send their customers that said, “Hey, we goofed up, but complete transparency—” and then they hit all the bullet points of how they goofed up. 
Or in the case of certain others, like Uber, “Hey, we have court transcripts that we can go to,” or, “We have federal indictments,” or, “We have court transcripts, and federal indictments and FTC civil actions.” And so, we go through those trying to suss out what the company did or did not do that led to the breach. And really, the goal here is to be able to articulate as security practitioners, hey, don't attach S3 full access to this role on EC2. That's what got Capital One in trouble.Corey: I have a lot of sympathy for the Capital One breach and I wish they would talk about it more than they do, for obvious reasons, just because it was not, someone showed up and made a very obvious dumb decision, like, “Oh, that was what that giant red screaming thing in the S3 console means.” It was a series of small misconfigurations that led to another one, to another one, to another one, and eventually gets to a point where a sophisticated attacker was able to chain them all together. And yes, it's bad, yes, they're a bank and the rest, but I look at that and it's—that's the sort of exploit that you look at and it's okay, I see it. I absolutely see it. Someone was very clever, and a bunch of small things that didn't rise to the obvious. But they got dragged and castigated as if they basically had a four-character password that they'd left on the back of the laptop on a Post-It note in an airport lounge when their CEO was traveling. Which is not the case.Chris: Or all of the highlighting the fact that Paige Thompson was a former Amazon employee, making it seem like it was her insider abilities that lead to the incident, rather than she just knew that, hey, there's a metadata service and it gives me creds if I ask it.Corey: Right. That drove me nuts. There was no maleficence as an employee. And to be very direct, from what I understand of internal AWS controls, had there been, it would have been audited, flagged, caught, interdicted. I have talked to enough Amazonians that either a lot of them are lying to me very consistently despite not knowing each other, or they're being honest when they say that you can't get access to customer data using secret inside hacks.Chris: Yeah. I have reasonably good faith in AWS and their ability to not touch customer data in most scenarios. And I've had cases that I'm not allowed to talk about where Amazon has gone and accessed customer data, and the amount of rigmarole and questions and drilling that I got as a customer to have them do that was pretty intense and somewhat, actually, annoying.Corey: Oh, absolutely. And, on some level, it gets frustrating when it's a, look, this is a test account. I have nothing of sensitive value in here. I want the thing that isn't working to start working. Can I just give you a whole, like, admin-powered user account and we can move on past all of this? And their answer is always absolutely not.Chris: Yes. Or, “Hey, can you put this in our bucket?” “No, we can't even write to a public bucket or a bucket that, you know, they can share too.” So.Corey: An Amazonian had to mail me a hard drive because they could not send anything out of S3 to me.Chris: There you go.Corey: So, then I wound up uploading it back to S3 with, you know, a Snowball Edge because there's no overkill like massive overkill.Chris: No, the [snowmobile 00:11:29] would have been the massive overkill. But depending on where you live, you know, you might not have been able to get a permit to park the snowmobile there.Corey: They apparently require a loading dock. 
Same as with the outposts. I can't fake having one of those on my front porch yet.Chris: Ah. Well, there you go. I mean, you know it's the right height though, and you don't mind them ruining your lawn.Corey: So, help me understand. It makes sense to me at least, on some level, why having a central repository of all the various cloud security breaches in one place that's easy to reference is valuable. But what caused you to decide, you know, rather than saying it'd be nice to have, I'm going to go build that thing?Chris: Yeah, so it was actually right before the last time we spoke, Nicholas Sharp was indicted. And there was like, hey, this person was indicted for, you know, this cloud security case. And I'm like, that name rings a bell, but I don't remember who this person was. And so, I kind of realized that there's so many of these things happening now that I forget who is who. And so, when a new piece of news comes along, I'm like, where did this come from and how does this fit into what my knowledge of cloud security is and cloud security cases?So, I kind of realized that these are all running together in my mind. The Department of Justice only referenced ‘Company One,' so it wasn't clear to me if this even was a new cloud incident or one I already knew about. And so basically, I decided, okay, let's build this. Breaches.cloud was available; I think I kind of got the idea from hackingthe.cloud.And I had been working with some college students through the Collegiate Cyber Defense Competition, and I was like, “Hey, anybody want a spring research project that I will pay you for?” And so yeah, PrimeHarbor funded two college students to do quite a bit of the background research for me, I mentored them through, “Hey, so here's what this means,” and, “Hey, have we noticed that all of these seem to relate to credentials found in GitHub? You know, maybe there's a pattern here.” So, if you're not yet scanning for secrets in GitHub, I recommend you start scanning for secrets in your GitHub, private and public repos.Corey: Also, it makes sense to look at the history. Because, oh, I committed a secret. I'm going to go ahead and revert that commit and push that. That solves the problem, right?Chris: No, no, it doesn't. Yes, apparently, you can force push and delete an entire commit, but you really want to use a tool that's going to go back through the commit history and dig through it because as we saw in the Uber incident, when—the second Uber incident, the one that led to the CSOs conviction—yeah, the two attackers, [unintelligible 00:14:09] stuffed a Uber employee's personal GitHub account that they were also using for Uber work, and yeah, then they dug through all the source code and dug through the commit histories until they found a set of keys, and that's what they used for the second Uber breach.Corey: Awful when that hits. It's one of those things where it's just… [sigh], one thing leads to another leads to another. And on some level, I'm kind of amazed by the forensics that happen around all of these things. With the counterpoint, it is so… freakishly difficult, I think, for lack of a better term, just to be able to say what happened with any degree of certainty, so I can't help but wonder in those dark nights when the creeping dread starts sinking in, how many things like this happen that we just never hear about because they don't know?Chris: Because they don't turn on CloudTrail. Probably a number of them. 
Once the data gets out and shows up on the dark web, then people start knocking on doors. You know, Troy Hunt's got a large collection of data breach stuff, and you know, when there's a data breach, people will send him, “Hey, I found these passwords on the dark web,” and he loads them into Have I Been Pwned, and you know, [laugh] then the CSO finds out. So yeah, there's probably a lot of this that happens in the quiet of night, but once it hits the dark web, I think that data starts becoming available and the victimized company finds out.Corey: I am profoundly cynical, in case that was unclear. So, I'm wondering, on some level, what is the likelihood or commonality, I suppose, of people who are fundamentally just viewing security breach response from a perspective of step one, make sure my resume is always up to date. Because we talk about these business continuity plans and these DR approaches, but very often it feels like step one, secure your own mask before assisting others, as they always say on the flight. Where does personal preservation come in? And how does that compare with company preservation?Chris: I think down at the [IaC 00:16:17] level, I don't know of anybody who has not gotten a job because they had Equifax on their resume back in, what, 2017, 2018, right? Yes, the CSO, the CEO, the CIO probably all lost their jobs. And you know, now they're scraping by book deals and speaking engagements.Corey: And these things are always, to be clear, nuanced. It's rare that this is always one person's fault. If you're a one-person company, okay, yeah, it's kind of your fault, let's be clear here, but there are controls and cost controls and audit trails—presumably—for all of these things, so it feels like that's a relatively easy thing to talk around, that it was a process failure, not that one person sucked. “Well, didn't you design and implement the process?” “Yes. But it turned out there were some holes in it and my team reported that those weren't there and it turned out that they were and, well, live and learn.” It feels like that's something that could be talked around.Chris: It's an investment failure. And again, you know, if we go back to Harry Truman, “The buck stops here,” you know, it's the CEO who decides that, hey, we're going to buy a corporate jet rather than buy a [SIIM 00:17:22]. And those are the choices that happen at the top level that define, do you have a capable security team, and more importantly, do you have a capable security culture such that your security team isn't the only ones who are actually thinking about security?Corey: That's, I guess, a fair question. I saw a take on Twitter—which is always a weird thing—or maybe was Blue-ski or somewhere else recently, that if you don't have a C-level executive responsible for security with security in their title, your company does not take security seriously. And I can see that past a certain point of scale, but as a one-person company, do you have a designated CSO?Chris: As a one-person company and as a security company, I sort of do have a designated CSO. I also have, you know, the person who's like, oh, I'm going to not put MFA on the root of this one thing because, while it's an experiment and it's a sandbox and whatever else, but I also know that that's not where I'm going to be putting any customer data, so I can measure and evaluate the risk from both a security perspective and a business existential investment perspective. 
When you get to the larger the organization, the more detached the CEO gets from the risk and what the company is building and what the company is doing, is where you get into trouble. And lots of companies have C-level somebody who's responsible for security. It's called the CSO, but oftentimes, they report four levels down, or even more, from the chief executive who is actually the one making the investment decisions.Corey: On some level, the oh yeah, that's my responsibility, too, but it feels like it's a trap that falls into. Like, well, the CTO is responsible for security at a publicly traded company. Like, well… that tends to not work anymore, past certain points of scale. Like when I started out independently, yes, I was the CSO. I was also the accountant. I was also the head of marketing. I was also the janitor. There's a bunch of different roles; we all wear different hats at different times.I'm also not a big fan of shaming that oh, yeah. This is a universal truth that applies to every company in existence. That's also where I think Twitter started to go wrong where you would get called out whenever making an observation or witticism or whatnot because there was some vertex case to which it did not necessarily apply and then people would ‘well, actually,' you to death.Chris: Yeah. Well, and I think there's a lot of us in the security community who are in the security one-percenters. We're, “Hey, yes, I'm a cloud security person on a 15-person cloud security team, and here's this awesome thing we're doing.” And then you've got most of the other companies in this country that are probably below the security poverty line. They may or may not have a dedicated security person, they certainly don't have a SIIM, they certainly don't have anybody who's monitoring their endpoints for malware attacks or anything else, and those are the companies that are getting hit all the time with, you know, a lot of this ransomware stuff. Healthcare is particularly vulnerable to that.Corey: When you take a look across the industry, what is it that you're doing now at PrimeHarbor that you feel has been an unmet need in the space? And let me be clear, as of this recording earlier today, we signed a contract with you for a project. There's more to come on that in the future. So, this is me asking you to tell a story, not challenging, like, what do you actually do? This is not a refund request, let's be very clear here. But what's the unmet need that you saw?Chris: I think the unmet need that I see is we don't talk to our builder community. And when I say builder, I mean, developers, DevOps, sysadmins, whatever. AWS likes the term builder and I think it works. We don't talk to our builder community about risk in a way that makes sense to them. So, we can say, “Hey, well, you know, we have this security policy and section 24601 says that all data's classifications must be signed off by the data custodian,” and a developer is going to look at you with their head tilted, and be like, “Huh? What? I just need to get the sprint done.”Whereas if we can articulate the risk—and one of the reasons I wanted to do breaches.cloud was to have that corpus of articulated risk around specific things—I can articulate the risk and say, “Hey, look, you know how easy it is for somebody to go in and enumerate an S3 bucket? 
And then once they've enumerated and guessed that S3 bucket exists, they list it, and oh, hey, look, now that they've listed it, they know all of the objects and all of the juicy PII that you just made public.” If you demonstrate that to them, then they're going to be like, “Oh, I'm going to add the extra story point to this story to go figure out how to do CloudFront origin access identity.” And now you've solved, you know, one more security thing. And you've done in a way that not just giving a man a fish or closing the bucket for them, but now they know, hey, I should always use origin access identity. This is why I need to do this particular thing.Corey: One of the challenges that I've seen in a variety of different sites that have tried to start cataloging different breaches and other collections of things happening in public is the discoverability or the library management problem. The most obvious example of this is, of course, the AWS console itself, where when it paginates things like, oh, there are 3000 things here, ten at a time, through various pages for it. Like, the marketplace is just a joke of discoverability. How do you wind up separating the stuff that is interesting and notable, rather than, well, this has about three sentences to it because that's all the company would say?Chris: So, I think even the ones where there's three sentences, we may actually go ahead and add it to the repo, or we may just hold it as a draft, so that we know later on when, “Hey, look, here's a federal indictment for Company Three. Oh, hey, look. Company Three was actually this breach announcement that we heard about three months ago,” or even three years ago. So like, you know, Chegg is a great example of, you know, one of those where, hey, you know, there was an incident, and they disclosed something, and then, years later, FTC comes along and starts banging them over the head. And in the FTC documentation, or in the FTC civil complaint, we got all sorts of useful data.Like, not only were they using root API keys, every contractor and employee there was sharing the root API keys, so when they had a contractor who left, it was too hard to change the keys and share it with everybody, so they just didn't do that. The contractor still had the keys, and that was one of the findings from the FTC against Chegg. Similar to that, Cisco didn't turn off contractors' access, and I think—this is pure speculation—I think the poor contractor one day logged into his Google Cloud Shell, cd'ed into a Terraform directory, ran ‘terraform destroy', and rather than destroying what he thought he was destroying, it had the access keys back to Cisco WebEx and took down 400 EC2 instances that made up all of WebEx. 
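The origin access identity fix described above can be sketched in a handful of boto3 calls. This is a minimal sketch, assuming a placeholder bucket name and caller reference; newer AWS guidance favors origin access control (OAC) over the legacy OAI, but the shape of the lockdown is the same: block public access, then allow reads only to the CloudFront identity.

```python
import json
import boto3

BUCKET = "example-asset-bucket"  # placeholder; not a real bucket

cloudfront = boto3.client("cloudfront")
s3 = boto3.client("s3")

# 1. Create an origin access identity for CloudFront to read the bucket as.
oai = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "example-oai-2023",  # any unique string
        "Comment": "Only CloudFront may read this bucket",
    }
)
oai_id = oai["CloudFrontOriginAccessIdentity"]["Id"]

# 2. Turn on Block Public Access so the bucket can't be listed or read anonymously.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# 3. Grant GetObject to the OAI and nothing else; anonymous enumeration gets nothing useful.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

The OAI still has to be attached to the distribution's S3 origin for CloudFront to use it; that step is omitted here.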
These are the kinds of things that I think it's worth capturing because the stories are going to come out over time.Corey: What have you seen in your, I guess, so far, a limited history of curating this that—I guess, first what is it you've learned that you've started seeing as far as patterns go, as far as what warrants inclusion, what doesn't, and of course, once you started launching and going a bit more public with it, I'm curious to hear what the response from companies is going to be.Chris: So, I want to be very careful and clear that if I'm going to name somebody, that we're sourcing something from the criminal justice system, that we're not going to say, “Hey, everybody knows that it was Paige Thompson who was behind it.” No, no, here's the indictment that said it was Paige Thompson that was, you know, indicted for this Capital One sort of thing. All the data that I'm using, it all comes from public sources, it's all sited, so it's not like, hey, some insider said, “Hey, this is what actually happened.” You know? I very much learned from the Ubiquiti case that I don't want to be in the position of Brian Krebs, where it's the attacker themselves who's updating the site and telling us everything that went wrong, when in fact, it's not because they're in fact the perpetrator.Corey: Yeah, there's a lot of lessons to be learned. And fortunately, for what it's s—at least it seems… mostly, that we've moved past the battle days of security researchers getting sued on a whim from large companies for saying embarrassing things about them. Of course, watch me be tempting fate and by the time this publishes, I'll get sued by some company, probably Azure or whatnot, telling me that, “Okay, we've had enough of you saying bad things about our security.” It's like, well, cool, but I also read the complaint before you file because your security is bad. Buh-dum-tss. I'm kidding. I'm kidding. Please don't sue me.Chris: So, you know, whether it's slander or libel, depending on whether you're reading this or hearing it, you know, truth is an actual defense, so I think Microsoft doesn't have a case against you. I think for what we're doing in breaches, you know—and one of the reasons that I'm going to be very clear on anybody who contributes—and just for the record, anybody is welcome to contribute. The GitHub repo that runs breaches.cloud is public and anybody can submit me a pull request and I will take their write-ups of incidents. But whatever it is, it has to be sourced.One of the things that I'm looking to do shortly, is start soliciting sponsorships for breaches so that we can afford to go pull down the PACER documents. Because apparently in this country, while we have a right to a speedy trial, we don't have a right to actually get the court transcripts for less than ten cents a page. And so, part of what we need to do next is download those—and once we've purchased them, we can make them public—download those, make them public, and let everybody see exactly what the transcript was from the Capital One incident, or the Joey Sullivan trial.Corey: You're absolutely right. It drives me nuts that I have to wind up budgeting money for PACER to pull up court records. And at ten cents a page, it hasn't changed in decades, where it's oh, this is the cost of providing that data. It's, I'm not asking someone to walk to the back room and fax it to me. I want to be very clear here. 
It just feels like it's one of those areas where the technology and government is not caught up and it's—part of the problem is, of course, having no competition.Chris: There is that. And I think I read somewhere that the ent—if you wanted to download the entire PACER, it would be, like, $100 million. Not that you would do that, but you know, it is the moneymaker for the judicial system, and you know, they do need to keep the lights on. Although I guess that's what my taxes are for. But again, yes, they're a monopoly; they can do that.Corey: Wildly frustrating, isn't it?Chris: Yeah [sigh]… yeah, yeah, yeah. Yeah, I think there's a lot of value in the court transcripts. I've held off on publishing the Capital One case because one, well, already there's been a lot of ink spilled on it, and two, I think all the good detail is going to be in the trial transcripts from Paige Thompson's trial.Corey: So, I am curious what your take is on… well, let's called the ‘FTX thing.' I don't even know how to describe it at this point. Is it a breach? Is it just maleficence? Is it 15,000 other things? But I noticed that it's something that breaches.cloud does talk about a bit.Chris: Yeah. So, that one was a fascinating one that came out because as I was starting this project, I heard you know, somebody who was tweeting was like, “Hey, they were storing all of the crypto private keys in AWS Secrets Manager.” And I was like, “Errr?” And so, I went back and I read John J. Ray III's interim report to the creditors.Now, John Ray is the man who was behind the cleaning up of Enron, and his comment was “FTX is the”—“Never in my career have I seen such a complete failure of corporate controls and such a complete absence of trustworthy information as occurred here.” And as part of his general, broad write-up, they went into, in-depth, a lot of the FTX AWS practices. Like, we talk about, hey, you know, your company should be multi-account. FTX was worse. They had three or four different companies all operating in the same AWS account.They had their main company, FTX US, Alameda, all of them had crypto keys in Secrets Manager and there was no access control between any of those. And what ended up happening on the day that SBF left and Ray came in as CEO, the $400 million worth of crypto somehow disappeared out of FTX's wallets.Corey: I want to call this out because otherwise, I will get letters from the AWS PR spin doctors. Because on the surface of it, I don't know that there's necessarily a lot wrong with using Secrets Manager as the backing store for private keys. I do that with other things myself. The question is, what other controls are there? You can't just slap it into Secrets Manager and, “Well, my job is done. Let's go to lunch early today.”There are challenges [laugh] around the access levels, there are—around who has access, who can audit these things, and what happens. Because most of the secrets I have in Secrets Manager are not the sort of thing that is, it is now a viable strategy to take that thing and abscond to a country with a non-extradition treaty for the rest of my life, but with private keys and crypto, there kind of is.Chris: That's it. It's like, you know, hey, okay, the RDS database password is one thing, but $400 million in crypto is potentially another thing. Putting it in and Secrets Manager might have been the right answer, too. 
You get KMS customer-managed keys, you get full auditability with CloudTrail, everything else, but we didn't hear any of that coming out of Ray's report to the creditors. So again, the question is, did they even have CloudTrail turned on? He did explicitly say that FTX had not enabled GuardDuty.Corey: On some level, even if GuardDuty doesn't do anything for you, which in my case, it doesn't, but I want to be clear, you should still enable it anyway because you're going to get dragged when there's inevitable breach because there's always a breach somewhere, and then you get yelled at for not having turned on something that was called GuardDuty. You already sound negligent, just with that sentence alone. Same with Security Hub. Good name on AWS's part if you're trying to drive service adoption. Just by calling it the thing that responsible people would use, you will see adoption, even if people never configure or understand it.Chris: Yeah, and then of course, hey, you had Security Hub turned on, but you ignore the 80,000 findings in it. Why did you ignore those 80,000 findings? I find Security Hub to probably be a little bit too much noise. And it's not Security Hub, it's ‘Compliance Hub.' Everything—and I'm going to have a blog post coming out shortly—on this, everything that Security Hub looks at, it looks at it from a compliance perspective.If you look at all of its scoring, it's not how many things are wrong; it's how many rules you are a hundred percent compliant to. It is not useful for anybody below that AWS security poverty line to really master or to really operationalize.Corey: I really want to thank you for taking the time to catch up with me once again. Although now that I'm the client, I expect I can do this on demand, which is just going to be delightful. If people want to learn more, where can they find you?Chris: So, they can find breaches.cloud at, well https://breaches.cloud. If you're looking for me, I am either on Twitter, still, at @jcfarris, or you can find me and my consulting company, which is www.primeharbor.com.Corey: And we will, of course, put links to all of that in the [show notes 00:33:57]. Thank you so much for taking the time to speak with me. As always, I appreciate it.Chris: Oh, thank you for having me again.Corey: Chris Farris, cloud security nerd at PrimeHarbor. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that you're also going to use as the storage back-end for your private keys.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
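The point about turning on GuardDuty regardless translates into very little code. Here is a minimal single-account, single-region sketch with boto3; an organization-wide rollout with a delegated administrator account is more involved and isn't shown.

```python
import boto3

REGION = "us-east-1"  # assumption: repeat per region you actually operate in

guardduty = boto3.client("guardduty", region_name=REGION)
securityhub = boto3.client("securityhub", region_name=REGION)

# Enable GuardDuty if this region has no detector yet.
detector_ids = guardduty.list_detectors()["DetectorIds"]
if detector_ids:
    print(f"GuardDuty already enabled: detector {detector_ids[0]}")
else:
    detector_id = guardduty.create_detector(Enable=True)["DetectorId"]
    print(f"GuardDuty enabled: detector {detector_id}")

# Enable Security Hub; tolerate it already being on.
try:
    securityhub.enable_security_hub(EnableDefaultStandards=True)
    print("Security Hub enabled with default standards")
except securityhub.exceptions.ResourceConflictException:
    print("Security Hub was already enabled")
```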
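Chris's other recommendation from earlier in the episode, scanning repositories and their full history for committed secrets, can be approximated with nothing more than git and a regular expression. This is a deliberately naive sketch that only looks for AWS access key IDs; a real pipeline would use a dedicated scanner such as gitleaks or trufflehog.

```python
import re
import subprocess

# AWS access key IDs have a well-known shape; this catches only that one secret type.
AKID_PATTERN = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")

def scan_repo_history(repo_path: str) -> list[str]:
    """Return every line in the repo's full history that looks like an AWS access key ID."""
    # `-p` includes each commit's patch and `--all` walks every branch, so a secret
    # that was later reverted still shows up in the older commits that introduced it.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--all"],
        capture_output=True,
        text=True,
        check=True,
    )
    return [line for line in log.stdout.splitlines() if AKID_PATTERN.search(line)]

if __name__ == "__main__":
    hits = scan_repo_history(".")
    for line in hits:
        print(line)
    print(f"{len(hits)} suspicious line(s) found")
```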

AWS Morning Brief
Amazon's Snowball Edge Frustrates This User

Feb 22, 2023 · 8:27


AWS Morning Brief Extras edition for the week of February 22, 2023. Want to give your ears a break and read this as an article? You're looking for this link: https://www.lastweekinaws.com/blog/amazons-snowball-edge-frustrates-this-user. Never miss an episode: join the Last Week in AWS newsletter, and subscribe wherever you get your podcasts. Help the show: leave a review, share your feedback, or buy our merch at https://store.lastweekinaws.com. What's Corey up to? Follow Corey on Twitter (@quinnypig), see our recent work at the Duckbill Group, and apply to work with Corey and the Duckbill Group to help lower your AWS bill.

The Cloud Pod
164: The Cloud Pod SWIFT-ly Moves Its Money to Google Cloud

May 16, 2022 · 42:45


On The Cloud Pod this week, Peter's been suspended without pay for two weeks for not filing his vacation requests in triplicate. Plus it's earnings season once again, there's a major Google and SWIFT collaboration afoot, and MSK Serverless is now generally available, making Kafka management fairly hassle-free. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights

Screaming in the Cloud
Would You Kindly Remind with Peter Hamilton

Mar 31, 2022 · 40:17


About PeterPeter's spent more than a decade building scalable and robust systems at startups across adtech and edtech. At Remind, where he's VP of Technology, Peter pushes for building a sustainable tech company with mature software engineering. He lives in Southern California and enjoys spending time at the beach with his family.Links: Redis: https://redis.com/ Remind: https://www.remind.com/ Remind Engineering Blog: https://engineering.remind.com LinkedIn: https://www.linkedin.com/in/hamiltop Email: peterh@remind101.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Today's episode is brought to you in part by our friends at MinIO the high-performance Kubernetes native object store that's built for the multi-cloud, creating a consistent data storage layer for your public cloud instances, your private cloud instances, and even your edge instances, depending upon what the heck you're defining those as, which depends probably on where you work. It's getting that unified is one of the greatest challenges facing developers and architects today. It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run any workload, and the footprint to run anywhere, and that's exactly what MinIO offers. With superb read speeds in excess of 360 gigs and 100 megabyte binary that doesn't eat all the data you've gotten on the system, it's exactly what you've been looking for. Check it out today at min.io/download, and see for yourself. That's min.io/download, and be sure to tell them that I sent you.Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim its better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less that sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive a $100 in credit. Thats V-U-L-T-R.com slash screaming.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn and this is a fun episode. It is a promoted episode, which means that our friends at Redis have gone ahead and sponsored this entire episode. 
I asked them, “Great, who are you going to send me from, generally, your executive suite?” And they said, “Nah. You already know what we're going to say. We want you to talk to one of our customers.” And so here we are. My guest today is Peter Hamilton, VP of Technology at Remind. Peter, thank you for joining me.Peter: Thanks, Corey. Excited to be here.Corey: It's always interesting when I get to talk to people on promoted guest episodes when they're a customer of the sponsor because to be clear, you do not work for Redis. This is one of those stories you enjoy telling, but you don't personally have a stake in whether people love Redis, hate Redis, adopt that or not, which is exactly what I try and do on these shows. There's an authenticity to people who have in-the-trenches experience who aren't themselves trying to sell the thing because that is their entire job in this world.Peter: Yeah. You just presented three or four different opinions and I guarantee we felt all at the different times.Corey: [laugh]. So, let's start at the very beginning. What does Remind do?Peter: So, Remind is a messaging tool for education, largely K through 12. We support about 30 million active users across the country, over 2 million teachers, making sure that every student has, you know, equal opportunities to succeed and that we can facilitate as much learning as possible.Corey: When you say messaging that could mean a bunch of different things to a bunch of different people. Once on a lark, I wound up sitting down—this was years ago, so I'm sure the number is a woeful underestimate now—of how many AWS services I could use to send a message from me to you. And this is without going into the lunacy territory of, “Well, I can tag a thing and then mail it to you like a Snowball Edge or something.” No, this is using them as intended, I think I got 15 or 16 of them. When you say messaging, what does that mean to you?Peter: So, for us, it's about communication to the end-user. We will do everything we can to deliver whatever message a teacher or district administrator has to the user. We go through SMS, text messaging, we go through Apple and Google's push services, we go through email, we go through voice call, really pulling out all the stops we can to make sure that these important messages get out.Corey: And I can only imagine some of the regulatory pressure you almost certainly experience. It feels like it's not quite to HIPAA levels, where ohh, there's a private cause of action if any of this stuff gets out, but people are inherently sensitive about communications involving their children. I always sort of knew this in a general sense, and then I had kids myself, and oh, yeah, suddenly I really care about those sorts of things.Peter: Yeah. One of the big challenges, you can build great systems that do the correct thing, but at the end of the day, we're relying on a teacher choosing the right recipient when they send a message. And so we've had to build a lot of processes and controls in place, so that we can, kind of, satisfy two conflicting needs: One is to provide a clear audit log because that's an important thing for districts to know if something does happen, that we have clear communication; and the other is to also be able to jump in and intervene when something inappropriate or mistaken is sent out to the wrong people.Corey: Remind has always been one of those companies that has a somewhat exalted reputation in the AWS space. 
You folks have been early adopters of a bunch of different services—which let's be clear, in the responsible way, not the, “Well, they said it on stage; time to go ahead and put everything they just listed into production because we for some Godforsaken reason, view it as a todo list.”—but you've been thoughtful about how you approach things, and you have been around as a company for a while. But you've also been making a significant push toward being cloud-native by certain definitions of that term. So, I know this sounds like a college entrance essay, but what does cloud-native mean to you?Peter: So, one of the big gaps—if you take an application that was written to be deployed in a traditional data center environment and just drop it in the cloud, what you're going to get is a flaky data center.Corey: Well, that's unfair. It's also going to be extremely expensive.Peter: [laugh]. Sorry, an expensive, flaky data set.Corey: There we go. There we go.Peter: What we've really looked at–and a lot of this goes back to our history in the earlier days; we ran a top of Heroku and it was kind of the early days what they call the Twelve-Factor Application—but making aggressive decisions about how you structure your architecture and application so that you fit in with some of the cloud tools that are available and that you fit in, you know, with the operating models that are out there.Corey: When you say an aggressive decision, what sort of thing are you talking about? Because when I think of being aggressive with an approach to things like AWS, it usually involves Twitter, and I'm guessing that is not the direction you intend that to go.Peter: No, I think if you look at Twitter or Netflix or some of these players that, quite frankly, have defined what AWS is to us today through their usage patterns, not quite that.Corey: Oh, I mean using Twitter to yell at them explicitly about things—Peter: Oh.Corey: —because I don't do passive-aggressive; I just do aggressive.Peter: Got it. No, I think in our case, it's been plotting a very narrow path that allows us to avoid some of the bigger pitfalls. We have our sponsor here, Redis. Talk a little bit about our usage of Redis and how that's helped us in some of these cases. One of the pitfalls you'll find with pulling a non-cloud-native application and put it in the cloud is state is hard to manage.If you put state on all your machines and machines go down, networks fail, all those things, you now no longer have access to that state and we start to see a lot of problems. One of the decisions we've made is try to put as much data as we can into data stores like Redis or Postgres or something, in order to decouple our hardware from the state we're trying to manage and provide for users so that we're more resilient to those sorts of failures.Corey: I get the sense from the way that we're having this conversation, when you talk about Redis, you mean actual Redis itself, not ElastiCache for Redis, or as to I'm tending to increasingly think about AWS's services, Amazon Basics for Redis.Peter: Yeah. I mean, Amazon has launched a number of products. They have their ElastiCache, they have their new MemoryDB, there's a lot different ways to use this. 
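The pattern Peter describes, moving per-message state off the worker and into Redis, looks roughly like the following with redis-py. The key layout and field names are invented for illustration; they are not Remind's actual schema.

```python
import json
import time
import redis

# Assumption: connection details come from configuration; these values are placeholders.
r = redis.Redis(host="redis.example.internal", port=6379, decode_responses=True)

def record_outbound_message(message_id: str, recipient: str, body: str) -> None:
    """Persist in-flight delivery state so any worker (or its replacement) can resume it."""
    key = f"message:{message_id}"
    r.hset(key, mapping={
        "recipient": recipient,
        "body": body,
        "status": "sent",
        "sent_at": str(time.time()),
    })
    r.expire(key, 7 * 24 * 3600)  # keep state around for a week, then let it age out

def mark_delivered(message_id: str, receipt: dict) -> None:
    """Called from whichever worker happens to receive the delivery callback."""
    r.hset(f"message:{message_id}", mapping={
        "status": "delivered",
        "receipt": json.dumps(receipt),
    })

def get_message_state(message_id: str) -> dict:
    return r.hgetall(f"message:{message_id}")
```

Because the delivery state lives in Redis rather than in a worker's memory, an instance that disappears mid-delivery can be replaced and the new worker simply reads the same hash.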
We've relied pretty heavily on Redis, previously known as Redis Labs, and their enterprise product in their cloud, in order to take care of our most important data—which we just don't want to manage ourselves—trying to manage that on our own using something like ElastiCache, there's so many pitfalls, so many ways that we can lose that data. This data is important to us. By having it in a trusted place and managed by a great ops team, like they have at Redis, we're able to then lean in on the other aspects of cloud data to really get as much value as we can out of AWS.Corey: I am curious. As I said you've had a reputation as a company for a while in the AWS space of doing an awful lot of really interesting things. I mean, you have a robust GitHub presence, you have a whole bunch of tools that have come out Remind that are great, I've linked to a number of them over the years in the newsletter. You are clearly not afraid, culturally, to get your hands dirty and build things yourself, but you are using Redis Enterprise as opposed to open-source Redis. What drove that decision? I have to assume it's not, “Wait. You mean, I can get it for free as an open-source project? Why didn't someone tell me?” What brought you to that decision?Peter: Yeah, a big part of this is what we could call operating leverage. Building a great set of tools that allow you to get more value out of AWS is a little different story than babysitting servers all day and making sure they stay up. So, if you look through, most of our contributions in open-source space have really been around here's how to expand upon these foundational pieces from AWS; here's how to more efficiently launch a suite of servers into an auto-scaling group; here's, you know, our troposphere and other pieces there. This was all before Amazon CDK product, but really, it was, here's how we can more effectively use CloudFormation to capture our Infrastructure as Code. And so we are not afraid in any way to invest in our tooling and invest in some of those things, but when we look at the trade-off of directly managing stateful services and dealing with all the uncertainty that comes, we feel our time is better spent working on our product and delivering value to our users and relying on partners like Redis in order to provide that stability we need.Corey: You raise a good point. An awful lot of the tools that you've put out there are the best, from my perspective, approach to working with AWS services. And that is a relatively thin layer built on top of them with an eye toward making the user experience more polished, but not being so heavily opinionated that as soon as the service goes in a different direction, the tool becomes completely useless. You just decide to make it a bit easier to wind up working with specific environment variables or profiles, rather than what appears to be the AWS UX approach of, “Oh, now type in your access key, your secret key and your session token, and we've disabled copy and paste. Go, have fun.” You've really done a lot of quality of life improvements, more so than you have this is the entire system of how we do deploys, start to finish. It's opinionated and sort of a, like, a take on what Netflix, did once upon a time, with Asgard. It really feels like it's just the right level of abstraction.Peter: We did a pretty good job. I will say, you know, years later, we felt that we got it wrong a couple times. 
It's been really interesting to see that, that there are times when we say, “Oh, we could take these three or four services and wrap it up into this new concept of an application.” And over time, we just have to start poking holes in that new layer and we start to see we would have been better served by sticking with as thin a layer as possible that enables us, rather than trying to get these higher-level pieces.Corey: It's remarkably refreshing to hear you say that just because so many people love to tell the story on podcasts, or on conference stages, or whatever format they have of, “This is what we built.” And it is an aspirationally superficial story about this. They don't talk about that, “Well, firstly, without these three wrong paths first.” It's always a, “Oh, yes, obviously, we are smart people and we only make the correct decision.”And I remember in the before times sitting in conference talks, watching people talk about great things they'd done, and I'll turn next to the person next to me and say, “Wow, I wish I could be involved in a project like that.” And they'll say, “Yeah, so do I.” And it turns out they work at the company the speaker is from. Because all of these things tend to be the most positive story. Do you have an example of something that you have done in your production environment that going back, “Yeah, in hindsight, I would have done that completely differently.”Peter: Yeah. So, coming from Heroku moving into AWS, we had a great open-source project called Empire, which kind of bridge that gap between them, but used Amazon's ECS in order to launch applications. It was actually command-line compatible with the Heroku command when it first launched. So, a very big commitment there. And at the time—I mean, this comes back to the point I think you and I were talking about earlier, where architecture, costs, infrastructure, they're all interlinked.And I'm a big fan of Conway's Law, which says that an organization's structure needs to match its architecture. And so six, seven years ago, we're heavy growth-based company and we are interns running around, doing all the things, and we wanted to have really strict guardrails and a narrow set of things that our development team could do. And so we built a pretty constrained: You will launch, you will have one Docker image per ECS service, it can only do these specific things. And this allowed our development team to focus on pretty buttons on the screen and user engagement and experiments and whatnot, but as we've evolved as a company, as we built out a more robust business, we've started to track revenue and costs of goods sold more aggressively, we've seen, there's a lot of inefficient things that come out of it.One particular example was we used PgBouncer for our connection pooling to our Postgres application. In the traditional model, we had an auto-scaling group for a PgBouncer, and then our auto-scaling groups for the other applications would connect to it. And we saw additional latency, we saw additional cost, and we eventually kind of twirl that down and packaged that PgBouncer alongside the applications that needed it. And this was a configuration that wasn't available on our first pass; it was something we intentionally did not provide to our development team, and we had to unwind that. 
And when we did, we saw better performance, we saw better cost efficiency, all sorts of benefits that we care a lot about now that we didn't care about as much, many years ago.Corey: It sounds like you're describing some semblance of an internal platform, where instead of letting all your engineers effectively, “Well, here's the console. Ideally, you use some form of Infrastructure as Code. Good luck. Have fun.” You effectively gate access to that. Is that something that you're still doing or have you taken a different approach?Peter: So, our primary gate is our Infrastructure as Code repository. If you want to make a meaningful change, you open up a PR, got to go through code review, you need people to sign off on it. Anything that's not there may not exist tomorrow. There's no guarantees. And we've gone around, occasionally just shut random servers down that people spun up in our account.And sometimes people will be grumpy about it, but you really need to enforce that culture that we have to go through the correct channels and we have to have this cohesive platform, as you said, to support our development efforts.Corey: So, you're a messaging service in education. So, whenever I do a little bit of digging into backstories of companies and what has made, I guess, an impression, you look for certain things and explicit dates are one of them, where on March 13th of 2020, your business changed just a smidgen. What happened other than the obvious, we never went outside for two years?Peter: [laugh]. So, if we roll back a week—you know, that's March 13th, so if we roll back a week, we're looking at March 6th. On that day, we sent out about 60 million messages over all of our different mediums: Text, email, push notifications. On March 13th that was 100 million, and then, a few weeks later on March 30th, that was 177 million. And so our traffic effectively tripled over the course of those three weeks. And yeah, that's quite a ride, let me tell you.Corey: The opinion that a lot of folks have who have not gotten to play in sophisticated distributed systems is, “Well, what's the hard part there you have an auto-scaling group. Just spin up three times the number of servers in that fleet and problem solved. What's challenging?” A lot, but what did you find that the pressure points were?Peter: So, I love that example, that your auto-scaling group will just work. By default, Amazon's auto-scaling groups only support 1000 backends. So, when your auto-scaling group goes from 400 backends to 1200, things break, [laugh] and not in ways that you would have expected. You start to learn things about how database systems provided by Amazon have limits other than CPU and memory. And they're clearly laid out that there's network bandwidth limits and things you have to worry about.We had a pretty small team at that time and we'd gotten this cadence where every Monday morning, we would wake up at 4 a.m. Pacific because as part of the pandemic, our traffic shifted, so our East Coast users would be most active in the morning rather than the afternoon. And so at about 7 a.m. on the east coast is when everyone came online. And we had our Monday morning crew there and just looking to see where the next pain point was going to be.And we'd have Monday, walk through it all, Monday afternoon, we'd meet together, we come up with our three or four hypotheses on what will break, if our traffic doubles again, and we'd spend the rest of that next week addressing those the best we could and repeat for the next Monday. 
And we did this for three, four or five weeks in a row, and finally, it stabilized. But yeah, it's all the small little things, the things you don't know about, the limits in places you don't recognize that just catch up to you. And you need to have a team that can move fast and adapt quickly.Corey: You've been using Redis for six, seven years, something along those lines, as an enterprise offering. You've been working with the same vendor who provides this managed service for a while now. What are the fruits of that relationship? What is the value that you see by continuing to have a long-term relationship with vendors? Because let's be serious, most of us don't stay in jobs that long, let alone work with the same vendor.Peter: Yeah. So, coming back to the March 2020 story, many of our vendors started to see some issues here that various services weren't scaled properly. We made a lot of phone calls to a lot of vendors in working with them, and I… very impressed with how Redis Labs at the time was able to respond. We hopped on a call, they said, “Here's what we think we need to do, we'll go ahead and do this. We'll sort this out in a few weeks and figure out what this means for your contract. We're here to help and support in this pandemic because we recognize how this is affecting everyone around the world.”And so I think when you get in those deeper relationships, those long-term relationships, it is so helpful to have that trust, to have a little bit of that give when you need it in times of crisis, and that they're there and willing to jump in right away.Corey: There's a lot to be said for having those working relationships before you need them. So often, I think that a lot of engineering teams just don't talk to their vendors to a point where they may as well be strangers. But you'll see this most notably because—at least I feel it most acutely—with AWS service teams. They'll do a whole kickoff when the enterprise support deal is signed, three years go passed, and both the AWS team and the customer's team have completely rotated since then, and they may as well be strangers. Being able to have that relationship to fall back on in those really weird really, honestly, high-stress moments has been one of those things where I didn't see the value myself until the first time I went through a hairy situation where I found that that was useful.And now it's oh, I—I now bias instead for, “Oh, I can fit to the free tier of this service. No, no, I'm going to pay and become a paying customer.” I'd rather be a customer that can have that relationship and pick up the phone than someone whining at people in a forum somewhere of, “Hey, I'm a free user, and I'm having some problems with production.” Just never felt right to me.Peter: Yeah, there's nothing worse than calling your account rep and being told, “Oh, I'm not your account rep anymore.” Somehow you missed the email, you missed who it was. Prior to Covid, you know—and we saw this many, many years ago—one of the things about Remind is every back-to-school season, our traffic 10Xes in about three weeks. And so we're used to emergencies happening and unforeseen things happening. And we plan through our year and try to do capacity planning and everything, but we been around the block a couple of times.And so we have a pretty strong culture now leaning in hard with our support reps. We have them in our Slack channels. Our AWS team, we meet with often. Redis Labs, we have them on Slack as well. 
We're constantly talking about databases that may or may not be performing as we expect them to. They're an extension of our team; we have an incident, we get paged. If it's related to one of the services, we hit them in Slack immediately and have them start checking on the back end while we're checking on our side. So.Corey: One of the biggest takeaways I wish more companies would have is that when you are dependent upon another company to effectively run your production infrastructure, they are no longer your vendor, they're your partner, whether you want them to be or not. And approaching it with that perspective really pays dividends down the road.Peter: Yeah. One of the things you get when you've been at a company for a long time and been in a relationship for a long time is growing together, and that is always an interesting approach. And sometimes there are some painful points; sometimes you're on an old legacy version of their product that you were literally the last customer on, and you've got to work with them to move off of it. But you were there six years ago when they were just starting out, and they've seen how you grow, and you've seen how they've grown, and you've kind of been able to marry that experience together in a meaningful way.Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of “Hello, World” demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking, databases, observability, management, and security. And—let me be clear here—it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build. With Always Free, you can do things like run small-scale applications or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free? This is actually free, no asterisk. Start now. Visit snark.cloud/oci-free; that's snark.cloud/oci-free.Corey: Redis is, these days, a data platform, but once upon a time, I viewed it as more of a caching layer. And I admit that the capabilities of the platform have significantly advanced since those days when I viewed it purely through the lens of a cache. But one of the interesting parts is that neither one of those use cases, in my mind, blends particularly well with heavy use of Spot Fleets, but you're doing exactly that. What are your folks doing over there?Peter: [laugh]. Yeah, so as I mentioned earlier, coming back to some of the Twelve-Factor App design, we heavily rely on Redis as sort of a distributed heap. One of our challenges of delivering all these messages is every single message has its in-flight state: Here's the content, here's who we sent it to, we wait for them to respond. On a traditional application, you might have one big server that stores it all in-memory, and you get the incoming requests, and you match things up. By moving all that state to Redis, all of our workers, all of our application servers, we know they can disappear at any point in time. We use Amazon's Spot Instances and their Spot Fleet for all of our production traffic.
Every single web service, every single worker that we have runs on this infrastructure, and we would not be able to do that if we didn't have a reliable and robust place to store this data that is in-flight and currently being accessed. So, we'll have a couple hundred gigs of data at any point in time in a Redis database, just representing in-flight work that's happening on various machines.Corey: It's really neat seeing Spot Fleets being used as something more than a theoretical possibility. It's something I've always been very interested in, obviously, given the potential cost savings; they approach “cheap as free” in some cases. But it turns out—we talked earlier about the idea of being cloud-native versus the rickety, expensive data center in the cloud, and an awful lot of applications are simply not built in a way that says yeah, we're just going to randomly turn off a subset of your systems, ideally, with two minutes of notice, but all right, have fun with that. And a lot of times, it just becomes a complete non-starter, even for stateless workloads, just based upon how all of these things are configured. It is really interesting to watch a company that has been entrusted with an awful lot of responsibility embrace that mindset. It's a lot more rare than you'd think.Peter: Yeah. And again, you know, sometimes, we overbuild things, and sometimes we go down paths that may have been a little excessive, but it really comes down to your architecture. You know, it's not just having everything running on Spot. It's making effective use of SQS and other queueing products at Amazon to provide checkpointing abilities, and so you know that should you lose an instance, you're only going to lose a few seconds of productive work on that particular workload and be able to pick back up where you left off. It's properly using auto-scaling groups. From the financial side, there's all sorts of weird quirks you'll see. You know, the Spot market has a wonderful set of dynamics where the big instances are much, much cheaper per CPU than the small ones are on the Spot market. And so it's structuring things in a way that you can colocate different workloads onto the same hosts and hedge against a host going down by spreading across multiple availability zones. I think there's definitely a point where having enough workload, having enough scale, allows you to take advantage of these things, but it all comes down to the architecture and design that really enables it.Corey: So, you've been using Redis for longer than I think many of our listeners have been in tech.Peter: [laugh].Corey: And the key distinguishing point for me between someone who is an advocate for a technology and someone who's a zealot—or a pure critic—is that they can identify use cases for which it is great and use cases for which it is not likely to be a great experience. In your time with Redis, what have you found that it's been great at, and what are some areas that you would encourage people to consider more carefully before diving into it?Peter: So, we like to joke that five, six years ago, most of our development process was, “I've hit a problem. Can I use Redis to solve that problem?” And so we've tried every solution possible with Redis. We've done all the things. We have a number of very complicated Lua scripts that are managing different keys in an atomic way. Some of these have been more successful than others, for sure.
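A minimal sketch of the "distributed heap" pattern Peter describes, assuming redis-py: in-flight message state lives in a Redis hash so that any worker, including one reclaimed by the Spot market, can pick it up, and a small Lua script makes the status transition atomic. The key layout, field names, and TTL here are illustrative assumptions for the example, not Remind's actual schema.

```python
# Illustrative "distributed heap" sketch with redis-py (not Remind's real schema).
import uuid

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Atomic check-and-set in Lua: mark a message 'delivered' only if it is still
# pending, so two workers racing on the same message cannot both claim it.
MARK_IF_PENDING = r.register_script("""
local status = redis.call('HGET', KEYS[1], 'status')
if status == 'pending' then
    redis.call('HSET', KEYS[1], 'status', ARGV[1])
    return 1
end
return 0
""")


def enqueue_message(recipient: str, body: str, ttl_seconds: int = 3600) -> str:
    """Record a message's in-flight state so any worker can later resolve it."""
    message_id = str(uuid.uuid4())
    key = f"msg:{message_id}"
    r.hset(key, mapping={"recipient": recipient, "body": body, "status": "pending"})
    r.expire(key, ttl_seconds)  # abandoned state eventually ages out
    return message_id


def mark_delivered(message_id: str) -> bool:
    """True only for the first worker to successfully claim the transition."""
    return bool(MARK_IF_PENDING(keys=[f"msg:{message_id}"], args=["delivered"]))
```

The Lua check-and-set is the piece that keeps two workers from both claiming the same delivery, which is the kind of atomic key management the conversation alludes to.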
Right now, our biggest philosophy is, if it is data we need quickly, and it is data that is important to us, we put it in Enterprise Redis, the cloud product from Redis. For other use cases, there are a dozen things that you can use for a cache; Redis is great for cache, memcache does a decent job as well; you're not going to see a meaningful difference between those sorts of products. Where we've struggled a little bit has been when we have essentially relational data that we need fast access to. And we're still trying to find a clear path forward here because you can do it and you can have atomic updates and you can kind of simulate some of the ACID characteristics you would have in a relational database, but it adds a lot of complexity. And that's a lot of overhead to our team as we're continuing to develop these products, to extend them, to fix any bugs you might have in there. And so we're kind of recalibrating a bit, and some of those workloads are moving to other data stores where they're more appropriate. But at the end of the day, it's data that we need fast, and it's data that's important, so we're sticking with what we've got here because it's been working pretty well.Corey: It sounds almost like you started off with the mindset of one database for a bunch of different use cases and you're starting to differentiate into purpose-built databases for certain things. Or is that not entirely accurate?Peter: There's a little bit of that. And I think coming back to some of our tooling, as we kind of jumped on a bit of the microservice bandwagon, we would see, here's a small service that only has a small amount of data that needs to be stored. It wouldn't make sense to bring up an RDS instance, or an Aurora instance, for that, you know, in Postgres. Let's just store it in an easy store like Redis. And some of those cases have been great, some of them have been a little problematic. And so as we've invested in our tooling to make all our databases accessible and make it less of a weird trade-off between what the product needs, what we can do right now, and what we want to do long-term, and reduce that friction, we've been able to be much more deliberate about the data source that we choose in each case.Corey: It's very clear that you're speaking with a voice of experience on this where this is not something that you just woke up and figured out. One last area I want to go into with you is when I asked you what it is you care about primarily as an engineering leader and as you look at serving your customers well, you effectively had a dual answer, almost off the cuff, of stability and security. I find the two of those things are deeply intertwined in most of the conversations I have, but they're rarely called out explicitly in quite the way that you do. Talk to me about that.Peter: Yeah, so in our wild journey, stability has always been a challenge. And we've always—you know—been in early startup mode, where you're constantly pushing: what can we ship? How quickly can we ship it? And in our particular space, we feel that this communication that we foster between teachers and students and their parents is incredibly important, and is a thing that we take very, very seriously.
And so, a couple years ago, we were trying to create this balance and create not just a language that we could talk about on a podcast like this, but really a way of framing these concepts for our company internally: To our engineers, to help them think, as they're building a feature, what are the things they should think about, what are the concerns beyond the product spec; to work with our marketing and sales team to help them understand why we're making these investments that may not get a particular feature out by X date but are still worthwhile investments. So, from the security side, we've really focused on building out robust practices and robust controls that don't necessarily lock us into a particular standard, like PCI compliance or things like that, but really focusing on the maturity of our company and, you know, our culture as we go forward. And so we're in a place now where we are ISO 27001; we're heading into our third year. We leaned in hard on our disaster recovery processes, we've leaned in hard on our bug bounties and pen tests, and kind of found this incremental approach where, you know, day one, I remember we turned on our bug bounty and it was a scary day as the reports kept coming in. But we take on one thing at a time and continue to build on it and make it an essential part of how we build systems.Corey: It really has to be built in. It feels like security is not something that can be slapped on as an afterthought, however much companies try to do that. Especially, again, as we started this episode with, you're dealing with communication with people's kids. That is something that people have remarkably little sense of humor around. And rightfully so. Seeing that there is as much care, if not more, taken around security as around stability is generally the sign of a well-run organization. If there's a security lapse, I expect certain vendors to rip the power out of their data centers rather than run in an insecure fashion. And your job done correctly—which clearly you have gotten to—means that you never have to make that decision because you've approached this the right way from the beginning. Nothing's perfect, but there's always the idea of actually caring about it being the first step.Peter: Yeah. And the other side of that was talking about stability, and again, it's avoiding the either/or situation. We work in, alongside those two—stability and security—our cost of goods sold and our operating leverage in other aspects of our business. And in every single one of them, our co-number-one priorities are stability and security. And if it costs us a bit more money, if it takes our dev team a little longer, there's not a choice at that point. We're doing the correct thing.Corey: Saving money is almost never the primary objective of any company that you really want to be dealing with unless something bizarre is going on.Peter: Yeah. Our philosophy on, you know, any cost reduction has been this should have zero negative impact on our stability. If we do not feel we can safely do this, we won't. And coming back to the Spot Instance piece, that was a journey for us. And you know, we tested the waters a bit and we got to a point, working very closely with Amazon's team, where we came to the conclusion that we could safely do this. And we've been doing it for over a year and seen no adverse effects.Corey: Yeah. And a lot of shops I've talked to folks about well, when we go and do a consulting project, it's, “Okay.
There's a lot of things that could have been done before we got here. Why hasn't any of that been addressed?” And the answer is, “Well. We tried to save money once and it caused an outage and then we weren't allowed to save money anymore. And here we are.” And I absolutely get that perspective. It's a hard balance to strike. It always is.Peter: Yeah. The other aspect where stability and security kind of intertwine is you can think about security as InfoSec in our systems and locking things down, but at the end of the day, why are we doing all that? It's for the benefit of our users. And for Remind, as a communication platform, the safety and security of our users is dependent on us being up and available so that teachers can reach out to parents with important communication. And things like attendance, things like natural disasters, or lockdowns, or any of the number of difficult situations schools find themselves in. Part of why we take that stewardship so seriously is that being up and protecting a user's data just has such a huge impact on education in this country.Corey: It's always interesting to talk to folks who insist they're making the world a better place. And it's, “What do you do?” “We're improving ad relevance.” I mean, “Okay, great, good for you.” You're serving a need; I would not shy away from classifying what you do, fundamentally, as critical infrastructure, and that is always a good conversation to have. It's nice being able to talk to folks who are doing things that you can unequivocally look at and say, “This is a good thing.”Peter: Yeah. And around 80% of public schools in the US are using Remind in some capacity. And so we're not a product that's used in just a few regions; it's all across the board. One of my favorite things about working at Remind is meeting people and telling them where I work, and they recognize it. They say, “Oh, I have that app, I use that app. I love it.” And I spent years in ads before this, and you know, I've been there and no one ever told me they were glad to see an ad. That's never the case. And it's been quite a rewarding experience coming in every day, and as you said, being part of this critical infrastructure. That's a special thing.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry and insulting comment that you will then hope that Remind sends out to 20 million students all at once.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
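For readers who want a concrete picture of the Spot-friendly worker pattern discussed in the episode above, here is a rough, hypothetical sketch in Python with boto3: SQS acts as the checkpoint (work is only deleted from the queue once it has finished), and the worker drains gracefully when EC2 posts a Spot interruption notice to instance metadata. The queue URL and the processing function are placeholders, not anything taken from Remind's systems.

```python
# Hypothetical Spot-friendly worker: SQS is the checkpoint, and the loop drains
# gracefully when EC2 signals a Spot interruption via instance metadata.
# QUEUE_URL and process_message() are placeholders. IMDSv2 token handling is omitted.
import urllib.error
import urllib.request

import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-work-queue"
sqs = boto3.client("sqs")


def spot_interruption_pending() -> bool:
    """EC2 publishes this metadata path roughly two minutes before reclaiming the instance."""
    try:
        urllib.request.urlopen(
            "http://169.254.169.254/latest/meta-data/spot/instance-action", timeout=1
        )
        return True  # 200 response: an interruption is scheduled
    except urllib.error.HTTPError:
        return False  # 404: no interruption pending
    except urllib.error.URLError:
        return False  # not running on EC2 (e.g. local testing)


def process_message(body: str) -> None:
    print("processing:", body)  # stand-in for real work


def run_worker() -> None:
    while not spot_interruption_pending():
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10
        )
        for msg in resp.get("Messages", []):
            process_message(msg["Body"])
            # Deleting the message is the checkpoint: only finished work is acknowledged,
            # and anything in flight becomes visible again after the visibility timeout.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])


if __name__ == "__main__":
    run_worker()
```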

The Six Five with Patrick Moorhead and Daniel Newman
Exploring AWS’ Recent Announcements - The Six Five Insiders Edition

The Six Five with Patrick Moorhead and Daniel Newman

Play Episode Listen Later May 14, 2020 26:32


On this episode of The Six Five - Insiders Edition, hosts Patrick Moorhead and Daniel Newman welcome Jeff Barr, Vice President and Chief Evangelist at AWS, to discuss AWS' response to COVID-19 and exciting new product announcements.

AWS' Response to COVID-19

During the COVID-19 pandemic, AWS has been focused on helping customers continue to operate as efficiently as possible. One of the first things the company did was make it easier for businesses to get access to Amazon WorkSpaces, a virtual desktop in the cloud. They also had special offers for Amazon Chime and Amazon WorkDocs to help companies facilitate collaboration.

In addition to helping customers, AWS launched the Diagnostic Development Initiative (DDI) to provide support for innovation around both testing and development of different kinds of solutions. They've also made AWS compute power more available to researchers who are studying different possible solutions and vaccines.

Finally, AWS launched a public data lake with curated data sets so experimenters could actually run queries on real data, which will hopefully allow scientists and researchers to find a cure faster.

AWS Virtual Summit

Traditionally, AWS hosts Global Summits throughout the year, but given the current stay-at-home orders still in place, they've had to pivot to a virtual summit, the first of which was Wednesday, May 13.

The summit is free, online, and anyone interested can join. The event is designed to bring the cloud computing community together to connect, collaborate, and learn about AWS. Attendees will hear from CTO Werner Vogels, CEO Andy Jassy, and several other AWS employees who are subject matter experts in various AWS categories.

Breaking Down AWS Announcements

Jeff, Patrick, and Daniel spent time discussing several of the new AWS announcements from the last few months that are making a difference for customers all over the world.

Amazon Macie

Macie is a security service that uses machine learning to automatically discover and protect sensitive data in AWS. This service has been around for about a year, but AWS has recently made some great additions, including updating the machine learning models so customers can scan for even more types of private information. They've also added some customizability if customers have special data types that might have different kinds of proprietary or sensitive data inside. The best part is AWS has lowered the price to a fifth of what it previously cost, so more customers are able to benefit from the different services.

Amazon Elasticsearch Service

Data is growing exponentially in quantity and size, and Amazon Elasticsearch Service customers need new ways to store and access data as efficiently as possible, specifically data that was collected for the long term. The existing storage tier, called Hot, is for quickly accessed storage. AWS just introduced a new tier, UltraWarm, that will hold the more historic data that customers don't need as often and that will take slightly longer to access.

Amazon AppFlow

SaaS applications have been highly functional, but the data created and collected across these many applications has effectively been in a silo. Amazon AppFlow is a service that allows customers to securely transfer data between SaaS applications and AWS. Customers can unlock access to that data, making it easier for data to flow from the SaaS app into AWS as well as the other way around.
Customers can run data flows on a schedule, in response to an event, or on demand, basically whenever they need them.

Amazon Kendra

AWS' enterprise search tool Kendra gives customers powerful natural language search capabilities across websites and applications so users can easily find information in the data spread across the enterprise.

While most search tools use keyword queries, Kendra is able to use natural language questions to search through portals, wikis, databases, and document repositories to find whatever is needed. It captures not only the data but also the access permissions for that data.

Amazon Augmented AI (A2I)

Many current machine learning applications require humans to review predictions to ensure results are correct. Amazon Augmented AI, or A2I, makes it easy to build the workflows required for these reviews and provides a built-in review system for common machine learning use cases.

Customers are able to choose from three different categories of human reviewers. They can use the 500,000 global workers available through Mechanical Turk, there is a set of third-party organizations that have a base of pre-authorized workers, or organizations can make use of a private pool of workers.

AWS Snow Family

The Snow Family devices are physical devices that have both storage and local compute power, making it easy to migrate data into and out of AWS. Recently AWS launched an improved version of the Snowball Edge Storage Optimized devices. These devices provide both block storage and Amazon S3-compatible object storage.

AWS did a hardware refresh, added additional processing power, and added some additional SSD storage inside. Now if you launch EC2 instances on the Snowball Edge, those instances have access to this SSD-powered storage.

You can use these devices for data collection, machine learning and processing, and storage in environments with limited connectivity, giving customers the ability to use them basically anywhere.

Finally, AWS also made it easier for customers to set up and manage Snow Family devices with AWS OpsHub, a graphical user interface where customers can unlock, configure, and copy data to and from the device via drag and drop, even if they're not connected to the Internet.

If you'd like to learn more about any of these announcements, be sure to visit the AWS website. Make sure to listen to the entire episode below and, while you're at it, be sure to hit subscribe so you never miss an episode.
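As a rough illustration of the natural-language querying described above, this is roughly what a Kendra query looks like from boto3; the index ID and the question are placeholders, and an index with ingested documents is assumed to already exist.

```python
# Illustrative natural-language query against an existing Amazon Kendra index (boto3).
# The index ID and question are placeholders.
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

response = kendra.query(
    IndexId="00000000-0000-0000-0000-000000000000",  # placeholder index ID
    QueryText="How do I request parental leave?",
)

# Kendra returns ranked passages rather than plain keyword hits.
for item in response["ResultItems"]:
    title = item.get("DocumentTitle", {}).get("Text", "(untitled)")
    excerpt = item.get("DocumentExcerpt", {}).get("Text", "")
    print(f"- {title}\n  {excerpt[:120]}")
```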

Podcast AWS Brasil
EP3: Trilha Updates – Novidades da semana com Vinicius Senger

Podcast AWS Brasil

Play Episode Listen Later Apr 27, 2020 5:15


In this episode, Vinicius Senger, our Developer Advocate, talks about some of this week's news, such as Online Tech Talks, Deep Composer, and Snowball Edge, and also about the reduction in Data Transfer Out prices for the São Paulo region.

AWS re:Invent 2019
STG214: Data migration and edge computing with the AWS Snow family

AWS re:Invent 2019

Play Episode Listen Later Dec 7, 2019 53:04


Many organizations still have data they want to move to AWS, and have disconnected, remote field operations that require local computing capabilities. The AWS Snow family (AWS Snowball, AWS Snowball Edge, and AWS Snowmobile) helps you move large quantities of data offline and enables data processing and storage at edge locations when network capacity is constrained or nonexistent. This session provides an update on the Snow family and dives into the operational details of migrating data and how to build edge computing architectures with Snowball Edge. Learn when and how to use the service for your migration and computing needs.
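As a rough companion to the session description, ordering a Snowball Edge device for a data migration is itself just an API call. The sketch below uses boto3's create_job; every ID, ARN, and bucket name is a placeholder, and the shipping address and IAM role are assumed to already exist.

```python
# Hypothetical example of ordering a Snowball Edge import job with boto3.
# All IDs, ARNs, and bucket names are placeholders.
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")

job = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE",
    SnowballCapacityPreference="T100",
    ShippingOption="SECOND_DAY",
    Description="Bulk import of field-site archives",
    AddressId="ADID00000000-0000-0000-0000-000000000000",  # placeholder address ID
    RoleARN="arn:aws:iam::123456789012:role/snowball-import-role",  # placeholder role
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-migration-bucket"}]},
)

print("Created Snow job:", job["JobId"])
```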

AWS Podcast
#341: November 2019 Update Show

AWS Podcast

Play Episode Listen Later Nov 10, 2019 39:12


Simon and Nicki share a broad range of interesting updates! 00:48 Storage 01:43 Compute 05:27 Networking 16:07 Databases 12:03 Developer Tools 13:18 Analytics 19:06 IoT 20:42 Customer Engagement 21:03 End User Computing 22:31 Machine Learning 25:27 Application Integration 27:35 Management and Governance 29:17 Media 30:53 Security 32:56 Blockchain 33:14 Quick Starts 33:51 Training 36:11 Public Datasets 37:12 Robotics Shownotes: Topic || Storage AWS Snowball Edge now supports offline software updates for Snowball Edge devices in air-gapped environments | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-snowball-edge-now-supports-offline-software-updates-for-snowball-edge-devices-in-air-gapped-environments/ Topic || Compute Now Available: Amazon EC2 High Memory Instances with up to 24 TB of memory, Purpose-built to Run Large In-memory Databases, like SAP HANA | https://aws.amazon.com/about-aws/whats-new/2019/10/now-available-amazon-ec2-high-memory-instances-purpose-built-run-large-in-memory-databases/ Introducing Availability of Amazon EC2 A1 Bare Metal Instances | https://aws.amazon.com/about-aws/whats-new/2019/10/introducing-availability-of-amazon-ec2-a1-bare-metal-instances/ Windows Nodes Supported by Amazon EKS | https://aws.amazon.com/about-aws/whats-new/2019/10/windows-nodes-supported-by-amazon-eks/ Amazon ECS now Supports ECS Image SHA Tracking | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-ecs-now-supports-ecs-image-sha-tracking/ AWS Serverless Application Model feature support updates for Amazon API Gateway and more | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-serverless-application-model-feature-support-updates-for-amazon-api-gateway-and-more/ Queuing Purchases of EC2 RIs | https://aws.amazon.com/about-aws/whats-new/2019/10/queuing-purchases-of-ec2-ris/ Topic || Network AWS Direct Connect Announces the Support for Granular Cost Allocation and Removal of Payer ID Restriction for Direct Connect Gateway Association. 
| https://aws.amazon.com/about-aws/whats-new/2019/10/aws-direct-connect-aws-direct-connect-announces-the-support-for-granular-cost-allocation-and-removal-of-payer-id-restriction-for-direct-connect-gateway-association/ AWS Direct Connect Announces Resiliency Toolkit to Help Customers Order Resilient Connectivity to AWS | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-direct-connect-announces-resiliency-toolkit-to-help-customers-order-resilient-connectivity-to-aws/ Amazon VPC Traffic Mirroring Now Supports AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-vpc-traffic-mirroring-now-supports-aws-cloudformation/ Application Load Balancer and Network Load Balancer Add New Security Policies for Forward Secrecy with More Stringent Protocols and Ciphers | https://aws.amazon.com/about-aws/whats-new/2019/10/application-load-balancer-and-network-load-balancer-add-new-security-policies-for-forward-secrecy-with-more-strigent-protocols-and-ciphers/ Topic || Databases Amazon RDS on VMware is now generally available | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-rds-on-vmware-is-now-generally-available/ Amazon RDS Enables Detailed Backup Storage Billing | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-rds-enables-detailed-backup-storage-billing/ Amazon RDS for PostgreSQL Supports Minor Version 11.5, 10.10, 9.6.15, 9.5.19, 9.4.24, adds Transportable Database Feature | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-rds-for-postgresql-supports-minor-version-115-1010-9615-9515-9424-adds-transportable-database-feature/ Amazon ElastiCache launches self-service updates for Memcached and Redis Cache Clusters | https://aws.amazon.com/about-aws/whats-new/2019/10/elasticache-memcached-self-service-updates/ Amazon DocumentDB (with MongoDB compatibility) adds additional Aggregation Pipeline Capabilities including $lookup | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-documentdb-add-additional-aggregation-pipeline-capabilities/ Amazon Neptune now supports Streams to capture graph data changes | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-neptune-now-supports-streams-to-capture-graph-data-changes/ Amazon Neptune now supports SPARQL 1.1 federated query | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-neptune-now-supports-SPARQL-11-federated-query/ Topic || Developer Tools AWS CodePipeline Enables Setting Environment Variables on AWS CodeBuild Build Jobs | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-codepipeline-enables-setting-environment-variables-on-aws-codebuild-build-jobs/ AWS CodePipeline Adds Execution Visualization to Pipeline Execution History | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-codepipeline-adds-execution-visualization-to-pipeline-execution-history/ Topic || Analytics Amazon Redshift introduces AZ64, a new compression encoding for optimized storage and high query performance | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-redshift-introduces-az64-a-new-compression-encoding-for-optimized-storage-and-high-query-performance/ Amazon Redshift Improves Performance of Inter-Region Snapshot Transfers | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-redshift-improves-performance-of-inter-region-snapshot-transfers/ Amazon Elasticsearch Service provides option to mandate HTTPS | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-elasticsearch-service-provides-option-to-mandate-https/ Amazon Athena now provides an interface VPC endpoint | 
https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-athena-now-provides-an-interface-VPC-endpoint/ Amazon Kinesis Data Firehose adds cross-account delivery to Amazon Elasticsearch Service | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-kinesis-data-firehose-adds-cross-account-delivery-to-amazon-elasticsearch-service/ Amazon Kinesis Data Firehose adds support for data stream delivery to Amazon Elasticsearch Service 7.x clusters | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-kinesis-data-firehose-adds-support-data-stream-delivery-amazon-elasticsearch-service/ Amazon QuickSight announces Data Source Sharing, Table Transpose, New Filtering and Analytical Capabilities | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-quicksight-announces-data-source-sharing-table-transpose-new-filtering-analytics-capabilities/ AWS Glue now provides ability to use custom certificates for JDBC Connections | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-glue-now-provides-ability-to-use-custom-certificates-for-jdbc-connections/ You can now expand your Amazon MSK clusters and deploy new clusters across 2-AZs | https://aws.amazon.com/about-aws/whats-new/2019/10/now-expand-your-amazon-msk-clusters-and-deploy-new-clusters-across-2-azs/ Amazon EMR Adds Support for Spark 2.4.4, Flink 1.8.1, and the Ability to Reconfigure Multiple Master Nodes | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-emr-adds-support-for-spark-2-4-4-flink-1-8-1-and-ability-to-reconfigure-multiple-master-nodes/ Topic || IoT Two New Solution Accelerators for AWS IoT Greengrass Machine Learning Inference and Extract, Transform, Load Functions | https://aws.amazon.com/about-aws/whats-new/2019/10/two-new-solution-accelerators-for-aws-iot-greengrass-machine-lea/ AWS IoT Core Adds the Ability to Retrieve Data from DynamoDB using Rule SQL | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-iot-core-adds-ability-to-retrieve-data-from-dynamodb-using-rule-sql/ PSoC 62 Prototyping Kit is now qualified for Amazon FreeRTOS | https://aws.amazon.com/about-aws/whats-new/2019/10/psoc-62-prototyping-kit-qualified-for-amazon-freertos/ Topic || Customer Engagement Amazon Pinpoint Adds Support for Message Templates | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-pinpoint-adds-support-for-message-templates/ Topic || End User Computing Amazon AppStream 2.0 adds support for 4K Ultra HD resolution on 2 monitors and 2K resolution on 4 monitors | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-appstream-2-adds-support-for-4k-ultra-hd-resolution-on-2-monitors-and-2k-resolution-on-4-monitors/ Amazon AppStream 2.0 Now Supports FIPS 140-2 Compliant Endpoints | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-appstream-2-now-supports-fips-140-2-compliant-endpoints/ Amazon Chime now supports screen sharing from Mozilla Firefox and Google Chrome without a plug-in or extension | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-chime-now-supports-screen-sharing-from-mozilla-firefox-and-google-chrome-without-a-plug-in-or-extension/ Topic || Machine Learning Amazon Translate now adds support for seven new languages - Greek, Romanian, Hungarian, Ukrainian, Vietnamese, Thai, and Urdu | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-translate-adds-support-seven-new-languages/ Introducing Amazon SageMaker ml.p3dn.24xlarge instances, optimized for distributed machine learning with up to 4x the network bandwidth of ml.p3.16xlarge instances | 
https://aws.amazon.com/about-aws/whats-new/2019/10/introducing-amazon-sagemaker-mlp3dn24xlarge-instances/ SageMaker Notebooks now support diffing | https://aws.amazon.com/about-aws/whats-new/2019/10/sagemaker-notebooks-now-support-diffing/ Amazon Lex Adds Support for Checkpoints in Session APIs | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-lex-adds-support-for-checkpoints-in-session-apis/ Amazon SageMaker Ground Truth Adds Built-in Workflows for the Verification and Adjustment of Data Labels | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-sagemaker-ground-truth-adds-built-in-workflows-for-verification-and-adjustment-of-data-labels/ AWS Chatbot Now Supports Notifications from AWS Config | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-chatbot-now-supports-notifications-from-aws-config/ AWS Deep Learning Containers now support PyTorch | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-deep-learning-containers-now-support-pytorch/ Topic || Application Integration AWS Step Functions expands Amazon SageMaker service integration | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-step-functions-expands-amazon-sagemaker-service-integration/ Amazon EventBridge now supports AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-eventbridge-supports-aws-cloudformation/ Amazon API Gateway now supports access logging to Amazon Kinesis Data Firehose | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-api-gateway-now-supports-access-logging-to-amazon-kinesis-data-firehose/ Topic || Management and Governance AWS Backup Enhances SNS Notifications to filter on job status | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-backup-enhances-sns-notifications-to-filter-on-job-status/ AWS Managed Services Console now supports search and usage-based filtering to improve change type discovery | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-managed-services-console-now-supports-search-and-usage-based-filtering-to-improve-change-type-discovery/ AWS Console Mobile Application Launches Federated Login for iOS | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-console-mobile-application-launches-federated-login-for-ios/ Topic || Media Announcing New AWS Elemental MediaConvert Features for Accelerated Transcoding, DASH, and AVC Video Quality | https://aws.amazon.com/about-aws/whats-new/2019/10/announcing-new-aws-elemental-mediaconvert-features-for-accelerated-transcoding-dash-and-avc-video-quality/ Topic || Security Amazon Cognito Increases CloudFormation Support | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-cognito-increases-cloudformation-support/ Amazon Inspector adds CIS Benchmark support for Windows 2016 | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-inspector-adds-cis-benchmark-support-for-windows-2016/ AWS Firewall Manager now supports management of Amazon VPC security groups | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-firewall-manager-now-supports-management-of-amazon-vpc-security-groups/ Amazon GuardDuty Adds Three New Threat Detections | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-guardduty-adds-three-new-threat-detections/ Topic || Block Chain New Quick Start deploys Amazon Managed Blockchain | https://aws.amazon.com/about-aws/whats-new/2019/10/new-quick-start-deploys-amazon-managed-blockchain/ Topic || AWS Quick Starts New Quick Start deploys TIBCO JasperReports Server on AWS | 
https://aws.amazon.com/about-aws/whats-new/2019/10/new-quick-start-deploys-tibco-jasperreports-server-on-aws/ Topic || Training New Training Courses Teach New APN Partners to Better Help Their Customers | https://aws.amazon.com/about-aws/whats-new/2019/10/new-training-courses-teach-new-apn-partners-to-better-help-their-customers/ New Courses Available to Help You Grow and Accelerate Your AWS Cloud Skills | https://aws.amazon.com/about-aws/whats-new/2019/10/new-courses-available-to-help-you-grow-and-accelerate-your-aws-cloud-skills/ New Digital Course on Coursera - AWS Fundamentals: Migrating to the Cloud | https://aws.amazon.com/about-aws/whats-new/2019/10/new-digital-course-on-coursera-aws-fundamentals-migrating-to-the-cloud/ Topic || Public Data Sets New AWS Public Datasets Available from Audi, MIT, Allen Institute for Cell Science, Finnish Meteorological Institute, and others | https://aws.amazon.com/about-aws/whats-new/2019/10/new-aws-public-datasets-available/ Topic || Robotics AWS RoboMaker introduces support for Robot Operating System 2 (ROS2) in beta release | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-robomaker-introduces-support-robot-operating-system-2-beta-release/

AWS Podcast
#328: August 2019 Update Show #1

AWS Podcast

Play Episode Listen Later Aug 18, 2019 55:44


It is a MASSIVE episode of updates that Simon and Nikki do their best to cover! There is also an EXTRA SPECIAL bonus just for AWS Podcast listeners! Special Discount for Intersect Tickets: https://int.aws/podcast use discount code 'podcast' - note that tickets are limited! Chapters: 02:19 Infrastructure 03:07 Storage 05:34 Compute 13:47 Network 14:54 Databases 17:45 Migration 18:36 Developer Tools 21:39 Analytics 29:25 IoT 33:24 End User Computing 34:08 Machine Learning 40:21 AR and VR 41:11 Application Integration 43:57 Management and Governance 48:04 Customer Engagement 49:13 Media 50:17 Mobile 50:36 Security 51:26 Gaming 51:39 Robotics 52:13 Training Shownotes: Special Discount for Intersect Tickets: https://int.aws/podcast use discount code 'podcast' - note that tickets are limited! Topic || Infrastructure Announcing the new AWS Middle East (Bahrain) Region | https://aws.amazon.com/about-aws/whats-new/2019/07/announcing-the-new-aws-middle-east--bahrain--region-/ Topic || Storage EBS default volume type updated to GP2 | https://aws.amazon.com/about-aws/whats-new/2019/07/ebs-default-volume-type-updated-to-gp2/ AWS Backup will Automatically Copy Tags from Resource to Recovery Point | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-backup-will-automatically-copy-tags-from-resource-to-recovery-point/ Configuration update for Amazon EFS encryption of data in transit | https://aws.amazon.com/about-aws/whats-new/2019/07/configuration-update-for-amazon-efs-encryption-data-in-transit/ AWS Snowball and Snowball Edge available in Seoul – Amazon Web Services | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-snowball-and-aws-snowball-edge-available-in-asia-pacific-seoul-region/ Amazon S3 adds support for percentiles on Amazon CloudWatch Metrics | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-s3-adds-support-for-percentiles-on-amazon-cloudwatch-metrics/ Amazon FSx Now Supports Windows Shadow Copies for Restoring Files to Previous Versions | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-fsx-now-supports-windows-shadow-copies-for-restoring-files-to-previous-versions/ Amazon CloudFront Announces Support for Resource-Level and Tag-Based Permissions | https://aws.amazon.com/about-aws/whats-new/2019/08/cloudfront-resource-level-tag-based-permission/ Topic || Compute Amazon EC2 AMD Instances are Now Available in additional regions | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-amd-instances-available-in-additional-regions/ Amazon EC2 P3 Instances Featuring NVIDIA Volta V100 GPUs now Support NVIDIA Quadro Virtual Workstation | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-p3-nstances-featuring-nvidia-volta-v100-gpus-now-support-nvidia-quadro-virtual-workstation/ Introducing Amazon EC2 I3en and C5n Bare Metal Instances | https://aws.amazon.com/about-aws/whats-new/2019/08/introducing-amazon-ec2-i3en-and-c5n-bare-metal-instances/ Amazon EC2 C5 New Instance Sizes are Now Available in Additional Regions | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-ec2-c5-new-instance-sizes-are-now-available-in-additional-regions/ Amazon EC2 Spot Now Available for Red Hat Enterprise Linux (RHEL) | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-spot-now-available-red-hat-enterprise-linux-rhel/ Amazon EC2 Now Supports Tagging Launch Templates on Creation | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-now-supports-tagging-launch-templates-on-creation/ Amazon EC2 On-Demand Capacity Reservations Can Now Be Shared Across 
Multiple AWS Accounts | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-on-demand-capacity-reservations-shared-across-multiple-aws-accounts/ Amazon EC2 Fleet Now Lets You Modify On-Demand Target Capacity | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-ec2-fleet-modify-on-demand-target-capacity/ Amazon EC2 Fleet Now Lets You Set A Maximum Price For A Fleet Of Instances | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-ec2-fleet-now-lets-you-submit-maximum-price-for-fleet-of-instances/ Amazon EC2 Hibernation Now Available on Ubuntu 18.04 LTS | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-hibernation-now-available-ubuntu-1804-lts/ Amazon ECS services now support multiple load balancer target groups | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecs-services-now-support-multiple-load-balancer-target-groups/ Amazon ECS Console now enables simplified AWS App Mesh integration | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecs-console-enables-simplified-aws-app-mesh-integration/ Amazon ECR now supports increased repository and image limits | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecr-now-supports-increased-repository-and-image-limits/ Amazon ECR Now Supports Immutable Image Tags | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecr-now-supports-immutable-image-tags/ Amazon Linux 2 Extras now provides AWS-optimized versions of new Linux Kernels | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-linux-2-extras-provides-aws-optimized-versions-of-new-linux-kernels/ Lambda@Edge Adds Support for Python 3.7 | https://aws.amazon.com/about-aws/whats-new/2019/08/lambdaedge-adds-support-for-python-37/ AWS Batch Now Supports the Elastic Fabric Adapter | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-batch-now-supports-elastic-fabric-adapter/ Topic || Network Elastic Fabric Adapter is officially integrated into Libfabric Library | https://aws.amazon.com/about-aws/whats-new/2019/07/elastic-fabric-adapter-officially-integrated-into-libfabric-library/ Now Launch AWS Glue, Amazon EMR, and AWS Aurora Serverless Clusters in Shared VPCs | https://aws.amazon.com/about-aws/whats-new/2019/08/now-launch-aws-glue-amazon-emr-and-aws-aurora-serverless-clusters-in-shared-vpcs/ AWS DataSync now supports Amazon VPC endpoints | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-datasync-now-supports-amazon-vpc-endpoints/ AWS Direct Connect Now Supports Resource Based Authorization, Tag Based Authorization, and Tag on Resource Creation | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-direct-connect-now-supports-resource-based-authorization-tag-based-authorization-tag-on-resource-creation/ Topic || Databases Amazon Aurora Multi-Master is Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-aurora-multimaster-now-generally-available/ Amazon DocumentDB (with MongoDB compatibility) Adds Aggregation Pipeline and Diagnostics Capabilities | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-documentdb-with-mongodb-compatibility-adds-aggregation-pipeline-and-diagnostics-capabilities/ Amazon DynamoDB now helps you monitor as you approach your account limits | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-dynamodb-now-helps-you-monitor-as-you-approach-your-account-limits/ Amazon RDS for Oracle now supports new instance sizes | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-rds-for-oracle-now-supports-new-instance-sizes/ Amazon RDS for Oracle 
Supports Oracle Management Agent (OMA) version 13.3 for Oracle Enterprise Manager Cloud Control 13c | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-rds-for-oracle-supports-oracle-management-agent-oma-version133-for-oracle-enterprise-manager-cloud-control13c/
Amazon RDS for Oracle now supports July 2019 Oracle Patch Set Updates (PSU) and Release Updates (RU) | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-rds-for-oracle-supports-july-2019-oracle-patch-set-and-release-updates/
Amazon RDS SQL Server now supports changing the server-level collation | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-rds-sql-server-supports-changing-server-level-collation/
PostgreSQL 12 Beta 2 Now Available in Amazon RDS Database Preview Environment | https://aws.amazon.com/about-aws/whats-new/2019/08/postgresql-beta-2-now-available-in-amazon-rds-database-preview-environment/
Amazon Aurora with PostgreSQL Compatibility Supports Publishing PostgreSQL Log Files to Amazon CloudWatch Logs | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-aurora-with-postgresql-compatibility-support-logs-to-cloudwatch/
Amazon Redshift Launches Concurrency Scaling in Five additional AWS Regions, and Enhances Console Performance Graphs in all supported AWS Regions | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-redshift-launches-concurrency-scaling-five-additional-regions-enhances-console-performance-graphs/
Amazon Redshift now supports column level access control with AWS Lake Formation | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-redshift-spectrum-now-supports-column-level-access-control-with-aws-lake-formation/

Topic || Migration
AWS Migration Hub Now Supports Import of On-Premises Server and Application Data From RISC Networks to Plan and Track Migration Progress | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-migration-hub-supports-import-of-on-premises-server-application-data-from-risc-networks-to-track-migration-progress/

Topic || Developer Tools
AWS CodePipeline Achieves HIPAA Eligibility | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-codepipeline-achieves-hipaa-eligibility/
AWS CodePipeline Adds Pipeline Status to Pipeline Listing | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-codepipeline-adds-pipeline-status-to-pipeline-listing/
AWS Amplify Console adds support for automatically deploying branches that match a specific pattern | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-amplify-console-support-git-based-branch-pattern-detection/
Amplify Framework Adds Predictions Category | https://aws.amazon.com/about-aws/whats-new/2019/07/amplify-framework-adds-predictions-category/
Amplify Framework adds local mocking and testing for GraphQL APIs, Storage, Functions, and Hosting | https://aws.amazon.com/about-aws/whats-new/2019/08/amplify-framework-adds-local-mocking-and-testing-for-graphql-apis-storage-functions-hostings/

Topic || Analytics
AWS Lake Formation is now generally available | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-lake-formation-is-now-generally-available/
Announcing PartiQL: One query language for all your data | https://aws.amazon.com/blogs/opensource/announcing-partiql-one-query-language-for-all-your-data/
AWS Glue now supports the ability to run ETL jobs on Apache Spark 2.4.3 (with Python 3) | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-glue-now-supports-ability-to-run-etl-jobs-apache-spark-243-with-python-3/
AWS Glue now supports additional configuration options for memory-intensive jobs submitted through development endpoints | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-glue-now-supports-additional-configuration-options-for-memory-intensive-jobs-submitted-through-deployment-endpoints/
AWS Glue now provides the ability to bookmark Parquet and ORC files using Glue ETL jobs | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-glue-now-provides-ability-to-bookmark-parquet-and-orc-files-using-glue-etl-jobs/
AWS Glue now provides FindMatches ML transform to deduplicate and find matching records in your dataset | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-glue-provides-findmatches-ml-transform-to-deduplicate/
Amazon QuickSight adds support for custom colors, embedding for all user types and new regions! | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-quicksight-adds-support-for-custom-colors-embedding-for-all-user-types-and-new-regions/
Achieve 3x better Spark performance with EMR 5.25.0 | https://aws.amazon.com/about-aws/whats-new/2019/08/achieve-3x-better-spark-performance-with-emr-5250/
Amazon EMR now supports native EBS encryption | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon_emr_now_supports_native_ebs_encryption/
Amazon Athena adds Support for AWS Lake Formation Enabling Fine-Grained Access Control on Databases, Tables, and Columns | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-athena-adds-support-for-aws-lake-formation-enabling-fine-grained-access-control-on-databases-tables-columns/
Amazon EMR Integration With AWS Lake Formation Is Now In Beta, Supporting Database, Table, and Column-level access controls for Apache Spark | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-emr-integration-with-aws-lake-formation-now-in-beta-supporting-database-table-column-level-access-controls/

Topic || IoT
AWS IoT Device Defender Expands Globally | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-iot-device-defender-expands-globally/
AWS IoT Device Defender Supports Mitigation Actions for Audit Results | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-iot-device-defender-supports-mitigation-actions-for-audit-results/
AWS IoT Device Tester v1.3.0 is Now Available for Amazon FreeRTOS 201906.00 Major | https://aws.amazon.com/about-aws/whats-new/2019/07/aws_iot_device_tester_v130_for_amazon_freertos_201906_00_major/
AWS IoT Events actions now support AWS Lambda, SQS, Kinesis Firehose, and IoT Events as targets | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-iot-events-supports-invoking-actions-to-lambda-sqs-kinesis-firehose-iot-events/
AWS IoT Events now supports AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-iot-events-now-supports-aws-cloudformation/

Topic || End User Computing
AWS Client VPN now adds support for Split-tunnel | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-client-vpn-now-adds-support-for-split-tunnel/
Introducing AWS Chatbot (beta): ChatOps for AWS in Amazon Chime and Slack Chat Rooms | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-aws-chatbot-chatops-for-aws/
Amazon AppStream 2.0 Adds CLI Operations for Programmatic Image Creation | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-appstream-2-adds-cli-operations-for-programmatic-image-creation/
NICE DCV Releases Version 2019.0 with Multi-Monitor Support on Web Client | https://aws.amazon.com/about-aws/whats-new/2019/08/nice-dcv-releases-version-2019-0-with-multi-monitor-support-on-web-client/
New End User Computing Competency Solutions | https://aws.amazon.com/about-aws/whats-new/2019/08/end-user-computing-competency-solutions/
Amazon WorkDocs Migration Service | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon_workdocs_migration_service/

Topic || Machine Learning
SageMaker Batch Transform now enables associating prediction results with input attributes | https://aws.amazon.com/about-aws/whats-new/2019/07/sagemaker-batch-transform-enable-associating-prediction-results-with-input-attributes/
Amazon SageMaker Ground Truth Adds Data Labeling Workflow for Named Entity Recognition | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-sagemaker-ground-truth-adds-data-labeling-workflow-for-named-entity-recognition/
Amazon SageMaker notebooks now available with pre-installed R kernel | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-sagemaker-notebooks-available-with-pre-installed-r-kernel/
New Model Tracking Capabilities for Amazon SageMaker Are Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/08/new-model-tracking-capabilities-for-amazon-sagemaker-now-generally-available/
Amazon Comprehend Custom Entities now supports multiple entity types | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-comprehend-custom-entities-supports-multiple-entity-types/
Introducing Predictive Maintenance Using Machine Learning | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-predictive-maintenance-using-machine-learning/
Amazon Transcribe Streaming Now Supports WebSocket | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-transcribe-streaming-now-supports-websocket/
Amazon Polly Launches Neural Text-to-Speech and Newscaster Voices | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-polly-launches-neural-text-to-speech-and-newscaster-voices/
Manage a Lex session using APIs on the client | https://aws.amazon.com/about-aws/whats-new/2019/08/manage-a-lex-session-using-apis-on-the-client/
Amazon Rekognition now detects violence, weapons, and self-injury in images and videos; improves accuracy for nudity detection | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-rekognition-now-detects-violence-weapons-and-self-injury-in-images-and-videos-improves-accuracy-for-nudity-detection/

Topic || AR and VR
Amazon Sumerian Now Supports Physically-Based Rendering (PBR) | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-sumerian-now-supports-physically-based-rendering-pbr/

Topic || Application Integration
Amazon SNS Message Filtering Adds Support for Attribute Key Matching | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-sns-message-filtering-adds-support-for-attribute-key-matching/
Amazon SNS Adds Support for AWS X-Ray | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-sns-adds-support-for-aws-x-ray/
Temporary Queue Client Now Available for Amazon SQS | https://aws.amazon.com/about-aws/whats-new/2019/07/temporary-queue-client-now-available-for-amazon-sqs/
Amazon MQ Adds Support for AWS Key Management Service (AWS KMS), Improving Encryption Capabilities | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-mq-adds-support-for-aws-key-management-service-improving-encryption-capabilities/
Amazon MSK adds support for Apache Kafka version 2.2.1 and expands availability to EU (Stockholm), Asia Pacific (Mumbai), and Asia Pacific (Seoul) | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-msk-adds-support-apache-kafka-version-221-expands-availability-stockholm-mumbai-seoul/
Amazon API Gateway supports secured connectivity between REST APIs & Amazon Virtual Private Clouds in additional regions | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-api-gateway-supports-secured-connectivity-between-reset-apis-and-amazon-virtual-private-clouds-in-additional-regions/

Topic || Management and Governance
AWS Cost Explorer now Supports Usage-Based Forecasts | https://aws.amazon.com/about-aws/whats-new/2019/07/usage-based-forecasting-in-aws-cost-explorer/
Introducing Amazon EC2 Resource Optimization Recommendations | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-amazon-ec2-resource-optimization-recommendations/
AWS Budgets Announces AWS Chatbot Integration | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-budgets-announces-aws-chatbot-integration/
Discovering Documents Made Easy in AWS Systems Manager Automation | https://aws.amazon.com/about-aws/whats-new/2019/07/discovering-documents-made-easy-in-aws-systems-manager-automation/
AWS Systems Manager Distributor makes it easier to create distributable software packages | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-systems-manager-distributor-makes-it-easier-to-create-distributable-software-packages/
Now use AWS Systems Manager Maintenance Windows to select resource groups as targets | https://aws.amazon.com/about-aws/whats-new/2019/07/now-use-aws-systems-manager-maintenance-windows-to-select-resource-groups-as-targets/
Use AWS Systems Manager to resolve operational issues with your .NET and Microsoft SQL Server Applications | https://aws.amazon.com/about-aws/whats-new/2019/08/use-aws-systems-manager-to-resolve-operational-issues-with-your-net-and-microsoft-sql-server-applications/
CloudWatch Logs Insights adds cross log group querying | https://aws.amazon.com/about-aws/whats-new/2019/07/cloudwatch-logs-insights-adds-cross-log-group-querying/
AWS CloudFormation now supports higher StackSets limits | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-cloudformation-now-supports-higher-stacksets-limits/

Topic || Customer Engagement
Introducing AI-Driven Social Media Dashboard | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-ai-driven-social-media-dashboard/
New Amazon Connect integration for ChoiceView from Radish Systems on AWS | https://aws.amazon.com/about-aws/whats-new/2019/07/new-amazon-connect-integration-for-choiceview-from-radish-systems-on-aws/
Amazon Pinpoint Adds Campaign and Application Metrics APIs | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-pinpoint-adds-campaign-and-application-metrics-apis/

Topic || Media
AWS Elemental Appliances and Software Now Available in the AWS Management Console | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-elemental-appliances-and-software-now-available-in-aws-management-console/
AWS Elemental MediaConvert Expands Audio Support and Improves Performance | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-elemental-mediaconvert-expands-audio-support-and-improves-performance/
AWS Elemental MediaConvert Adds Ability to Prioritize Transcoding Jobs | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-elemental-mediaconvert-adds-ability-to-prioritize-transcoding-jobs/
AWS Elemental MediaConvert Simplifies Editing and Sharing of Settings | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-elemental-mediaconvert-simplifies-editing-and-sharing-of-settings/
AWS Elemental MediaStore Now Supports Resource Tagging | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-elemental-mediastore-now-supports-resource-tagging/
AWS Elemental MediaLive Enhances Support for File-Based Inputs for Live Channels | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-elemental-medialive-enhances-support-for-file-based-inputs-for-live-channels/

Topic || Mobile
AWS Device Farm improves device start up time to enable instant access to devices | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-device-farm-improves-device-start-up-time-to-enable-instant-access-to-devices/

Topic || Security
Introducing the Amazon Corretto Crypto Provider (ACCP) for Improved Cryptography Performance | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-the-amazon-corretto-crypto-provider/
AWS Secrets Manager now supports VPC endpoint policies | https://aws.amazon.com/about-aws/whats-new/2019/07/AWS-Secrets-Manager-now-supports-VPC-endpoint-policies/

Topic || Gaming
Lumberyard Beta 1.20 Now Available | https://aws.amazon.com/about-aws/whats-new/2019/07/lumberyard-beta-120-now-available/

Topic || Robotics
AWS RoboMaker now supports offline logs and metrics for the AWS RoboMaker CloudWatch cloud extension | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-robomaker-now-supports-offline-logs-metrics-aws-robomaker-cloudwatch-cloud-extension/

Topic || Training
New AWS Certification Exam Vouchers Make Certifying Groups Easier | https://aws.amazon.com/about-aws/whats-new/2019/07/new-aws-certification-exam-vouchers-make-certifying-groups-easier/
Announcing New Resources and Website to Accelerate Your Cloud Adoption | https://aws.amazon.com/about-aws/whats-new/2019/07/announcing-new-resources-and-website-to-accelerate-your-cloud-adoption/
AWS Developer Series Relaunched on edX | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-developer-series-relaunched-on-edx/

AWS re:Invent 2018
STG391: Post-Production Media Delivery at Scale with AWS

AWS re:Invent 2018

Play Episode Listen Later Nov 30, 2018 61:05


Netflix is using AWS Snowball Edge to deliver post-production content to our asset management system, called Content Hub, in the AWS Cloud. Production companies have historically used LTO tapes to move data around, and that has well-known complications. In order to accelerate and secure our media workflows, Netflix has shifted to using Snowball Edge devices for data migration. Please join us to learn how Netflix is using the Snowball Edge service at scale.
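As a rough, hedged sketch of the first step in a workflow like the one described above, the snippet below requests a Snowball Edge import job with boto3. It is illustrative only, not Netflix's actual tooling; the bucket, address ID, IAM role, and KMS key are placeholders that a real account would already have in place.

import boto3

snowball = boto3.client("snowball", region_name="us-east-1")

response = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE",
    SnowballCapacityPreference="T100",
    Description="Post-production media ingest (example)",
    AddressId="ADID00000000-0000-0000-0000-000000000000",  # placeholder, from create_address
    RoleARN="arn:aws:iam::123456789012:role/snowball-import-role",  # placeholder role
    KmsKeyARN="arn:aws:kms:us-east-1:123456789012:key/example-key-id",  # placeholder key
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-content-hub-bucket"}  # placeholder bucket
        ]
    },
)
print("Created job:", response["JobId"])

Once the device arrives on site, data is copied onto it locally and it is shipped back; the import into the target bucket happens on the AWS side.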

AWS Podcast
#261: EC2 Compute Instances for AWS Snowball Edge

AWS Podcast

Play Episode Listen Later Sep 2, 2018 25:26


Do you need to run compute remotely? Perhaps you need to provide IT on board a ship, or on the battlefield? Simon speaks with Ian Perez-Ponce all about this new capability. Shownotes: Jeff Barr’s Blog Post: https://aws.amazon.com/blogs/aws/new-ec2-compute-instances-for-aws-snowball-edge/ Getting Started with Amazon EC2 Instances on Snowball Edge: https://docs.aws.amazon.com/snowball/latest/developer-guide/using-ec2.html
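For flavor, here is a minimal, hedged sketch of what launching a compute instance on a Snowball Edge device can look like, following the getting-started guide linked above. The device IP, port, region label, credentials, AMI ID, and instance type are all placeholders; real values come from the device itself (for example via the Snowball Edge client) and from the AMIs loaded when the job was created.

import boto3

# Point the EC2 client at the device's local EC2-compatible endpoint (assumed address/port).
ec2 = boto3.client(
    "ec2",
    endpoint_url="http://192.0.2.10:8008",          # placeholder device endpoint
    aws_access_key_id="EXAMPLE_DEVICE_ACCESS_KEY",   # placeholder local credentials
    aws_secret_access_key="EXAMPLE_DEVICE_SECRET_KEY",
    region_name="snow",                              # placeholder region label for the device
)

resp = ec2.run_instances(
    ImageId="s.ami-0123456789abcdef0",   # AMIs on the device carry an "s." prefix (placeholder ID)
    InstanceType="sbe1.medium",          # sbe1.* types are specific to Snowball Edge
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])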

AWS Podcast
#229: Redefining Edge Computing

AWS Podcast

Play Episode Listen Later Feb 4, 2018 21:47


With the GA launch of AWS Greengrass and bundled support for embedded Lambda Compute in Snowball Edge appliances, AWS is broadening the scope of opportunity for customers with varying edge computing needs. Simon speaks with Todd Varland and Ian Perez Ponce about what "edge computing" really means and the emerging use cases they see across a range of industry sectors. They also discuss the value Snowball Edge delivers as a hybrid platform capable of providing ephemeral compute resources and petabyte-scale storage virtually anywhere. Shownotes: Get Started with AWS Greengrass: https://bit.ly/learngg AWS Snowball Edge (including torture-test video!): https://aws.amazon.com/snowball-edge
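To make the "embedded Lambda compute" idea concrete, below is a small, hedged sketch of the kind of Python Lambda function that runs on a Greengrass core (including one hosted on a Snowball Edge appliance): it publishes a reading to a local MQTT topic. The topic name and payload are illustrative placeholders.

import json
import greengrasssdk  # available inside the Greengrass core runtime

# Client for publishing to local/cloud MQTT topics from within the core.
iot = greengrasssdk.client("iot-data")

def handler(event, context):
    reading = {"device": "edge-sensor-1", "value": 42}  # placeholder payload
    iot.publish(
        topic="sensors/edge-sensor-1/telemetry",        # placeholder topic
        payload=json.dumps(reading),
    )
    return {"published": True}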

AWS re:Invent 2017
STG201: Storage State of the Union

AWS re:Invent 2017

Play Episode Listen Later Nov 30, 2017 60:51


In this session, learn about all of the AWS storage solutions, and get guidance about which ones to use for different use cases. We discuss the core AWS storage services. These include Amazon Simple Storage Service (Amazon S3), Amazon Glacier, Amazon Elastic File System (Amazon EFS), and Amazon Elastic Block Store (Amazon EBS). We also discuss data transfer services such as AWS Snowball, Snowball Edge, and AWS Snowmobile, and hybrid storage solutions such as AWS Storage Gateway.
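As a purely illustrative sketch of two of the services mentioned above working together, the snippet below uploads an object to Amazon S3 and sets a lifecycle rule that transitions objects under a prefix to Amazon Glacier after 30 days. The bucket name, file, and prefix are placeholders.

import boto3

s3 = boto3.client("s3")
bucket = "example-archive-bucket"  # placeholder bucket

# Land the object in S3.
s3.upload_file("render-final.mov", bucket, "archive/render-final.mov")

# Age objects under the archive/ prefix out to Glacier after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Filter": {"Prefix": "archive/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)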

AWS re:Invent 2017
STG323: Migrating Millions of Video Content Files to The Cloud Using AWS Snowball

AWS re:Invent 2017

Play Episode Listen Later Nov 30, 2017 51:45


Join us for an overview of AWS Snowball and Snowball Edge, a collection of self-service storage appliances built for petabyte-scale data ingest and export operations in the cloud. AWS Snowball and Snowball Edge make it easy to migrate mixed data types to the cloud at scale, whether in support of enterprise workload transformation, active archive, backup & recovery, or data lake seeding. In this session, you will hear from organizations that are using AWS Snowball to migrate their critical data assets to the cloud with minimal cost and operational overhead. See how quickly you can accelerate your cloud migration timeline using AWS Snowball and Snowball Edge.
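As a hedged illustration of the on-premises copy step in a migration like this, the sketch below writes local media files to a Snowball Edge device's S3-compatible interface before the device is shipped back for import. The device endpoint, credentials, source directory, and bucket name are placeholders supplied by the actual device and job definition.

import os
import boto3

# Point an S3 client at the device's local S3-compatible endpoint (assumed address/port).
device_s3 = boto3.client(
    "s3",
    endpoint_url="https://192.0.2.20:8443",          # placeholder device endpoint
    aws_access_key_id="EXAMPLE_DEVICE_ACCESS_KEY",    # placeholder local credentials
    aws_secret_access_key="EXAMPLE_DEVICE_SECRET_KEY",
    verify=False,                                     # the device typically presents a self-signed certificate
)

source_dir = "/media/export/video-masters"            # placeholder local directory
for name in os.listdir(source_dir):
    device_s3.upload_file(
        os.path.join(source_dir, name),
        "example-media-bucket",                        # bucket named in the Snowball job (placeholder)
        f"masters/{name}",
    )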