POPULARITY
A quick roundup update as 2023 draws to an end, covering my cataract surgery and Glacier backup; I also have a new DJI Air 3 drone, so I'm airborne again! Details on blog: https://mbp.ac/832 Music by Martin Bailey
Today I share a handful of images from a recent walk in the park, and an update on my NAS cloud backup to Amazon Glacier via Starlink. Big thumbs up! Details on blog: https://mbp.ac/828 Music by Martin Bailey
About Kevin
Kevin Miller is currently the global General Manager for Amazon Simple Storage Service (S3), an object storage service that offers industry-leading scalability, data availability, security, and performance. Prior to this role, Kevin has had multiple leadership roles within AWS, including as the General Manager for Amazon S3 Glacier, Director of Engineering for AWS Virtual Private Cloud, and engineering leader for AWS Virtual Private Network and AWS Direct Connect. Kevin was also Technical Advisor to the Senior Vice President for AWS Utility Computing. Kevin is a graduate of Carnegie Mellon University with a Bachelor of Science in Computer Science.

Links Referenced:
snark.cloud/shirt: https://snark.cloud/shirt
aws.amazon.com/s3: https://aws.amazon.com/s3

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is brought to us in part by our friends at Datadog. Datadog is a SaaS monitoring and security platform that enables full-stack observability for modern infrastructure and applications at every scale. Datadog enables teams to see everything: dashboarding, alerting, application performance monitoring, infrastructure monitoring, UX monitoring, security monitoring, dog logos, and log management, in one tightly integrated platform. With 600-plus out-of-the-box integrations with technologies including all major cloud providers, databases, and web servers, Datadog allows you to aggregate all your data into one platform for seamless correlation, allowing teams to troubleshoot and collaborate together in one place, preventing downtime and enhancing performance and reliability. Get started with a free 14-day trial by visiting datadoghq.com/screaminginthecloud, and get a free t-shirt after installing the agent.

Corey: Managing shards. Maintenance windows. Overprovisioning. ElastiCache bills. I know, I know. It's a spooky season and you're already shaking. It's time for caching to be simpler. Momento Serverless Cache lets you forget the backend to focus on good code and great user experiences. With true autoscaling and a pay-per-use pricing model, it makes caching easy. No matter your cloud provider, get going for free at gomomento.co/screaming. That's GO M-O-M-E-N-T-O dot co slash screaming.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Right now, as I record this, we have just kicked off our annual charity t-shirt fundraiser. This year's shirt showcases S3 as the eighth wonder of the world. And here to either defend or argue the point—we're not quite sure yet—is Kevin Miller, AWS's vice president and general manager for Amazon S3. Kevin, thank you for agreeing to suffer the slings and arrows that are no doubt going to be interpreted, misinterpreted, et cetera, for the next half hour or so.

Kevin: Oh, Corey, thanks for having me. And happy to do that, and really flattered for you to be thinking about S3 in this way. So more than happy to chat with you.

Corey: It's absolutely one of those services that is foundational to the cloud. It was the first AWS service that was put into general availability, although the beta folks are going to argue back and forth about no, no, that was SQS instead.
I feel like now that Mai-Lan handles both SQS and S3 as part of her portfolio, she is now the final arbiter of that. I'm sure that's an argument for a future day. But it's impossible to imagine cloud without S3.

Kevin: I definitely think that's true. It's hard to imagine cloud, actually, without many of our foundational services, including SQS, of course, but yes, we were the first generally available service with S3. And we're pretty happy with our anniversary being Pi Day, 3/14.

Corey: I'm also curious: your own personal trajectory has been not necessarily what folks would expect. You were the general manager of Amazon Glacier, and now you're the general manager and vice president of S3. So, I've got to ask, because there are conflicting reports on this depending upon what angle you look at: are Glacier and S3 the same thing?

Kevin: Yes, I was the general manager for S3 Glacier prior to coming over to S3 proper, and the answer is no, they are not the same thing. We certainly have a number of technologies that we're able to use on both S3 and Glacier, but there are certainly a number of things that are very distinct about Glacier and give us the ability to hit the ultra-low price points that we do, with Glacier Deep Archive being as low as $1 per terabyte-month. And so there's a lot of actual ingenuity up and down the stack, from hardware to software, everywhere in between, to really achieve that with Glacier. But then there are other spots where S3 and Glacier have very similar needs, and of course, today many customers use Glacier through S3 as a storage class in S3, and that's a great way to do it. So, there's definitely a lot of shared code, but certainly, when you get into it, there's [unintelligible 00:04:59] to both of them.

Corey: I ran a number of obnoxiously detailed financial analyses, and they all came away with this: unless you have a very specific, very nuanced understanding of your data lifecycle, and/or your data is less than 30 or 60 days old depending upon a variety of different things, the default S3 storage class you should be using for virtually anything is Intelligent Tiering. That is my purely economic analysis of it. Do you agree with that? Disagree with that? And again, I understand that all of these storage classes are like your children, and I am inviting you to tell me which one of them is your favorite, but I'm absolutely prepared to do that myself.

Kevin: Well, we love Intelligent Tiering because it is very simple; customers are able to automatically save money using Intelligent Tiering for data that's not being frequently accessed. And actually, since we launched it a few years ago, we've already saved customers more than $250 million using Intelligent Tiering. So, I would say today, it is our default recommendation in almost every case. I think the cases where we would recommend another storage class as the primary storage class tend to be specific use cases, particularly ones where customers really have a good understanding of the access patterns. And we do see some customers who, for a certain dataset, know that it's going to be heavily accessed for a fixed period of time, or that the data is really for archival and will never be accessed, or only very rarely, maybe just in an emergency.

In those kinds of use cases, I think customers are probably best served choosing one of the specific storage classes where they're, sort of, paying the lower cost from day one.
But again, I would say for the vast majority of cases that we see, the data access patterns are unpredictable, and customers like the flexibility of being able to very quickly retrieve the data if they decide they need to use it. But in many cases, they'll save a lot of money as the data is not being accessed, and so Intelligent Tiering is a great choice for those cases.

Corey: I would take it a step further and say that even when customers believe they're going to do a deeper analysis and have a better understanding of their data flow patterns than Intelligent Tiering would, in practice, I see that they rarely do anything about it. It's one of those things where they're like, “Oh, yeah, we're going to set up our own lifecycle policies real soon now,” whereas, just switch it over to Intelligent Tiering and never think about it again. People's time is worth so much more than the infrastructure they're working on in almost every case. It doesn't seem to make a whole lot of sense unless you have a very intentional, very urgent reason to go and do that stuff by hand in most cases.

Kevin: Yeah, that's right. I think I agree with you, Corey. And certainly, that is the recommendation we lead with for customers.

Corey: In previous years, our charity t-shirt has focused on other areas of AWS, and one of them was based upon a joke that I've been telling for a while now, which is that the best database in the world is Route 53 and storing TXT records inside of it. I don't know if I ever mentioned this to you or not, but the first iteration of that joke centered around S3. The challenge that I had with it is that S3 Select is absolutely a thing where you can query S3 with SQL, which I don't see people doing anymore because Athena is the easier, more, shall we say, well-articulated version of all of that. And no, no, that joke doesn't work because it's actually true. You can use S3 as a database. Does that statement fill you with dread? Regret? Am I misunderstanding something? Or are you effectively running a giant subversive database?

Kevin: Well, I think that certainly when most customers think about a database, they think about a collection of technology that's applied for given problems, and so I wouldn't count S3 as providing the whole range of functionality that would really make up a database. But certainly a lot of the primitives, and S3 Select is a great example of a primitive, are available in S3. And we're looking at adding additional primitives going forward to make it possible to build a database around S3. And as you've seen, other AWS services have done that in many ways. For example, Amazon Redshift now has a lot of capability to directly access and use data in S3, and makes that super seamless, so you can run data-warehousing-type queries on top of S3 and on top of your other datasets.

So, I certainly think it's a great building block. And one other thing I would actually just say that you may not know, Corey, is that one of the things we've been doing a lot more with S3 over the last couple of years is working to directly contribute improvements to open-source connector software that uses S3, to automatically make available some of the performance improvements that can be achieved using the AWS SDK and things like S3 Select. So, we started with a few of those things with Select; you're going to see more of that coming, most likely.
And some of that, again, the idea there is that you may not even necessarily know you're using Select, but when we can identify that it will improve performance, we're looking to be able to contribute those kinds of improvements directly, or we are contributing those directly, to those open-source packages. So, one thing I would definitely recommend customers and developers do is keep that software up-to-date, because although those might seem like one-and-done software integrations, there's actually almost continuous improvement going on around capabilities like that, and others we come out with.

Corey: What surprised me is just how broadly S3 has been adopted by a wide variety of different client software packages out there. Back when I was running production environments in anger, I distinctly remember in one Ubuntu environment, we wound up installing a specific package that was designed to teach apt how to retrieve packages and updates from S3, which was awesome. I don't see that anymore, just because it seems that it is so easy to do it now, just with the native features that S3 offers, as well as the fact that an awful lot of software under the hood has learned to directly recognize S3 as its own thing, and can react accordingly.

Kevin: And just do the right thing. Exactly. No, we certainly see a lot of that. So that's, you know—I mean, obviously, making that simple for end customers to use and achieve what they're trying to do, that's the whole goal.

Corey: It's always odd to me when I'm talking to one of my clients who is looking to understand and optimize their AWS bill to see outliers in either direction when it comes to S3 itself. When they're driving large S3 bills, as in a majority of their spend, it's, okay, that is very interesting. Let's dive into that. But almost more interesting to me is when it is effectively not being used at all. When, oh, we're doing everything with EBS volumes or EFS.

And again, those are fine services. I don't have any particular problem with them anymore, but the problem I have is that the cloud long ago took what amounts to an economic vote. There's a tax savings for storing data in an object store the way that you—and by extension, most of your competitors—wind up pricing this, versus the idea of a volume basis where you have to pre-provision things and you don't get any form of durability that extends beyond the availability zone boundary. It just becomes an awful lot of, “Well, you could do it this way. But it gets really expensive really quickly.”

It just feels wild to me that there is that level of variance between S3 on just a raw storage basis, economically, as well as the, frankly, ridiculous levels of durability and availability that you offer on top of that. How did you get there? Was the service just mispriced at the beginning? Like, oh, we dropped a zero and probably should have put that in there somewhere.

Kevin: Well, no, I wouldn't call it mispriced. I think that S3 came about when we spent a lot of time looking at the architecture for storage systems, knowing that we wanted a system that would provide the durability that comes with having three completely independent data centers, and the elasticity and capability where customers don't have to provision the amount of storage they want; they can simply put data in and the system keeps growing. And they can also delete data and stop paying for that storage when they're not using it.
And so, just all of that investment and looking at that architecture holistically led us down the path to where we are with S3.

And we've definitely talked about this. In fact, in Peter's keynote at re:Invent last year, we talked a little bit about how the system is designed under the hood, and one of the things you realize is that S3 gets a lot of the benefits it does just from overall scale. I think the stat is that, at this point, more than 10,000 customers have data that's stored on more than a million hard drives in S3. And the way you get that scale and capability is through massive parallelization. Customers building more traditional architectures typically end up with inherently much more siloed architectures at a relatively small scale overall, with a lot of resources provisioned in small chunks, so they never get to the scale where they can start to take advantage of the whole being greater than the sum of its parts.

And so, I think that's what the recognition was when we started out building S3. And then, of course, we offer that as an API on top of it, where customers can consume whatever they want. That is, I think, where S3, at the scale it operates, is able to do certain things, including on the economics, that are very difficult or even impossible to do at a much smaller scale.

Corey: One of the more egregious clown-shoe statements that I hear from time to time has been when people come to me and say, “We've built a competitor to S3.” And my response is always one of those, “Oh, this should be good.” Because when people say that, they generally tend to be focusing on one or maybe two dimensions that don't work for a particular use case as well as they could. “Okay, what was your story around why this should be compared to S3?” “Well, it's an object store. It has full S3 API compatibility.” “Does it really? Because I have to say, there are times where I'm not entirely convinced that S3 itself has full compatibility with the way that its API has been documented.”

And there's an awful lot of magic that goes into this too. “Okay, great. You're running an S3 competitor. Great. How many buildings does it live in?” Like, “Well, we have a problem with the s at the end of that word.” It's, “Okay, great. If it fits on my desk, it is not a viable S3 competitor. If it fits in a single zip code, it is probably not a viable S3 competitor.” Now, can it be an object store? Absolutely. Does it provide a new interface to some existing data someone might have? Sure, why not. But I think that “oh, it's S3 compatible” is something that gets tossed around far too lightly by folks who don't really understand what it is that drives S3 and makes it special.

Kevin: Yeah, I mean, I would say certainly there are a number of other implementations of the S3 API, and frankly we're flattered that customers, our competitors, and others recognize the simplicity of the API and go about implementing it.
But to your point, I think there's a lot more to it; it's not just about the API, it's really about everything surrounding S3: from, as you mentioned, the fact that the data in S3 is stored in three independent availability zones, all of which are separated by kilometers from each other, to the resilience, the automatic failover, and the ability to withstand an unlikely impact to one of those facilities, as well as the scalability, and the fact that we put a lot of time and effort into making sure that the service continues scaling with our customers' needs. And so, I think there's a lot more that goes into what S3 is. And oftentimes, a straight-up comparison is based purely on the APIs, and generally a small set of APIs, leaving aside those intangibles—or not intangibles, but all of the ‘-ilities,' right: the elasticity and the durability and so forth that I just talked about. In addition to all that, certainly what we're seeing from customers is that as they get to petabyte, tens-of-petabytes, hundreds-of-petabytes scale, their need for the services we provide to manage that storage—whether it's lifecycle and replication, or things like our batch operations to help update and maintain all the storage—those become really essential to customers wrapping their arms around it, as well as visibility, things like Storage Lens, to understand: what storage do I have? Who's using it? How is it being used?

And those are all things that we provide to help customers manage at scale. And certainly, oftentimes when I see claims around S3 compatibility, a lot of those advanced features are nowhere to be seen.

Corey: I also want to call out that a few years ago, Mai-Lan got on stage and talked about how, to my recollection, you folks have effectively rebuilt S3 under the hood into, I think it was, 235 distinct microservices at the time. There will not be a quiz on numbers later, I'm assuming. But what was wild to me about that is, having done that for services that are orders of magnitude less complex, it absolutely is like changing the engine on a car without ever slowing down on the highway. Customers didn't know that any of this was happening until she got on stage and announced it. That is wild to me. I would have said before this happened that there was no way that would have been possible, except it clearly was. I have to ask, how did you do that in the broad sense?

Kevin: Well, it's true. If someone from S3 in 2006 came and looked at the system today, they would probably be very disoriented in terms of understanding what was there, because so much of the underlying infrastructure, both hardware and software, has changed. To answer your question, the long and short of it is a lot of testing. In fact, a lot of novel testing, most recently with the use of formal logic and what we call automated reasoning. It's also something we've talked a fair bit about at re:Invent.

And that is essentially where you prove the correctness of certain algorithms. And we've used that to spot some very interesting one-in-a-trillion-type cases that at S3 scale happen regularly, that you have to be ready for; you have to know how the system reacts, even in all those cases. I mean, I think one of our engineers did some calculations showing that the number of potential states for S3 exceeds the number of atoms in the universe, or something crazy like that.
But yet, using methods like automated reasoning, we can test that state space, we can understand what the system will do, and we have a lot of confidence as we begin to swap out pieces of the system.

And of course, nothing at S3 scale happens instantly. I would say that for a typical engineering effort within S3, there's a certain amount of effort, obviously, in making the change—preparing the new software, writing it, and testing it—but there's almost an equal amount of time that goes into, okay, what is the process for migrating from System A to System B? And that happens over a timescale of months, if not years, in some cases. And so, there's just a lot of diligence that goes into not just the new systems, but also the process of, literally, how do I swap that engine on the system. So, it's a lot of really hard-working engineers spending a lot of time working through these details every day.

Corey: I still view S3 through the lens of: it is one of the easiest ways in the world to wind up building a static web server, because you basically stuff the website files into a bucket and then you check a box. It feels, on some level, though, like that is about as accurate as saying that S3 is a database. It can be used, or misused, or pressed into service in a whole bunch of different use cases. What have you seen from customers that has, I guess, taught you something you didn't expect to learn about your own service?

Kevin: Oh, I'd say we have those [laugh] meetings pretty regularly, when customers build their workloads and have unique patterns to them, whether it's the type of data they're retrieving or the access pattern on the data. For example, some customers will make heavy use of our ability to do [ranged gets 00:22:47] on files and [unintelligible 00:22:48] objects. And that's a pretty good capability, but it can be very much dependent on the type of file: certain files have structure, such as a header or footer, and that data is being accessed in a certain order. Oftentimes, those may also be multi-part objects, so customers make use of the multi-part features to upload different chunks of a file in parallel. And certainly, when customers get into things like our batch operations capability, where they can literally write a Lambda function and do what they want, we've seen some pretty interesting use cases where customers are running large-scale operations across billions, sometimes tens of billions of objects, and it can be pretty interesting to see what they're able to do with them.

So, for something that is, in some sense, as simple and basic as a GET and PUT API, all the capability around it ends up being pretty interesting as far as how customers apply it and the different workloads they run on it.

Corey: So, if you squint hard enough, what I'm hearing you tell me is that I can view all of this as, “Oh, yeah. S3 is also compute.” And it feels like that is a fast track to getting a question wrong on one of the certification exams. But I have to ask, from your point of view, is S3 storage? And whether it's yes or no, what gets you excited about the space that it's in?

Kevin: Yeah, well, I would say S3 is not compute, but we have some great compute services that are very well integrated with S3, which excites me, as do things like S3 Object Lambda, where we actually handle the integration with Lambda.
So, you're writing Lambda functions, and we're executing them on the GET path. And so, that's a pretty exciting feature for me. But to sort of take a step back, what excites me is that customers around the world, in every industry, are really starting to recognize the value of data, and of data at large scale. I think that many customers in the world have terabytes or more of data that sort of flows through their fingers every day without them even realizing it.

And so, as customers realize what data they have, and can capture it and start to analyze it, they can ultimately make better business decisions that really help drive their top line, or help them reduce costs, whether in manufacturing or other things that they're doing. What really excites me is seeing those customers take the raw capability and then apply it to transform not just how their business works, but even how they think about the business. Because in many cases, a transformation is not just a technical transformation; it's a people and cultural transformation inside these organizations. And that's pretty cool to see as it unfolds.

Corey: One of the more interesting things that I've seen customers misunderstand, on some level, has been a number of S3 releases that focus around, “Oh, this is for your data lake.” And I've asked customers about that. “So, what's your data lake strategy?” “Well, we don't have one of those.” “You have, like, eight petabytes and climbing in S3? What do you call that?” It's like, “Oh, yeah, that's just a bunch of buckets we dump things into. Some are logs of our assets and the rest.” It's—

Kevin: Right.

Corey: Yeah, it feels like no one thinks of themselves as having anything remotely resembling a structured place for all of the data that accumulates at a company.

Kevin: Mm-hm.

Corey: There is an evolution of people learning that, oh yeah, this is, in fact, what it is that we're doing, and this thing that they're talking about does apply to us. But it almost feels like a customer communication challenge, just because, I don't know about you, but with my legacy AWS account, I have dozens of buckets in there that I don't remember what the heck they're for. Fortunately, you folks don't charge by the bucket, so I can smile, nod, and remain blissfully ignorant, but it does make me wonder from time to time.

Kevin: Yeah, no, I think that what you hear there is actually pretty consistent with the reality for a lot of customers. In distributed organizations, that's bound to happen: you have different teams that are working to solve problems, and they're collecting data to analyze, they're creating result datasets, and they're storing those datasets. And then, of course, priorities can shift, and there's not necessarily the day-to-day management around data that we might think would be expected. I feel [we 00:26:56] sort of drew an architecture on a whiteboard. And so, I think that's the reality we are in, and will be in, largely forever.

I mean, at a smaller scale, that's been happening for years. So, I think that, first, there's a lot of capability in just being in the cloud. At the very least, you can now start to wrap your arms around it, right? It used to be that it wasn't even possible to understand what all that data was, because there was no way to centrally inventory it well.
In AWS, with S3 inventory reports, you can get a list of all your storage, and we are going to continue to add capability to help customers get their arms around what they have, first off, and then understand how it's being used—that's where things like Storage Lens really play a big role, in understanding exactly what data is being accessed and what is not. We're definitely listening to customers carefully around this, and when you think about the broader data management story, that's a place where we're spending a lot of time thinking right now about how we help customers get their arms around it: make sure they know the categorization of certain data, do I have some PII lurking here that I need to be very mindful of?

And then, how do I get to a world where—I won't say that it's ever going to look like the perfect whiteboard picture you might draw on the wall; I don't think that's really ever achievable—but certainly getting to a point where customers have a real solid understanding of what data they have, and where the right controls are in place around all that data. Yeah, I think that's directionally where I see us heading.

Corey: As you look around how far the service has come, it feels like, on some level, there were some, I guess, I don't want to say missteps, but things that you learned as you went along. Like, back when the service was in beta, for example, there was no per-request charge. To my understanding, that was changed in part because people were trying to use it as a file system, and wow, that suddenly caused a tremendous amount of load on some of the underlying systems. You originally launched with a BitTorrent endpoint as an option so that people could download through peer-to-peer approaches for large datasets, and it turned out that wasn't really the way the internet evolved, either. And I'm curious, if you had to somehow build this again from scratch, are there any other significant changes you would make in how the service was presented to customers, or in how people talked about it in the early days? Effectively given a mulligan, what would you do differently?

Kevin: Well, I don't know, Corey. I mean, just given where it's grown to in macro terms, I definitely would be worried that taking a mulligan [laugh] would change the overarching trajectory. Certainly, there are a few features here and there where, for whatever reason, they were exciting at the time and really spoke to what customers were thinking then, but over time those needs quickly moved to something a little bit different. And like you said, things like the BitTorrent support is one where, at some level, it seems like a great technical architecture for the internet, but certainly not something that we've seen dominate in the way things are done. Instead, we've largely ended up with a world where there are a lot of caching layers, but it still ends up being largely client-server kinds of connections. So, I certainly wouldn't do a mulligan on any of the major functionality, and I think there are a few things in the details where, obviously, we've learned what really works in the end. I think we learned that we wanted bucket names to really strictly conform to the rules for DNS encoding. So, that was a change that was made at some point.
And we would tweak that, but no major changes, certainly.

Corey: One subject of some debate while we were designing this year's charity t-shirt—which, incidentally, if you're listening to this, you can pick up for yourself at snark.cloud/shirt—was this: is S3 itself dependent upon S3? Because we know that every other service out there is, but it is interesting to come up with an idea of, “Oh, yeah. We're going to launch a whole new isolated region of S3 without S3 to lean on.” That feels like it's an almost impossible bootstrapping problem.

Kevin: Well, S3 is not dependent on S3 to come up. There is certainly a critical dependency tree that we look at and track, to make sure that we have an acyclic graph as we look at dependencies.

Corey: That is such a sophisticated way to say what I learned the hard way when I was significantly younger and working in production environments: don't put the DNS servers needed to boot the hypervisor into VMs that require a working hypervisor. It's one of those oh-yeah-in-hindsight-that-makes-perfect-sense lessons, but you learn it right after that knowledge really would have been useful.

Kevin: Yeah, absolutely. And one of the terms we use for that—one of the techniques that can really help with isolating a dependency—is what we call static stability. We actually have an article about that in the Amazon Builders' Library, which has a bunch of really good articles from very experienced operations-focused engineers in AWS. So, static stability is one of those key techniques, but there are other techniques; pure minimization of dependencies is one. And so, we were very, very thoughtful about that, particularly for that core layer.

I mean, when you talk about S3 with 200-plus microservices, or 235-plus microservices, I would say not all of those services are critical for every single request. Certainly, a small subset of those are required for every request, and then other services actually help manage and scale that inner core of services. And so, we look at dependencies on a service-by-service basis to really make sure that inner core is as minimized as possible, and then the outer layers can start to take some dependencies once you have that basic functionality up.

Corey: I really want to thank you for being as generous with your time as you have been. If people want to learn more about you and about S3 itself, where should they go—after buying a t-shirt, of course?

Kevin: Well, certainly buy the t-shirt first; I love the t-shirts and the charity that you work with on them. For S3, it's aws.amazon.com/s3. And you can learn more about me as well: I have some YouTube videos, so you can search for me on YouTube and get a sense of me that way.

Corey: We will put links to that into the show notes, of course. Thank you so much for being so generous with your time. I appreciate it.

Kevin: Absolutely. Yeah. Glad to spend some time. Thanks for the questions, Corey.

Corey: Kevin Miller, vice president and general manager for Amazon S3. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, ignorant comment talking about how your S3-compatible service is going to blow everyone's socks off when it fails.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
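Editor's note: to make the Intelligent Tiering recommendation from the interview above concrete, here is a minimal boto3 sketch. The bucket name, key, and local file are hypothetical placeholders, and the lifecycle rule is just one way to route data into the class.

```python
import boto3

s3 = boto3.client("s3")

# Upload an object directly into the Intelligent-Tiering storage class.
with open("events-2022-10.parquet", "rb") as f:
    s3.put_object(
        Bucket="example-analytics-bucket",  # hypothetical bucket
        Key="datasets/events-2022-10.parquet",
        Body=f,
        StorageClass="INTELLIGENT_TIERING",
    )

# Or let a lifecycle rule move everything under a prefix into the class,
# which is the "never think about it again" approach Corey describes.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "default-to-intelligent-tiering",
                "Filter": {"Prefix": "datasets/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"},
                ],
            }
        ]
    },
)
```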
We talk about storing important, high-volume data that you will rarely, if ever, use in the future but cannot afford to delete.
Description: Shon Gerber from ReduceCyberRisk.com reveals to you the steps and the cybersecurity training you need to grow your Information Security career while protecting your business and reducing your company’s cyber risk. Shon utilizes his expansive knowledge while providing superior training from his years of cybersecurity experience. In this episode, Shon will talk about recent Security News: FBI: BEC Scam Losses – $1.2 Billion WordPress: Social Share Plugin – Exploited Ransomware: Stuart, FL still recovering EMP / GMD Events: Businesses need a plan Our Cybersecurity Training for the Week is: Amazon Glacier - Deep Archive Want to find Shon Gerber / Reduce Cyber Risk elsewhere on the internet? LinkedIn – www.linkedin.com/in/shongerber ReduceCyberRisk.com - https://reducecyberrisk.com/ Facebook - https://www.facebook.com/CyberRiskReduced/ LINKS: ThreatPost o https://threatpost.com/fbi-bec-scam-losses-double/144038/ The Hacker News o https://thehackernews.com/2019/04/wordpress-plugin-hacking.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+TheHackersNews+%28The+Hackers+News+-+Cyber+Security+Blog%29&m=1 Dark Reading o https://www.darkreading.com/endpoint/city-of-stuart-still-recovering-from-ryuk-ransomware-attack-/d/d-id/1334510?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple CSO o https://www.csoonline.com/article/3390976/why-your-business-continuity-and-disaster-recovery-plans-should-account-for-emp-attacks-and-gmd-eve.html?upd=1556125631099 ISC2 Training Study Guide o https://www.isc2.org/Training/Self-Study-Resources Amazon Glacier Deep Archive https://aws.amazon.com/blogs/aws/new-amazon-s3-storage-class-glacier-deep-archive/
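Since this episode's training segment covers Glacier Deep Archive, here is a minimal hypothetical sketch of writing a backup straight into that storage class with boto3; the bucket and key names are made up.

```python
import boto3

s3 = boto3.client("s3")

# Write a backup directly into the Glacier Deep Archive storage class;
# retrieving it later requires a restore request and can take hours.
with open("backup-2019-04.tar.gz", "rb") as f:
    s3.put_object(
        Bucket="example-compliance-backups",  # hypothetical bucket
        Key="archives/backup-2019-04.tar.gz",
        Body=f,
        StorageClass="DEEP_ARCHIVE",
    )
```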
See all of this week's mentioned content : http://lon.tv/ww231 - This week I talk about CES 2019 plans, Roku's quiet addition of MPEG-2 hardware decoding for cord cutters, B&H apparently opposed CT tax collection efforts earlier this year, and we'll compare Amazon Glacier vs. S3 for backing up your data. Subscribe for more! http://lon.tv/s VIDEO INDEX: 00:46 - Supporter Thank Yous 01:10 - (non)Ad - http://lon.tv/amazondevices 02:06 - Week in Review: Main Channel 03:30 - On My Mind: CES 2019 Planning 07:17 - News: HDHomerun Premium TV Contract Issue 09:27 - News: B&H Pushed Back Against CT Tax Authorities 11:20 - Q&A: Reviewing Pre-Release PC Products 13:58 - Q&A: Amazon Glacier Backup 21:02 - Q&A: Roku MPEG-2 Hardware Decoding Support 24:12 - Q&A For You: What do you think of the Roku Players? 24:33 - Pick of the Week: http://lon.tv/lgrlemmings 25:30 - Coming Up This Week 26:40 - Helping the channel 27:03 - My other channels Subscribe to my email list to get a weekly digest of upcoming videos! - http://lon.tv/email See my second channel for supplementary content : http://lon.tv/extras Join the Facebook group to connect with me and other viewers! http://lon.tv/facebookgroup Visit the Lon.TV store to purchase some of my previously reviewed items! http://lon.tv/store Read more about my transparency and disclaimers: http://lon.tv/disclosures Want to chat with other fans of the channel? Visit our forums! http://lon.tv/forums Want to help the channel? Start a Member subscription or give a one time tip! http://lon.tv/support or contribute via Venmo! lon@lon.tv Follow me on Facebook! http://facebook.com/lonreviewstech Follow me on Twitter! http://twitter.com/lonseidman Catch my longer interviews in audio form on my podcast! http://lon.tv/itunes http://lon.tv/stitcher or the feed at http://lon.tv/podcast/feed.xml Follow me on Google+ http://lonseidman.com We are a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for us to earn fees by linking to Amazon.com and affiliated sites.
IT infrastructure teams with on-premises applications have to manage storage arrays throughout their never-ending lifecycle, including capacity planning guesswork, hardware failures, system migrations, and more. There are cloud-enabled alternatives to buying more and more storage arrays. With AWS Storage Gateway, you can start using Amazon S3, Amazon Glacier, and Amazon EBS in hybrid architectures with on-premises applications for storage, backup, disaster recovery, tiered storage, hybrid data lakes, and ML. In this session, learn how to use AWS Storage Gateway to seamlessly connect your applications to AWS storage services with familiar block-and-file storage protocols and a local cache for fast access to hot data. We demonstrate our latest capabilities and share best practices from experienced customers.
As your data stores grow, managing and operating on your stored objects becomes increasingly difficult to scale. In this session, AWS experts demonstrate Amazon S3 features you can use to perform and manage operations across any number of objects, from hundreds to billions, stored in Amazon S3. Learn how to monitor performance, ensure compliance, automate actions, and optimize storage across all your Amazon S3 objects. We also provide relevant use cases that demonstrate the full range of Amazon S3 capabilities and options, such as copying objects across buckets to create development environments, restricting access to sensitive data, or restoring many objects from Amazon Glacier.
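As a companion to the restore scenario this abstract mentions, here is a minimal boto3 sketch of restoring a single archived object; the bucket and key are hypothetical, and S3 Batch Operations is the tool the session covers for doing this across millions of objects at once.

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to temporarily restore an object stored in the Glacier storage
# class; Bulk is the cheapest (and slowest) retrieval tier.
s3.restore_object(
    Bucket="example-data-bucket",
    Key="logs/2017/compressed.log.gz",
    RestoreRequest={
        "Days": 7,  # how long the restored copy stays available
        "GlacierJobParameters": {"Tier": "Bulk"},
    },
)

# Poll the object's Restore header to see when the copy is ready.
head = s3.head_object(Bucket="example-data-bucket", Key="logs/2017/compressed.log.gz")
print(head.get("Restore"))
```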
You've designed and built a well-architected data lake and ingested extreme amounts of structured and unstructured data. Now what? In this session, we explore real-world use cases where data scientists, developers, and researchers have discovered new and valuable ways to extract business insights using advanced analytics and machine learning. We review Amazon S3, Amazon Glacier, and Amazon EFS, the foundation for the analytics clusters and data engines. We also explore analytics tools and databases, including Amazon Redshift, Amazon Athena, Amazon EMR, Amazon QuickSight, Amazon Kinesis, Amazon RDS, and Amazon Aurora; and we review the AWS machine learning portfolio and AI services such as Amazon SageMaker, AWS Deep Learning AMIs, Amazon Rekognition, and Amazon Lex. We discuss how all of these pieces fit together to build intelligent applications.
Flexibility is key when building and scaling a data lake. The analytics solutions you use in the future will almost certainly be different from the ones you use today, and choosing the right storage architecture gives you the agility to quickly experiment and migrate with the latest analytics solutions. In this session, we explore best practices for building a data lake in Amazon S3 and Amazon Glacier for leveraging an entire array of AWS, open source, and third-party analytics tools. We explore use cases for traditional analytics tools, including Amazon EMR and AWS Glue, as well as query-in-place tools like Amazon Athena, Amazon Redshift Spectrum, Amazon S3 Select, and Amazon Glacier Select. Complete Title: AWS re:Invent 2018: [REPEAT 1] Data Lake Implementation: Processing & Querying Data in Place (STG204-R1)
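For a sense of what the query-in-place pattern from this session looks like at the API level, here is a minimal S3 Select sketch with boto3; the bucket, key, and column names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Query a CSV object in place rather than downloading the whole file.
resp = s3.select_object_content(
    Bucket="example-datalake",
    Key="raw/trips.csv",
    ExpressionType="SQL",
    Expression="SELECT s.trip_id, s.fare FROM S3Object s WHERE CAST(s.fare AS FLOAT) > 50",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; Records events carry the result bytes.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"), end="")
```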
Learn best practices for Amazon S3 performance optimization, security, data protection, storage management, and much more. In this session, we look at common Amazon S3 use cases and ways to manage large volumes of data within Amazon S3. We discuss the latest performance improvements and how they impact previous guidance. We also talk about the Amazon S3 data resilience model and how architecture for the AWS Regions and Availability Zones impact architecture for fault tolerance.
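One common performance technique from guidance like this session's is parallel multipart upload. Here is a minimal boto3 sketch; the thresholds, bucket, and file names are illustrative assumptions, not recommendations from the session itself.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Tune multipart behavior: large objects are split into parts that
# upload concurrently, which usually improves throughput.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MiB
    multipart_chunksize=16 * 1024 * 1024,  # 16 MiB parts
    max_concurrency=10,                    # parts uploaded in parallel
)

s3.upload_file(
    "large-dataset.bin",
    "example-bucket",
    "ingest/large-dataset.bin",
    Config=config,
)
```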
Tape backups. Yes, they're still a thing. If you want to stop using tapes but need to store immutable backups for compliance or operational reasons, attend this session to learn how to make an easy switch to a cloud-based virtual tape library (VTL). AWS Storage Gateway provides a seamless drop-in replacement for tape backups with its Tape Gateway. It works with the major backup software products, so you simply change the target for your backups, and they go to a VTL that stores virtual tapes on Amazon S3 and Amazon Glacier. Come see how it works.
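To show what "drop-in" means at the API level, here is a minimal boto3 sketch of provisioning virtual tapes on a Tape Gateway; the gateway ARN, barcode prefix, and sizes are hypothetical placeholders.

```python
import boto3

sgw = boto3.client("storagegateway")

# Create a batch of virtual tapes that backup software will then see
# in the gateway's virtual tape library.
resp = sgw.create_tapes(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",
    TapeSizeInBytes=100 * 1024**3,   # 100 GiB virtual tapes
    ClientToken="backup-tapes-batch-001",  # idempotency token
    NumTapesToCreate=5,
    TapeBarcodePrefix="BAK",
)
print(resp["TapeARNs"])
```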
Have you ever had sleepless nights because you couldn't meet your Recovery Point and Time Objectives? What about recovering data in the event of a disaster? If you're a backup or storage architect, the answer is most likely "yes." Come to this session to learn how Cohesity can help you build an enterprise-grade solution for long-term retention, development and testing, and disaster recovery. Hear how Airbud Entertainment is using the Cohesity DataPlatform and AWS storage services such as Amazon S3 and Amazon Glacier to simplify their backup and long-term retention strategy and architecture. This session is brought to you by AWS partner Cohesity, Inc.
Here is a low-cost solution for professionals, for data we want to safeguard.
This is your chance to learn directly from top CTOs and Cloud Architects from some of the most innovative AWS customers. In this lightning round session, we'll have an action-packed hour, jumping straight to the architecture and technical detail for some of the most innovative data storage solutions of 2017. Hear how Insitu collects and analyzes data from drone flights in the field with AWS Snowball Edge. See how iRobot collects and analyzes IoT data from their robotic vacuums, mops, and pool cleaners. Learn how Viber maintains a petabyte-scale data lake on Amazon S3. Understand how Alert Logic scales their massive SaaS cloud security solution on Amazon S3 & Amazon Glacier.
Amazon S3 & Amazon Glacier provide the durable, scalable, secure and cost-effective storage you need for your data lake. But, as your data lake grows, the resources needed to analyze all the data can become expensive, or queries may take longer than desired. AWS provides query-in-place services like Amazon Athena and Amazon Redshift Spectrum to help you analyze this data easily and more cost-effectively than ever before. In this session, we will talk about how AWS query-in-place services and other tools work with Amazon S3 & Amazon Glacier and the optimizations you can use to analyze and process this data, cheaply and effectively.
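As a rough illustration of the query-in-place services this abstract names, here is a minimal Athena sketch with boto3; the database, table, and results location are hypothetical.

```python
import time

import boto3

athena = boto3.client("athena")

# Start a SQL query that runs directly against data stored in S3.
qid = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "example_logs_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```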
Learn how to build a data lake for analytics in Amazon S3 and Amazon Glacier. In this session, we discuss best practices for data curation, normalization, and analysis on Amazon object storage services. We examine ways to reduce or eliminate costly extract, transform, and load (ETL) processes using query-in-place technology, such as Amazon Athena and Amazon Redshift Spectrum. We also review custom analytics integration using Apache Spark, Apache Hive, Presto, and other technologies in Amazon EMR. You'll also get a chance to hear from Airbnb & Viber about their solutions for Big Data analytics using S3 as a data lake.
As your business grows, you gain more and more data. When managed appropriately, you can make this data a strategic asset to your organization. In this session, you'll learn how to use storage management tools for end-to-end management of your storage, helping you organize, analyze, optimize, and protect your data. You'll see how S3 Analytics - Storage Class Analysis helps you set more intelligent Lifecycle Policies to reduce TCO; Object Tagging gives you more management flexibility; Cross-Region Replication provides efficient data movement; Amazon Macie helps you ensure data security; and much more. Then, Paul Fisher, Technical Fellow at Alert Logic, will demonstrate how his organization uses S3 storage management features in their infrastructure.
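For a flavor of the lifecycle policies this session discusses, here is a minimal boto3 sketch of the kind of rule Storage Class Analysis findings might motivate; the bucket, prefix, and day cutoffs are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Tier objects down as they age, then expire them: the sort of policy
# Storage Class Analysis data can help you choose cutoffs for.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-app-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "cool-down-old-reports",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 730},  # delete after two years
            }
        ]
    },
)
```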
Today, data backup isn't enough. IT teams with a cloud data management strategy become a data broker for the business. Data helps the business improve company reputation, drive revenue, and satisfy customers. With a hybrid architecture approach to managing data on-premises and in the cloud, the business can be more agile and more responsive than today. Find out what your IT peers are doing with cloud data management (hint: it's more than backup). Learn how data backup, recovery, management, and e-discovery capabilities can help maximize your use of AWS. See what your peers are doing to best move, manage, and use data across on-premises storage and cloud services. In this session, you learn steps for seamless, risk-free migration to different AWS services (Amazon EC2, Amazon RDS, Amazon S3, Amazon S3 - Infrequent Access class, Amazon Glacier and AWS Snowball); tactics for streamlined, enterprise-class disaster recovery; ways to save money by retiring expensive alternatives like tape storage; single view e-discovery across hybrid locations with dynamic data indexing across on-premises and cloud storage; and how to achieve holistic data protection across storage locations. Session sponsored by Commvault
Surveys consistently rank backup as one of the first workloads to move to the cloud. But what does it really look like? This session gives backup managers and admins the straight story on streamlining AWS Cloud integration with existing on-premises data backup software, tape processes, virtual tape libraries, third-party snapshots, file servers, and archives. Learn how to choose the right integration with varying degrees of disruption, how to automatically migrate data for cost reductions and compliance, and how to recover individual files or many files fast. We discuss Amazon S3, Amazon Glacier, Amazon EFS, AWS Snowball, AWS Storage Gateway (both as VTL and File Gateway), and third-party partner integrations.
Learn how to build an archive in Amazon Glacier, which provides cost-effective retention and compliance options and exciting new features.
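For context on what building such an archive looks like against the native Glacier API, here is a minimal boto3 sketch; the vault name, file, and description are hypothetical placeholders.

```python
import boto3

glacier = boto3.client("glacier")

# Upload an archive into a native Glacier vault.
with open("finance-2017.tar.gz", "rb") as f:
    resp = glacier.upload_archive(
        accountId="-",  # "-" means the credentials' own account
        vaultName="example-compliance-vault",
        archiveDescription="Finance records 2017",
        body=f,
    )

# Keep the archive ID; it is required to retrieve or delete the archive.
print(resp["archiveId"])
```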
Learn from our engineering experts how we've designed Amazon S3 and Amazon Glacier to be durable, available, and massively scalable. Hear how Sprinklr architected their environment for the ultimate in high availability for their mission-critical applications. In this session, we'll discuss AWS Region and Availability Zone architecture, storage classes, built-in and on-demand data replication, and much more.
In this session, learn about all of the AWS storage solutions, and get guidance about which ones to use for different use cases. We discuss the core AWS storage services. These include Amazon Simple Storage Service (Amazon S3), Amazon Glacier, Amazon Elastic File System (Amazon EFS), and Amazon Elastic Block Store (Amazon EBS). We also discuss data transfer services such as AWS Snowball, Snowball Edge, and AWS Snowmobile, and hybrid storage solutions such as AWS Storage Gateway.
Turner Broadcasting is using the AWS Cloud to provide storage and content processing required to enable mission-critical video libraries. Turner is creating a copy of CNN's 37-year news video library in AWS to take advantage of the cost and architectural benefits of cloud storage. This project has unique requirements around retrieval times, and Turner partnered with AWS to drive specific capabilities such as the Amazon Glacier expedited and bulk retrieval options. These cloud-based archives can enable Turner to use other cloud-based value-add services, such as AI/ML/search, and to run media supply chains efficiently. Turner's global content exploitation strategies call for extensive versioning of content assets required for distribution to different platforms, products, and regions. Today, this involves complex workflows to derive multiple downstream versions. Adopting the SMPTE Interoperable Mastering Format (IMF) and cloud-based object storage, Turner will dramatically simplify these workflows by enabling cloud-based automation and elastic scalability. Hear about Turner's strategy and implementation around these media workloads, and the lessons learned.
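The expedited and bulk retrieval options mentioned here surface in the native Glacier API as a job tier. A minimal boto3 sketch follows; the vault name and archive ID are hypothetical placeholders.

```python
import boto3

glacier = boto3.client("glacier")

# Start an archive retrieval job; the Tier controls speed versus cost.
job = glacier.initiate_job(
    accountId="-",  # "-" means the credentials' own account
    vaultName="example-media-archive",
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "EXAMPLE_ARCHIVE_ID",  # hypothetical archive ID
        "Tier": "Expedited",  # or "Standard" / "Bulk" for lower cost
    },
)
print(job["jobId"])
```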
For many securities organizations, post-trade processing is expensive, cumbersome, and time-consuming. This is in part due to the massive volumes of data required for processing a trade and the limited agility of the technology on which many organizations rely today. In order to create efficiencies and move faster, many financial services organizations are working with AWS to implement post-trade solutions built with AWS storage services (Amazon S3 and Amazon Glacier) and big data capabilities (Amazon Athena, Amazon EMR, Amazon Redshift, and Amazon QuickSight). In this session, we walk through a trade capture and regulatory reporting solution that uses the aforementioned AWS services. We also provide guidance around obtaining data-driven insights (from pixels to pictures); bolstering encryption with AWS KMS; and maintaining transparency and control with Amazon CloudWatch and Amazon CloudTrail (which also helps meet SEC Rule 613 that requires the creation of comprehensive consolidated audit trails).
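For the KMS encryption angle mentioned above, here is a minimal boto3 sketch of writing an object with SSE-KMS; the bucket, key, payload, and KMS key alias are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Store a trade record encrypted server-side with a customer-managed
# KMS key instead of the default S3 encryption.
s3.put_object(
    Bucket="example-trade-capture",
    Key="trades/2017-11-27/batch-0001.json",
    Body=b'{"trade_id": "T-1001", "qty": 500}',
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/example-trades-key",  # hypothetical key alias
)
```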
4K video has resulted in a huge uptick in resource requirements, which is difficult to scale in a traditional environment. The cloud is perfect to handle problems of this scale. However, many unanswered questions remain around best practices and suitable architectures for dealing with massive, high-quality assets. We define problem cases and discuss practical architectural patterns to handle these challenges by using AWS services such as Amazon EC2 (graphical instances), Amazon EMR, Amazon S3, Amazon S3 Transfer Acceleration, Amazon Glacier, AWS Snowball, and magnetic Amazon EBS volumes. The best practices we discuss can also help architects and engineers dealing with non-video data. Also, Amazon Studios presents how, powered by AWS, they solved many of these problems and can create, manage, and distribute Emmy and Oscar Award-winning content.
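Since this session leans on S3 Transfer Acceleration for moving large assets, here is a minimal boto3 sketch of enabling it and uploading through the accelerate endpoint; the bucket and file names are hypothetical.

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket (a one-time setting).
s3.put_bucket_accelerate_configuration(
    Bucket="example-video-masters",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Then route transfers through the accelerate endpoint, which uses
# edge locations to speed up long-haul uploads.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("master-4k.mxf", "example-video-masters", "masters/master-4k.mxf")
```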
Do you have on-premises tape backups or expensive VTL hardware? Worried about moving cases of tapes off site? Not sure about the integrity of your data on tape? Learn how to use AWS services, including AWS Storage Gateway, to replace existing traditional approaches. Using Storage Gateway and standard backup software, you can back up to Amazon S3 and Amazon Glacier or tier snapshots to AWS. This enables both long-term data retention for compliance, and also recovery into Amazon EC2, locally, or to another site in case of a disaster. Southern Oregon University shares how they replaced tape backups with AWS, and the lessons learned in the process.
The advent of 4K video has resulted in a huge uptick in resource requirements, which is difficult to scale in a traditional environment. The cloud is a perfect environment for handling problems of this scale; however, there are many unanswered questions around best practices and suitable architectures for dealing with massive, high-quality assets. In this session, we will define problem cases and discuss practical architectural patterns for dealing with these challenges by using AWS services such as Amazon EC2 (graphical instances), Amazon EMR, Amazon S3, Amazon S3 Transfer Acceleration, Amazon Glacier, AWS Snowball, and the new magnetic EBS volumes. The best practices that we'll discuss will also be helpful to architects and engineers who are dealing with non-video data. Amazon Studios will present how, powered by AWS, they solved many of these problems and are able to create, manage, and distribute Emmy Award-winning content.
Without careful planning, data management can quickly turn complex, with a runaway cost structure. Enterprise customers are turning to the cloud to solve long-term data archive needs such as reliability, compliance, and agility while optimizing overall cost. Come to this session and hear how AWS customers are using Amazon Glacier to simplify their archiving strategy. Learn how customers architect their cloud archiving applications and integrate them to streamline their organization's data management and establish successful IT best practices.
Not just for archiving or compliance use cases, Amazon Glacier also accommodates customers simply looking to replace their on-premises long-term storage with a cost-efficient, durable cloud option from which they can easily and quickly access their data when they need to. This session will introduce newly launched features for Amazon Glacier, review the current service feature set, and share the global data center shutdown and storage strategy of Sony DADC New Media Solutions (NMS). NMS is Sony’s digital servicing division, providing global digital distribution, linear playout, and white-label OTT/Commerce solutions for clients such as BBC Worldwide, NBCUniversal, Sony PlayStation, and Funimation Entertainment.
FINRA partnered with AWS product teams to leverage Amazon EMR and Amazon S3 extensively to build an advanced analytics solution. In this session, you'll hear how FINRA implemented a data lake on S3 to provide a single source for their big data analytics platform. FINRA ingests 75 billion records each day of stock market transactions, with an AWS storage footprint of 20 petabytes across S3 and Amazon Glacier. To deal with this workload, FINRA has architected a platform that separates storage from compute to manage capacity for each independently, leading to improved performance and cost-effectiveness. You'll also learn how FINRA was able to leverage HBase on Amazon EMR to achieve significant benefits over running HBase on a fixed-capacity cluster. FINRA was able to implement a system that seamlessly scales in response to data growth and can scale quickly in response to user traffic. By working with multiple clusters, FINRA can now isolate ETL and user query workloads, and has achieved rapid, built-in disaster recovery capability by leveraging data storage on S3 to run from multiple AZs and across regions.
In this episode of Promopodcast we cover three topics. One of them, you could say, is for podcasters: how to make backup copies of our episodes, to make sure they outlast whatever our current audio hosting may last; one option I suggest is Backblaze, but there are others, such as Amazon Glacier. The other two topics are of interest to podcasters and listeners alike, and I'd like answers from both groups on these questions: when to publish a podcast (day and time), and how we listen to podcasts. Sponsored by Joan Boluda's online marketing courses: more than 400 videos already at your disposal, plus a new one every day, guiding you step by step through the discipline and all the tools you need to build your online business and get it known. Discover them at http://boluda.com/emilcar. Find the links for this episode at http://emilcar.fm, where we also look forward to your comments. Hosted on Mumbler.io
So Paul’s been sick, and that’s why we haven’t been recording. But it’s OK - we managed to put together a great show about whales, colds, and Cory Doctorow. We also cover lots of great movies and stuff. Oh, and we talk about arkOS some more, because it looks awesome. The Venture Bros. T-shirt club. Chris’s current T-shirt this episode, Commodore. Car Talk. Redshirts: our first Montreal Sauce book club book? Why not? Find it and read it. Then, you can nod your head or shout at your listening device while we review it. Fun? Use Calibre to upload books to your Kindle. Cory Doctorow’s books. Bye bye open source Google Apps. Hello evil Google closed source. Android dev interested in AROMA? Here’s more info. Official They Might Be Giants YouTube channel. I heart Crazy Eyes from Orange is the New Black. Saving Silverman, the Jason Biggs film Paul was thinking of. It was not a good movie, so Chris doesn’t know why he’s adding the link. Here’s the trailer for Heckler. Schneider v. Ebert. X-Men: Days of Future Past trailer. VHX.tv: distribute your short films and get paid in a world without DVDs. Yo-yo boy? Hilarious wire work with Jet Li. Chewbacca impression from Benedict Cumberbatch (AKA Benjamin Lumberjack) while Harrison Ford looks lost in old age. That said, he and Chewbacca don’t get along well anymore. Attack the Block trailer of awesome sauce. Joe Cornish may be the director for Star Trek 3. Star Wars Ep. VII gets new writers. Justin Hall’s interview with Jamie Wilkinson. arkOS, check out this awesome project. Host your own cloud. Frenzy, a Dropbox-powered social network. Interesting. Amazon Glacier as a cheap backup solution & some other options. One Thing Well, a simple site featuring simple solutions. Fargo, an outliner from Small Picture. A Tumblr application for the Mac, Tubl.me. Silly Twitter accounts: SeinfeldToday & TNG_S8, or Star Trek: The Next Generation season 8. Not to plug Chirp again, but hey, we do play it during the podcast. The Indiegogo Kickstarter Campaign. How Google coders work. What’s a Boss key? See you in a few weeks when we’ll have a special surprise for your ears! Support Montreal Sauce on Patreon
Marco's new-new app, Bugshot, and some of its design decisions. Cutting features from 1.0 and trying to keep Bugshot from taking too much time. Bugshot gets the John Siracusa treatment. Exploring NAS options and initial impressions of the Synology DS1813+. The economics of FreeNAS or Mac mini alternatives. iSCSI on Macs: the $89 globalSAN initiator and the $195 ATTO initiator, which comes recommended by storage expert Dave Nanian. NAS backup options, since Backblaze doesn't do network drives: CrashPlan (with widespread upload-speed issues) or Arq (with potentially expensive Amazon Glacier or S3 costs). Backing up Mac filesystem metadata, Backup Bouncer, and current scores of online backup apps. Data hoarding and falling into John's backup vortex. The Apple Keynotes podcast feed. Sponsored by: Mind Blitz: An action-puzzle twist on the classic memory matching game. Transporter: Private cloud storage. Use coupon code atp for 10% off.
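To put the "potentially expensive" caveat in perspective, here is a back-of-the-envelope storage-cost calculation for backing a NAS up to Glacier, using the roughly $0.01/GB-month figure cited elsewhere in these notes. Retrieval, request, and early-deletion fees, which are where Glacier actually bites, are deliberately left out.

```python
# Back-of-the-envelope monthly storage cost for backing up a NAS.
# Uses the ~$0.01/GB-month Glacier price cited in these notes;
# retrieval, request, and early-deletion fees are ignored.
GLACIER_PER_GB_MONTH = 0.01

def monthly_storage_cost(terabytes: float) -> float:
    gigabytes = terabytes * 1024
    return gigabytes * GLACIER_PER_GB_MONTH

for tb in (1, 4, 8):
    print(f"{tb} TB -> ${monthly_storage_cost(tb):.2f}/month")
# 1 TB -> $10.24/month, 4 TB -> $40.96/month, 8 TB -> $81.92/month
```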
Do you create megabytes, gigabytes, or even terabytes of audio and have no idea where to store it? Audio storage can be a problem, especially if you like to record in highest-quality .wav files. One solution I can recommend is backing up your audio to the cloud. I use Amazon S3 to back up audio and to transfer large audio files to clients. If you're not so concerned about having ready access to the audio and prefer to be more of an 'audio hoarder', Amazon Glacier will provide a better long-term solution. You'll be in good company: did you know that Dropbox uses Amazon Web Services to back up and sync files?
Amazon S3: http://aws.amazon.com/s3
Amazon Glacier: http://aws.amazon.com/glacier
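For the "transfer large audio files to clients" part, one common pattern is to upload the file to S3 and hand the client a time-limited presigned link instead of emailing the file. A minimal sketch, with a hypothetical bucket and key:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-audio-masters"  # hypothetical bucket

# Upload a large .wav master; upload_file handles multipart transfer.
s3.upload_file("interview_master.wav", BUCKET, "masters/interview_master.wav")

# Give the client a time-limited download link rather than attaching the file.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": "masters/interview_master.wav"},
    ExpiresIn=7 * 24 * 3600,  # link stays valid for one week
)
print(url)
```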
The Cult of Gadget: Marissa Mayer hands out iPhones and headhunts macho men. The OnLive recipe: how to burn your investors. Bill Gates: investing in the toilet. Kirill: investing in Bill Gates's toilet investments. Meat from Thiel: $100k for a hamburger. Harvard scientists encoded 643 kilobytes of data into a DNA molecule. Y Combinator: Bycombinator's pick. Amazon Glacier: long-term data storage at $0.01 per GB per month. Belarus news: more than 60% of Belarus residents use the internet every day. Nobody knew: Yota no longer exists in Russia. Belarus news: a co-owner of Wargaming.net bought an Australian company for $45 million. Investors are selling Facebook shares to buy Yandex shares.