Podcasts about Storj

  • 77 PODCASTS
  • 104 EPISODES
  • 40m AVG DURATION
  • INFREQUENT EPISODES
  • LATEST: Apr 7, 2025

POPULARITY (chart spanning 2017–2024)


Best podcasts about Storj

Latest podcast episodes about Storj

Wise Decision Maker Show
#305: Inclusivity Is Key for Distributed Work: Katherine Johnson, Chief Governance Officer at Storj

Wise Decision Maker Show

Play Episode Listen Later Apr 7, 2025 13:53


In this episode of the Wise Decision Maker Show, Dr. Gleb Tsipursky speaks to Katherine Johnson, Chief Governance Officer at Storj, about why inclusivity is key for distributed work. You can learn about Storj at https://www.storj.io/

Tech Path Podcast
Adobe Stealing Your Personal Files | Web3 Storage & Privacy with Sarah Buxton

Tech Path Podcast

Play Episode Listen Later Jun 7, 2024 22:22


A change to Adobe's terms & conditions for apps like Photoshop has outraged many professional users, who are concerned that the company is claiming the right to access their content, use it freely, and even sub-license it to others. The company is requiring users to agree to the new terms in order to continue using their Adobe apps, locking them out until they do so.

Guest: Sarah Buxton
Sarah's Twitter ➜ https://x.com/ForTheBux

~This Episode is Sponsored By Coinbase~
Get up to $200 for getting started on Coinbase ➜ https://bit.ly/CBARRON

00:00 Intro
00:25 Sponsor: Coinbase
01:18 Sarah Buxton
01:36 Adobe Terms of Service Update
02:35 Lawyer on Adobe Terms
03:36 Will Adobe Backtrack?
04:45 Duncan Jones Shreds Adobe
05:15 Cloud Storage & Personal Storage
06:23 Adobe Stealing Art for AI
07:52 Web3 Catalyst?
08:53 STORJ, Arweave, & Filecoin
10:20 Flow Storage
11:40 Hivemapper & Render Network
12:54 Only1 vs OnlyFans
16:29 Influencers Come To Web3
18:48 Unlonely Challengers Edition on BASE
21:26 Outro

#crypto #adobe #ethereum

Blockchain Dialogues
EP 59 - DECENTRALIZED STORAGE - INTERVIEW - JACOB WILLOUGHBY - CTO STORJ

Blockchain Dialogues

Play Episode Listen Later Nov 1, 2023 63:20


In this episode, we interview Jacob Willoughby, CTO at Storj, a blockchain-based cloud storage solution. We take a deep dive into the Storj protocol, discussing the real-world use cases of decentralized data storage and its role in Web3 going forward.

Guest – Jacob Willoughby, CTO, Storj
Website – https://www.storj.io/

Partnerships: Next Block Expo 2023 – Dec 4-5, 2023, Berlin, Germany. Use registration link https://nextblockexpo.com/ and promo code "bcdialogues" for a 10% discount.

Blockchain Won't Save the World
S3E24 Cloud Killer: Storj and the Decentralised Storage Movement w. Ben Golub

Blockchain Won't Save the World

Play Episode Listen Later Oct 13, 2023 38:58


For Web3 and Decentralised Apps (DApps) to reach the mainstream, we need a place to store data that isn't just the Blockchain. We need decentralised storage! And Storj has been leading the way since 2014. But there's a twist: decentralised storage isn't only about enabling Web3.

I'm joined by Ben Golub, CEO of Storj Labs and the former CEO of Docker, who knows a thing or two about scaling technology. And we're going to have a 'full stack' conversation about how, when and if DApps will ever reach the mainstream.

In this show we'll be discussing:
- The history of DApps and how they differ from 'traditional' apps
- The importance of Decentralised Storage to scale DApps
- How Storj is differentiated from other solutions (centralised and decentralised)
- Why you shouldn't store everything on a Blockchain
- What more is needed to see wider adoption of Web3
- Advice to anyone looking to build DApps that scale

Late Confirmation by CoinDesk
MONEY REIMAGINED: Decentralized Storage, AI, and Blockchain Convergence | Shawn Wilkinson's Insights on Industry Transformation

Late Confirmation by CoinDesk

Play Episode Listen Later Oct 11, 2023 31:07


An optimistic perspective on the fusion of AI and blockchain technologies, with augmented reality as a potential emerging frontier.

This episode is sponsored by PayPal.

In this episode of "Money Reimagined," Sheila Warren delves into the exciting convergence of blockchain, AI, and digital assets, particularly emphasizing decentralized storage and processing, with Shawn Wilkinson, Founder & CSO of Storj. Wilkinson highlights his journey from founding a distributed cloud storage company, Storj, to developing a distributed GPU platform for AI. The conversation touches on the challenges of crypto adoption and the critical role of token economics in incentivizing engagement in distributed computing and storage projects. Furthermore, Shawn discusses the potential cost savings and security benefits of decentralized systems in AI and cloud computing. Regulatory challenges in the US crypto market are explored, with an emphasis on the importance of building practical, real-world solutions.

LINKS: storj.io

PYUSD, a stablecoin made for payments. 1 USD = 1 PYUSD. Introducing PayPal's new digital currency, PayPal USD (PYUSD), a stablecoin backed by U.S. dollar deposits, U.S. Treasuries and similar cash equivalents. Buy, sell, hold, and transfer it in our app or site and explore Web3 with a payments brand that has been trusted for over 20 years. Get started now at paypal.com/pyusd

Money Reimagined has been produced by senior producer Michele Musso and edited by associate producer Ryan Huntington; our executive producer is Jared Schwartz. Our theme song is "The News Tonight" by Shimmer. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

CoinDesk's Money Reimagined
Decentralized Storage, AI, and Blockchain Convergence | Shawn Wilkinson's Insights on Industry Transformation

CoinDesk's Money Reimagined

Play Episode Listen Later Oct 11, 2023 31:07



Web3 Talks
Revolutionizing Cloud Storage: Insights from Storj Labs' COO, John Gleeson

Web3 Talks

Play Episode Listen Later Jul 5, 2023 56:05


Environment Variables
We Answer Your Questions!

Environment Variables

Play Episode Listen Later Jul 5, 2023 56:04


On this episode of Environment Variables, host Chris Adams is joined by Asim Hussain as they dive into a mailbag session, bringing you the most burning unanswered questions from the recent live virtual event that the Green Software Foundation hosted on World Environment Day, June 5, 2023. Asim and Chris tackle your questions on the environmental impact of AI computation, the challenges of location shifting, the importance of low-carbon modes, and how to shift the tech mindset away from "more is more" (the Jevons Paradox). Chock-full of stories about projects implementing green software practices, plus valuable resources, listen now to have your thirst for curiosity quenched!

Screaming in the Cloud
Making Open-Source Multi-Cloud Truly Free with AB Periasamy

Screaming in the Cloud

Play Episode Listen Later Mar 28, 2023 40:04


AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create truly free open-source software, and how his partnership with Amazon has been beneficial.

About AB

AB Periasamy is the co-founder and CEO of MinIO, an open-source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (ARM) and Fastor (SMART).

AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software-defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to the scaling of commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's "Thunder" code, which, at the time, was the second fastest in the world.
AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.

AB is one of the leading proponents and thinkers on the subject of open-source software, articulating the difference between the philosophy and the business model. An active contributor to a number of open-source projects, he is a board member of India's Free Software Foundation.

Links Referenced:
- MinIO: https://min.io/
- Twitter: https://twitter.com/abperiasamy
- LinkedIn: https://www.linkedin.com/in/abperiasamy/
- Email: mailto:ab@min.io

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those. To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back.

AB: Yes, it's wonderful to be here again, Corey.

Corey: So, one thing that I want to start with is defining terms.
Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use.

And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find it oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not.

One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do?

AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath, the underlying technology is called object-store. MinIO is a software and it's also open-source, and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS; we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world.

Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet.
When you say open-source, it is actually open-source; you're AGPL, not source available, or, “We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money,” as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition.

The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that?

AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right? Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard.

And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application.
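Corey's "basic CRUD" baseline is easy to picture with a toy in-memory stand-in for an object store. This is an illustration of the operation surface he describes, not MinIO's or S3's implementation:

```python
# A toy stand-in for any S3-style store: store, fetch, delete, with
# whole-object overwrite (object stores replace atomically, not update
# in place). Illustration only.
class TinyObjectStore:
    def __init__(self):
        self._buckets = {}

    def put_object(self, bucket: str, key: str, data: bytes) -> None:
        # Overwrite replaces the whole value: readers see old bytes or
        # new bytes, never a mix.
        self._buckets.setdefault(bucket, {})[key] = data

    def get_object(self, bucket: str, key: str) -> bytes:
        return self._buckets[bucket][key]

    def delete_object(self, bucket: str, key: str) -> None:
        # Deleting a missing key is a no-op, mirroring S3's idempotent DELETE.
        self._buckets.get(bucket, {}).pop(key, None)

store = TinyObjectStore()
store.put_object("photos", "a.png", b"\x89PNG")
assert store.get_object("photos", "a.png") == b"\x89PNG"
store.delete_object("photos", "a.png")
store.delete_object("photos", "a.png")  # idempotent
```

The hard part AB goes on to describe is everything beyond this surface: SDK quirks, casing bugs, and edge-case semantics.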
And if one thing breaks—today, if I commit a code and it introduced a regression, I will immediately hear from a whole bunch of community what I broke.

There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, they need works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model.

And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the region and the S3 API broke, they will actually go fix it. They will never argue, “Why are you using MinIO SDK?” Their engineers, they do everything by reason. That's the reason why they gained credibility.

Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing.

AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right?
Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs that will send us a certain type of casing and they expect the case to be—the response to be the same way. And that's not the HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of the community to go fix that application. And Amazon's problems are our problems too. We have to carry that baggage.

But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control lists, then finally came IAM. Then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove it.

So, we have been pedantic about, like, how, like, certain things that if it's good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that the S3 API is no longer simple, but at least it's not like POSIX, right? POSIX is a rich set of APIs, but doesn't do useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system.

So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, rather than trying to create an API Bible.

Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas.
I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while, or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity.

There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance, control, make-sure-this-doesn't-run-away-from-me perspective, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website.

AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—it's good for their business; they want all the customers' data, like, unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to because the Amazon S3 API doesn't talk about quota, but the enterprise community wanted this so badly.

And eventually we [unintelligible 00:09:54] it and we gave. But there is one issue to be aware of, right?
The problem with quota is that you as an object storage administrator, you set a quota, let's say this bucket, this application, I don't see more than 20TB; I'm going to set a 100TB quota. And then you forget it. And then you think in six months, they will reach 20TB. The reality is, in six months they reach 100TB.

And then when nobody expected it—everybody has forgotten that there was a quota in a certain place—suddenly applications start failing. And when it fails, it doesn't—even though the S3 API responds back saying insufficient space, the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually the object storage that ran out of space, the time is lost and it's a downtime. So, as long as they have proper observability—because, I mean, I would also ask for observability, that it can alert you that you are going to run out of space soon. If you have those systems in place, then go for quota. If not, I would agree with the S3 API standard that it is not about cost. It's about operational, unexpected accidents.

Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, you wind up with a runaway log or whatnot, you have a chance to catch it and whatnot, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well it got full so oops-a-doozy.

On some level, I wonder if there's a story around soft quotas that just scream at you,
But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard limit aspect.

AB: Actually, that is the right way to do it. That's what I would recommend customers do. Even though there is a hard quota, I will tell them, don't use it, but use soft quota. And the soft quota—instead of even soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually the month-end bill shows up.

On MinIO, when it's deployed on these large data centers, it's unrestricted access; quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually [unintelligible 00:12:27]—have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team.

And you measure, instead of setting a hard limit, you actually charge them: based on the usage of your bucket, you're going to pay for it. And this is an observability problem. And you can call it soft quotas, but it has to trigger an alert in observability. It's an observability problem. But it actually is interesting to hear it as soft quotas, which makes a lot of sense.

Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who, “Oh, yeah, you're going to run into a quota storage problem.” Yeah, we all find that out because the first time we smack into something and live to regret it. Now, we can talk a lot about the nuances and implementation and low-level detail of this stuff, but let's zoom out of it. What are you folks up to these days?
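The soft-quota approach AB describes, together with Corey's 70%/90% disk thresholds, can be sketched in a few lines of Python. The function names and the per-GiB rate are illustrative, not anything MinIO or a cloud provider ships:

```python
GIB = 1024 ** 3

def disk_alert(used_bytes: int, capacity_bytes: int,
               warn: float = 0.70, page: float = 0.90) -> str:
    """Corey's baseline: start pinging at 70% full, wake someone at 90%."""
    fraction = used_bytes / capacity_bytes
    if fraction >= page:
        return "page"
    if fraction >= warn:
        return "warn"
    return "ok"

def chargeback(bucket_bytes: dict, rate_per_gib: float = 0.02) -> dict:
    """AB's alternative to hard quotas: meter each team's bucket growth
    and bill for it, so accountability replaces failed writes.
    The rate is made up for illustration."""
    return {owner: round(size / GIB * rate_per_gib, 2)
            for owner, size in bucket_bytes.items()}

assert disk_alert(50 * GIB, 100 * GIB) == "ok"
assert disk_alert(75 * GIB, 100 * GIB) == "warn"
assert disk_alert(95 * GIB, 100 * GIB) == "page"
assert chargeback({"analytics": 500 * GIB, "logs": 100 * GIB}) == \
    {"analytics": 10.0, "logs": 2.0}
```

The point of the design: the write path never fails; the monitoring path generates the pressure to clean up or pay.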
What is the bigger picture that you're seeing of object storage and the ecosystem?

AB: Yeah. So, when we started, right, our idea was that the world is going to produce an incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that today and it will be true. Every year, you say ten years from now and it will still be valid, right?

That was the reason for us to play this game. And we saw that every one of these cloud players were incompatible with each other. It's like early Unix days, right? Like a bunch of operating systems, everything was incompatible and applications were beginning to adopt this new standard, but they were stuck. And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud players' game was bring all the world's data into the cloud.

And that actually requires an enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right and that's the right way to build large-scale infrastructure. If we stick to the Amazon S3 API instead of introducing another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in 2014 November—it's really 2015 when we started—it was laughable. People thought that there won't be a need for MinIO because the whole world would basically go to AWS S3 and they would be the world's data store. Amazon is capable of doing that; the race is not over, right?

Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, usurious data egress charges.
The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload that people should not consider the big three cloud providers as the place where that data should live because you're never getting it back.

AB: Spot on, right? Even if network is free, right—Amazon makes, like, okay, zero egress-ingress charge—the data we're talking about, like, most MinIO deployments, they start at petabytes. Like, one to ten petabytes; few start at 100 terabytes. Even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it?

Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was there: we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like an AWS S3-compatible object store. We took a very different path. But now, when I say the same story that we started with on day one, it is no longer laughable, right?

People believe that yes, MinIO is there because our market footprint is now larger than Amazon S3. And as it goes to production, customers are now realizing it's basically growing inside a shadow IT, and eventually businesses realize the bulk of their business-critical data is sitting on MinIO, and that's how it's surfacing up. So now, what we are seeing, this year particularly, all of these customers are hugely concerned about cost optimization. And as part of the journey, there are also multi-cloud and hybrid-cloud initiatives. They want to make sure that their applications can run on any cloud, or the same software can run on their colos like Equinix, or, like, a bunch of, like, Digital Realty, anywhere.

And MinIO's software, this is what we set out to do.
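AB's ten-petabyte claim survives a back-of-the-envelope check. The sketch below assumes a perfectly saturated link, which is generous; real transfers also pay for retries, checksumming, and shared bandwidth:

```python
def transfer_days(petabytes: float, gbit_per_s: float = 10.0) -> float:
    """Days to move `petabytes` (decimal PB) over a sustained link."""
    bits = petabytes * 1e15 * 8              # bytes -> bits
    seconds = bits / (gbit_per_s * 1e9)      # Gbit/s -> bit/s
    return seconds / 86_400

# Ten petabytes over a saturated 10 Gbit/s link is roughly three months:
assert 90 < transfer_days(10) < 95
```

Which is why "run the software where the data already is" and "trucks full of appliances" are the two realistic options the conversation keeps returning to.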
MinIO can run anywhere inside the cloud, all the way to the edge, even on a Raspberry Pi. Whatever we started with has now become reality; the timing is perfect for us.

Corey: One of the challenges I've always had with the idea of building an application with the idea to run it anywhere is you can make explicit technology choices around that, and for example, object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. And for example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAProxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between, we're going to build this in a way that we can run anywhere, and the reality that I keep running into, which is we tried to do that but we implicitly, without realizing it, built in a lot of assumptions that everything would look just like this environment that we started off in?

AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally an HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser, a JavaScript engine, anywhere across the world, they don't really care whether the bucket is served from the EU or us-east or us-west. It doesn't matter at all, so it actually allows you, by API, to build a globally unified data infrastructure, some buckets here, some buckets there.

That's actually not the problem. The problem comes when you have multiple clouds.
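The "every request is self-contained" property AB describes is visible in path-style S3 addressing, where one URL names the deployment, the bucket, and the object. A simplified sketch with hypothetical endpoints (real requests also carry auth and signature headers):

```python
def s3_request_url(endpoint: str, bucket: str, key: str) -> str:
    # Path-style addressing: the endpoint, bucket, and object key all
    # appear in a single URL, so any client anywhere can issue the call.
    return f"{endpoint.rstrip('/')}/{bucket}/{key}"

# The same application code targets AWS or a private deployment just by
# changing the endpoint string (both endpoints are made up here):
aws = s3_request_url("https://s3.eu-west-1.amazonaws.com", "media", "img/a.png")
onprem = s3_request_url("https://minio.example.internal:9000", "media", "img/a.png")
assert aws.endswith("/media/img/a.png")
assert onprem.endswith("/media/img/a.png")
```

As AB notes next, the request format is the easy part; identity, policy, and key management across clouds are where portability actually breaks.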
Different teams, like, part of M&A, the part—like they—even if you don't do M&A, different teams, no two data engineers would agree on the same software stack. Then they will all end up with different cloud players and some are still running on old legacy environments.

When you combine them, the problem is, like, let's take just the cloud, right? How do I even apply a policy, that access control policy, how do I establish unified identity? Because I want to know this application is the only one who is allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like if that employee, that project, or that admin, if he or she leaves the job, how do I make sure that that's all protected?

You want unified identity, you want unified access control policies. Where are the encryption keys stored? And then the load balancer itself—the load balancer is not the problem. But then unless you adopt the S3 API as your standard, the definition of what a bucket is differs from Microsoft to Google to Amazon.

Corey: Yeah, the idea of the PUTs and retrieval of actual data is one thing, but then you have: how do you manage the control plane layer of the object store and how do you rationalize that? What are the naming conventions? How do you address it? I even ran into something similar somewhat recently when I was doing an experiment with one of the Amazon Snowball edge devices to move some data into S3 on a lark. And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying.

I would give a lot just to be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not.
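Corey's wish, binding a custom S3 endpoint to a profile once instead of repeating it on every invocation, implies a simple lookup order. The profile registry below is hypothetical, and note that recent AWS tooling has since gained comparable per-profile endpoint settings:

```python
import os

# Hypothetical per-profile endpoint registry for illustration:
PROFILES = {
    "snowball": {"endpoint_url": "https://192.168.1.10:8443"},
    "prod": {},  # no override: fall through to env var or default
}

def resolve_endpoint(profile: str,
                     profiles: dict = PROFILES,
                     default: str = "https://s3.amazonaws.com") -> str:
    """Lookup order: profile setting > AWS_ENDPOINT_URL env var > default."""
    override = profiles.get(profile, {}).get("endpoint_url")
    if override:
        return override
    return os.environ.get("AWS_ENDPOINT_URL", default)

os.environ.pop("AWS_ENDPOINT_URL", None)  # keep the demo deterministic
assert resolve_endpoint("snowball") == "https://192.168.1.10:8443"
assert resolve_endpoint("prod") == "https://s3.amazonaws.com"
```

The design choice is that the most specific configuration wins, so a Snowball-style local endpoint never leaks into other profiles.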
Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in?AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data, but what I find customers doing is they actually use the tools that we built for MinIO because the Snowball appliance also looks like an S3 API-compatible object store. And in fact, like, I've been told that, like, when you want to ship multiple Snowball appliances, they actually put MinIO on them to make it look like one unit, because MinIO can erasure-code objects across multiple Snowball appliances. And the MC tool, unlike the AWS CLI, which is really meant for developers, like low-level calls, MC gives you unique [scoring 00:21:08] tools, like ls, cp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it.Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball Edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data: a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. That's right, there is no good way to wind up doing that natively.AB: Yeah. In fact, Western Digital and a few other players, too; Western Digital created a Snowball-like appliance and they put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. But Snowball-like functionality is important and more and more customers need it.Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty!
They're fed up with relying on two or three different “monitoring tools” that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud.Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented it to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again. So, it started to increasingly feel, in a lot of ways, like cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things.I'm wondering when the generation of prosumer networking equipment, for example, is going to say, “Oh, and send these logs over to what object store?” Because right now, it's still write a file and SFTP it somewhere else, at least for the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up, instead of “where do you save this file to,” having the cloud abstraction, which hopefully will never mean dealing with an S3-style endpoint, but which can underpin an awful lot of things? It feels like it's coming back and that cloud is the de facto way of thinking about things. Is that what you're seeing?
Does that align with your belief on this?AB: I actually fundamentally believe, in the long run, right, applications will go SaaS, right? Like, if you remember the days that you used to install QuickBooks and ACT and stuff, like, on your data center, you used to run your own Exchange servers; like, those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for these SaaS, whether they are cloud or their own colo, I think that in the long run, it will be multi-cloud and colo all combined and all of them will look alike.But what I find from the customer's journey, the Old World and the New World is incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their application. But this time, it's a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud, bad idea, right? It's going to cost you more and I would rather not do it.Even though cloud players are trying to make, like, the file and block, like, file system services [unintelligible 00:24:01] and stuff, they make it available ten times more expensive than object, but it's just to [integrate 00:24:07] some legacy applications; it's still a bad idea to just move legacy applications there. But what I'm finding is that, on the cost side, if you still run your infrastructure with an enterprise IT mindset, you're out of luck. It's going to be super expensive and you're going to be left out of modern infrastructure; because of the scale, it has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen.And that's why cloud, in the long run, everyone will look like AWS, and we always said that and it's now becoming true. Like, Kubernetes and MinIO are basically leveling the ground everywhere. It's giving ECS and S3-like infrastructure inside AWS or outside AWS, everywhere.
But what I find the challenging part is the cultural mindset. If they still have the old cultural mindset and they want to adopt cloud, it's not going to work.You have to change the DNA, the culture, the mindset, everything. The best way to do it is go cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask the economics question, the unit economics. Then you will find the answers yourself.Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective. And, well, “we should go and refactor this because, I don't know, a couple of folks on a podcast said we should” isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people have been talking about getting off of the mainframe since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way that they do business, they have to rethink a lot of the architectural things that have sprung up around it.I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists.AB: What I am finding is that if you are running it the enterprise IT style, you are the one telling the application developers, here you go, you have this many VMs and then you have, like, a VMware license and, like, JBoss, like WebLogic, and like a SQL Server license, now you go build your application, you won't be able to do it.
Because application developers talk about Kafka and Redis and, like, Kubernetes; they don't speak the same language. And that's when these developers go to the cloud, finish their application, and take it live from zero lines of code before IT can procure infrastructure and provision it to these guys. The change that has to happen is how you can give the developers what they want. Now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is if you're running enterprise IT infrastructure, traditional infrastructure, they are ashamed of talking about it.But then you go to the cloud and then at scale, some parts of it, you want to move—now you really know why you want to move. For economic reasons, like, particularly the data-intensive workloads become very expensive. And at that part, they go to a colo, but leave the applications on the cloud. So, the multi-cloud model, I think, is inevitable. The expensive pieces are where you can move: if you are looking at yourself as a hyperscaler and if your data is growing, if your business focus is a data-centric business, parts of the data and data analytics, ML workloads will actually go out, if you're looking at unit economics. If all you are focused on is productivity, stick to the cloud and you're still better off.Corey: I think that's a divide that gets lost sometimes. When people say, “Oh, we're going to move to the cloud to save money,” it's, “No you're not.” At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go, therefore, is for a capability story, when it's right for you.That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be “it's cloud or it's trash.”
No, I'm a big fan of doing things that are sensible, and cloud is not the right answer for every workload under the sun. Conversely, when someone says, “Oh, I'm building a new e-commerce store,” or whatnot, “and I've decided cloud is not for me,” it's, “Ehh, you sure about that?”That sounds like you are smack-dab in the middle of the cloud use case. But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be a panacea the way that their sales teams say they will.AB: Yeah. And I find, like, organizations that have SREs, DevOps, and software engineers running the infrastructure, they actually are ready to go multi-cloud or go to colo, because they exactly know—they have the containers and Kubernetes microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture, go to cloud, rewrite your application.Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there are basically no companies out there worth mentioning that are, “Yep, we've decided the cloud is terrible, we're taking everything out and we are going to data centers. The end.” In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out, other times it's during proofs of concept, and other times, as things have hit a certain point of scale, it's where an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud, and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real.AB: Absolutely.
Actually, what we are finding with the application side, like, parts of their overall ecosystem, right, within the company, they run on the cloud, but the data side, some of the examples, like, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes and their plan is to go to exascale. And they are actually doing repatriation because for them, their customers, it's consumer-facing and it's extremely price sensitive, and when you're consumer-facing, every dollar you spend counts. And if you don't do it at scale, it matters a lot, right? It will kill the business.Particularly in the last two years, the cost part became an important element in their infrastructure; they knew exactly what they want. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network, and put it on colo or even lease these boxes; they know what their demand is. Even at ten petabytes, the economics starts impacting. If you're processing it, on the data side, we have several customers now moving to colo from cloud, and this is the range we are talking about.They don't talk about it publicly because sometimes, like, you don't want to be anti-cloud, but I think for them, they're also not anti-cloud. They don't want to leave the cloud. Completely leaving the cloud, that's a different story. That's not the case. Applications stay there. Data lakes, data infrastructure, object store particularly, go to a colo.Now, your applications from all the clouds can access this centralized—centralized, meaning that one object store you run on colo, and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud. Some of them, surprisingly, have a global customer base. And not all of them are cloud.
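A back-of-the-envelope version of the unit-economics question AB keeps returning to. Every rate below is an illustrative placeholder, not a quoted price, and a real comparison must also account for egress, staffing, redundancy, and hardware refresh:

```python
def monthly_storage_cost(petabytes: float, usd_per_gb_month: float) -> float:
    # 1 PB = 1,000,000 GB in the decimal units cloud pricing uses.
    return petabytes * 1_000_000 * usd_per_gb_month

CLOUD_RATE = 0.021  # illustrative $/GB-month for hot cloud object storage
COLO_RATE = 0.004   # illustrative all-in $/GB-month for self-run colo storage

for pb in (10, 100, 500):
    cloud = monthly_storage_cost(pb, CLOUD_RATE)
    colo = monthly_storage_cost(pb, COLO_RATE)
    print(f"{pb:>4} PB: cloud ~${cloud:,.0f}/mo vs colo ~${colo:,.0f}/mo")
```

Under these assumed rates, the gap at ten petabytes is already millions of dollars a year, which is the scale at which AB says "the economics starts impacting."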
Sometimes, like, with some applications themselves, if you ask what type of edge devices they are running, edge data centers, they said it's a mix of everything. What really matters is not the infrastructure. Infrastructure, in the end, is CPU, network, and drive. It's a commodity. It's really the software stack: you want to make sure that it's containerized and easy to deploy and roll out updates; you have to learn the Facebook-Google style of running a SaaS business. That change is coming.Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reason more broadly permeating.AB: Right. Like, the competition is always great for customers. They get to benefit from it. So, the decentralization is a path to bringing—like, commoditizing—the infrastructure. I think the bigger picture for me, what I'm particularly happy about is, for a long time we carried industry baggage in the infrastructure space.If no one wants to change, no one wants to rewrite applications. As part of the equation, we carried the, like, POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a Service, NFS as a Service. It's too much of a baggage. All of that is getting thrown out. Like, the cloud players helped the customers start with a clean slate. I think, to me, that's the biggest advantage. And now that we have a clean slate, we can go on a whole new evolution of the stack, keeping it simpler, and everyone can benefit from this change.Corey: Before we wind up calling this an episode, I do have one last question for you.
As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess, make peace with the fact that the majority of your user base is not paying you? And I guess, how do you get people to decide, “You know what? We like the cut of his jib. Let's give him some money.”AB: Mm-hm. Yeah, if I looked at it that way, right, I have both the [unintelligible 00:33:38], right, on the open-source side as well as the business. But I don't see them to be conflicting. If I run as a charity, right, like, I take donations, “if you love the product, here is the donation box,” then that doesn't work at all, right?I shouldn't take investor money and I shouldn't have a team, because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor, why would—if I'm a customer, same software, equal in functionality, if it's proprietary, I would actually prefer open-source and pay even more.But why, really, are customers paying me now, and what's our view on open-source? I'm actually the free software guy. Free software and open-source are actually not exactly equal, right? We are the purest of the open-source community and we have strong views on what open-source means, right. That's why we call it free software. And free here means freedom, right? Free does not mean gratis, free of cost. It's actually about freedom, and I deeply care about it.For me it's a philosophy and it's a way of life. That's why I don't believe in open core and other models like that; holding back, giving crippleware, is not open-source, right? I give you some freedom but not all, right; it breaks the spirit.
So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then add commercial support on top.We built the product, we believed in open-source, we still believe, and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications on it—like, the AGPL license on the derivative works—they have to be compatible with AGPL because we are the creator. If you cannot open-source your application derivative works, you can buy a commercial license from us. We are the creator; we can give you a dual license. That's how the business model works.That way, the open-source community completely benefits. And it's about the software freedom. There are customers for whom open-source is a good thing and they want to pay because it's open-source. There are some customers that want to pay because they can't open-source their application and derivative works, so they pay. It's a happy medium; that way I actually find open-source to be incredibly beneficial.Open-source gave us that trust, like, more than adoption rate. It's not just free to download and use. More than that, it's the customers that matter, the community that matters, because they can see the code and they can see everything we did; it's not because I said so. Marketing and sales, you believe them, whatever they say. You download the product, experience it, and fall in love with it, and then when it becomes an important part of your business, that's when they engage with us, because they talk about license compatibility and data loss or a data breach; all that becomes important. Open-source isn't—I don't see that to be conflicting for business. It actually is incredibly helpful. And customers see that value in the end.Corey: I really want to thank you for being so generous with your time.
If people want to learn more, where should they go?AB: I was on Twitter and now I think I'm spending more time on, maybe, LinkedIn. I think if they—they can send me a request and then we can chat. And I'm always, like, spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io and I'm always interested in talking to our user base.Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.AB: It's wonderful to be here.Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Nifty Business: Daily NFT Show
Filecoin: The Web3 Dropbox, Powering Most NFTs

Nifty Business: Daily NFT Show

Play Episode Listen Later Feb 11, 2023 7:16


Decentralization is a hot topic in the NFT community. Many enthusiasts want a web3 alternative for everything. But is blockchain technology always the best solution? While it can offer security and decentralization, it can also be slow, expensive, and inefficient for many use cases. In this episode, we'll explore how Filecoin, the Web3 version of cloud storage, has revolutionized the way we store, manage, and access our digital files.

Mentioned:
Filecoin / IPFS YouTube: https://youtu.be/_lNL1uU_I58
Filecoin: https://filecoin.io/
Arweave: https://arweave.org/
Sia: https://sia.tech/
Storj: https://www.storj.io

12 Ethereum Essentials Newsletter: https://NiftyBusinessweek.com/
Twitter @TropicVibes: https://twitter.com/TropicVibes
Email: mail[at]niftybusiness.co

NFT 101 Episodes:
#36 - Web 3.0 Explained
#225 - NFTs Explained
#30 - 10 Reasons to Buy NFTs
#7 - NFT Words & Verbiage
#47 - NFT Words & Verbiage Part II
#97 - NFT Words & Verbiage Part III

Need a Ledger Hardware (Cold) Wallet?
*Using this referral link supports this show at no extra cost to you:
Ledger Affiliate Link

Recommended Reading for Web3 Enthusiasts:
The Bitcoin Standard: https://amzn.to/3K31jvL
The 10 Best-Ever Anxiety Management Techniques: https://amzn.to/3YphPL2
*Amazon affiliate links

Rethinking with Dror Poleg
Crypto and the Conservation of Centralization

Rethinking with Dror Poleg

Play Episode Listen Later Nov 21, 2022 11:17


You can't decentralize the web. At best, you can kill old winners and pick new ones. Here's how. Originally published in November 2021 here. Subscribe to Dror's newsletter on DrorPoleg.com. The web is broken. A handful of companies dominates it: Google (and Baidu) tracks all our queries, Facebook (and Tencent) monitors our social interactions, Twitter (and Weibo) decides what we're allowed to share, Amazon (and Alibaba) dominates retail, etc. Above these corporate giants, governments from Beijing to D.C. encroach on the free flow of information in the name of "social harmony" or "public health." Crypto and blockchain-based applications aim to steer the web toward its original vision: an open network, based on public-domain protocols, controlled by no one. They promise to enable "decentralized" alternatives to the tech and government giants we all know and love. This effort can be divided into two main fronts: Decentralized Utility and Decentralized Ownership. Decentralized Utility aims to provide online services without relying on a centralized system. For example, instead of storing your files on server farms owned by Amazon (AWS) or Microsoft (Azure), you can store them on Arweave, Storj, or Filecoin. These networks keep your files encrypted on a network of computers governed by a protocol that cannot be stopped or altered by any individual entity. Decentralized Ownership aims to share the ownership and governance of digital platforms with their users and stakeholders. Mirror, for example, enables writers to publish their content online, monetize it, and own a piece of the publishing platform itself and vote on how it is operated. Helium and Livepeer operate networks of wireless hotspots and video streaming infrastructure, respectively. These networks are maintained and secured by users who own specific tokens that compensate them for their services and enable them to participate in governance. These crypto projects are still small and experimental.
They point towards an alternative way of building, maintaining, and marketing the type of services that giant, centralized corporations currently provide. But decentralizing one class of internet companies does not guarantee that a new class of centralized giants will not emerge in their stead. In fact, decentralizing power from one pair of hands is almost guaranteed to concentrate that power in another pair of hands. A powerful theory and the history of the internet itself explain why. Let's start with the theory.

LINUX Unplugged
476: Canary in the Photo Mine

LINUX Unplugged

Play Episode Listen Later Sep 19, 2022 87:04


We've gone deep to find our perfect Google Photos replacement. This week we'll share our setup that we think works great, is easy to use, and is fully backed up.

TechLeaderBoard
StorJ Joins 'A Conversation with Uphold'

TechLeaderBoard

Play Episode Listen Later Sep 18, 2022 36:52


UBC News World
Storj vs Filecoin Latest Review from Verge Hunter

UBC News World

Play Episode Listen Later Sep 3, 2022 4:18


Read Verge Hunter's latest review about decentralized cloud storage companies

Talent Acquisition Trends & Strategy
Ep 16: Katherine Johnson, Chief People & Legal Officer, Head of Compliance - Storj Labs

Talent Acquisition Trends & Strategy

Play Episode Play 26 sec Highlight Listen Later Jun 14, 2022 42:13


Katherine Johnson, Chief People & Legal Officer, Head of Compliance at Storj, joins host James Mackey to discuss what the "Web 3" world of decentralization must get right (and where Web 2 failed), hiring policies and parameters to ensure a diverse workforce, applying the Rooney Rule in Corporate America, the responsibility of SMBs to implement DE&I into the company's framework, and a lot more!

Episode Chapters
00:20 Who is Katherine Johnson and what is Storj?
03:45 How owning a legal practice and Wall Street experience in anti-money laundering, compliance, and regulation helped her transition into crypto & blockchain
04:45 Katherine's upcoming book about what Web 2 got wrong and Web 3 must get right
08:05 The responsibility to implement DE&I into your company's framework
12:15 How to approach job descriptions
15:05 Being inclusive is about making everyone feel welcome and being mindful
21:45 Hiring policies and parameters to ensure a diverse workforce; the Rooney Rule
28:45 Impacts of DE&I efforts
34:45 Storj Institute helps under-represented groups get a foothold in tech
35:45 Lessons learned from working at an international, multicultural company
40:45 How to get in touch with Katherine

Screaming in the Cloud
Developing Storage Solutions Before the Rest with AB Periasamy

Screaming in the Cloud

Play Episode Listen Later Feb 2, 2022 38:54


About AB
AB Periasamy is the co-founder and CEO of MinIO, an open-source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (ARM), and Fastor (SMART).

AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software-defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to the scaling of commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's "Thunder" code, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.

AB is one of the leading proponents and thinkers on the subject of open-source software, articulating the difference between the philosophy and the business model. An active contributor to a number of open-source projects, he is a board member of India's Free Software Foundation.

Links:
MinIO: https://min.io/
Twitter: https://twitter.com/abperiasamy
MinIO Slack channel: https://minio.slack.com/join/shared_invite/zt-11qsphhj7-HpmNOaIh14LHGrmndrhocA
LinkedIn: https://www.linkedin.com/in/abperiasamy/

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone in-depth on a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is, in AWS, with no changes, and run it inside of their data centers that span multiple regions. I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem, and they get out there in public and say, “We're solving a problem,” it's very hard to snark about that. Multus Medical, Construx.ai, and Stax have seen significant results by using them, and it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or Batch, consider checking them out.
Visit risingcloud.com/benefits. That's risingcloud.com/benefits, and be sure to tell them that I sent you, because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by someone who's doing something a bit off the beaten path when we talk about cloud. I've often said that S3 is sort of a modern wonder of the world. It was the first AWS service brought into general availability. Today's promoted guest is the co-founder and CEO of MinIO, Anand Babu Periasamy, or AB as he often goes, depending upon who's talking to him. Thank you so much for taking the time to speak with me today.AB: It's wonderful to be here, Corey. Thank you for having me.Corey: So, I want to start with the obvious thing, where you take a look at what is the cloud and you can talk about AWS's ridiculous high-level managed services, like Amazon Chime. Great, we all see how that plays out. And those are the higher-level offerings, ideally aimed at problems customers have, but then they also have the baseline building block services, and it's hard to think of a more baseline building block than an object store. That's something every cloud provider has, regardless of how many scare quotes there are around the word cloud; everyone offers the object store. And your solution is to look at this and say, “Ah, that's a market ripe for disruption. We're going to build, through an open-source community, software that emulates an object store.” I would be sitting here, more or less poking fun at the idea, except for the fact that you're a billion-dollar company now.AB: Yeah.Corey: How did you get here?AB: So, when we started, right, we did not actually think about cloud that way, right? “Cloud, it's a hot trend, let's go disrupt it.
It will lead to a lot of opportunity.” Certainly, it's true, it led to the M&A, right, but that's not how we looked at it, right? It's a bad idea to build startups for M&A.

When we looked at the problem, when we got back into this—my previous background, some may not know that it's actually a distributed file system background in the open-source space.

Corey: Yeah, you were one of the co-founders of Gluster—

AB: Yeah.

Corey: —which I have only begrudgingly forgiven you. But please continue.

AB: [laugh]. And back then we got the idea right, but the timing was wrong. And I had—while the data was beginning to grow at a crazy rate, end of the day, GlusterFS has to still look like an FS, it has to look like a file system like NetApp or EMC, and it was hugely limiting what we can do with it. The biggest problem for me was legacy systems. If I have to build a modern system that is compatible with a legacy architecture, you cannot innovate.

And that is where when Amazon introduced S3—back then, like, when S3 came, cloud was not big at all, right? When I look at it, the most important message of the cloud was Amazon basically threw away everything that is legacy. It's not [iSCSI 00:03:21] as a Service; it's not even FTP as a Service, right? They came up with a simple, RESTful API to store your blobs, whether it's JavaScript, Android, iOS, or [AAML 00:03:30] application, or even Snowflake-type application.

Corey: Oh, we spent ten years rewriting our apps to speak object store, and then they released EFS, which is NFS in the cloud. It's—

AB: Yeah.

Corey: —I didn't realize I could have just been stubborn and waited, and the whole problem would solve itself. But here we are. You're quite right.

AB: Yeah. And even EFS and EBS are more so legacy stock can come in, buy some time, but that's not how you should stay on AWS, right? When Amazon did that, for me, that was the opportunity.
I saw that… while the world is going to continue to produce lots and lots of data, if I built a brand around that, I'm not going to go wrong.

The problem is data at scale. And what do I do there? The opportunity I saw was, Amazon solved one of the largest problems for a long time. All the legacy systems, legacy protocols—they convinced the industry, throw them away and then start all over from scratch with the new API. While it's not compatible, it's not standard, it is ridiculously simple compared to anything else.

No fstabs, no [unintelligible 00:04:27], no [root 00:04:28], nothing, right? That you can access it from any application, anywhere, was a big deal. When I saw that, I was like, “Thank you, Amazon.” And I also knew Amazon would convince the industry that rewriting their application is going to be better and faster and cheaper than retrofitting legacy applications.

Corey: I wonder how much that's retconned because talking to some of the people involved in the early days, they were not at all convinced they [laugh] would be able to convince the industry to do this.

AB: Actually, if you talk to the analyst reporters—the IDCs, Gartners of the world—to the enterprise IT, the VMware community, they would say, “Hell no.” But if you talk to the actual application developers, data infrastructure, data architects, the actual consumers of data, for them, it was so obvious. They actually did not know how to write an fstab. The iSCSI and NFS, you can't even access across the internet, and the modern applications, they ran across the globe, in JavaScript, and all kinds of apps on the device. From [Snap 00:05:21] to Snowflake, today is built on object store. It was more natural for the applications team, but not for the infrastructure team. So, who you asked mattered.

But nevertheless, Amazon convinced the rest of the world, and our bet was that if this is going to be the future, then this is also our opportunity. S3 is going to be limited because it only runs inside AWS.
The bulk of the world's data is produced everywhere, and only a tiny fraction will go to AWS. And where will the rest of the data go? Not SAN, NAS, HDFS, or other blob stores, Azure Blob, or GCS; it's not going to be fragmented. And if we built a better object store—lightweight, faster, simpler, but fully compatible with the S3 API—we can sweep and consolidate the market. And that's what happened.

Corey: And there is a lot of validity to that. We take a look across the industry, when we look at various standards—I mean, one of the big problems with multi-cloud in many respects is the APIs are not quite similar enough. And worse, the failure patterns are very different: I don't just need to know how the load balancer works, I need to know how it breaks so I can detect and plan for that. And then you've got the whole identity problem as well, where you're trying to manage across different frames of reference as you go between providers, and it leads to a bit of a mess. What is it that makes MinIO not just something that has endured since it was created, but clearly been thriving?

AB: The real reason, actually, is not the multi-cloud compatibility, all that, right? Like, while today, it is a big deal for the users because the deployments have grown into 10-plus petabytes, and now the infrastructure team is taking it over and consolidating across the enterprise, so now they are talking about which key management server for storing the encrypted keys—which key management server should I talk to? Look at AWS, Google, or Azure, everyone has their own proprietary API. Outside, they have [YAML2 00:07:18], HashiCorp Vault, and, like, there is no standard here. It is supposed to be a [KMIP 00:07:23] standard, but in reality, it is not. Even different versions of Vault, there are incompatibilities for us.

That is where—like, from Key Management Server, Identity Management Server, right, like, everything that you speak around, how do you talk to the different ecosystems?
That, actually, MinIO provides connectors for; having the large ecosystem support and large community, we are able to address all that. Once you bring MinIO into your application stack like you would bring Elasticsearch or MongoDB or anything else as a container, your application stack is just a Kubernetes YAML file, and you roll it out on any cloud. It becomes easier for them; they're able to go to any cloud they want. But the real reason why it succeeded was not that. They actually wrote their applications as containers on Minikube, then they would push it to a CI/CD environment.

They never wrote code on EC2 or ECS writing objects on S3, and they don't like the idea of [PaaS 00:08:15], where someone is telling you just—like you saw Google App Engine never took off, right? They liked the idea, here are my building blocks, and then I would stitch them together and build my application. We were part of their application development since the early days, and when the application matured, it was hard to remove. It is very much like Microsoft Windows when it grew: even though the desktop was Microsoft Windows, the server was NetWare, and NetWare lost the game, right?

We got the ecosystem, and it was actually developer productivity, convenience, that really helped. The simplicity of MinIO—today, they are arguing that deploying MinIO inside AWS is easier through their YAML and containers than going to the AWS Console and figuring out how to do it.

Corey: As you take a look at how customers are adopting this, it's clear that there is some shift in this because I could see the story for something like MinIO making an awful lot of sense in a data center environment because otherwise, it's, “Great. I need to make this app work with my SAN as well as an object store.” And that's sort of a non-starter for obvious reasons.
But now you're available through cloud marketplaces directly.

AB: Yeah.

Corey: How are you seeing adoption patterns and interactions from customers changing as the industry continues to evolve?

AB: Yeah, actually, that is how my thinking was when I started. If you are inside AWS, I would myself tell them, why don't you use AWS S3? And it made a lot of sense if it's on a colo or your own infrastructure—then there is an object store. It even made a lot of sense if you are deploying on Google Cloud, Azure, Alibaba Cloud, Oracle Cloud; it made a lot of sense because you wanted an S3-compatible object store. Inside AWS, why would you do it, if there is AWS S3?

Nowadays, I hear funny arguments, too. They're like, “Oh, I didn't know that I could use S3. Is S3 MinIO-compatible?” Because they will be like, “It came along with the GitLab or GitHub Enterprise, a part of the application stack.” They didn't even know that they could actually switch it over.

And otherwise, most of the time, they developed it on MinIO, and now they are too lazy to switch over. That also happens. But the real reason why it became serious for me—I ignored the public cloud commercialization; I encouraged the community adoption. And it grew to more than a million instances, like, across the cloud, like, small and large, but when they started talking about paying us serious dollars, then I took it seriously. And then when I started asking them why would you guys do it, then I got to know the real reason: why they wanted to do it was they want to be detached from the cloud infrastructure provider.

They want to look at cloud as CPU, network, and drive as a service. And running their own enterprise IT was more expensive than adopting public cloud; it was productivity for them, reducing the infrastructure, people cost was a lot. It made economic sense.

Corey: Oh, people always cost more than the infrastructure itself does.

AB: Exactly right. 70, 80%, like, goes into people, right? And enterprise IT is too slow.
They cannot innovate fast, and all of those problems. But what I found was, for us, while we actually built the community and customers, if you're on AWS, if you're running MinIO on EBS, EBS is three times more expensive than S3.

Corey: Or a single copy of it, too, where if you're trying to go multi-AZ and you have the replication traffic, and not to mention you have to over-provision it, which is a bit of a different story as well. So, like, it winds up being something on the order of 30 times more expensive, in many cases, to do it right. So, I'm looking at this going, the economics of running this purely by itself in AWS don't make sense to me—long experience teaches me the next question of, “What am I missing?” Not, “That's ridiculous and you're doing it wrong.” There's clearly something I'm not getting. What am I missing?

AB: I was telling them until we made some changes, right—because we saw a couple of things happen. I was initially like, [unintelligible 00:12:00] does not make 30 copies. It makes, like, 1.4x, 1.6x.

But still, the underlying block storage is not only three times more expensive than S3, it's also slow. It's network storage. Trying to put an object store on top of it—another, like, software-defined SAN, like EBS—made no sense to me. For smaller deployments, it's okay, but you should never scale that on EBS. So, it did not make economic sense. I would never take it seriously because it would never help them grow to scale.

But what changed in recent times? Amazon saw that this was not only a problem for MinIO-type players. Every database out there today, every modern database, even the message queues like Kafka, they all have gone scale-out. And they all depend on local block store, and putting a scale-out distributed database or data processing engine on top of EBS would not scale. And Amazon introduced storage-optimized instances.
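The numbers being traded here (EBS at roughly three times the price of S3, and erasure coding adding about 1.4x to 1.6x raw overhead rather than full copies) are easy to sanity-check. A rough sketch; the per-GB-month prices below are illustrative assumptions, not figures quoted in the episode:

```python
# Back-of-the-envelope storage economics. All prices are illustrative
# assumptions (rough per-GB-month list prices), not quotes from the episode.
S3_STANDARD = 0.023  # S3 Standard, $/GB-month (assumed)
EBS_GP = 0.08        # EBS gp3, $/GB-month (assumed)

def erasure_overhead(data_shards: int, parity_shards: int) -> float:
    """Raw-to-usable ratio for erasure coding: (k + m) / k."""
    return (data_shards + parity_shards) / data_shards

# An 8-data + 4-parity layout lands in the 1.4x-1.6x range AB mentions.
overhead = erasure_overhead(8, 4)  # 1.5

# Effective cost per usable GB when an object store runs on top of EBS.
minio_on_ebs = EBS_GP * overhead
ratio = minio_on_ebs / S3_STANDARD

print(f"erasure overhead: {overhead:.2f}x")
print(f"object store on EBS: ${minio_on_ebs:.3f}/GB-month, {ratio:.1f}x S3 Standard")
```

Replicating full copies across three AZs on over-provisioned volumes multiplies the raw figure well beyond this, which is where Corey's "order of 30 times" lands; AB's point about storage-optimized instances is that local NVMe changes the raw price underneath this arithmetic.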
Essentially, that reduced to bet—the data infrastructure guy, data engineer, or application developer asking IT, “I want a SuperMicro, or Dell server, or even virtual machines.” That's too slow, too inefficient.

They can provision these storage machines on demand, and then I can do it through Kubernetes. These two changes—all the public cloud players now adopted Kubernetes as the standard, and they have to stick to the Kubernetes API standard. If they are incompatible, they won't get adopted. And storage optimized, that is, local drives: these are machines, like, [I3 EN 00:13:23], like, 24 drives, they have SSDs, and fast network—like, 25-gigabit, 200-gigabit type network—availability of these machines, like, what typically would run any database, HDFS cluster, MinIO, all of them—those machines are now available just like any other EC2 instance.

They are efficient. You can actually put MinIO side by side with S3 and still be price competitive. And Amazon wants to—like, just like their retail marketplace, they want to compete and be open. They have enabled it. In that sense, Amazon is actually helping us. And it turned out that now I can help customers build multi-petabyte infrastructure on Amazon and still stay efficient, still stay price competitive.

Corey: I would have said for a long time that if you were to ask me to build out the lingua franca of all the different cloud providers into a common API, the S3 API would be one of them. Now, you are building this out, multi-cloud, you're in all three of the major cloud marketplaces, and the way that you do that and do those deployments seems like it is the modern multi-cloud API of Kubernetes. When you first started building this, Kubernetes was very early on. What was the evolution of getting there? Or were you one of the first early-adoption customers in the Kubernetes space?

AB: So, when we started, there was no Kubernetes. But we saw that the problem was very clear.
And there were containers, and then came Docker Compose and Swarm. Then there was Mesos, Cloud Foundry, you name it, right? Like, there were many solutions, all the way up to even VMware trying to get into that space.

And what did we do? Early on, I couldn't choose. I couldn't—it's not in our hands, right, who is going to be the winner, so we just simply embraced everybody. It was also tiring to implement native connectors to all of these different orchestrators; like, Pivotal Cloud Foundry alone, they have their own standard, Open Service Broker, that's only popular inside their system. Go outside, elsewhere, everybody was incompatible.

And outside that, even Chef, Ansible, Puppet scripts, too. We just simply embraced everybody until the dust settled down. When it settled down, clearly the declarative model of Kubernetes became easier. Also, the Kubernetes developers understood the community well. And coming from Borg, I think they understood the right architecture. And also written in Go, unlike Java, right?

It actually matters, these minute details resonating with the infrastructure community. It took off, and then that helped us immensely. Now, it's not only that Kubernetes is popular, it has become the standard—from VMware to OpenShift to all the public cloud providers, GKS, AKS, EKS, whatever, right—GKE. All of them now are basically the Kubernetes standard. It made not only our life easier, it made every other [ISV 00:16:11], other open-source project—everybody now can finally write one code that can be operated portably.

It is a big shift. It is not because we chose; we just watched all this, we were riding along the way. And then because we resonated with the infrastructure community, modern infrastructure is dominated by open-source.
We were also the leading open-source object store, and as the Kubernetes community adopted us, we were naturally embraced by the community.

Corey: Back when AWS first launched with S3 as its first offering, there were a bunch of folks who were super excited, but object stores didn't make a lot of sense to them intrinsically, so they looked into this and, “Ah, I can build a file system in userspace on top of S3.” And the reaction was, “Holy God, don't do that.” And the way that AWS decided to discourage that behavior is a per-request charge, which for most workloads is fine, whatever, but there are some where it causes a significant burden. With running something like MinIO in a self-hosted way, suddenly that costing doesn't exist in the same way. Does that open the door again to, so now I can use it as a file system again, in which case that just seems like using the local file system, only with extra steps?

AB: Yeah.

Corey: Do you see patterns that are emerging with customers' use of MinIO that you would not see with the quote-unquote, “provider's” quote-unquote, “native” object storage option, or do the patterns mostly look the same?

AB: Yeah, if you took an application that ran on file and block and brought it over to object storage, that makes sense. But something that is competing with object store or a layer below object store—that is, end of the day, the drives are block devices, you have a block interface, right—trying to bring SAN or NAS on top of object store is actually a step backwards. They completely missed the message that Amazon told: that if you brought a file system interface on top of object store, you missed the point, that you are now bringing the legacy things that Amazon intentionally removed from the infrastructure. Trying to bring them on top doesn't make it any better.
If you are arguing from compatibility for some legacy applications, sure, but writing a file system on top of object store will never be better than NetApp, EMC—like, EMC Isilon—or anything else. Or even GlusterFS, right?

But if you want a file system—I always tell the community, they ask us, “Why don't you add an FS option and do a multi-protocol system?” I tell them that the whole point of S3 is to remove all those legacy APIs. If I added POSIX, then I'll be a mediocre object storage and a terrible file system. I would never do that. But why not write a FUSE file system, right? Like, S3Fs is there.

In fact, initially, for legacy compatibility, we wrote MinFS and I had to hide it. We actually archived the repository because immediately people started using it. Even simple things like, end of the day, can I use Unix [Coreutils 00:19:03] like [cp, ls 00:19:04], like, all these tools I'm familiar with? If it's not a file system, object storage—what [s3cmd 00:19:08] or the AWS CLI is—is, like, too bloated. And it's not really a Unix-like feeling.

Then what I told them: “I'll give you a BusyBox-like single static binary, and it will give you all the Unix tools that work for local filesystem as well as object store.” That's where the [mc tool 00:19:23] came from; it gives you all the Unix-like programmability, all the core tools, that's object storage compatible, speaks native object store. But if I have to make object store look like a file system so Unix tools would run, it would not only be inefficient, Unix tools never scaled for this kind of capacity.

So, it would be a bad idea to take a step backwards and bring legacy stuff back inside. For some very small cases, if there are simple POSIX calls, using [ObjectiveFs 00:19:49], S3Fs, and a few others for legacy compatibility reasons makes sense, but in general, I would tell the community, don't bring file and block.
If you want file and block, leave those on virtual machines, leave that infrastructure in a silo, and gradually phase them out.

Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high-performance cloud compute at a price that—while sure they claim it's better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r dot com, slash screaming.

Corey: So, my big problem, when I look at what S3 has done, is in its name because, of course, naming is hard. It's “Simple Storage Service.” The problem I have is with the word simple because over time, S3 has gotten more and more complex under the hood. It automatically tiers data the way that customers want.
And integrated with things like Athena, you can now query it directly; whenever an object appears, you can wind up automatically firing off Lambda functions, and the rest.

And this is increasingly looking a lot less like a place to just dump my unstructured data, and increasingly, a lot like this is sort of a database, in some respects. Now, understand my favorite database is Route 53; I have a long and storied history of misusing services as databases. Is this one of those scenarios, or is there some legitimacy to the idea of turning this into a database?

AB: Actually, there is now the S3 Select API: if you're storing unstructured data like CSV, JSON, Parquet, without downloading even a compressed CSV, you can actually send a SQL query into the system. In MinIO particularly, the S3 Select is [SIMD 00:21:16] optimized. We can load, like, every 64k worth of CSV lines into registers and do SIMD operations. It's the fastest SQL filter out there. Now, bringing these kinds of capabilities, we are just a little bit away from a database; should we do database? I would tell you, definitely no.

The very strength of the S3 API is to actually limit all the mutations, right? Particularly if you look at databases, they're dealing with metadata and querying; the biggest value they bring is indexing the metadata. But if I'm dealing with that, then I'm dealing with really small blocks, lots of mutations; the separation is, object storage should be dealing with persistence and not mutations. Mutations are [AWS 00:21:57] problem. Separation of the database work function and the persistence function is where object storage got the storage right.

Otherwise, they will make the mistake of doing POSIX-like behavior, and then not only bringing back all those capabilities, doing IOPS-intensive workloads across HTTP—it wouldn't make sense, right? So, object storage got the API right. But now, should it be a database? So, it definitely should not be a database.
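To make the S3 Select idea concrete: the SQL filter runs inside the storage system, so only matching rows cross the network instead of the whole object. A toy, purely local illustration of that contract follows; this is not MinIO's actual server code (which, as AB notes, is SIMD-vectorized), and all names and data here are made up:

```python
import csv
import io

def select_object_content(object_body: str, where) -> list:
    """Toy stand-in for S3 Select: evaluate a row predicate on the
    'server side' and return only the matching rows, so the client
    never has to download the full object."""
    reader = csv.DictReader(io.StringIO(object_body))
    return [row for row in reader if where(row)]

# A small CSV "object" as it might sit in a bucket.
body = "region,gb\nus-east-1,120\neu-west-1,45\nus-east-1,80\n"

# Equivalent of: SELECT * FROM s3object s WHERE s.region = 'us-east-1'
rows = select_object_content(body, lambda r: r["region"] == "us-east-1")
print(rows)  # the eu-west-1 row never leaves "storage"
```

With the real API, the same filter ships to the object store as a SQL expression; the point is where the filtering happens, not the syntax.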
In fact, I actually hate the idea of Amazon yielding to the file system developers and giving a [file tree 00:22:29] hierarchical namespace so they can write nice file managers.

That was a terrible idea. A hierarchical namespace that's also sorted now puts a tax on how the metadata is indexed and organized. Amazon should have left the core API very simple and told them to solve these problems outside the object store. Many application developers don't need it. Amazon was trying to satisfy everybody's need. Saying no to some of these file system-type, file manager-type users would have been the right way.

But nevertheless, adding those capabilities—eventually, now you can see, S3 is no longer simple. And we had to keep that compatibility, and I hate that part. I actually don't mind compatibility, but then all the wrong things that Amazon is adding, now I have to add because it's compatible. I kind of hate that, right?

But now, going to a database would be pushing it to a whole new level. Here is the simple reason why that's a bad idea. The right way to do a database—in fact, the database industry is already going in the right direction. Unstructured data, the key-value or graph, different types of data: you cannot possibly solve all that even in a single database. They are trying to be multimodal databases; even they are struggling with it.

You can never be a Redis, Cassandra, like, a SQL all-in-one. They tried to say that, but in reality, you will never be better than any one of those focused database solutions out there. Trying to bring that into object store would be a mistake. Instead, let the databases focus on query language implementation and query computation, and leave the persistence to the object store.
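The division of labor AB argues for (the database keeps the index and handles mutations in memory; immutable table segments are persisted to an object store) can be sketched as a toy. A dict stands in for the object store, and every name here is an illustrative assumption, not real MinIO or database code:

```python
class ObjectStore:
    """Stand-in for an S3-compatible store: write-once blobs, whole-object reads."""
    def __init__(self):
        self._blobs = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data  # segments are immutable, never rewritten in place

    def get(self, key: str) -> bytes:
        return self._blobs[key]


class TinyTable:
    """Toy database: mutations live in memory, get flushed as immutable
    segments to the object store, and only the small index stays with
    the database process."""
    def __init__(self, store: ObjectStore):
        self.store = store
        self.memtable = {}  # recent mutations (the database's job)
        self.index = {}     # key -> segment name (the database's job)
        self._seg = 0

    def put(self, key: str, value: str) -> None:
        self.memtable[key] = value

    def flush(self) -> None:
        """Persist the memtable as one immutable segment (the object store's job)."""
        name = f"segment-{self._seg:04d}"
        self._seg += 1
        payload = "\n".join(f"{k}={v}" for k, v in self.memtable.items())
        self.store.put(name, payload.encode())
        self.index.update({k: name for k in self.memtable})
        self.memtable.clear()

    def get(self, key: str) -> str:
        if key in self.memtable:  # freshest data is still in memory
            return self.memtable[key]
        segment = self.store.get(self.index[key]).decode()
        return dict(line.split("=", 1) for line in segment.splitlines())[key]


db = TinyTable(ObjectStore())
db.put("user:1", "alice")
db.flush()               # "user:1" now lives in an immutable segment
db.put("user:2", "bob")  # still only in the memtable
print(db.get("user:1"), db.get("user:2"))
```

Snapshotting the index once in a while to the object store, as AB describes, would just be one more `store.put` of a serialized copy of `index`.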
So, object store can still focus on storing your database segments, the table segments, but the index is still in the memory of the database.

Even the index can be snapshotted once in a while to object store, but using object store for persistence and the database for query is the right architecture. And almost all the modern databases now, from Elasticsearch to [unintelligible 00:24:21] to even Kafka, like, message queues—they all have gone that route. Even Microsoft SQL Server, Teradata, Vertica, you name it, Splunk, they all have gone the object storage route, too. Snowflake itself is a prime example, BigQuery, and all of them.

That's the right way. Databases can never be consolidated. There will be many different kinds of databases. Let them specialize on GraphQL or Graph API, or key-value, or SQL. Let them handle the indexing; for persistence, they cannot handle petabytes of data. That [unintelligible 00:24:51] to object store is how the industry is shaping up, and it is going in the right direction.

Corey: One of the ways I learned the most about various services is by talking to customers. Every time I think I've seen something, it's, this is amazing, this service is something I completely understand—all I have to do is talk to one more customer. And when I was doing a bill analysis project a couple of years ago, I looked into a customer's account and saw a bucket with—okay, that has 280 billion objects in it—and wait, was that billion with a B?

And I asked them, “So, what's going on over there?” And they're like, “Well, we built our own columnar database on top of S3. This may not have been the best approach.” It's, “I'm going to stop you there.
With no further context, it was not, but please continue.”

It's the sort of thing that would never have occurred to me to even try. Do you tend to see similar—I would say they're anti-patterns, except somehow they're made to work—in some of your customer environments, as they are using the service in ways that are very different than the ways encouraged or even allowed by the native object store options?

AB: Yeah, when I first started seeing the database-type workloads coming on to MinIO, I was surprised, too. That was exactly my reaction. In fact, they were storing these 256k, sometimes 64k, table segments because they need to index it, right, and the table segments were anywhere between 64k to 2MB. And when they started writing table segments, it was more often an [IOPS-type 00:26:22] I/O pattern than a throughput-type pattern. Throughput is an easier problem to solve, and MinIO always saturated these 100-gigabyte NVMe-type drives; they were I/O intensive, throughput optimized.

When I started seeing the database workloads, I had to optimize for small-object workloads, too. We actually did all that because eventually I got convinced the right way to build a database was to actually leave the persistence out of the database; they actually made a compelling argument. Historically, I thought metadata and data—data tends to be very big, and it coming to object store makes sense. Metadata should be stored in a database, and that's only the index pages. Take any book: the index pages are only a few, and the database can continue to run adjacent to object store; it's a clean architecture.

But why would you put the database itself on object store? When I saw a transactional database like MySQL changing the [InnoDB 00:27:14] to [RocksDB 00:27:15], and making changes at that layer to write the SSTables [unintelligible 00:27:19] to MinIO, I was like, where do you store the memory, the journal? They said, “That will go to Kafka.” And I was like—I thought that was insane when it started.
But it continued to grow and grow.

Nowadays, I see most of the databases have gone to object store, but their argument is, the databases also saw explosive growth in data. And they couldn't scale the persistence part. That is where they realized that they were still very good at the indexing part, which object storage would never give. There is no API to do sophisticated querying of the data. You cannot peek inside the data; you can just do streaming read and write.

And that is where the databases were still necessary. But databases were also growing in data. One thing that triggered this was the use case moved from data that was generated by people to data generated by machines. Machines means applications, all kinds of devices. Now, it's like, from seven billion people to a trillion devices is how the industry is changing. And this led to lots of machine-generated, semi-structured, structured data at giant scale coming into databases. The databases need to handle scale. There was no other way to solve this problem other than leaving the—[unintelligible 00:28:31] if you are looking at columnar data, most of them are machine-generated data; where else would you store it? If they tried to build their own object storage embedded into the database, it would make the database immensely complicated. Let them focus on what they are good at: indexing and mutations. Pulling the table segments, which are immutable, mutating in memory, and then committing them back gives the right mix. That is what happened, and we saw it consistently across the board. Now, it is actually the standard.

Corey: So, you started working on this in 2014, and here we are—what is it—eight years later now, and you've just announced a Series B of $100 million on a billion-dollar valuation. So, it turns out this is not just one of those things people are using for test labs; there is significant momentum behind using this.
How did you get there from—because everything you're saying makes an awful lot of sense, but it feels, at least from where I sit, to be a little bit of a niche. It's a bit of an edge case that is not the common case. Obviously, I'm missing something because your investors are not the types of sophisticated investors who see something ridiculous and, “Yep. That's the thing we're going to go for.” They're right more than they're not.

AB: Yeah. The reason for that was they saw what we were set out to do. In fact, these are—if you see the lead investor, Intel, they watched us grow. They came into the Series A and they saw, every day, how we operated and grew. They believed in our message.

And it was actually not about object store, right? Object storage was a means for us to get into the market. When we started, our idea was, ten years from now, what will be a big problem? A lot of times, it's hard to see the future, but if you zoom out, it's hidden in plain sight.

These are simple trends. Every major trend pointed to the world producing more data. No one would argue with that. If I solved one important problem that everybody is suffering from, I won't go wrong. And when you solve the problem, it's about building a product with fine craftsmanship, attention to detail, connecting with the user—all of that standard stuff.

But I picked object storage as the problem because the industry was fragmented across many different data stores, and I knew that won't be the case ten years from now. Applications are not going to adopt different APIs across different clouds—S3 to GCS to Azure Blob to HDFS—everything is incompatible. I saw that if I built a data store for persistence, the industry would consolidate around the S3 API. Amazon S3, when we started—it looked like they were the giant; there was only one cloud, and the industry believed in mono-cloud.
Almost everyone was talking to me like AWS will be the world's data center.

I certainly see that possibility—Amazon is capable of doing it—but my bet was the other way: that AWS S3 will be one of many solutions, but not—if it's all incompatible, it's not going to work; the industry will consolidate. Our bet was, if the world is producing so much data, if you build an object store that is S3 compatible, and it ended up as the leading data store of the world and owned the application ecosystem, you cannot go wrong. We kept our heads low and focused for the first six years on massive adoption: build the ecosystem to a scale where we can say now our ecosystem is equal to or larger than Amazon's, then we are in business. We didn't focus on commercialization; we focused on convincing the industry that this is the right technology for them to use. Once they are convinced, once you solve business problems, making money is not hard because they are already sold, they are in love with the product; then convincing them to pay is not a big deal because data is so critical, a central part of their business.

We didn't worry about commercialization, we worried about adoption. And once we got the adoption, now customers are coming to us and they're like, “I don't want open-source license violations. I don't want data breach or data loss.” They are trying to sell to me, and it's an easy relationship game. And it's about long-term partnership with customers.

And so the business started growing, accelerating. That was the reason that now is the time to fill up the gas tank, and investors were quite excited about the commercial traction as well. And all the intangibles, right—how big we grew in the last few years.

Corey: It really is an interesting segment that has always been something that I've mostly ignored, like, “Oh, you want to run your own? Okay, great.” I get it; some people want to cosplay as cloud providers themselves. Awesome.
There's clearly a lot more to it than that, and I'm really interested to see what the future holds for you folks.

AB: Yeah, I'm excited. I think, end of the day, if I solve real problems, every organization is moving from compute technology-centric to data-centric, and they're all looking at data warehouse, data lake, and whatever name they give data infrastructure. Data is now the centerpiece. Software is a commodity. That's how they are looking at it. And it is translating to each of these large organizations—actually, even the mid-size, even startups nowadays have petabytes of data—and I see a huge potential here. The timing is perfect for us.

Corey: I'm really excited to see this continue to grow. And I want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you?

AB: I'm always on the community, right: Twitter and, like, I think the Slack channel, it's quite easy to reach out to me. LinkedIn. I'm always excited to talk to our users or community.

Corey: And we will of course put links to this in the [show notes 00:33:58]. Thank you so much for your time. I really appreciate it.

AB: Again, wonderful to be here, Corey.

Corey: Anand Babu Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with what starts out as an angry comment but eventually turns into you, in your position on the S3 product team, writing a thank you note to MinIO for helping validate your market.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS.
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
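AB's core bet above is that applications consolidate around the S3 API, so the application code stays the same and only the endpoint and credentials change when moving between S3-compatible stores. A minimal sketch of that idea in Python; the endpoint URLs and the MinIO default credentials shown are illustrative assumptions, not authoritative values, and the `put_object` helper is a stand-in for a real client call (with boto3 these settings would map to `endpoint_url`, `aws_access_key_id`, and `aws_secret_access_key`).

```python
# Sketch: the same S3-style application code can target different
# S3-compatible backends by swapping only the endpoint configuration.
# Endpoint URLs and credentials below are illustrative assumptions.

S3_COMPATIBLE_ENDPOINTS = {
    "aws": "https://s3.amazonaws.com",       # AWS S3 itself
    "minio-local": "http://localhost:9000",  # a locally run MinIO server
}

def client_config(provider: str, access_key: str, secret_key: str) -> dict:
    """Build the connection settings an S3 client needs."""
    return {
        "endpoint_url": S3_COMPATIBLE_ENDPOINTS[provider],
        "access_key": access_key,
        "secret_key": secret_key,
    }

def put_object(config: dict, bucket: str, key: str) -> str:
    # Stand-in for client.put_object(Bucket=..., Key=..., Body=...);
    # the point is that this call site is identical for every provider.
    return f"PUT {config['endpoint_url']}/{bucket}/{key}"

aws_cfg = client_config("aws", "AKIA...", "...")
minio_cfg = client_config("minio-local", "minioadmin", "minioadmin")

# Same application call, different backend:
print(put_object(aws_cfg, "photos", "cat.jpg"))
print(put_object(minio_cfg, "photos", "cat.jpg"))
```

Because only the configuration differs, "industry consolidates around the S3 API" means exactly this: the `put_object` call site never changes.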

Great Things with Great Tech!
Episode 40 - Filebase


Play Episode Listen Later Dec 30, 2021 45:33


In this episode I talk with Joshua Noble, CEO and Co-Founder at Filebase. Filebase aims to make decentralized storage accessible and easy to use for everyone. They are doing this by building a scalable, secure and performant access layer to various decentralized storage networks, with a familiar S3-compatible interface. Josh and I talk about how Filebase is bridging storage platforms between traditional S3-based offerings and the decentralized world of blockchain storage. There is a reason why Filebase has been listed as one of the 10 Hottest Data Storage Startups of 2021 by CRN! Filebase was founded in 2018 and is headquartered in Boston, USA.

☑️ Technology and Technology Partners Mentioned: #Blockchain, #Web3, Web2.0, Amazon, Kubernetes, Veeam, CommVault, Sia, Storj, SkyNet, Decentralized Storage

☑️ Raw Talking Points:
* What is decentralized storage?
* 3x redundancy / sharding / no SPOF
* Backup partnerships
* How do you absorb/leverage market fluctuations?
* How is this different from AWS etc.?
* Pricing compared to AWS etc.
* Edge layer cache
* Object maps
* Web3 adoption / use cases
* AWS outage
* Hackathon with Akash
* Web2 vs Web3 - bridging the two
* Sia / Skynet / Storj

☑️ Web: https://filebase.com/
☑️ Docs: https://docs.filebase.com/
☑️ 5GB Always Free Offer: https://filebase.com/signup
☑️ Interested in being on #GTwGT? Contact via Twitter @GTwGTPodcast
☑️ Music: https://www.bensound.com

MikoBits Blockchain NFT and DeFi show
Episode 34: Blockchain Scaling Ethereum with SKALE (SKL) CEO Jack O'Holleran


Play Episode Listen Later Dec 14, 2021 58:40


SKALE is a novel EVM- and Ethereum-compatible scalability layer.

Episode Index:
1) What is SKALE
2) Current SKALE network status: 135 nodes
3) Ethereum-compatible scaling, pay in ETH
4) Why developers should choose SKALE for ETH scalability
5) Killer applications for SKALE
6) Applications for the real world
7) "Salesforce" CRM on blockchain
8) A DAO to replace Amazon Web Services
9) Filecoin, Siacoin, Storj, S3 storage on chain
10) Polkadot Substrate, Cosmos IBC, and the future of cross-chain integration
11) The importance of ecosystem compatibility and toolchain like MetaMask
12) Technical origins of SKALE and the Java Virtual Machine
13) DeFi, governance, and incentives
14) Where are we in the evolution of DeFi? The "Cambrian Explosion"
15) Parallels to the "sharing economy" and the rise of Uber and Airbnb
16) How scaling breaks everything open
17) Lack of trust is expensive
18) Bitcoin and why Bitcoin matters

The Trader Cobb Crypto Podcast
Another Slow Start For Bitcoin


Play Episode Listen Later Dec 7, 2021 4:58


The volatility has slowed right up today so far and it's been a slow grind for BTC, ADA, SOL, ETH, AVAX, BNB, and pretty much everything. Crypto traders are waiting to see what comes next, and although there have been some trades that provided profits in ENJ and STORJ, it has been on the lower time frames only. See acast.com/privacy for privacy and opt-out information.

Strap on your Boots!
Episode 85: Where cloud storage is headed in the future


Play Episode Listen Later Dec 6, 2021 22:35


In today's episode I talk about the difference between distributed and centralized cloud storage with Henry Wilson, an avid community supporter of a forward-thinking company disrupting the cloud storage industry called ScPrime. Most companies use Amazon AWS or other cloud storage companies to store their massive amounts of data. But it comes at a price, both in their wallet and their security. We talk about how distributed storage is much more reasonably priced, and a lot safer from hackers. We also touch base on ScPrime's new XaMiner host computer that jumps into the gig economy just like Uber and Airbnb did. To learn more about ScPrime visit their site at https://ScPri.me or to order an XaMiner you can visit https://XaMiner.net

SuperC -  虛擬貨幣學習坊
EP112: Looking back at how quickly the crypto market's landscape has shifted this year: did you keep up with the hot sectors? Web3.0 feels both familiar and foreign; what will Web3.0 on the blockchain look like? A casual discussion of PEOPLE / ENS / NUMS / STORJ / IMX


Play Episode Listen Later Nov 28, 2021 59:05


Support the show with a small donation: https://pay.firstory.me/user/supercforcrypto Leave a comment and tell me what you think of this episode: https://open.firstory.me/story/ckwj7c5ib1ttx081758dd0kb0?m=comment - Cold wallet corner

Quant Trading Live Report
Great crypto trading day with STORJ but more active trading while hodl underperforms


Play Episode Listen Later Nov 19, 2021 5:37


Strangely, I find it quite interesting how a big pullback day in cryptocurrency can be highly profitable. MANA and STORJ did very well, as hinted in this video. Make sure you get in on the following channels to continue the conversation:

Telegram news channel: https://t.me/quantlabs
Chat server: https://quantlabs.info/

Quant Trading Live Report
Newest cryptocurrency coins to make profit this weekend


Play Episode Listen Later Nov 13, 2021 21:36


There are some OK coins to choose from for better profit after this latest Bitcoin and cryptocurrency pullback. There are some coins I have not seen in a while, including XMR, CHZ, and STORJ. I offer tips on using Kraken only for live trading, while other exchanges are well-known scams. Follow how these potential coins will turn out on my chat server quantlabs.info. Join my chat server if you want some secret trading PDFs: https://quantlabs.net/contact/

New To Crypto
How Arweave and Decentralized Data Storage Will Change History


Play Episode Listen Later Sep 3, 2021 10:49 Transcription Available


Welcome to the What is Arweave episode. Learn how it works and why it's disrupting the decentralized data storage market. What is the AR Token? Join your host Crypto Travels Michael now to find out.

The New to Crypto Podcast is designed to guide you through the crypto landscape with pinpoint accuracy. New episodes are added daily. Be sure to subscribe to the podcast and listen to all of the episodes to help you in your cryptocurrency journey.

Cryptocurrency India Weekly
India Ranked Sixth in DeFi Adoption + RBI Could Begin CBDC Trials by December + More Crypto News


Play Episode Listen Later Aug 29, 2021 6:08


Here are the top cryptocurrency news headlines from India this week:

- India ranked sixth in DeFi adoption: https://blog.chainalysis.com/reports/2021-global-defi-adoption-index
- RBI could begin CBDC trials by December: https://www.cnbc.com/2021/08/27/india-central-bank-rbi-digital-rupee-trials-could-begin-by-december.html
- Bitcoin rally brings Indian investors back to crypto: https://economictimes.indiatimes.com/markets/cryptocurrency/bitcoin-rally-draws-indians-back-to-cryptos/articleshow/85605361.cms
- India Covid Crypto Relief Fund to donate $15 million to UNICEF India: https://economictimes.indiatimes.com/tech/tech-bytes/india-covid-crypto-relief-fund-to-donate-15-million-to-unicef/articleshow/85694682.cms
- Cashaa ties up with Polygon to offer DeFi solutions: https://www.livemint.com/market/cryptocurrency/cashaa-ties-up-with-polygon-to-offer-defi-solutions-to-masses-11629717659044.html
- Former RBI governor Raghuram Rajan optimistic about the future of cryptocurrencies: https://www.indiatoday.in/business/story/raghuram-rajan-optimistic-about-the-future-of-cryptocurrencies-here-s-what-he-said-1846102-2021-08-27
- CoinDCX joins Advertising Standards Council of India: https://www.livemint.com/companies/news/coindcx-joins-advertising-regulator-to-enhance-confidence-in-crypto-11629879495281.html
- Bitbns exchange updates its bank account details
- Zebpay lists Graph Protocol (GRT) in its INR and USDT markets and Storj in its USDT market
- WazirX lists Arweave (AR), DODO and FLOW in its USDT market

Reading recommendation: 3 ways traders use Bitcoin futures to generate profit: https://cointelegraph.com/news/3-ways-traders-use-bitcoin-futures-to-generate-profit

Daily Crypto Report
"Microstrategy announces new $177M Bitcoin buy" August 26, 2021


Play Episode Listen Later Aug 26, 2021 2:43


Today's blockchain and cryptocurrency news, brought to you by ungrocery.com.

Bitcoin is up 1% at $47,168. Ethereum is up 1% at $3,113, and Cardano is up 1% at $2.58. Radicle is up 20%, Tokocrypto up 20%, and Storj up 15%.

Police in Brazil have seized $28.8M in crypto and made 5 arrests. MicroStrategy has announced a new Bitcoin buy of $177M. Anchorage has hired the former Wells Fargo digital assets executive. 3LAU has raised $16M to tokenize music royalties with Royal. NFT+AI platform Alethea AI has raised $16M. Euler has raised $8M in a Series A.

Balfül podcast :: technológia / gazdaság / üzlet
#84 What will become of us when the datageddon comes?


Play Episode Listen Later Jul 25, 2021 51:56


Did you know that if humanity keeps generating data at today's accelerating rate, in a few hundred years all the atoms in the world wouldn't be enough to store it? Today we still happily use the big providers' cloud storage, but what happens when that's no longer enough, because there is simply nowhere left to put all those petabytes of 4K cat videos? We talk about the alternatives and about an interesting decentralized storage solution aimed at developers, the Storj project.

The New Stack Podcast
When Is Decentralized Storage the Right Choice?


Play Episode Listen Later Jul 14, 2021 26:10


The amount of data created has doubled every year, presenting a host of challenges for organizations: security and privacy issues for starters, but also storage costs. What situations call for moving that data to decentralized cloud storage rather than an on-prem or even a single public cloud storage setup? What are the advantages and challenges of a decentralized cloud storage solution for data, and how can those be navigated?

On this episode of Makers, The New Stack podcast, Ben Golub, CEO of Storj, and Krista Spriggs, software engineering manager at the company, were joined by Alex Williams, founder and publisher of The New Stack, along with Heather Joslyn, TNS' features editor. Golub and Spriggs talked about how decentralized storage for data makes sense for organizations concerned about cloud costs, security, and resiliency.

Quant Trading Live Report
STORJ and WAVES only crypto coin movers in quiet market


Play Episode Listen Later Jul 2, 2021 18:09


STORJ and WAVES returned 10% and 5% overnight. That is pretty good for a pretty well dead Kraken crypto market. When you look at today's US market with a decent jobs report, you would expect the market to pop, which it did not. Not only that, nothing else is really moving except for 1-2% in silver. So why would you want to miss out on these small coins that can do 10 times the mainstream market?

Lowest monthly cost you will get: https://quantlabs.net/shop/product/quant-analytics/
50% discount applied: https://quantlabs.net/shop/product/quant-analytics-subscription-6-month-promo/
Get free trading PDFs: https://quantlabs.net/contact-quantlabsnet/
Here is my Telegram: https://t.me/quantlabs
Discord server: https://discord.gg/RhdbV2hx
https://quantlabs.net/blog/2021/07/storj-and-waves-only-cryto-coin-movers-in-quiet-market/

Quant Trading Live Report
/home/caustic/Documents/Zoom/2021-07-02 11.17.34 STORJ and WAVES only cryto coin movers 84853120167


Play Episode Listen Later Jul 2, 2021 18:09


STORJ and WAVES returned 10% and 5% overnight. That is pretty good for a pretty well dead Kraken crypto market. When you look at today's US market with a decent jobs report, you would expect the market to pop, which it did not. Not only that, nothing else is really moving except for 1-2% in silver. So why would you want to miss out on these small coins that can do 10 times the mainstream market?

Get free trading PDFs: https://quantlabs.net/contact-quantlabsnet/
Here is my Telegram: https://t.me/quantlabs
Discord server: https://discord.gg/RhdbV2hx

The New Stack Podcast
Why One Storage Provider Adopted Go as Its Programming Language


Play Episode Listen Later Jun 30, 2021 25:55


Go owes its popularity to a number of factors: Golang advocates often speak of its speed, robustness, and versatility, especially compared with C++, Java, and JavaScript. In this The New Stack Makers podcast, hosts Alex Williams, TNS founder and publisher, and Darryl Taft, news editor, cover the reasons for decentralized storage provider Storj's shift to Go with featured guests JT Olio, CTO, and Natalie Villasana, software engineer, both of Storj.

Storj's need for Go to support its development and operations stems from its unique requirements as the “Airbnb for hard drives,” Olio explained.

The DeFi Download
How to market a crypto company with Jeremy Epstein.


Play Episode Listen Later May 21, 2021 44:19


This week's DeFi Download episode features a conversation between the CEO of Radix DLT, Piers Ridyard, and Jeremy Epstein, founder of Never Stop Marketing and Co-Chief Investment Officer of Crypto Futura Fund. In this episode, they delve into crypto marketing and address topics like community building, tribalism, and what it's like to brief generals in the Pentagon about the blockchain.

Jeremy is the author of three books about the intersection between blockchain and marketing. He has written nearly 1000 blog posts on the blockchain and the impact of crypto-economics and has served as an advisor to OpenBazaar, Dapper Labs, the cryptocurrency FAME, DAOstack, Storj, Arweave, Fortmatic (now Magic), SingularityNET, and Zcash. Jeremy is also a mentor on the Outlier Ventures program Base Camp, and he has twice briefed three-star generals at the Pentagon on the impact of cryptocurrencies.

01:17 Jeremy's viewpoint on marketing in the cryptocurrency space and how it differs from traditional marketing.
02:59 Given that Bitcoin is a community project that became valuable due to the collective belief of people wanting it to be so, how has Jeremy seen the sophistication of ways of thinking about building communities change over the years since Bitcoin's inception?
06:23 What is the crypto world bringing into the traditional world, according to Jeremy? Is the traditional world being inspired by what's going on in crypto, or is it being forced to adapt and change because people expect a new type of interactivity?
07:49 The emergence of tribes and tribalism in the crypto space.
19:21 Jeremy discusses his experience briefing the Pentagon.
22:15 Jeremy's crypto moment and what made him realize that crypto was more than just a strange technology fad.
26:27 Jeremy's advice on constructing a project's narrative or story and how to think about it from the perspective of an external consumer.
29:03 How Jeremy assisted Zcash in refining its marketing message.
31:04 Radix's experience working with Jeremy as a marketing advisor.
36:12 What are some of the upcoming innovations in the intersection of marketing, public ledgers, and decentralized applications?
38:29 What is the definition of Wallet Relationship Management? Is the concept in conflict with the desires of the crypto community?

Further resources:
Company website: Never Stop Marketing
Twitter: @jer979
Blog: Never Stop Marketing Blog
Books & Publications: neverstopmarketing.com/download

Home Gadget Geeks (Video Large)
Crypto Exchanges, Tesla Test Drive and Leaving YouTube Premium – HGG487


Play Episode Listen Later Apr 24, 2021


We spend some time this week talking about using Crypto Exchanges and after Coinbase, what might be your next exchange. Jim does a Tesla test drive last weekend and gives his thoughts. We wrap with some thoughts on Mike's PC upgrade as he considers joining Storj again. All that and more! Full show notes, transcriptions, audio and video at http://theAverageGuy.tv/hgg487 Join Jim Collison / @jcollison and Mike Wieger / @WiegerTech for show #487 of Home Gadget Geeks brought to you by the Average Guy Network. WANT TO SUBSCRIBE? http://theAverageGuy.tv/subscribe

Home Gadget Geeks (Video Small)
Crypto Exchanges, Tesla Test Drive and Leaving YouTube Premium – HGG487


Play Episode Listen Later Apr 24, 2021


We spend some time this week talking about using Crypto Exchanges and after Coinbase, what might be your next exchange. Jim does a Tesla test drive last weekend and gives his thoughts. We wrap with some thoughts on Mike's PC upgrade as he considers joining Storj again. All that and more! Full show notes, transcriptions, audio and video at http://theAverageGuy.tv/hgg487 Join Jim Collison / @jcollison and Mike Wieger / @WiegerTech for show #487 of Home Gadget Geeks brought to you by the Average Guy Network. WANT TO SUBSCRIBE? http://theAverageGuy.tv/subscribe

Home Gadget Geeks (Audio MP3)
Crypto Exchanges, Tesla Test Drive and Leaving YouTube Premium – HGG487


Play Episode Listen Later Apr 24, 2021 67:07


We spend some time this week talking about using Crypto Exchanges and after Coinbase, what might be your next exchange. Jim does a Tesla test drive last weekend and gives his thoughts. We wrap with some thoughts on Mike's PC upgrade as he considers joining Storj again. All that and more! Full show notes, transcriptions, audio and video at http://theAverageGuy.tv/hgg487 Join Jim Collison / @jcollison and Mike Wieger / @WiegerTech for show #487 of Home Gadget Geeks brought to you by the Average Guy Network. WANT TO SUBSCRIBE? http://theAverageGuy.tv/subscribe

Jägaren
Jens Storjägarn från Täxan Eriksson


Play Episode Listen Later Apr 21, 2021 39:49


Jens Eriksson, also known as Storjägarn från Täxan ("the big-game hunter from Täxan"), lives with Williams syndrome. Jens is a person who has inspired many, and in this episode he shares his experiences, including bear hunting in Canada and licensed lynx hunting in Täxan, where Jens grew up.

Observy McObservface
Prototypal Ostriches – The Science of Finding Your Birds with Jocelyn Matthews


Play Episode Listen Later Apr 21, 2021 46:30 Transcription Available


Jonan Scheffler interviews Manager of Community Relations at Storj, Jocelyn Matthews, about camaraderie, community building, and the concept that in a sense, all communities are virtual, whether you're physically going somewhere or not, because the concept of community – the meaning of community – really exists in your head. ::mind blown::

Your Voice First
Decentralizing Alexa4Musicians


Play Episode Listen Later Apr 12, 2021 12:53


IPFS: https://ipfs.io/ FileCoin: https://filecoin.io/ Storj: https://www.storj.io/ NFT: https://www.nft.kred/ attn.live: https://www.attn.live/ --- Send in a voice message: https://anchor.fm/voicefirstai/message Support this podcast: https://anchor.fm/voicefirstai/support

Jaktstugan Podcast
Episode 30, season 2


Play Episode Listen Later Mar 12, 2021 54:18


Rickard, together with Jimmy, calls Storjägarn från Täxan. A very determined guy who has been on his dream hunt in Canada. Hear Jens (as he is actually called) tell about himself and his journey. Best regards, RT

The Bitcoin Podcast
The Bitcoin Podcast #328-Evan Kuo Ampleforth


Play Episode Listen Later Nov 9, 2020 107:18


Dee and Jessie on this roundtable (Corey is on sabbatical for two weeks) talk about what the next big thing in Bitcoin will be: Lightning pools, Wrapped Bitcoin, or something completely different. Our guest is Evan Kuo of Ampleforth.

Links: Ampleforth Website, Twitter, Discord, Whitepaper
Sponsor Links: Avalanche
The Bitcoin Podcast Social Media: Join-Slack, Bitcoin Store, Patreon, Donate!, Discuss

The Blockchain Debate Podcast
Motion: ZK rollup has a better set of security/scalability tradeoff than optimistic rollup (Alex Gluchowski vs. John Adler, co-host: James Prestwich)


Play Episode Play 54 sec Highlight Listen Later Nov 4, 2020 86:30 Transcription Available


Guests: Alex Gluchowski (@gluk64), John Adler (@jadler0)
Host: Richard Yan (@gentso09); James Prestwich (@_prestwich, special co-host)

Today's motion is “ZK rollup has a better set of security/scalability tradeoff than Optimistic rollup.”

Rollups are a class of layer-2 Ethereum scalability solutions. They allow an off-chain aggregation of transactions inside a smart contract. Users can transact inside the contract with security guarantees, and they will settle to the mainchain at some future point.

ZK and optimistic rollups are different in the way they ensure the validity of these transactions that are being kept off-chain.

The ZK approach uses math. It bundles the transactions, compresses them, and adds a zero-knowledge proof that indicates the validity of the state transitions. When the transaction is sent to the mainchain, the block is verified by the attached zero-knowledge proof.

The optimistic approach uses economic incentives. An operator publishes a state root that isn't constantly checked by the rollup smart contract. Instead, everybody hopes that the state transition is correct. However, other operators or users can challenge the validity of the transactions, revert the incorrect block, and slash malicious operators.

We compared the two approaches from the standpoint of security, usability, capital efficiency of exits, and more.

Today's debaters are John Adler and Alex Gluchowski. John is the proposer of the original construction of the optimistic rollup and cofounded Celestia, and Alex is implementing a ZK rollup at Matter Labs. Our co-host James Prestwich is a security consultant and auditor for Solidity contracts, among many other things.

If you're into crypto and like to hear two sides of the story, be sure to also check out our previous episodes.
We've featured some of the best known thinkers in the crypto space. If you would like to debate or want to nominate someone, please DM me at @blockdebate on Twitter. Please note that nothing in our podcast should be construed as financial advice.

Source of select items discussed in the debate:
Coindesk's layman guide for rollups: https://www.coindesk.com/ethereum-dapps-rollups-heres-why
Alex Gluchowski on the difference between the two rollups: https://medium.com/matter-labs/optimistic-vs-zk-rollup-deep-dive-ea141e71e075
John Adler explains optimistic rollup: https://medium.com/@adlerjohn/the-why-s-of-optimistic-rollup-7c6a22cbb61a
Celestia: https://celestia.org/
Fuel Labs: https://fuel.sh/
Matter Labs: https://matter-labs.io/

Debater bios:
Alex Gluchowski is co-founder of Matter Labs, currently working on scaling Ethereum with zkSNARKs. He was previously CTO of PaulCamper, an online platform for sharing campervans and caravans in Europe.
John Adler is co-founder of Celestia and Fuel Labs. He is the original proposer of the optimistic rollup construction. He previously did layer-2 research at ConsenSys. Interestingly, he is a self-proclaimed blockchain skeptic.
James Prestwich founded cross-chain solution company Summa, subsequently acquired by the layer-1 blockchain firm Celo. He previously founded Storj, a decentralized cloud storage provider.
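The optimistic-rollup mechanics described in this episode (an operator publishes a state root that the chain does not check, and anyone can re-execute the batch and challenge a bad root) can be sketched as a toy model. This is not any production rollup: the "state root" below is a plain SHA-256 hash standing in for a Merkle root, and the accounts, transactions, and amounts are all made up for illustration.

```python
import hashlib

def state_root(balances: dict) -> str:
    # Toy "state root": hash of the sorted account balances
    # (a stand-in for a Merkle root in a real rollup).
    encoded = ",".join(f"{k}:{v}" for k, v in sorted(balances.items()))
    return hashlib.sha256(encoded.encode()).hexdigest()

def apply_tx(balances: dict, sender: str, receiver: str, amount: int) -> dict:
    b = dict(balances)
    assert b.get(sender, 0) >= amount, "insufficient funds"
    b[sender] -= amount
    b[receiver] = b.get(receiver, 0) + amount
    return b

# The operator executes a batch off-chain and publishes only the root.
genesis = {"alice": 100, "bob": 50}
honest_state = apply_tx(genesis, "alice", "bob", 30)
published_root = state_root(honest_state)

def challenge(prev_state: dict, txs: list, claimed_root: str) -> bool:
    """Re-execute the batch during the challenge window and compare
    roots. Returns True when fraud is proven (roots mismatch)."""
    state = prev_state
    for sender, receiver, amount in txs:
        state = apply_tx(state, sender, receiver, amount)
    return state_root(state) != claimed_root

# An honest root survives the challenge window:
assert not challenge(genesis, [("alice", "bob", 30)], published_root)

# A dishonest operator claims a root that credits bob an extra 10;
# any re-executor detects the mismatch, and the operator can be slashed.
bad_root = state_root(apply_tx(genesis, "alice", "bob", 40))
assert challenge(genesis, [("alice", "bob", 30)], bad_root)
print("fraud proof succeeds against the bad root")
```

The ZK approach replaces the challenge window with a validity proof checked at publication time, which is the tradeoff the debate is about.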

The Business of Open Source
Disrupting the Cloud Storage Market with Ben Golub


Play Episode Listen Later Oct 7, 2020 24:52


This conversation covers:

- The advantages of using a distributed data storage model.
- How Storj is creating new revenue models for open-source projects, and how the open-source community is responding.
- The business and engineering reasons why users decide to opt for cloud-native, according to Ben.
- Viewing cloud-native as a journey, instead of a destination, and some of the top mistakes that people tend to make on the journey. Ben also talks about the top pitfalls people make with storage and management.
- Why businesses are often caught off guard by high storage costs, and how Storj is working to make it easier for customers.
- Avoiding vendor lock-in with storage.
- Advice for people who are just getting started on their cloud journey.
- The person who should be responsible for making a cloud journey successful.

Links:
Storj Labs: https://storj.io/
Twitter: https://twitter.com/golubbe
GitHub: https://github.com/golubbe

Transcript

Emily: Hi everyone. I'm Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product's value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn't talk about them. Instead, we talk a lot about technical reasons. I'm hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you'll join me.

Emily: Welcome to The Business of Cloud Native, my name is Emily Omier. I'm your host, and today I'm chatting with Ben Golub. Ben, thank you so much for joining us.

Ben: Oh, thank you for having me.

Emily: And I always like to just start off with having you introduce yourself. So, not only where you work and what your job title is, but what you actually spend your day doing.

Ben: [laughs]. Okay. I'm Ben Golub.
I'm currently the executive chair and CEO of Storj Labs, which is a decentralized storage service. We kind of like to think of it as the Airbnb of disk drives. But probably most of the people on your podcast, if they're familiar with the, sort of, cloud-native space, would have known me as the former CEO of Docker from when it was released up until a few years ago. But yeah, I tend to spend my days doing a lot of stuff, in addition to family and dealing with COVID, running startups. This is now my seventh startup, fourth as CEO.

Emily: Tell me a little bit, like, you know, when you stumble into your home office—just kidding—nobody is going to the office, I know. But when you start your day, what sort of tasks are on your to-do list? So, what do you actually spend your time doing?

Ben: Sure. We've got a great team of people who are running a decentralized storage company. But of course, we are decentralized in more ways than one. We are 45 people spread across 15 different countries, trying to build a network that provides enterprise-grade storage on disk drives that we don't own, that are spread across 85 different countries. So, there's a lot of coordination, a lot of making sure that everybody has the context to do the right thing, and that we stay focused on doing the right thing for our users, doing the right thing for our suppliers, doing the right thing for each other, as well.

Emily: One of the reasons I thought it'd be really interesting to talk with you is that I know your goal is to, sort of, revolutionize some of the business models related to managing storage. Can you talk about that a little bit more?

Ben: Sure. Sure. I mean, obviously, there's been a big trend over the past several years towards the Cloud in general, and a big part of the [laughs] Cloud is storage. Actually, AWS started with S3, and it's a $90 billion market that's growing. The world's going to create enough data this year to fill a stack of CD-ROMs to the orbit of Mars and back.
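As an aside, the CD-ROM claim roughly checks out. A back-of-the-envelope check in Python; every input here is an assumption, not from the transcript (a commonly cited IDC-style estimate of roughly 64 zettabytes of data created per year, 700 MB capacity and 1.2 mm thickness per disc):

```python
# Sanity-checking the "stack of CD-ROMs to Mars" claim.
# All inputs are assumptions for illustration.
DATA_PER_YEAR_BYTES = 64e21   # ~64 ZB created per year (assumed estimate)
CD_CAPACITY_BYTES = 700e6     # 700 MB per CD-ROM
CD_THICKNESS_M = 1.2e-3       # 1.2 mm per disc

discs = DATA_PER_YEAR_BYTES / CD_CAPACITY_BYTES
stack_km = discs * CD_THICKNESS_M / 1000

print(f"{discs:.2e} discs, stack ~{stack_km:.2e} km")
# Earth-Mars distance ranges from roughly 5.5e7 km to 4.0e8 km, so a
# stack on the order of 1e8 km is indeed a trip to Mars at the nearer
# end of that range.
```

Under these assumptions the stack works out to roughly 10^14 discs and about 110 million km, so the order of magnitude of the claim holds.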
And yet prices haven't come down, really, in about five years, and the whole market is controlled by essentially three players: Microsoft, Google, and the largest, Amazon, who also happen to be three of the five largest companies on the planet. And we think that data is so critical to everything that we do that we want to make sure that it doesn't stay centralized in the hands of a few, but that we, sort of, create a more, sort of, democratic—if you will—way of handling data that also addresses some of the serious privacy, data mining, and security concerns that happen when all the data is held by only a few people.

Emily: With this, I'm sure you've heard about digital vegans. So, people who try to avoid all of the big tech giants—

Ben: Right, right.

Emily: Does this make it possible to do that?

Ben: Well, so we're more of a back end. So, we're a service that people who produce consumer-facing services use. But absolutely, if somebody—and we actually have people who want to create a more secure way of providing data backup, more secure way of enabling data communications, video sharing, all these sorts of things, and they can use us and service those [laughs] digital vegans, if you will.

Emily: So, if I'm creating a SaaS product for digital vegans, I would go with you?

Ben: I would hope you'd consider us, yeah. And by the way, I mean, also people who have mainstream applications use us as well. I mean, so we have people who are working with us who may have sensitive medical data on people, or people who are doing advanced research into areas like COVID, and they're using us partially because we're more secure and more private, but also because we are less likely to be hacked. And also because, frankly, we're faster, cheaper, more resilient.

Emily: I was just going to ask, what are the advantages of distributed storage?

Ben: Yeah. We benefit from all the same things that the move towards cloud-native in general benefits from, right?
When you take workloads, and you take data, and you spread them across large numbers of devices that are operated independently, you get more resilience, you get more security, you can get better performance because things are closer to the edge. And all of these are benefits that are, sort of, inherent to doing things in a decentralized way as opposed to a centralized way. And then, quite frankly, we're cheaper. I mean, because of the economics of doing things this way, we can price anywhere from a half to a third of what the large cloud providers offer, and do so profitably for ourselves.

Emily: You also offer some new revenue models for open-source projects. Can you talk about that a little bit more?

Ben: Sure. I mean, obviously I come from an open-source background, and one of the big stories of open-source for the past several years is the challenge for open-source companies in monetizing. In particular, in a cloud world, a large number of open-source companies are now facing the situation where their products, completely legally but nonetheless not in a fiscally sustainable way, are run by the large cloud companies and essentially given away as a loss leader. So, a large cloud company might take a great product from Mongo, or Redis, or Elastic, and run it essentially for free, give it away for free, and not pay Mongo, Elastic, or Redis. And the cloud companies monetize that by charging customers for compute, and storage, and bandwidth. But unfortunately, the people who've done all the work to build this great product don't have the opportunity to share in the monetization. And it makes it really very hard for them to adopt a SaaS model, which for many of them is really the best way that they would normally have for monetizing their efforts.
So, what we have done is we've launched a program that basically turns that on its head and says, “Hey, if you are an open-source project, and you integrate with us in a way that your users send data to us, we'll share the revenue back with you. And as more of your users share more data with us, we'll send more money back to you.” And we think that that's the way it should be. If people are building great open-source projects that generate usage and revolutionize computing, they should be rewarded as well.

Emily: How important is this to the open-source community? How challenging is it to find a way to support an open-source project?

Ben: It's critical. I'd start by saying two-thirds of all cloud workloads are open-source, and yet in the $180 billion cloud market, less than $5 billion [unintelligible] going back to the open-source projects that have built these things. And it's not easy to build an open-source project, and it takes resources. And even if you have a large community, you have developers who have families, or [laughs] need to eat, right? And so, as an open-source company, what you really want to be able to do is become self-sustaining. And while having contributions is great, ultimately, if open-source projects don't become self-sustaining, they die.

Emily: A question, sort of, about the open-source ethos: how does the open-source community feel about this? It's obvious developers have to eat just like everybody else, and it seems like it should be obvious that they should also be rewarded when they have a project that's successful. But sometimes you hear that not everybody is comfortable with open-source being monetized in any way. It's like a dirty word.

Ben: Yeah.
I think [unintelligible] some people who object to open-source being monetized, and that tends to be a fringe, but I think there's a larger percentage that don't like the notion that you have to come up with a more restrictive license in order to monetize. And I think, unfortunately, a lot of open-source companies have felt the need to adopt more restrictive licenses in order to prevent their product being taken and used as a loss leader by the large cloud companies. And I guess our view is, “Hey, what the world doesn't need is a different kind of license. It needs a different kind of cloud.” And that's what we've been doing. And I think our approach has, frankly, gotten a lot of enthusiasm and support because it feels fair. It's not trying to block people from doing what they want to do with open-source and saying, “This usage is good, this is bad.” It's just saying, “Hey, here's a new viable model for monetizing open-source that is fair to the open-source companies.”

Emily: So, does Storj just manage storage? Or, where's the compute coming from?

Ben: It's a good question. Generally speaking, the compute can either be done on-premise, or it can be done at the edge, and we're, sort of, working with both kinds. We ourselves don't offer a compute service, but because the world is getting more decentralized, and because of, frankly, the rise of cloud-native approaches, people are able to have the compute and the storage happening in different places.

Emily: How challenging is it to work with storage, and how similar of an experience is it to working with something like AWS for an end-user? I just want to get my app up.

Ben: Sure, sure. If you have an S3-compatible application, we're also S3 compatible. So, if you've written your application to run on AWS S3—and frankly, these days most people use the S3 API for Google and Microsoft as well—it's really not a big effort to transition.
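The transition Ben describes usually amounts to pointing an existing S3 client at a different endpoint while the rest of the configuration stays the same. A minimal, stdlib-only sketch of that idea (the helper function and endpoint URLs here are illustrative stand-ins, not Storj's actual SDK; with a real client library such as boto3, the equivalent change is passing a different `endpoint_url`):

```python
# Sketch: an S3-style client configuration is provider-agnostic; moving
# between S3-compatible providers changes only the endpoint (and keys).
# The helper below is a hypothetical stand-in for real client setup.

def s3_client_config(endpoint_url: str, access_key: str, secret_key: str) -> dict:
    """Build the small set of settings an S3-compatible client needs."""
    return {
        "endpoint_url": endpoint_url,
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }

# Switching providers is a one-line change to the endpoint:
aws = s3_client_config("https://s3.us-east-1.amazonaws.com", "KEY", "SECRET")
storj = s3_client_config("https://gateway.example.io", "KEY", "SECRET")

# Everything except the endpoint stays identical.
assert {k: v for k, v in aws.items() if k != "endpoint_url"} == \
       {k: v for k, v in storj.items() if k != "endpoint_url"}
```

The point of the sketch is the shape of the change, not the specific values: application code that reads and writes objects never needs to know which provider sits behind the endpoint.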
You change a few lines of code, and suddenly the data is being stored in one place versus the other. We also have native libraries and bindings in a lot of different languages, so for people who want to take full advantage of everything that we have to offer, it's a little bit more work. But for the most part, our aim is to say, “You don't have to change the way that you do storage in order to get a much better way of doing storage.”

Emily: So, let me ask a couple questions just related to the topic of our podcast, the business of cloud native. What do you think are the reasons that end users decide to go for cloud-native?

Ben: Oh, I think there are huge advantages across the board. There are certainly a lot of infrastructural advantages: the fact that you can scale much more quickly, the fact that you can operate much more efficiently, the fact that you are able to be far more resilient. These are all benefits that seem to come with adopting more cloud-native approaches on the infrastructure side, if you will. But for many users, the bigger advantages come from running your applications in a more cloud-native way. Rather than having a big monolithic application that's tied tightly to a big monolithic piece of hardware, where both are hard to change and both are at risk, if you write applications composed of smaller pieces that can be modified quickly and independently by small teams and scaled independently, that's just a much more scalable, faster way to build, frankly, better applications. You couldn't have a Zoom, or a Facebook, or Google search, or any of these massive-scale, rapidly changing applications being written in the traditional way.

Emily: Those sound kind of like engineering reasons for cloud-native. What about business reasons?

Ben: Right. So, the business reasons [unintelligible], sort of, come alongside.
When you're able to write applications faster, modify them faster, adapt to a changing environment faster, and do it with fewer people, all of those end up having really big business benefits. Being able to scale flexibly gives huge economic benefits, but I think the economic benefits on the infrastructure side are probably outweighed by the business flexibility: the fact that you can build things quickly and modify them quickly, and react quickly to a changing environment, that's [unintelligible]. Obviously, again, to use Zoom as an example, there's this two-week period, back in March, where suddenly almost every classroom and every business started using Zoom, and Zoom was able to scale rapidly, adapt rapidly, and suddenly support that. And that's because it was done in a cloud-native way.

Emily: It's interesting, one of the tensions that I've seen in this space is that some people like to talk a lot about cost benefits. So, we're going to move to cloud-native because it's cheap, we're going to reduce costs. And then there's other people that say, well, this isn't really a cost story. It's a flexibility, agility, and speed story.

Ben: Yeah, yeah. And I think the answer is it can be both. What I always say, though, is cloud-native is not really a destination, it's a journey. And how far you go along that path, and whether you emphasize the infrastructural side versus the development side, sort of depends on who you are, and what your application is, and how much it needs to scale. And it's absolutely the case that for many companies and applications, if they try to look like Google from day one, they're going to fail.
And they don't need to, because the way you build an application that's going to be serving hundreds of millions of people is different from the way you build an application that's going to be serving 50,000 people.

Emily: What do you see as some of the biggest misconceptions or mistakes that people make on this journey?

Ben: So, I think one is clearly that they view it as an all-or-nothing proposition, and they don't think about why they're going on the journey. I think a second mistake that they often make is that they underestimate the organizational change that it takes to build things in the cloud-native way. The people, and how they work together, and how you organize, is as big a transition for many people as the tech stack that you use. And I think the third is that they don't take full advantage of what it takes to move a traditional application to run in a cloud-native infrastructure. And you can get a lot of benefits, frankly, just by containerizing or Docker-izing a traditional app and moving it online.

Emily: What about specifically related to storage and data management? What do you think are some misconceptions or pitfalls?

Ben: Right. So, I think that the challenge that many people have when they deal with storage is that they don't think about the data at rest. They don't think about the security issues that are inherent in having data that can be attacked in a single place, or needs to be retrieved from a single place. And part of why we built Storj, frankly, is a belief that if you take data and you encrypt it, and you break it up into pieces, and you distribute those pieces, you are doing things in a way that's inherently better: you're not dependent on any one data center being up, or any one administrator doing their job correctly, or any one password being strong. By reducing the susceptibility to single points of failure, you can create an environment that's more secure, much faster, and much more reliable.
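The encrypt, split, and distribute approach Ben sketches can be illustrated with the simplest possible redundancy scheme: XOR parity. Production systems like Storj use Reed-Solomon erasure codes spread across many more nodes; this stdlib-only toy (function names and parameters are hypothetical) just shows why losing one storage location doesn't lose the data:

```python
# Toy "2-of-3" scheme: split a blob into two halves plus an XOR parity
# piece. Any single lost piece can be reconstructed from the other two.
# Real systems use Reed-Solomon codes with dozens of pieces per object.

def split_with_parity(data: bytes):
    """Return (half_a, half_b, parity) where parity = half_a XOR half_b."""
    if len(data) % 2:
        data += b"\x00"  # pad to an even length so the halves match up
    a, b = data[: len(data) // 2], data[len(data) // 2 :]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def recover_half(known: bytes, parity: bytes) -> bytes:
    """Rebuild the missing half: XOR is self-inverse, a ^ (a ^ b) == b."""
    return bytes(x ^ y for x, y in zip(known, parity))

a, b, parity = split_with_parity(b"encrypted-blob")
# Suppose the node holding piece b goes offline:
assert recover_half(a, parity) == b
# Or the node holding piece a:
assert recover_half(b, parity) == a
```

Because each piece would be stored on an independently operated node, no single data-center outage, administrator error, or compromised machine can destroy or reveal the object, which is the "math" Ben refers to next.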
And that's math. And it's kind of shocking to see that people who make the journey to cloud-native, while they're changing lots of other aspects of their infrastructure and their applications, are repeating the same mistakes that people have been making for 30 years in terms of data access, security, and distribution.

Emily: Do you think that that is partially a skills gap?

Ben: It may be a skills gap, but frankly, there's also been a dearth of other viable options. Frequently, when I'm talking with customers, they say, “Hey, we've been thinking about being decentralized for a while, but it just has been too difficult to do.” Or there have been decentralized options, but they're, sort of, toys. And so, what we've aimed to do is create a decentralized storage solution that is enterprise-grade and S3 compatible, so it's easy to adopt, but that brings all the benefits of decentralization.

Emily: I'm also just curious, because of the sort of organizational changes that need to happen. I mean, everybody, particularly in a large organization, is going to have these super-specific areas of expertise, and to a certain extent, you have to bring them all together.

Ben: You do. Right. And so I'm a big believer in picking pilot projects that you do with a small team, and you get some wins, and nothing helps evangelize change better than wins. It's hard to get people to change if they don't see success and a better world at the end of the tunnel. And so, what we've tried to do, and what I think people on the cloud-native journey often do, is say, “Let's take a small, low-risk application or a small, low-risk dataset, handle it in a different way, and show the world that it can be done better,” right?
Or, “Show our organization that it can be better.” And then you build up not only muscle memory around how you do this, but you build up natural advocates in the organization.

Emily: Going back to this idea of costs, you mentioned that Storj can reduce costs substantially. Do you think a lot of organizations are surprised at how much cloud storage costs?

Ben: Yes. And unfortunately, it's a surprise that comes over time. I think the typical story is you get started with the cloud, and there's not a lot of large upfront cost when your usage is low. So, you start with somebody pulling out their credit card and building their pilot project, charging their Amazon, or their Google, or their Microsoft bill directly to their credit card, and then they move to paying through a centralized organization. But then as they grow, suddenly this thing that seemed really low-priced becomes very, very expensive, and they feel trapped. And data, in particular, in some ways grows a lot faster than compute. Because, generally speaking, you're keeping around the data that you've created. You have this base of data that keeps growing: you're creating more data every day, but you're also storing all the data that you've created in the past. So, it often grows a lot more exponentially than compute. And because data at rest is somewhat expensive to move around, people often find themselves regretting their decisions a few months into the project if they're stuck with one centralized provider. And the providers make it very difficult and expensive to move data out.

Emily: What advice would you have for somebody who's at that stage, at the just-getting-started, whipping-out-my-credit-card stage? What do you do to avoid that sinking feeling in your stomach five months from now?

Ben: Right.
I guess what I would say is: don't make yourself dependent on any one provider or any one person. Things have gotten so much more compatible, on the storage side through things like what we do, and on the compute side through the use of containers and Docker. You don't need to lock yourself in, as long as you're thoughtful at the outset.

Emily: And who's the right person to be thinking about these things?

Ben: That's a good question. I'd like to say the individual developer, except developers, for the most part, have something that they want to build, [laughs] they want to get it built as fast as possible, and they don't want to worry about infrastructure. But I really think it's probably that set of people that we call DevOps people who should be thinking about this: thinking not only how can we enable people to build, deploy, and secure faster, but how can we build, secure, and deploy in a way that doesn't make us dependent on centralized services?

Emily: Do you have other pieces of advice for somebody setting out on the “cloud journey,” in quotes, to basically avoid the feeling, midway through, that they messed up?

Ben: So, I think that part of it is being thoughtful about how you set off on this cloud journey. Know where you want to end up, I think this [unintelligible]. If you want to set off on a journey across the country, it's good to know that you want to end up in Oregon versus Utah or Arizona, [unintelligible] from east to west, and make sure your whole organization has a view of where you want to get. And then along the way, you can say, “You know what?
Let's course-correct.” But if you are going on the cloud journey because you want to save money, you want to have flexibility, you don't want to be locked in, and you want to be able to move stuff to the edge, then think really seriously about whether your approach to the cloud is helping you achieve those ends. And, again, my view is that if you are going off on a journey to the cloud, and you are locking yourself into a large provider that is highly centralized, you're probably not going to achieve those aims in the long run.

Emily: And then again, who is the persona who needs to be thinking about this? And ultimately, whose responsibility is it to make a cloud journey successful?

Ben: So, I think that, generally speaking, a cloud journey starts with initial pilots, where it's often a small team proving that things can be done in a cloud-native way; they should do whatever it takes to prove that something can be done and get some successes. But then I think that the head of engineering, the Vice President of Operations, the person who's heading up DevOps should be thoughtful, and should be thinking about where the organization is going, from that initial pilot into developing the long-term strategy.

Emily: Anything else that you'd like to add?

Ben: Well, these are a lot of really good questions, so I appreciate all your questions and the topic in general. I guess I would just add, maybe with my own personal bias, that data is important. The cloud is important, but data is really important. Look at the world creating enough data this year to fill a stack of CD-ROMs reaching to the orbit of Mars and back. Some of that is cat videos, but buried in there is probably also the cure for COVID, and the cure for cancer, and a new form of energy.
And so, making it possible for people to create, and store, and retrieve, and use data in a way that's cost-effective, where they don't have to throw out data, and that is secure and private, that's a really noble goal. And that's a really important thing, I think, for all of us to embrace.

Emily: Just a couple of final questions. The first one, I just like to ask everybody: what is your favorite can't-live-without software engineering tool?

Ben: Honestly, I think that collaboration tools, writ large, are important. Whether that's things like GitHub, or things like video conferencing, or things like shared meeting spaces, it's really the tools that enable groups of people to work together that I think are the most important.

Emily: Where can people connect with you or follow you?

Ben: Oh, so I'm on Twitter, @golubbe, G-O-L-U-B-B-E. And that's probably the best place to initially reach out to me, but then I [blog], and I'm on GitHub as well. I'm not that great [unintelligible].

Emily: Well, thank you so much for joining us. This was a great conversation.

Ben: Oh, thank you, Emily. I had a great conversation as well.

Emily: Thanks for listening. I hope you've learned just a little bit more about The Business of Cloud Native. If you'd like to connect with me or learn more about my positioning services, look me up on LinkedIn: I'm Emily Omier—that's O-M-I-E-R—or visit my website, which is emilyomier.com. Thank you, and until next time.

Announcer: This has been a HumblePod production. Stay humble.

The Bitcoin Podcast
BlockChannel Episode 16: Partly Cloudy, with a Chance of Ethereum

Apr 3, 2017 • 42:30


On this episode of BlockChannel, McKie, Dee and Dr. Petty sit down with Shawn Wilkinson of Storj. Storj is a decentralized cloud storage provider that was created back in 2014. Since its inception, Storj has been hard at work creating “cloud farms” of user-submitted disk storage, where individuals who “rent” out their additional disk space can receive monetary incentives for supporting the network. Originally built on XCP (Counterparty), Storj now plans to migrate away from the now-abandoned project (XCP is a platform created originally on the Bitcoin blockchain) and move its token and service over to Ethereum, to leverage its smart-contracting capabilities and reduced fees. Support the Show! ETH Donation Address: 0xa368e33E927D825F5FD05463E6A781414672251c Show Link(s): Storj: storj.io Ethereum: ethereum.org CounterParty: counterparty.io/ Swarm: swarm-gateways.net/bzz:/theswarm.eth/ Intro/Outro Music: “Wait Luther” by Faruhdey: Faruhdey — Wait-luther-outro Show Sponsor(s): Gnosis PM: gnosis.pm

The Bitcoin Game
The Bitcoin Game #43: Monero's 'FluffyPony' Part 2

Mar 29, 2017 • 70:40


Hello, welcome to episode 43 of The Bitcoin Game, I'm Rob Mitchell. This is part two of my interview with Riccardo "FluffyPony" Spagni of Monero. In part one, we focused on Riccardo's past and the early days of Monero. In today's episode, Riccardo gives his views on topics like Ethereum, Proof of Work, Monero's regular hard forks, Barry Silbert's love of Ethereum Classic, tinfoil-hat theories about Gavin Andresen, Riccardo's CEO title, and much more. SHOW LINKS Monero - getmonero.org Monero Contributors - openhub.net/p/monero Sia vs. Storj vs. Maidsafe - forum.sia.tech/topic/21/sia-vs-storj-vs-maidsafe Riccardo on Twitter - @FluffyPonyZA STAY IN TOUCH https://Twitter.com/TheBTCGame http://TheBitcoinGame.com Rob@TheBitcoinGame.com Bitcoin tipping address for this episode: 1G8HDg5EsPQpamKYS2bDya9Riv9xv1nVo5 I set up a Monero wallet, if you prefer tipping that way: 4AKF3sqUsLvLPXH6KEVHb6QYV2pQWfnkyJwnTYBZUcHA36iFHJgvhEr6X2yo6aysGTMpMwWmK2fWhjXsiQmfenpzPDPjNFP Thanks so much for taking the time to listen to The Bitcoin Game! SPONSOR While much of Bitcoiners' time is spent in the world of digital assets, sometimes it's nice to own a physical representation of the virtual things you care about. For just the price of a cup of coffee or two (at Starbucks), you can own your own Bitcoin Keychain or the newer Bitcoin Fork Pen. As Seen On: TechCrunch • Engadget • Ars Technica • Popular Mechanics • Maxim • Inc. • Vice • RT • Bitcoin Magazine • VentureBeat • CoinDesk • Washington Post • Forbes • Fast Company http://bkeychain.com http://bitcoinforks.com CREDITS All music in this episode of The Bitcoin Game was created by Rob Mitchell, or recorded at jams with friends (like Mike Coleman). The Bitcoin Game box art was created from an illustration by Rock Barcellos.

The Crypto Show
James Prestwich Storj.io, Lyn Ulbricht and Robert Lindsay Nathan

Aug 22, 2016 • 95:34


Tonight we talk with James Prestwich, COO of Storj.io, about the latest with the company and how Storj can be a viable alternative to the cloud. Then Lyn Ulbricht, mother of Ross Ulbricht, calls in with the most recent updates on Ross's case. Robert Lindsay Nathan also joins us to offer up some of his music to the FreeRoss.org legal fund. You can contribute to that by emailing us at the1cryptoshow@gmail.com

The Bitcoin Game
The Bitcoin Game 17 - Yoshi Goto - Nicolas Courtois - Shawn Wilkinson

May 14, 2015 • 39:55


Hello, welcome to episode 17 of The Bitcoin Game, I'm Rob Mitchell. In this episode I speak with Yoshi Goto of BitMain (maker of the AntMiner), outspoken cryptographer Nicolas Courtois, and Storj creator Shawn Wilkinson. These interviews took place at the 2015 Texas Bitcoin Conference. Nicolas Courtois and Shawn Wilkinson were both speakers at this conference, while Yoshi Goto has presented at other Bitcoin events. MAGIC WORD Listen for the magic word, and submit it to your LetsTalkBitcoin.com account to claim a share of this week's distribution of LTBcoin. Listeners now have a full week from the release date to claim a magic word. The magic word for this episode must be submitted by 5:00am Pacific Time on May 21, 2015. SHOW LINKS BitMain https://www.bitmaintech.com AntPool https://antpool.com Nicolas Courtois http://nicolascourtois.com Ethereum https://www.ethereum.org Ralph Merkle http://en.wikipedia.org/wiki/Ralph_Merkle Neal Koblitz http://en.wikipedia.org/wiki/Neal_Koblitz RFC 6979 http://www.rfc-editor.org/info/rfc6979 Johoe interviewed on The Bitcoin Game https://letstalkbitcoin.com/blog/post/the-bitcoin-game-7-bitcoin-hero-jochen-aka-johoe Storj http://storj.io Shawn Wilkinson's Site http://super3.org Challenge-Response Authentication http://en.wikipedia.org/wiki/Challenge%E2%80%93response_authentication Factom http://factom.org Maidsafe http://maidsafe.net Texas Bitcoin Conference http://texasbitcoinconference.com Send Small Donations to Red Cross for Nepal Earthquake Recovery https://blog.changetip.com/donations-red-cross-nepal-earthquake SPONSOR Bitcoin Keychains by Bkeychain You've seen these keychains on dozens and dozens of websites, it's about time you had one of your own! These substantial metal keychains make great conversation starters, and they also make great gifts to or from Bitcoiners. You can find a list of online retailers at Bkeychain.com, and several support Bitcoin so much, they don't even accept fiat currency.
So what are you waiting for? http://Bkeychain.com MUSIC All music in this episode was created by me. Thanks for listening! STAY IN TOUCH https://Twitter.com/TheBTCGame http://TheBitcoinGame.com Email me at Rob at TheBitcoinGame.com

The Bitcoin Game
The Bitcoin Game #4 - David A. Johnston

Nov 14, 2014 • 45:31


Welcome to episode number four of The Bitcoin Game. I'm your host, Rob Mitchell. In this episode I talk to serial entrepreneur and angel investor David Johnston. We discuss the distributed-app projects he's involved in, his impressive business background, how he went all in on Bitcoin several years ago, and much more. David is a real champion of decentralized open-source projects. Listen for the magic word, and submit it to your LetsTalkBitcoin.com account to claim your share of the weekly distribution of LTBcoin. The magic word must be submitted within four days of this episode's release. Show links Ethereum https://www.ethereum.org David's Crowdsale Best Practices White Paper on GitHub https://github.com/DavidJohnstonCEO/CrowdsaleBestPractices DApps Fund http://www.dappsfund.com Andreas Antonopoulos speaks with the Canadian Senate's Committee on Banking, Trade and Commerce http://letstalkbitcoin.com/blog/post/lets-talk-bitcoin-152-towards-innovation-and-progress La`Zooz - Decentralized Transportation & RideSharing Platform http://www.lazooz.net Factom (white paper due November 17, 2014) http://www.factom.org Storj http://storj.io Maidsafe http://maidsafe.net BlockAuth https://blockauth.com Blockstream - Sidechains White Paper http://www.blockstream.com Music Music in this episode was created by me, or with friends and family. Ganesh Painting Company is the name of one of the jam bands I regularly feature live recordings of. Some of the musicians you're hearing in the band are Mike Coleman, Rick Marshal, and Michael Goldstein. The last song features vocals by Angelo Spyropoulos. Feel free to contact me if you want more info about any music you hear on the podcast. Stay in touch with the show https://Twitter.com/TheBTCGame http://TheBitcoinGame.com Email me at Rob at TheBitcoinGame.com Bitcoin tip address: 1G8HDg5EsPQpamKYS2bDya9Riv9xv1nVo5