“If you're not careful, cloud computing can lose more money faster than any invention in history” - Mark Robinson, Infrastructure Engineer at Plaid

This week, guest host Ben Lloyd Pearson sits down with Plaid's Mark Robinson to learn how he helped Plaid save 25% in costs by optimizing existing resources and eliminating waste in cloud computing. Mark explains the importance of understanding your cloud bill, identifying areas of overspend, and implementing changes that lead to significant savings. From the basics of tagging resources to the intricacies of optimizing network and storage costs, Mark offers practical tips that can help you uncover countless optimization opportunities. Tune in to learn about the rewards of improving cloud cost efficiency, the role of organizational buy-in, and the benefits of making cost optimization a company-wide value.

Episode Highlights:
00:56 How did cloud computing get so expensive?
02:34 Digging into what your costs actually are
04:55 How can you account for the various services you use?
07:35 Where will organizations get the most value?
12:26 How cloud costs relate to better code quality
16:08 Organizational blockers to cost savings
19:32 Getting buy-in from leadership on cutting cloud costs

Show Notes:
Download your copy of the Essential Guide to Software Engineering Intelligence
https://www.finops.org/
https://www.linkedin.com/in/mark-robinson-944084b

Support the show:
Subscribe to our Substack
Leave us a review
Subscribe on YouTube
Follow us on Twitter or LinkedIn

Offers:
Learn about Continuous Merge with gitStream
Get your DORA Metrics free forever
In this celebration-themed episode of the PowerShell Podcast, Steven Judd makes his long-awaited return. We announce the first PowerShell Podcast MVP winner, share tips about regex and URL encoding, and dive deep into PowerShell on Linux, CloudShell, and becoming a lifelong learner. All this and more!

Guest Bio and links:
Steven Judd is a 25+ year IT Pro and most recently an Infrastructure Engineer at Tenstreet LLC. His recent experience includes Enterprise Email Administrator, Digital Security Analyst, and Cloud and DevOps Advisor for cloud-focused solutions and infrastructure. He has been using PowerShell since 2010 and co-developed a custom training program for PowerShell. Most recently, he was a Senior Editor for the Modern IT Automation with PowerShell book. He is also a co-author and co-editor of the PowerShell Conference Book 3. He loves to help people learn and recognize the value of automation. He spends his free time learning more about PowerShell, digital security, and cloud technologies, along with creating and telling Dad jokes. You can find him hanging out on the PowerShell Discord Server, running marathons, playing the cello, plus a handful of other hobbies he can't seem to quit. Please follow him on Twitter, @stevenjudd, read his blog, and review, use, and improve his code on GitHub.
PowerShell Podcast Home page: https://www.pdq.com/resources/the-powershell-podcast/
PowerShell Pro Tips: https://www.youtube.com/watch?v=K95ovoMh170
https://discord.gg/pdq
https://www.thingiverse.com/thing:6584540
https://sid-500.com/2024/05/14/hyper-v-enabling-vm-resource-metering/
https://devblogs.microsoft.com/commandline/winget-commandnotfound/
https://leanpub.com/modernautomationwithpowershell
https://www.amazon.com/PowerShell-Conference-Book-3/dp/B08MGR749H/
https://www.youtube.com/watch?v=BZZM6i8AE1Y
https://aka.ms/psdiscord
https://twitter.com/stevenjudd/
https://blog.stevenjudd.com/
https://github.com/stevenjudd
Judd Song Special:
https://www.youtube.com/watch?v=EQIVWhKhwPA
https://www.youtube.com/watch?v=gqzXGGld5-c
https://www.youtube.com/watch?v=eh-72yBP7sw
https://www.youtube.com/watch?v=u9Dg-g7t2l4
https://www.youtube.com/watch?v=JEz1qGS0T1Q
In an era where digital connectivity is synonymous with economic empowerment, a glaring disparity exists: a large section of the world lacks this connectivity, which leads to a plethora of challenges in education, quality of life, and basic human needs. In this episode of The Brand Called You, Nasrat Khalid gives us insight into the life of a refugee and the challenges they face. He also sheds light on the genesis of @ASEELApp and how it is helping Afghanistan with digital connectivity.

About Nasrat Khalid
Nasrat Khalid is a systems Infrastructure Engineer based in the United States. He has been featured by TIME, NPR, and Al Jazeera, and is a recipient of the Andrew Rice Award. He is also the founder of @ASEELApp and is passionate about expanding access to the digital economy.

---
Support this podcast: https://podcasters.spotify.com/pod/show/tbcy/support
Jake Gold, Infrastructure Engineer at Bluesky, joins Corey on Screaming in the Cloud to discuss his experience helping to build Bluesky and why he's so excited about it. Jake and Corey discuss the major differences when building a truly open-source social media platform, and Jake highlights his focus on reliability. Jake explains why he feels downtime can actually be a huge benefit to reliability engineers, and how he views abstractions based on the size of the team he's working on. Corey and Jake also discuss whether cloud is truly living up to its original promise of lowered costs.

About Jake
Jake Gold leads infrastructure at Bluesky, where the team is developing and deploying the decentralized social media protocol, ATP. Jake has previously managed infrastructure at companies such as Docker and Flipboard, and most recently, he was the founding leader of the Robot Reliability Team at Nuro, an autonomous delivery vehicle company.

Links Referenced:
Bluesky: https://blueskyweb.xyz/
Bluesky waitlist signup: https://bsky.app

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. In case folks have missed this, I spent an inordinate amount of time on Twitter over the last decade or so, to the point where my wife, my business partner, and a couple of friends all went in over the holidays and got me a leather-bound set of books titled The Collected Works of Corey Quinn. It turns out that I have over a million words of shitpost on Twitter.
If you've also been living in a cave for the last year, you'll notice that Twitter has basically been bought and driven into the ground by the world's saddest manchild, so there's been a bit of a diaspora as far as people trying to figure out where community lives.

Jake Gold is an infrastructure engineer at Bluesky—which I will continue to be mispronouncing as Blue-ski because that's the kind of person I am—which is, as best I can tell, one of the leading contenders, if not the leading contender to replace what Twitter was for me. Jake, welcome to the show.

Jake: Thanks a lot, Corey. Glad to be here.

Corey: So, there's a lot of different angles we can take on this. We can talk about the policy side of it, we can talk about social networks and things we learn watching people in large groups with quasi-anonymity, we can talk about all kinds of different nonsense. But I don't want to do that because I am an old-school Linux systems administrator. And I believe you came from the exact same path, given that as we were making sure that I had, you know, the right person on the show, you came into work at a company after I'd left previously. So, not only are you good at the whole Linux server thing; you also have seen exactly how good I am not at the Linux server thing.

Jake: Well, I don't remember there being any problems at TrueCar, where you worked before me. But yeah, my background is doing Linux systems administration, which turned into, sort of, Linux programming. And these days, we call it, you know, site reliability engineering. But yeah, I discovered Linux in the late-90s, as a teenager and, you know, installing Slackware on 50 floppy disks and things like that. And I just fell in love with the magic of, like, being able to run a web server, you know? I got a hosting account at, you know, my local ISP, and I was like, how do they do that, right?

And then I figured out how to do it.
I ran Apache, and it was like, still one of my core memories of getting, you know, httpd running and being able to access it over the internet and telling my friends on IRC. And so, I've done a whole bunch of things since then, but that's still, like, the part that I love the most.

Corey: The thing that continually surprises me is just when I think I'm out and we've moved into a fully modern world where oh, all I do is write code anymore, which I didn't realize I was doing until I realized if you call YAML code, you can get away with anything. And I keep getting dragged back in. It's the falling back to fundamentals in these weird moments of yes, yes, immutable everything, infrastructure as code, but when the server is misbehaving and you want to log in and get your hands dirty, the skill set rears its head yet again. At least that's what I've been noticing, at least as far as I've gone down a number of interesting IoT-based projects lately. Is that something you experience or have you evolved fully and not looked back?

Jake: Yeah. No, what I try to do is on my personal projects, I'll use all the latest cool, flashy things, any abstraction you want, I'll try out everything, and then what I do at work, I kind of have, like, a one or two year, sort of, lagging adoption of technologies, like, when I've actually shaken them out in my own stuff, then I use them at work. But yeah, I think one of my favorite quotes is, like, “Programmers first learn the power of abstraction, then they learn the cost of abstraction, and then they're ready to program.” And that's how I view infrastructure, very similar thing where, you know, certain abstractions like container orchestration, or you know, things like that can be super powerful if you need them, but like, you know, that's generally very large companies with lots of teams and things like that. And if you're not that, it pays dividends to not use overly complicated, overly abstracted things.
And so, that tends to be [where 00:04:22] I follow up most of the time.

Corey: I'm sure someone's going to consider this to be heresy, but if I'm tasked with getting a web application up and running in short order, I'm putting it on an old-school traditional three-tier architecture where you have a database server, a web server or two, and maybe a job server that lives between them. Because is it the hotness? No. Is it going to be resume bait? Not really.

But you know, it's deterministic as far as where things live. When something breaks, I know where to find it. And you can miss me with the, “Well, that's not webscale,” response because yeah, by the time I'm getting something up overnight, to this has to serve the entire internet, there's probably a number of architectural iterations I'm going to be able to go through. The question is, what am I most comfortable with and what can I get things up and running with that's tried and tested?

I'm also remarkably conservative on things like databases and file systems because mistakes at that level are absolutely going to show. Now, I don't know how much you're able to talk about the Blue-ski infrastructure without getting yelled at by various folks, but how modern versus… reliable—I guess that's probably a fair axis to put it on: modernity versus reliability—where on that spectrum does the official Blue-ski infrastructure land these days?

Jake: Yeah. So, I mean, we're in a fortunate position of being an open-source company working on an open protocol, and so we feel very comfortable talking about basically everything. Yeah, and I've talked about this a bit on the app, but the basic idea we have right now is we're using AWS, we have auto-scaling groups, and those auto-scaling groups are just EC2 instances running Docker CE—the Community Edition—for the runtime and for containers.
And then we have a load balancer in front and a Postgres multi-AZ instance in the back on RDS, and it is really, really simple.

And, like, when I talk about the difference between, like, a reliability engineer and a normal software engineer is, software engineers tend to be very feature-focused, you know, they're adding capabilities to a system. And the goal and the mission of a reliability team is to focus on reliability, right? Like, that's the primary thing that we're worried about. So, what I find to be the best resume builder is that I can say with a lot of certainty that if you talk to any teams that I've worked on, they will say that the infrastructure I ran was very reliable, it was very secure, and it ended up being very scalable because you know, the way we solve the, sort of, integration thing is you just version your infrastructure, right? And I think this works really well.

You just say, “Hey, this was the way we did it now and we're going to call that V1. And now we're going to work on V2. And what should V2 be?” And maybe that does need something more complicated. Maybe you need to bring in Kubernetes, maybe you need to bring in a super-cool reverse proxy that has all sorts of capabilities that your current one doesn't.

Yeah, but by versioning it, it takes away a lot of the, sort of, interpersonal issues that can happen where, like, “Hey, we're replacing Jake's infrastructure with Bob's infrastructure or whatever.” I just say it's V1, it's V2, it's V3, and then I find that solves a huge number of the problems with that sort of dynamic. But yeah, at Bluesky, like, you know, the big thing that we are focused on is federation is scaling for us because the idea is not for us to run the entire global infrastructure for AT Proto, which is the protocol that Bluesky is based on. The idea is that it's this big open thing like the web, right?
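A minimal sketch of the "version your infrastructure" idea above: each stack version is recorded as immutable data, and a migration is a cutover from one version to the next. The component names and version contents here are invented for illustration, not Bluesky's actual stacks:

```python
# Each infrastructure version is a frozen description; migrations are a
# cutover from version N to version N+1, not an argument about ownership.
STACK_VERSIONS = {
    1: {"runtime": "Docker CE on EC2 auto-scaling groups",
        "ingress": "application load balancer",
        "database": "Postgres multi-AZ on RDS"},
    2: {"runtime": "Kubernetes",                  # hypothetical future version
        "ingress": "feature-rich reverse proxy",  # hypothetical
        "database": "Postgres multi-AZ on RDS"},
}

def describe(version: int) -> str:
    """Render a stack version for a runbook or design doc."""
    parts = STACK_VERSIONS[version]
    return f"v{version}: " + ", ".join(f"{k}={v}" for k, v in parts.items())

print(describe(1))
```

The point is less the code than the convention: naming versions turns "replacing Jake's infrastructure with Bob's" into a neutral V1-to-V2 migration.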
Like, you know, Netscape popularized the web, but they didn't run every web server, they didn't run every search engine, right, they didn't run all the payment stuff. They just did all of the core stuff, you know, they created SSL, right, which became TLS, and they did all the things that were necessary to make the whole system large, federated, and scalable. But they didn't run it all. And that's exactly the same goal we have.

Corey: The obvious counterexample is, no, but then you take basically their spiritual successor, which is Google, and they build the security, they build—they run a lot of the servers, they have the search engine, they have the payments infrastructure, and then they turn a lot of it off for fun and… I would say profit, except it's the exact opposite of that. But I digress. I do have a question for you that I love to throw at people whenever they start talking about how their infrastructure involves auto-scaling. And I found this during the pandemic in that a lot of people believed in their heart-of-hearts that they were auto-scaling, but people lie, mostly to themselves. And you would look at their daily or hourly spend of their infrastructure and their user traffic dropped off a cliff and their spend was so flat you could basically eat off of it and set a table on top of it. If you pull up Cost Explorer and look through your environment, how large are the peaks and valleys over the course of a given day or week cycle?

Jake: Yeah, no, that's a really good point. I think my basic approach right now is that we're so small, we don't really need to optimize very much for cost, you know? We have this sort of base level of traffic and it's not worth a huge amount of engineering time to do a lot of dynamic scaling and things like that. The main benefit we get from auto-scaling groups is really just doing the refresh to replace all of them, right?
So, we're also doing the immutable server concept, right, which was popularized by Netflix.

And so, that's what we're really getting from auto-scaling groups. We're not even doing dynamic scaling, right? So, it's not keyed to some metric, you know, the number of instances that we have at the app server layer. But the cool thing is, you can do that when you're ready for it, right? The big issue is, you know, okay, you're scaling up your app instances, but is your database scaling up, right, because there's not a lot of use in having a whole bunch of app servers if the database is overloaded? And that tends to be the bottleneck for, kind of, any complicated kind of application like ours. So, right now, the bill is very flat; you could eat off it, if it wasn't for the CDN traffic and the load balancer traffic and things like that, which are relatively minor.

Corey: I just want to stop for a second and marvel at just how educated that answer was. I talk to a lot of folks who are early-stage who come and ask me about their AWS bills and what sort of things should they concern themselves with, and my answer tends to surprise them, which is, “You almost certainly should not unless things are bizarre and ridiculous. You are not going to build your way to your next milestone by cutting costs or optimizing your infrastructure.” The one thing that I would make sure to do is plan for a future of success, which means having account segregation where it makes sense, having tags in place so that when, “Huh, this thing's gotten really expensive. What's driving all of that?” can be answered without a six-week research project attached to it.

But those are baseline AWS Hygiene 101. How do I optimize my bill further? Usually the right answer is: go build. Don't worry about the small stuff. What's always disturbing is people have that perspective and they're spending $300 million a year.
But it turns out that not caring about your AWS bill was, in fact, a zero interest rate phenomenon.

Jake: Yeah. So, we do all of those basic things. I think I went a little further than many people would, where every single one of our—so we have different projects, right? So, we have the big graph server, which is sort of like the indexer for the whole network, and we have the PDS, which is the Personal Data Server, which is, kind of, where all of people's actual social data goes, your likes and your posts and things like that. And then we have a dev, staging, sandbox, prod environment for each one of those, right? And there's more services besides. But the way we have it is those are all in completely separated VPCs with no peering whatsoever between them. They are all on distinct IP addresses, IP ranges, so that we could do VPC peering very easily across all of them.

Corey: Ah, that's someone who's done data center work before with overlapping IP address ranges and swore, never again.

Jake: Exactly. That is when I had been burned. I have cleaned up my mess and other people's messes. And there's nothing less fun than renumbering a large, complicated network. But yeah, so once we have all these separate VPCs, it's very easy for us to say, hey, we're going to take this whole stack from here and move it over to a different region, a different provider, you know?

And the other thing we're doing is, we're completely cloud agnostic, right? I really like AWS, I think they are the… the market leader for a reason: they're very reliable. But we're building this large federated network, so we're going to need to place infrastructure in places where AWS doesn't exist, for example, right? So, we need the ability to take an environment and replicate it in wherever. And of course, they have very good coverage, but there are places they don't exist.
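The non-overlapping address plan Jake describes can be checked mechanically. Here is a sketch using Python's standard ipaddress module; the environment names and CIDR ranges are invented for illustration:

```python
import ipaddress
from itertools import combinations

# One distinct range per service per environment, so that any pair of VPCs
# could be peered later without renumbering anything.
vpcs = {
    "bgs-dev":     ipaddress.ip_network("10.0.0.0/16"),
    "bgs-prod":    ipaddress.ip_network("10.1.0.0/16"),
    "pds-staging": ipaddress.ip_network("10.2.0.0/16"),
    "pds-prod":    ipaddress.ip_network("10.3.0.0/16"),
}

def overlapping_pairs(nets):
    """Return every pair of environments whose CIDR ranges overlap."""
    return [(a, b) for (a, na), (b, nb) in combinations(nets.items(), 2)
            if na.overlaps(nb)]

# An empty result means any two of these VPCs can be peered safely.
assert overlapping_pairs(vpcs) == []
```

Running a check like this in CI whenever a new environment is added is a cheap way to guarantee the "never again" property.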
And that's all made much easier by the fact that we've had a very strong separation of concerns.

Corey: I always found it fun that when you had these decentralized projects that were invariably NFT or cryptocurrency-driven over the past, eh, five or six years or so, and then AWS would take a us-east-1 outage in a variety of different and exciting ways, and all these projects would go down hard. It's, okay, you talk a lot about decentralization for having hard dependencies on one company in one data center, effectively, doing something right. And it becomes a harder problem in the fullness of time. There is the counterargument, in that when us-east-1 is having problems, most of the internet isn't working, so does your offering need to be up and running at all costs? There are some people for whom that answer is very much, yes. People will die if what we're running is not up and running. Usually, a social network is not on that list.

Jake: Yeah. One of the things that is surprising, I think, often when I talk about this as a reliability engineer, is that I think people sometimes over-index on downtime, you know? They just, they think it's a much bigger deal than it is. You know, I've worked on systems where there was credit card processing where you're losing a million dollars a minute or something. And like, in that case, okay, it matters a lot because you can put a real dollar figure on it, but it's amazing how a few of the bumps in the road we've already had with Bluesky have turned into, sort of, fun events, right?

Like, we had a bug in our invite code system where people were getting too many invite codes and it sort of caused a problem, but it was a super fun event. We all think back on it fondly, right? And so, outages are not fun, but they're not life and death, generally. And if you look at the traffic, usually what happens is after an outage traffic tends to go up.
And a lot of the people that joined, they're just, they're talking about the fun outage that they missed because they weren't even on the network, right?

So, it's like, I also like to remind people that eBay for many years used to have, like, an outage Wednesday, right? Where they could put a huge dollar figure on how much money they lost every Wednesday, and yet eBay did quite well, right? Like, it's amazing what you can do if you relax the constraints of downtime a little bit. You can do maintenance things that would be impossible otherwise, which makes the whole thing work better the rest of the time, for example.

Corey: I mean, it's 2023 and the Social Security Administration's website still has business hours. They take a nightly four to six-hour maintenance window. It's like, the last person out of the office turns off the server or something. I imagine some horrifying mainframe job that needs to wind up sweeping up after itself or running some compute jobs. But yeah, for a lot of these use cases, that downtime is absolutely acceptable.

I am curious as to… as you just said, you're building this out with an idea that it runs everywhere. So, you're on AWS right now because yeah, they are the market leader for a reason. If I'm building something from scratch, I'd be hard-pressed not to pick AWS for a variety of reasons. If I didn't have cloud expertise, I think I'd be more strongly inclined toward Google, but that's neither here nor there. But the problem is these large cloud providers have certain economic factors that they all treat similarly since they're competing with each other, and that causes me to believe things that aren't necessarily true.

One of those is that egress bandwidth to the internet is very expensive. I've worked in data centers. I know how 95th percentile commit bandwidth billing works. It is not overwhelmingly expensive, but you can be forgiven for believing that it is looking at cloud environments.
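Corey's 95th-percentile point is worth making concrete: in a data center you typically pay on the 95th percentile of five-minute bandwidth samples, so brief spikes are effectively free, while cloud egress bills every byte. A sketch with made-up traffic; the $0.50/Mbps commit rate and $0.09/GB egress rate are assumptions for illustration only:

```python
def p95(samples_mbps):
    """95th-percentile billing: sort the 5-minute samples, drop the top 5%."""
    ordered = sorted(samples_mbps)
    return ordered[int(len(ordered) * 0.95) - 1]

# A month of 5-minute samples: steady ~100 Mbps, with 900 Mbps spikes that
# make up just under 5% of samples and so fall outside the 95th percentile.
samples = [100.0] * 8000 + [900.0] * 400

colo_bill = p95(samples) * 0.50          # assumed $/Mbps on the commit

# The same traffic billed per gigabyte transferred, cloud-style.
gb_moved = sum(s * 300 / 8 / 1000 for s in samples)   # Mbps over 300 s -> GB
cloud_bill = gb_moved * 0.09             # assumed $/GB egress

print(f"colo ≈ ${colo_bill:,.0f}/mo, cloud egress ≈ ${cloud_bill:,.0f}/mo")
```

Under these toy numbers the commit bill comes out nearly two orders of magnitude lower, which is the intuition behind "it is not overwhelmingly expensive" in a data center.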
Today, Blue-ski does not support animated GIFs—however you want to mispronounce that word—they don't support embedded videos, and my immediate thought is, “Oh yeah, those things would be super expensive to wind up sharing.”

I don't know that that's true. I don't get the sense that those are major cost drivers. I think it's more a matter of complexity than the rest. But how are you making sure that the large cloud provider economic models don't inherently shape your view of what to build versus what not to build?

Jake: Yeah, no, I kind of knew where you were going as soon as you mentioned that because anyone who's worked in data centers knows that the bandwidth pricing is out of control. And I think one of the cool things that Cloudflare did is they stopped charging for egress bandwidth in certain scenarios, which is kind of amazing. And the other thing that a lot of people don't realize is that, you know, these network connections tend to be fully symmetric, right? So, if it's a gigabit down, it's also a gigabit up at the same time, right? There's two gigabits that can be transferred per second.

And then the other thing that I find a little bit frustrating on the public cloud is that they don't really pass on the compute performance improvements that have happened over the last few years, right? Like, computers are really fast, right? So, if you look at a provider like Hetzner, they're giving you these monster machines for $128 a month or something, right? And then you go and try to buy that same thing on the big cloud providers, and the equivalent is ten times that, right? And then if you add in the bandwidth, it's another multiple, depending on how much you're transferring.

Corey: You can get Mac Minis on EC2 now, and you do the math out and the Mac Mini hardware is paid for in the first two or three months of spinning that thing up. And yes, there's value in AWS's engineering and being able to map IAM and EBS to it.
In some use cases, yeah, it's well worth having, but not in every case. And the economics get very hard to justify for an awful lot of use cases.

Jake: Yeah, I mean, to your point, though, about limiting product features and things like that, like, one of the goals I have with doing infrastructure at Bluesky is to not let the infrastructure be a limiter on our product decisions. And a lot of that means that we'll put servers on Hetzner, we'll colo servers for things like that. I find that there's a really good hybrid cloud thing where you use AWS or GCP or Azure, and you use them for your most critical things, your relatively low-bandwidth things, and the things that need to be the most flexible in terms of region and things like that—and security—and then for these, sort of, bulk services, pushing a lot of video content, right, or pushing a lot of images, those things you put in a colo somewhere and you have these sort of CDN-like servers. And that kind of gives you the best of both worlds. And so, you know, that's the approach that we'll most likely take at Bluesky.

Corey: I want to emphasize something you said a minute ago about Cloudflare, where when they first announced R2, their object store alternative, I did an analysis on this to explain to people just why this was as big as it was. Let's say you have a one-gigabyte file and it blows up and a million people download it over the course of a month. AWS will come to you with a completely straight face, give you a bill for $65,000, and expect you to pay it. The exact same pattern with R2 in front of it, at the end of the month you will be faced with a bill for 13 cents, rounded up, and you will be expected to pay it. And something like 9 to 12 cents of that initially would have just been the storage cost on S3 and the single egress fee for it.
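The arithmetic behind those numbers works out roughly like this; the per-GB rates below are assumptions in the ballpark of published 2023 list pricing, not quotes:

```python
file_gb = 1
downloads = 1_000_000
egress_gb = file_gb * downloads          # ~1 PB leaving the bucket in a month

# Assumed S3-style pricing: tiered egress averaging ~$0.065/GB at this
# volume; storage and request fees round to noise by comparison.
s3_bill = egress_gb * 0.065              # ≈ $65,000

# Assumed R2-style pricing: $0 egress, ~$0.015/GB-month storage.
r2_bill = 0.0 + file_gb * 0.015          # pennies, plus request fees

print(f"S3-style ≈ ${s3_bill:,.0f}; R2-style ≈ ${r2_bill:.3f} + request fees")
```

The bill is dominated entirely by the egress term, which is exactly the term R2 sets to zero.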
The rest is there is no egress cost tied to it.

Now, is Cloudflare going to let you send petabytes to the internet and not charge you on a bandwidth basis? Probably not. But they're also going to reach out with an upsell and they're going to have a conversation with you. “Would you like to transition to our enterprise plan?” Which is a hell of a lot better than, “I got Slashdotted”—or whatever the modern version of that is—“And here's a surprise bill that's going to cost as much as a Tesla.”

Jake: Yeah, I mean, I think one of the things that the cloud providers should hopefully eventually do—I hope Cloudflare pushes them in this direction—is to start—the original vision of AWS when I first started using it in 2006 or whenever it launched, was—and they said this—they said they're going to lower your bill every so often, you know, as Moore's law makes their costs lower. And that kind of happened a little bit here and there, but it hasn't happened to the same degree that, you know, I think all of us hoped it would. And I would love to see a cloud provider—and you know, Hetzner does this to some degree—but I'd love to see these really big cloud providers that are so great in so many ways just pass on the savings of technology to the customer so we'll use more stuff there. I think it's a very enlightened viewpoint to just say, “Hey, we're going to lower the costs, increase the efficiency, and then pass it on to customers, and then they will use more of our services as a result.” And I think Cloudflare is kind of leading the way there, which I love.

Corey: I do need to add something there—because otherwise we're going to get letters and I don't think we want that—where AWS reps will, of course, reach out and say that they have cut prices over a hundred times. And they're going to ignore the fact that a lot of these were a service you don't use in a region you couldn't find on a map if your life depended on it now is going to be 10% less. Great.
But let's look at the general case, where from C3 to C4—if you get the same size instance—it cut the price by a lot. C4 to C5, somewhat. C5 to C6, effectively no change. And now, from C6 to C7, it is 6% more expensive like-for-like.

And they're making noises about price performance being still better, but there are an awful lot of us who say things like, “I need ten of these servers to live over there.” That workload gets more expensive when you start treating it that way. And maybe the price performance is there, maybe it's not, but it is clear that “the bill always goes down” is not true.

Jake: Yeah, and I think for certain kinds of organizations, it's totally fine the way that they do it. They do a pretty good job on price and performance. But for more technical companies especially, you can just see the gaps there that Hetzner is filling and that colocation is still filling. And I personally, you know, if I didn't need to do those things, I wouldn't do them, right? But the fact that you need to do them, I think, says kind of everything.

Corey: Tired of wrestling with Apache Kafka's complexity and cost? Feel like you're stuck in a Kafka novel, but with more latency spikes and less existential dread by at least 10%? You're not alone.

What if there was a way to 10x your streaming data performance without having to rob a bank? Enter Redpanda. It's not just another Kafka wannabe. Redpanda powers mission-critical workloads without making your AWS bill look like a phone number.

And with full Kafka API compatibility, migration is smoother than a fresh jar of peanut butter. Imagine cutting as much as 50% off your AWS bills. With Redpanda, it's not a pipe dream, it's reality.

Visit go.redpanda.com/duckbill today.
Redpanda: Because your data infrastructure shouldn't give you Kafkaesque nightmares.

Corey: There are so many weird AWS billing stories that all distill down to you not knowing this one piece of trivia about how AWS works, either as a system, as a billing construct, or as something else. And there's a reason this has become my career of tracing these things down. And sometimes I'll talk to prospective clients, and they'll say, “Well, what if you don't discover any misconfigurations like that in our account?” It's, “Well, you would be the first company I've ever seen where that [laugh] was not true.” So honestly, I want to do a case study if we do.

And I've never had to write that case study, just because it's the tax on not having the forcing function of building in data centers. There's always this idea that in a data center, you're going to run out of power, space, capacity, at some point, and it's going to force a reckoning. The cloud has what distills down to infinite capacity; they can add it faster than you can fill it. So, at some point it's always just keep adding more things to it. There's never a let's-clean-out-all-of-the-cruft story. And it just accumulates and the bill continues to go up and to the right.

Jake: Yeah, I mean, one of the things that they've done so well is handle the provisioning part, right, which is kind of what you're getting at there. One of the hardest things in the old days, before we all used AWS and GCP, is you'd have to sort of requisition hardware and there'd be this whole process with legal and financing, and there'd be this big lag between the time you needed a bunch more servers in your data center and when you actually had them, right? And that's not even counting the time it takes to rack them and get them, you know, on network.
The fact that basically every developer now just gets an unlimited credit card they can just, you know, use—that's hugely empowering, and it's for the benefit of the companies they work for almost all the time. But it is an uncapped credit card. I know they actually support controls and things like that, but in general, the way we treated it—

Corey: Not as much as you would think, as it turns out. But yeah, it's—yeah, and that's a problem. Because again, if I want to spin up $65,000 an hour worth of compute right now, the fact that I can do that is massive. The fact that I could do that accidentally when I don't intend to is also massive.

Jake: Yeah, it's very easy to think you're going to spend a certain amount and then, oh, traffic's a lot higher, or, oh, I didn't realize when you enable that thing, it charges you an extra fee or something like that. So, it's very opaque. It's very complicated. All of these things are, you know, the result of just building more and more stuff on top of more and more stuff to support more and more use cases. Which is great, but then it does create this very sort of opaque billing problem, which I think, you know, you're helping companies solve. And I totally get why they need your help.

Corey: What's interesting to me about distributed social networks is that I've been using Mastodon for a little bit and I've started to see some of the challenges around a lot of these things, just from an infrastructure and architecture perspective. Tim Bray, former Distinguished Engineer at AWS, posted a blog post yesterday, and okay, well, if Tim wants to put something up there that he thinks people should read, I advise people to generally read it. I have yet to find him wasting my time. And I clicked it and got a “Server over resource limits.” It's like, wow, you're very popular. You wound up effectively getting Slashdotted.

And he said, “No, no.
Whenever I post a link to Mastodon, two thousand instances all hit it at the same time.” And it's, “Oh, yeah. The hug of death. That becomes a challenge.” Not to mention the fact that, depending upon the architecture and preferences that you make, running a Mastodon instance can be extraordinarily expensive in terms of storage, just because it'll, by default, attempt to cache everything that it encounters for a period of time. And that gets very heavy very quickly. Does the AT Protocol—AT Protocol? I don't know how you pronounce it officially these days—take into account the challenges of running infrastructure designed for folks who have corporate budgets behind them? Or is that really a future problem for us to worry about when the time comes?

Jake: No, yeah, that's a core thing that we talked about a lot in the recent, sort of, architecture discussions. I'm going to go back quite a ways, but there were some changes made about six months ago in our thinking, and one of the big things that we wanted to get right was the ability for people to host their own PDS, which is equivalent to, like, hosting a WordPress or something. It's where you post your content, it's where you post your likes, and all that kind of thing. We call it your repository or your repo. But we wanted to make it so that people could self-host that on a, you know, four-, five-, six-dollar-a-month droplet on DigitalOcean or wherever, and that not be a problem—not go down when they got a lot of traffic.

And so, the architecture of AT Proto in general, but the Bluesky app on AT Proto, is such that you really don't need a lot of resources. The data is all signed with your cryptographic keys—like, not something you have to worry about as a non-technical user—but all the data is authenticated. That's what it is—it's the Authenticated Transfer Protocol. And because of that, it doesn't matter where you get the data, right?
So, we have this idea of this big indexer that's looking at the entire network, called the BGS, the Big Graph Server, and you can go to the BGS and get the data that came from somebody's PDS, and it's just as good as if you got it directly from the PDS. And that makes it highly cacheable, highly conducive to CDNs and things like that. So no, we intend to solve that problem entirely.

Corey: I'm looking forward to seeing how that plays out because the idea of self-hosting always kind of appealed to me when I was younger, which is why, when I met my wife, I had a two-bedroom apartment—because I lived in Los Angeles, not San Francisco, and could afford such a thing—and the guest bedroom was always, you know, 10 to 15 degrees warmer than the rest of the apartment because I had a bunch of quote-unquote, “servers” there, meaning deprecated desktops that my employer had no use for and said, “It's either going to e-waste or your place if you want some.” And, okay, why not? I'll build my own cluster at home. And increasingly over time, I found that it got harder and harder to do things that I liked and that made sense. I used to have a partial rack in downtown LA where I ran my own mail server, among other things.

And when I switched to Google for email solutions, I suddenly found that I was spending five bucks a month at the time, instead of the rack rental, and I was spending two hours less a week just fighting spam in a variety of different ways, because that is where my technical background lives. Being able to not have to think about problems like that, and just do the fun part, was great. But I worry about the centralization that that implies. I was opposed to the idea because I didn't want to give Google access to all of my mail. And then I checked, and something like 43% of the people I was emailing were at Gmail-hosted addresses, so they already had my email anyway. What was I really doing by not engaging with them?
I worry that self-hosting is going to become passé, so I love projects that do it in sane and simple ways that don't require massive amounts of startup capital to get started with.

Jake: Yeah, the account portability feature of AT Proto is super, super core. You can back up all of your data to your phone—the [AT 00:28:36] doesn't do this yet, but it most likely will in the future—you can back up all of your data to your phone and then you can synchronize it all to another server. So, if for whatever reason you're on a PDS instance and it disappears—which is a common problem in the Mastodon world—it's not really a problem. You just sync all that data to a new PDS and you're back where you were. You didn't lose any followers, you didn't lose any posts, you didn't lose any likes.

And we're also making sure that this works for non-technical people. So, you know, you don't have to host your own PDS, right? That's something technical people can self-host if they want to; non-technical people can just get a host from anywhere, and it doesn't really matter where your host is. But we are absolutely trying to avoid the fate of SMTP and, you know, other protocols. The web itself, right, is sort of… it's hard to launch a search engine because, first of all, the bar is billions of dollars a year in investment, and a lot of websites will only let you crawl them at a higher rate if you're actually coming from a Google IP, right? They're doing reverse DNS lookups and things like that to verify that you are Google.

And the problem with that is now there's sort of this centralization with a search engine that can't be fixed. With AT Proto, it's much easier to scrape all of the PDSes, right? So, if you want to crawl all the PDSes out on the AT Proto network, they're designed to be crawled from day one.
It's all structured data. We're still working on, sort of, how you handle rate limits and things like that, but the idea is that it's very easy to create an index of the entire network, which makes it very easy to create feed generators, search engines, or any other kind of, sort of, big-world networking thing out there. And without making the PDSes have to be very high-powered, right? So, they can be low power and still scrapeable, still crawlable.

Corey: Yeah, the idea of having portability is super important. A question I've got—you know, while I'm talking to you, we'll turn this into technical support hour as well, because why not—I've historically tended to always put my Twitter handle on conference slides. When I had the first template made, I used it as soon as it came in, and there was an extra n in the @quinnypig username at the bottom. And of course, someone asked about that during Q&A. So, the answer I gave was, of course, n+1 redundancy. But great. If I were to have one domain there today and change it tomorrow, is there a redirect option in place where someone could go and find that on Blue-ski and, oh, they'll get redirected to where I am now? Or is it just one of those 404, sucks-to-be-you moments? Because I can see validity to both.

Jake: Yeah, so the way we handle it right now is, if you have a something.bsky.social name and you switch it to your own domain or something like that, we don't yet forward it from the old .bsky.social name. But that is totally feasible. It's totally possible. Like, the way that those are stored in what's called your [DID record 00:31:16] or [DID document 00:31:17] is that there's, like, a list that currently only has one item in general, but it's a list of all of your different names, right? So, you could have different domain names, different subdomain names, and they would all point back to the same user.
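The alias list Jake describes lives in a DID document's `alsoKnownAs` field (a real field in the W3C DID data model); everything else below — the record's values and the helper functions — is a hypothetical sketch with made-up identifiers, not Bluesky's actual implementation:

```python
# Hypothetical sketch of a DID document holding multiple handle aliases.
# `alsoKnownAs` comes from the W3C DID Core data model; the DID and
# handles here are invented for illustration.
did_document = {
    "id": "did:plc:abc123exampleuser",      # stable identifier for the user
    "alsoKnownAs": [
        "at://quinnypig.com",               # current canonical handle
        "at://quinnypig.bsky.social",       # older alias that could forward
    ],
}

def canonical_handle(doc):
    """Treat the first listed alias as the current canonical handle."""
    aliases = doc.get("alsoKnownAs", [])
    return aliases[0] if aliases else None

def resolves_to_same_user(doc, handle):
    """Any listed alias points back to the same DID, so old handles
    can forward to the canonical one without losing the account."""
    return handle in doc.get("alsoKnownAs", [])
```

Because the DID, not the handle, is the stable key, switching from a `.bsky.social` name to your own domain is just prepending a new entry to the list; lookups against any old alias still land on the same user.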
And so yeah, so basically, the idea is that you have these aliases, and they will forward to the new one, whatever the current canonical one is.

Corey: Excellent. That is something that concerns me because it feels like it's one of those one-way doors, in the same way that picking an email address was a one-way door. I know people who still pay money to their ancient, crappy ISP because they have a few mails that come in once in a while that are super-important. I was fortunate enough to have jumped on the bandwagon early enough that my vanity domain is 22 years old this year. And my email address still works, which, great—every once in a while, I still get stuff to, like, variants of my name I haven't used since 2005. And it's usually spam, but every once in a blue moon, it's something important, like, “Hey, I don't know if you remember me. We went to college together many years ago.” It's ho-ly crap, the world is smaller than we think.

Jake: Yeah. I mean, I love that we're using domains. I think that's one of the greatest decisions we made—that you own your own domain. You're not really stuck in our namespace, right? Like, one of the things with traditional social networks is you're sort of theirdomain.com/yourname, right?

And with the way AT Proto and Bluesky work, you can go and get a domain name from any registrar—there are hundreds of them—you know, we like Namecheap. You can go there and you can grab a domain and you can point it to your account. And if you ever don't like anything, you can change your domain, you can change, you know, which PDS you're on; it's all completely controlled by you. And there's nearly no way we as a company can do anything to change that.
Like, that's all sort of locked into the way that the protocol works, which creates this really great incentive where, you know, if we want to provide you services or somebody else wants to provide you services, they just have to compete on doing a really good job; you're not locked in. And that's, like, one of my favorite features of the network.

Corey: I just want to point something out, because you mentioned, oh, we're big fans of Namecheap. I am too, for weird half-drunk domain registrations on a lark. Like, “Why am I poor?” It's like, $3,000 a month of my budget goes to domain purchases, great. But I did a quick whois on the official Bluesky domain, and it's hosted at Route 53, which is, of course, Amazon's premier database offering.

But I'm a big fan of using an enterprise registrar for enterprise-y things. Wasabi, if I recall correctly, wound up having their primary domain registered through GoDaddy, and the public domain that their bucket equivalent would serve data out of got shut down for 12 hours because some bad actor put something there that shouldn't have been. And GoDaddy is not an enterprise registrar, despite what they might think—for God's sake, the word ‘daddy' is in their name. Do you really think that's enterprise? Good luck.

So, the fact that you have a responsible company handling these central, singular points of failure speaks very well to your own implementation of these things. Because that's the sort of thing that everyone figures out the second time.

Jake: Yeah, yeah. I think there's a big difference between corporate domain registration and corporate DNS and, like, your personal handle on social networking. I think a lot of the consumer, sort of, domain registries—registrars—are great for consumers.
And I think if you—yeah, if you're running a big corporate domain, you want to make sure it's, you know, transfer-locked, and, you know, there's two-factor authentication, and you're doing all those kinds of things right, because that is a single point of failure; you can lose a lot by having your domain taken. So, I completely agree with you there.

Corey: Oh, absolutely. I am curious about this, to see if it's still the case or not, because I haven't checked this in over a year—and they did fix it. Okay. As of at least when we're recording this, which is the end of May 2023, Amazon's authoritative name servers are no longer half at Oracle. Good for them. They now have a bunch of Amazon-specific name servers instead of, you know, their competitor that they clearly despise. Good work, good work.

I really want to thank you for taking the time to speak with me about how you're viewing these things and, honestly, giving me a chance to go ambling down memory lane. If people want to learn more about what you're up to, where's the best place for them to find you?

Jake: Yeah, so I'm on Bluesky. It's invite-only. I apologize for that right now. But if you check out bsky.app, you can see how to sign up for the waitlist, and we are trying to get people on as quickly as possible.

Corey: And I will, of course, be talking to you there and will put links to that in the show notes. Thank you so much for taking the time to speak with me. I really appreciate it.

Jake: Thanks a lot, Corey. It was great.

Corey: Jake Gold, infrastructure engineer at Bluesky, slash Blue-ski. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that will no doubt result in a surprise $60,000 bill after you post it.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.
This week, we gain insights into the profession of privacy engineering with guest Menotti Minutillo, a Sr. Privacy Engineering Manager with 15+ years of experience leading critical programs and product delivery at companies like Uber, Thrive Global & Twitter. He started his career in 2007 on Wall Street as a DevOps & Infrastructure Engineer; now, Menotti is a sought-after technical privacy expert and Privacy Tech Advisor. In this conversation, we discuss privacy engineering approaches that have worked, the skillsets required for privacy engineering, and the current climate for landing privacy engineering roles.

Menotti sees privacy engineering as the practice of building or improving info systems to advance a set of privacy goals. It's like a 'layer cake' in that you have different protections and risk reductions based on threat modeling, as well as different specialization capabilities for larger orgs.

It makes a lot of sense that his path has woven from company to company. His journey into privacy engineering was originally 'adjacent work,' and today, he shares lessons learned from taking a PET like differential privacy from the lab, to systematizing it in an organization, to deploying it in the real world.
In this episode, we delve into tools, technical processes, technical standards, the maturing landscape for privacy engineers, and how the success of privacy is coupled with the success of each product shipped.

Topics Covered: How Menotti found his way to managing privacy engineering teams; Menotti's definition of 'privacy engineer' and the skillsets required; what it was like to work at Uber and Twitter, which have multiple privacy engineering teams; best practices for setting up teams and deploying solutions; privacy outcomes that privacy engineers should keep top of mind; best practices for privacy architecture; Menotti's positive experience while at Uber working with privacy researchers from UC Berkeley to take differential privacy from the lab to a real-world deployment; lessons learned from times of transition, including while at Twitter during Musk's takeover; and whether privacy was a 'zero interest rate bet,' and what that means for privacy engineering roles given current economic realities.

Resources Mentioned: Check out the PEPR conference; read 'Was Privacy a Zero Interest Rate Bet?'

Guest Info: Follow Menotti on LinkedIn; connect with Menotti on Mastodon.

Privado.ai: Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans. Shifting Privacy Left Media: Where privacy engineers gather, share, & learn. Buzzsprout: Launch your podcast. Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you. Copyright © 2022 - 2024 Principled LLC. All rights reserved.
Watch the live stream: Watch on YouTube About the show Sponsored by Datadog: pythonbytes.fm/datadog Special guest: Brian Skinn (Twitter | Github) Michael #1: OpenBB wants to be an open source challenger to Bloomberg Terminal OpenBB Terminal provides a modern Python-based integrated environment for investment research, that allows an average joe retail trader to leverage state-of-the-art Data Science and Machine Learning technologies. As a modern Python-based environment, OpenBBTerminal opens access to numerous Python data libraries in Data Science (Pandas, Numpy, Scipy, Jupyter) Machine Learning (Pytorch, Tensorflow, Sklearn, Flair) Data Acquisition (Beautiful Soup, and numerous third-party APIs) They have a discord community too BTW, seem to be a successful open source project: OpenBB Raises $8.5M in Seed Round Funding Following Open Source Project Gamestonk Terminal's Success Great graphics / gallery here. Way more affordable than the $1,900/mo/user for the Bloomberg Terminal Brian #2: Python f-strings https://fstring.help Florian Bruhin Quick overview of cool features of f-strings, made with Jupyter Python f-strings Are More Powerful Than You Might Think Martin Heinz More verbose discussion of f-strings Both are great to up your string formatting game. Brian S. #3: pyproject.toml and PEP 621 Support in setuptools PEP 621: “Storing project metadata in pyproject.toml” Authors: Brett Cannon, Dustin Ingram, Paul Ganssle, Pradyun Gedam, Sébastien Eustace, Thomas Kluyver, Tzu-ping Chung (Jun-Oct 2020) Covers build-tool-independent fields (name, version, description, readme, authors, etc.) 
Various tools had already implemented pyproject.toml support, but not setuptools Including: Flit, Hatch, PDM, Trampolim, and Whey (h/t: Scikit-HEP) Not Poetry yet, though it's under discussion setuptools support had been discussed pretty extensively, and had been included on the PSF's list of fundable packaging improvements Initial experimental implementation spearheaded by Anderson Bravalheri, recently completed Seeking testing and bug reports from the community (Discuss thread) I tried it on one of my projects — it mostly worked, but revealed a bug that Anderson fixed super-quick (proper handling of a dynamic long_description, defined in setup.py) Related tools (all early-stage/experimental AFAIK) ini2toml (Anderson Bravalheri) — Can convert setup.cfg (which is in INI format) to pyproject.toml Mostly worked well for me, though I had to manually fix a couple things, most of which were due to limitations of the INI format INI has no list syntax! validate-pyproject (Anderson Bravalheri) — Automated pyproject.toml checks pyproject-fmt (Bernát Gábor) — Autoformatter for pyproject.toml Don't forget to use it with build, instead of via a python setup.py invocation! $ pip install build $ python -m build Will also want to constrain your setuptools version in the build-backend.requires key of pyproject.toml (you are using PEP517/518, right??) Michael #4: JSON Web Tokens @ jwt.io JSON Web Tokens are an open, industry standard RFC 7519 method for representing claims securely between two parties. Basically a visualizer and debugger for JWTs Enter an encoded token Select a decryption algorithm See the payload data verify the signature List of libraries, grouped by language Brian #5: Autocorrect and other Git Tricks - Waylon Walker - Use `git config --global help.autocorrect 10` to have git automatically run the command you meant in 1 second. The `10` is 10 x 1/10 of a second. So `50` for 5 seconds, etc. 
Automatically set upstream branch if it's not there git config --global push.default current You may NOT want to do this if you are not careful with your branches. From https://stackoverflow.com/a/22933955 git commit -a Automatically “add” all changed and deleted files, but not untracked files. From https://git-scm.com/docs/git-commit#Documentation/git-commit.txt--a Now most of my interactions with git CLI, especially for quick changes, is: $ git checkout main $ git pull $ git checkout -b okken_something $ git commit -a -m 'quick message' $ git push With these working, with autocorrect $ git chkout main $ git pll $ git comit -a -m 'quick message' $ git psh Brian S. #6: jupyter-tempvars Jupyter notebooks are great, and the global namespace of the Python kernel backend makes it super easy to flow analysis from one cell to another BUT, that global namespace also makes it super easy to footgun, when variables leak into/out of a cell when you don't want them to jupyter-tempvars notebook extension Built on top of the tempvars library, which defines a TempVars context manager for handling temporary variables When you create a TempVars context manager, you provide it patterns for variable names to treat as temporary In its simplest form, TempVars (1) clears matching variables from the namespace on entering the context, and then (2) clears them again upon exiting the context, and restoring their prior values, if any TempVars works great, but it's cumbersome and distracting to manually include it in every notebook cell where it's needed With jupyter-tempvars, you instead apply tags with a specific format to notebook cells, and the extension automatically wraps each cell's code in a TempVars context before execution Javascript adapted from existing extensions Patching CodeCell.execute, from the jupyter_contrib_nbextensions ‘Execution Dependencies' extension, to enclose the cell code with the context manager Listening for the ‘kernel ready' event, from 
[jupyter-black](https://github.com/drillan/jupyter-black/blob/d197945508a9d2879f2e2cc99cafe0cedf034cf2/kernel_exec_on_cell.js#L347-L350), to import the [TempVars](https://github.com/bskinn/jupyter-tempvars/blob/491babaca4f48c8d453ce4598ac12aa6c5323181/src/jupyter_tempvars/extension/jupyter_tempvars.js#L42-L46) context manager upon kernel (re)start See the README (with animated GIFs!) for installation and usage instructions It's on PyPI: $ pip install jupyter-tempvars And, I made a shortcut install script for it: $ jupyter-tempvars install && jupyter-tempvars enable Please try it out, find/report bugs, and suggest features! Future work Publish to conda-forge (definitely) Adapt to JupyterLab, VS Code, etc. (pending interest) Extras Brian: Ok. Python issues are now on GitHub. Seriously. See for yourself. Lorem Ipsum is more interesting than I realized. O RLY Cover Generator Example: Michael: New course: Secure APIs with FastAPI and the Microsoft Identity Platform Pyenv Virtualenv for Windows (Sorta'ish) Hipster Ipsum Brian S.: PSF staff is expanding PSF hiring an Infrastructure Engineer Link now 404s, perhaps they've made their hire? Last year's hire of the Packaging Project Manager (Shamika Mohanan) Steering Council supports PSF hiring a second developer-in-residence PSF has chosen its new Executive Director: Deb Nicholson! PyOhio 2022 Call for Proposals is open Teaser tweet for performance improvements to pydantic Jokes: https://twitter.com/CaNerdIan/status/1512628780212396036 https://www.reddit.com/r/ProgrammerHumor/comments/tuh06y/i_guess_we_all_have_been_there/ https://twitter.com/PR0GRAMMERHUM0R/status/1507613349625966599
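The cell-scoped cleanup that jupyter-tempvars automates can be approximated with a small context manager. This is a simplified stand-in written for illustration, not the real tempvars API; the class shape and parameter names here are assumptions:

```python
# Minimal sketch of a temporary-variables context manager, in the spirit
# of the tempvars library described above (illustrative, not its real API).
class TempVars:
    def __init__(self, namespace, starts=()):
        self.namespace = namespace      # e.g., a notebook's globals()
        self.starts = tuple(starts)     # name prefixes treated as temporary
        self._saved = {}

    def _matching(self):
        return [k for k in list(self.namespace)
                if any(k.startswith(p) for p in self.starts)]

    def __enter__(self):
        # Clear matching variables on entry, remembering their prior values.
        for k in self._matching():
            self._saved[k] = self.namespace.pop(k)
        return self

    def __exit__(self, *exc):
        # Clear temporaries created inside the block, then restore priors.
        for k in self._matching():
            del self.namespace[k]
        self.namespace.update(self._saved)
        return False

ns = {"t_x": 1, "y": 2}
with TempVars(ns, starts=["t_"]):
    assert "t_x" not in ns       # cleared on entry
    ns["t_leak"] = 99            # a temporary created inside the block
assert ns == {"t_x": 1, "y": 2}  # temporaries gone, prior value restored
```

The extension's contribution is wrapping each tagged notebook cell in such a context automatically, so variables matching your patterns can't leak between cells.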
I want to start this post with a quote from an Infrastructure Engineer at a food and beverage company that we received a few days ago: “As always, very impressed with Support. We rarely need them, but the team was great helping us through this one and their help was much appreciated.” Feedback is a gift, as it helps us understand what we are doing well but also where we need to improve. I see feedback like this every day, and it reiterates why Nutanix Support is known throughout the industry for delivering world-class support. Year over year we continue to execute, not because we have done it before but because we are never satisfied. We are very proud of the accomplishments we have made as we strive to continuously learn, innovate, and improve. This requires us to listen to what our customers as well as our employees are saying. That's why I'm excited to announce that for nine consecutive years we have received the NorthFace ScoreBoard (NFSB) Service Award, for 2021 with the special Summit classification that recognizes exceptional organizations that have earned the award for seven or more years in a row. Every year, the NFSB Service Award recognizes companies that provide superior customer service as voted by their customers. The award uses the image of Mount Everest to symbolize exceeding customer expectations. Host: Andy Whiteside; Co-host: Harvey Green; Co-host: Jirah Cox
In this episode, we talk to Mircea Colonescu, Infrastructure Engineer at Informal Systems, leading the operations of the Cephalopod validator. The mission of Informal Systems is an open-source ecosystem of cooperatively owned and governed distributed organizations running on reliable distributed systems, bringing verifiability to distributed systems and organizations. Cephalopod Equipment operates infrastructure for decentralized technologies designed to support new forms of economic coordination and participant-owned networks. Mircea's GitHub (https://github.com/mircea-c_) We spoke to Mircea about Cephalopod, and: Validating as a business Validators and operators Testing & breaking software VR Security Relaying DLT Formal verifications Gaming Motivation Cooperation Disconnecting & burnout The projects and people that have been mentioned in this episode: | Tendermint (https://tendermint.com/) | Cosmos (https://cosmos.network/) | IBC (https://ibcprotocol.org/) | Cephalopod equipment (https://cephalopod.equipment) | Informal Systems (https://informal.systems) | Strangelove validator (https://www.strangelove.ventures/) | Morgan Stanley (https://en.wikipedia.org/wiki/Morgan_Stanley) | Cyber (https://www.citizencosmos.space/cyber) | Urbit (https://urbit.org/) | Starcraft (https://en.wikipedia.org/wiki/StarCraft) | Ethan Buchman (https://www.citizencosmos.space/ethan-buchman-cosmos) | If you like what we do at Citizen Cosmos: Stake with Citizen Cosmos validator (https://www.citizencosmos.space/staking) Help support the project via Gitcoin Grants (https://gitcoin.co/grants/1113/citizen-cosmos-podcast) Listen to the YouTube version (https://www.youtube.com/watch?v=ziUUcZjLJT0) Read our blog (https://citizen-cosmos.github.io/blog/) Check out our GitHub (https://github.com/citizen-cosmos/Citizen-Cosmos) Join our Telegram (https://t.me/citizen_cosmos) Follow us on Twitter (https://twitter.com/cosmos_voice) Sign up to the RSS feed (https://www.citizencosmos.space/rss) Special
Guest: Mircea Colonescu.
This episode of the podcast is a #LightningFriday from October 29th, 2021. In this episode we had the Voltage team come on to explain their platform and what it has to offer. #LightningFriday is where we get together on Twitter Spaces to discuss the Lightning Network and take audience questions. This week we have the Voltage team on to talk about node operations, management, and scaling. Time stamps: 0:00 - Opening Remarks 5:12 - Voltage Team Introduces Themselves 7:03 - What is Voltage? 10:15 - Can you trust Voltage? 13:02 - What challenges would a new user experience? 22:45 - What challenges would a business experience? 27:17 - Is Flow suitable for larger businesses? 32:05 - Why is BTC Pay Server warning people about Lightning Network? 43:50 - Is Voltage going to be in the Bitcoin Lightning stack for a business? 45:53 - Anticipatory Projects 48:21 - Are there advantages to having more than one node? 50:02 - The Controversial part: Why sidecar? Why not Liquidity ads? 52:27 - Why LND? 56:43 - Thoughts on Liquidity ads? 1:02:40 - How is UX friendliness growing the Lightning Network? 1:10:25 - When is Voltage adding LNURL-Auth? 1:25:06 - The Future of Voltage 1:12:54 - What about Voltage can be improved? 1:15:13 - Do you see Voltage changing over time? 1:16:20 - Why does Voltage suck? How could it improve? 1:18:00 - Geographic Diversification of Voltage Cloud Infrastructure 1:20:25 - What is the future of the Lightning Network? 1:23:20 - Has Voltage built any monitoring infrastructure? 1:25:07 - Lightning Address 1:29:36 - What are you excited about in LN? 1:32:40 - When is the new version of BTCPay Server making it to Voltage? 1:33:58 - How do we feel about LN as a data layer? 1:42:25 - How to make Lightning Network a data layer while protecting against spam? Our Guests: * Graham; Voltage CEO (https://twitter.com/gkrizek) * Nate; Support Engineer (https://twitter.com/beeforbacon1) * Bob; Infrastructure Engineer (https://twitter.com/BitcoinCoderBob)
Think globally. Act locally. Infrastructure Engineer Solomon Anderson joins the show to chat about how data operates, the art over science of balancing new features with supporting existing infrastructure, and the vast environmental impact of something as minuscule as open browser tabs on our laptops. **Show Links** Solomon Anderson on LinkedIn | https://www.linkedin.com/in/solomonranderson/ Solomon Anderson on Twitter | https://twitter.com/1991DBA JMG Careers Page | https://jmg.mn/careers Connect with Tim Bornholdt on LinkedIn | https://www.linkedin.com/in/timbornholdt/ Chat with The Jed Mahonis Group about your app dev questions | https://jmg.mn Episode show notes | https://constantvariables.co Leave a review on Apple Podcasts | https://constantvariables.co/review Twin Cities Podcast Hosts UNPLUGGED event details | https://emamo.com/event/twin-cities-startup-week-2021/s/twin-cities-podcast-hosts-unplugged-part-1-okd8rN
Prashant is an Infrastructure Engineer with almost a decade of experience in the industry. He is a Microsoft Most Valuable Professional and engages with the community through advocacy work. He shares his experience working in infrastructure, the opportunities around it, and how to get started on this path. Full Video: https://bit.ly/ttp-v-ep8
In this new episode of our Fika Sessions, we meet Daniel Gonzalez, Build & Infrastructure Engineer at Massive Entertainment, to discuss details about what it takes to get the actual game to your machines. So you have all the graphics, audio, animations and gameplay programming. How do you get all of that into one package you can play on your console or PC? Enter the Build Engineers. Daniel and his team take all of that, put it on their machines and start creating their game builds. Join us as we find out more about how that process works, but also about concepts such as devops (development operations), automation and alerts when working on major computer engineering projects. Also, for the fika itself, what ARE those things?! Tools & Links: Google Site Reliability Engineering Handbook - https://sre.google/sre-book/table-of-contents/
Al talks to us about being an infrastructure engineer at GTSI. He walks us through the highs of working on the roof to the lows of his final days. Credits: Hosts: Jake & Stephen Intro Music: Blue Dot Sessions
Brian Jones, Infrastructure Cloud Architect, and Brian Poirier, Sr. Infrastructure Engineer, from Liberty Mutual Insurance join the show today to talk about their roles in shared services, the stack of technologies they use and support, and how they're working to migrate from a primarily on-premises server environment to one that incorporates a variety of cloud services, including MongoDB Atlas.
This week the guys are joined by their good friend and frequent AOC cameo, Guy Harriss. They discuss the basics of computer infrastructure engineering, how to find ways to enjoy your life to the fullest, and all of their love for diving into new things. You can find The Art of Craftsmanship on YouTube, Instagram, and Patreon here... youtube.com/theartofcraftsmanship @theartofcraftsmanship patreon.com/theartofcraftsmanship Recommendations: Dustin: Knife maker Tomas Rucker: @tomasruckerknives Devon: The Nomadic Woodsman on YouTube https://www.youtube.com/watch?v=c0kYyO0Tb-Y Guy: Book: "Rich Dad Poor Dad" by Robert Kiyosaki
Who is the professional who analyzes, designs, and delivers the corporate infrastructure that meets a company's needs? It's the Server Infrastructure Engineer, like Simone Laera, consultant and specialist at Seven IT, who walks us through the evolution of this role and the skills that are now essential to earn the title.
Social engineers use text from legitimate recent warnings. Cybercrooks go for whatever they can get from software about to reach the end of its life. A big database filled with individual information is leaked from a Chinese government contractor. In the race to do whatever it is US companies hope to do with TikTok, Microsoft is apparently out, but Oracle is apparently in. Rick Howard looks at red versus blue. Our guest is Colby Prior, Infrastructure Engineer for AusCERT, on running honeypots. And the FBI wants you to know, contrary to what you may have seen online, that Oregon wildfires are not extremist arson. For links to all of today's stories check out our CyberWire daily news brief: https://www.thecyberwire.com/newsletters/daily-briefing/9/178
Hello everyone! In this episode I spoke with Jonathan Kayumbo, a DevOps and Infrastructure Engineer in Sweden. I wanted to understand DevOps: what benefits it offers, how we can apply it here in Tanzania, and what tools it requires. Jonathan studied in Tanzania and went on to work in Sweden, where he is a DevOps and Infrastructure Engineer at one of the big companies there. He told us how he worked hard to adapt to the way of working and to adopt Software Development best practices.
Starting your new job as an Infrastructure Engineer in a large bank just as your boss-to-be and his key architects are leaving feels like chaos! Maybe that's why Tammy Butow has made a career in Chaos and Site Reliability Engineering. In this episode, Tammy shares her experiences bringing reliability into highly complex systems at NAB, DigitalOcean, Dropbox, and now Gremlin through chaos engineering. You'll learn about the importance of knowing and baselining your metrics, defining your SLIs and SLOs, and continuously running fire drills to ensure your system is as reliable as it needs to be. If you want to learn more, check out Tammy's presentations on Speaker Deck and make sure to join the chaosengineering Slack channel. https://www.linkedin.com/in/tammybutow/ https://speakerdeck.com/tammybutow https://slofile.com/slack/chaosengineering
Ell and Wes are joined by Infrastructure Engineer Seth McCombs for a chat about how he got started in tech, the hard transition from legacy data centers to the cloud, and why being honest about both success and failure can lead to a better open source community. Special Guest: Seth McCombs.
Dave Evans, Infrastructure Engineer at Duquesne Light, describes his experience automating development activities for Oracle with Terraform and Go libraries on FlashArray. Hear about how he used Pure Code (code.purestorage.com) and Pure's integrated REST APIs to build consistent scripted templates to improve efficiency around automating the provisioning of compute and storage nodes for developers. For more information, check out: https://blog.purestorage.com/pure-storage-terraform-provider/
Can you explain Cloud Native? What are the key open source frameworks you need to know? How about all these open source licensing models? Why do they exist? Which one should you use? What are the monetization models, and why should you watch closely how Big IT and cloud companies are shaping this space? Carmen Andoh (@carmatrocity), Program Manager at Google and former Infrastructure Engineer at Travis CI, helps us navigate the Cloud Native and open source world and answers all of the questions above. The IT world is changing, but it's up to us to shape the future by inventing it. If you want to learn more after listening, check out the CNCF Trail Map and follow Carmen on social media to get access to her material on the topic! Trail Map: https://github.com/cncf/trailmap
Robby speaks with Charity Majors, CTO of Honeycomb, about her work as an Infrastructure Engineer, how Honeycomb was created, working and testing in production, and why software engineers should be "on call" for their code. Helpful links Follow Charity on Twitter Honeycomb The Honeycomb blog Charity's blog Sapiens: A Brief History of Humankind Database Reliability Engineering Subscribe to Maintainable on: Apple Podcasts Overcast Or search "Maintainable" wherever you stream your podcasts. Loving Maintainable? Leave a rating and review on Apple Podcasts to help grow our reach. Brought to you by the team at Planet Argon.
October 16, 2018 I had the pleasure of interviewing Michael Tidwell (twitter: @miketwenty1). Mike is a Proof of Work & Chainpoint Enthusiast and an Infrastructure Engineer at Tierion. He is also the Founder of TAB (The Atlanta Blockchain) and CKO of the Atlanta Bitcoin Embassy. We discussed Mr. Tidwell's recent interview with Craig Wright, Tether, Nouriel, and the merits of Bitcoin Maximalism. Mike is hosting a Blockchain Conference of Substance in Atlanta on Feb 9/10; you can check it out here: http://tabconf.com (he said HodlCast listeners get a 10% discount). You can always join us on the Bitcorns.com counterparty farming game. Mr. Tidwell is a farmer & a rapper - https://soundcloud.com/miketwenty1/bitcorns
Jameson Lopp is a Professional Cypherpunk, Infrastructure Engineer at Casa and Bitcoin Philosopher. Jameson enjoys building technology that empowers individuals and is interested in opportunities within the Bitcoin and crypto asset ecosystem. Graduating from the University of North Carolina with a degree in computer science, Jameson is passionate about sharing his expertise and technical knowledge of crypto assets, his philosophical approach to understanding the systems, and his opinion on the consensus of their participants. For comprehensive shownotes, a complete bio and links in the episode take a look below or click the episode title.
The Role of Infrastructure Engineer with Elliott S. Elliott Schuhardt, Infrastructure Engineer
Zach Hughes is the Director of Technical Services at CHS, Inc. He has 17 years of experience in enterprise IT infrastructure in a variety of disciplines including cloud computing, application hosting, data centers, converged infrastructure, security, and IT leadership. He is passionate about innovating infrastructure technology solutions that create a competitive advantage for business. Prior to CHS, Zach held various IT leadership and Sr. Engineering positions at Wells Fargo and GMAC Financial Services. Zach earned his MA in Organizational Leadership from Bethel University. Show notes at http://hellotechpros.com/zach-hughes-leadership/ Sponsors Minio Cloud Services Burdene - The bot that remembers where you parked your car.
Part Two in our 2015 New Hampshire Liberty Forum Discussions Is it time to start creating a detailed, documented history of the Free State Project? Alex's Panel - Technology: A Force Multiplier for Activism New Hampshire Liberty Activists use social networks, databases and web applications to hit way above their weight class. Join the creators of some of this technology to talk about challenges, opportunities, and dreams. Bring your own ideas to change the world! This will be a panel discussion. Look Closer: Alex's Site: Free State Chronicles - https://freestatechronicles.org/ Free State Project - https://freestateproject.org/ Free Keene - http://freekeene.com/ Hilarious Colbert Report Lampoons Free Keene - http://freekeene.com/2014/11/20/hilarious-colbert-report-lampoons-free-keene/
How do companies like Dell manage their client IT environment and maintain standards? Hear best practices on how Microsoft Systems Manager Server (SMS) can be used to inventory hardware and software, distribute software updates and patches, meter use, all using an open API. Featured speakers are Takis Petropoulos – IT Manager for Systems Management, Monitoring and Architecture Standards -- and Donnie Taylor, Infrastructure Engineer, who manage SMS for Dell.