Podcasts about traceroute

  • 14 podcasts
  • 36 episodes
  • 34m average episode duration
  • Infrequent episodes
  • Latest episode: Dec 17, 2023

POPULARITY

[Popularity chart, 2017–2024]


Best podcasts about traceroute

Latest podcast episodes about traceroute

LINUX Unplugged
541: Out with a Bang

LINUX Unplugged

Play Episode Listen Later Dec 17, 2023 76:18


The stories that kept us talking all year, and are only getting hotter! Plus the big flops we're still sore about. Special Guest: Kenji Berthold.

Traceroute
Traceroute Season 3

Traceroute

Play Episode Listen Later Oct 25, 2023 1:54


Season 3 of Traceroute starts November 2nd with a special 3-part episode exploring humanity's burgeoning relationship to AI. Don't miss it!

@BEERISAC: CPS/ICS Security Podcast Playlist
Josh Varghese: Holistic, Scalable OT Network Design

@BEERISAC: CPS/ICS Security Podcast Playlist

Play Episode Listen Later Sep 28, 2023 74:12


Podcast: The PrOTect OT Cybersecurity Podcast (LS 29 · Top 10%)
Episode: Josh Varghese: Holistic, Scalable OT Network Design
Pub date: 2023-09-21

A replay of The PrOTect OT Cybersecurity Podcast episode below; see that entry for the full show notes. The podcast and artwork embedded on this page are from Aaron Crow, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.

The PrOTect OT Cybersecurity Podcast
Josh Varghese: Holistic, Scalable OT Network Design

The PrOTect OT Cybersecurity Podcast

Play Episode Listen Later Sep 21, 2023 74:12


About Josh Varghese: Josh Varghese, founder of Traceroute, is a seasoned industrial networking expert who has dedicated himself to serving the dynamic industrial/OT market. With nearly a decade of experience as a technical lead at Industrial Networking Solutions, where he established their technical support and application engineering department, Josh cultivated a deep understanding of the industry. He now leads Traceroute, offering a comprehensive suite of services including consulting, design, solution architecture, and more, while maintaining invaluable relationships with clients and vendors forged during his career.

In this episode, Aaron and Josh Varghese discuss:

  • Navigating vendor dependence and networking complexity in industrial environments
  • Overcoming resistance to technology advancements in industrial settings
  • The challenges of IT-OT convergence and the importance of OT knowledge transfer
  • The importance of empathy and collaboration in an SDN-driven future

Key Takeaways:

  • In the world of industrial networking, the critical importance of bridging the gap between vendors, asset owners, and complex OT environments becomes glaringly evident, as a lack of expertise and responsibility often leads to network disasters and production outages, emphasizing the need for specialized support and education in this field.
  • Getting burned by poorly configured solutions in the industrial technology realm has led to a reluctance to embrace advancements; however, with proper configuration and understanding, these advancements can be highly beneficial.
  • Bridging the gap between IT and OT, and improving basic understanding of network concepts, is crucial for overcoming resistance to new technology adoption and ensuring operational resilience in a world where automation and physical processes intersect in every aspect of business.
  • In the evolving landscape of IT and OT collaboration, the key to success lies in fostering understanding, empathy, and effective communication between the two sides, rather than imposing complexity or hierarchies, while emerging technologies like SDN offer promise but must address the challenge of simplifying network management in the OT space.

"So much of what has happened in the last five to ten years in our space has been around wanting to look at lateral traffic movement or visibility to more traffic. And it's all been very difficult to accomplish because the architecture and the technology available in traditional networking makes it so. You and I have talked about wanting to fast forward to a scenario with sensors in the switch, full visibility, and all this stuff. SDN gets us there like in the snap of a finger." — Josh Varghese

Connect with Josh Varghese:

  • Website: www.traceroutellc.com
  • Email: josh@traceroutellc.com
  • LinkedIn: https://www.linkedin.com/in/varghesejm
  • Traceroute's OT networking training in Dallas-Fort Worth on February 8-9, 2024: https://www.traceroutellc.com/s/Traceroute-DFW-Training-Flyer.pdf
  • The best (or arguably “worst”) kept secret in OT networking is Software Defined Networking: https://www.linkedin.com/posts/varghesejm_industrialnetworking-otnetworking-otsdn-activity-6963503182421377024--52t/

Connect with Aaron:

  • LinkedIn: https://www.linkedin.com/in/aaronccrow

Learn more about Industrial Defender:

  • Website: https://www.industrialdefender.com/podcast
  • LinkedIn: https://www.linkedin.com/company/industrial-defender-inc/
  • Twitter: https://twitter.com/iDefend_ICS
  • YouTube: https://www.youtube.com/@industrialdefender7120

Audio production by Turnkey Podcast Productions.
You're the expert. Your podcast will prove it.

Traceroute
The World's Strangest Librarian, Part 2

Traceroute

Play Episode Listen Later Jul 20, 2023 31:51


In Part 2 of Traceroute's season finale, we look at the fallout of the copyright infringement decision against The Internet Archive. If information eventually becomes commoditized, will we find someone to be a fair and responsible arbiter of history?

With nothing less than the future of our digitized history at stake, the final episode of Season 2 of Traceroute explores the threats and challenges the Internet Archive faces in the wake of its copyright infringement case. We are joined by Rebecca Tushnet, the Harvard Law professor who defended the Archive in the case, to discuss the potential fallout of the court's ruling: are we moving towards a society where information is owned by an elite few and “rented out” at a price? If so, do we risk manipulation of that information for the sake of profit? Or will we find among our archivists, preservationists, librarians, and even activists a person who can be responsible enough to be dubbed “The Arbiter of History?”

Additional Resources:

  • Connect with Grace Andrews: LinkedIn or Twitter.
  • Connect with Amy Tobey on Twitter.
  • Connect with Fen Aldrich on Twitter.
  • Connect with John Taylor on LinkedIn.
  • Connect with Rebecca Tushnet on Twitter.
  • Connect with the NEDCC.
  • Visit Origins.dev for more information.

Traceroute is a podcast from Equinix, and is a production of Stories Bureau. This episode was produced by John Taylor, with help from Tim Balint and Cat Bagsic. It was edited by Joshua Ramsey and mixed by Jeremy Tuttle, with additional editing and sound design by Mathr de Leon. Our theme song was composed by Ty Gibbons.

Traceroute
The World's Strangest Librarian, Part 1

Traceroute

Play Episode Listen Later Jul 13, 2023 37:27


In Traceroute's Season 2 finale, we explore the Herculean efforts to back up the entire internet and save all human knowledge for future generations. But if information becomes commoditized, then who will own history?

As season 2 of Traceroute comes to a close, we take an in-depth look at one of the most important issues in tech today: the intersection between information, access, and ownership. In part one, we're introduced to Alexis Rossi, the Director of Collections at the Internet Archive, a different kind of librarian (at a different kind of library) who is attempting to back up the entire internet, as well as the breadth of human knowledge. But undertaking this mammoth task forces Alexis—and indeed all of us—to ask some critical questions: who or what decides what gets preserved…and why? And even as we've made huge technical strides in preserving our history, more questions arise: as our analog history turns to dust, is the digital representation we replace it with actually history? Is history lost when all the artifacts are replicas, or do we qualify it somehow as an approximation of history?

Additional Resources:

  • Connect with Grace Andrews: LinkedIn or Twitter.
  • Connect with Amy Tobey on Twitter.
  • Connect with Fen Aldrich on Twitter.
  • Connect with John Taylor on LinkedIn.
  • Connect with the NEDCC.
  • Visit Origins.dev for more information.

Enjoyed This Episode? If you did, be sure to follow the show and share it with your friends and colleagues, and post a review if you enjoyed tuning in. Introduce them to the people and organizations who played a role in inventing the internet.

Traceroute is a podcast from Equinix, and is a production of Stories Bureau. This episode was produced by John Taylor, with help from Tim Balint and Cat Bagsic. It was edited by Joshua Ramsey and mixed by Jeremy Tuttle, with additional editing and sound design by Mathr de Leon. Our theme song was composed by Ty Gibbons.

Traceroute
Give Us Your Vibes

Traceroute

Play Episode Listen Later Jun 29, 2023 1:11


While you're waiting for more Traceroute, why not tell us what you think so far?

https://6slkekdn0ti.typeform.com/to/cplf7MtB

It's a few quick questions, nothing too personal, but it'll go a long way towards helping us make Traceroute the best podcast it can be!

If you've been listening to Traceroute this season, you know we've been exploring the intersection of humanity and technology across hundreds of years into our past and thousands of years into our future. And we're not done yet. We're taking a short break before we drop a really special two-part season finale on the 13th and 20th of July. So go listen to the whole season and get ready for our biggest story yet!

Traceroute
The Mother of All Errors

Traceroute

Play Episode Listen Later Jun 8, 2023 38:27


When we peel back the layers of the stack, there's one human characteristic we're sure to find: errors. Mistakes, mishaps, and miscalculations are fundamental to being human, and as such, error is built into every piece of infrastructure and code we create. Of course, learning from our errors is critical in our effort to create functional, reliable tech. But could our mistakes be as important to technological development as our ideas? And what happens when we try to change our attitude towards errors…or remove them entirely?

In this fascinating episode of Traceroute, we start back in 1968, when “The Mother of All Demos” was supposed to change the face of personal computing…before the errors started. We're then joined by Andrew Clay Shafer, a DevOps pioneer who has seen the evolution of “errors” to “incidents” through practices like Scrum, Agile, and Chaos Engineering. We also speak with Courtney Nash, a cognitive neuroscientist and researcher whose Verica Open Incident Directory (VOID) has changed the way we look at incident reporting.

Additional Resources:

  • Connect with Amy Tobey: LinkedIn or Twitter
  • Connect with Fen Aldrich: LinkedIn or Twitter
  • Connect with John Taylor on LinkedIn
  • Connect with Courtney Nash on Twitter
  • Connect with Andrew Clay Shafer on Twitter
  • Visit Origins.dev for more information

Enjoyed This Episode? If you did, be sure to follow and share it with your friends! We'd also appreciate a five-star review on Apple Podcasts - it really helps people find the show!

Traceroute is a podcast from Equinix and is a production of Stories Bureau. This episode was produced by John Taylor with help from Tim Balint and Cat Bagsic. It was edited by Joshua Ramsey and mixed by Jeremy Tuttle, with additional editing and sound design by Mathr de Leon. Our theme song was composed by Ty Gibbons.

Screaming in the Cloud
Creating A Resilient Security Strategy Through Chaos Engineering with Kelly Shortridge

Screaming in the Cloud

Play Episode Listen Later May 30, 2023 32:21


Kelly Shortridge, Senior Principal Engineer at Fastly, joins Corey on Screaming in the Cloud to discuss their recently released book, Security Chaos Engineering: Sustaining Resilience in Software and Systems. Kelly explains why a resilient strategy is far preferable to a bubble-wrapped approach to cybersecurity, and how developer teams can use evidence to mitigate security threats. Corey and Kelly discuss how the risks of working with complex systems are perfectly illustrated by Jurassic Park, and Kelly also highlights why it's critical to address both system vulnerabilities and human vulnerabilities in your development environment rather than pointing fingers when something goes wrong.

About Kelly: Kelly Shortridge is a senior principal engineer at Fastly in the office of the CTO and lead author of "Security Chaos Engineering: Sustaining Resilience in Software and Systems" (O'Reilly Media). Shortridge is best known for their work on resilience in complex software systems, the application of behavioral economics to cybersecurity, and bringing security out of the dark ages. Shortridge has been a successful enterprise product leader as well as a startup founder (with an exit to CrowdStrike) and investment banker. Shortridge frequently advises Fortune 500s, investors, startups, and federal agencies and has spoken at major technology conferences internationally, including Black Hat USA, O'Reilly Velocity Conference, and SREcon. Shortridge's research has been featured in ACM, IEEE, and USENIX, spanning behavioral science in cybersecurity, deception strategies, and the ROI of software resilience. They also serve on the editorial board of ACM Queue.

Links Referenced:

  • Fastly: https://www.fastly.com/
  • Personal website: https://kellyshortridge.com
  • Book website: https://securitychaoseng.com
  • LinkedIn: https://www.linkedin.com/in/kellyshortridge/
  • Twitter: https://twitter.com/swagitda_
  • Bluesky: https://shortridge.bsky.social

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Have you listened to the new season of Traceroute yet? Traceroute is a tech podcast that peels back the layers of the stack to tell the real, human stories about how the inner workings of our digital world affect our lives in ways you may have never thought of before. Listen and follow Traceroute on your favorite platform, or learn more about Traceroute at origins.dev. My thanks to them for sponsoring this ridiculous podcast.

Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. My guest today is Kelly Shortridge, who is a Senior Principal Engineer over at Fastly, as well as the lead author of the recently released Security Chaos Engineering: Sustaining Resilience in Software and Systems. Kelly, welcome to the show.

Kelly: Thank you so much for having me.

Corey: So, I want to start with the honest truth that in that title, I think I know what some of the words mean, but when you put them together in that particular order, I want to make sure we're talking about the same thing. Can you explain that like I'm five, as far as what your book is about?

Kelly: Yes. I'll actually start with an analogy I make in the book, which is, imagine you were trying to rollerblade to some destination.
Now, one thing you could do is wrap yourself in a bunch of bubble wrap and become the bubble person, and you can waddle down the street trying to make it to your destination on the rollerblades, but if there's a gust of wind or a dog barks or something, you're going to flop over, you're not going to recover. However, if you instead do what everybody does, which is you know, kneepads and other things that keep you flexible and nimble, the gust you know, there's a gust of wind, you can kind of be agile, navigate around it; if a dog barks, you just roller-skate around it; you can reach your destination. The former, the bubble person, that's a lot of our cybersecurity today. It's just keeping us very rigid, right? And then the alternative is resilience, which is the ability to recover from failure and adapt to evolving conditions.

Corey: I feel like I am about to torture your analogy to death because back when I was in school in 2000, there was an annual tradition at the school I was attending before failing out, where a bunch of us would paint ourselves green every year and then bike around the campus naked. It was the green bike ride. So, one year I did this on rollerblades. So, if you wind up looking—there's the bubble wrap, there's the safety gear, and then there's wearing absolutely nothing, which feels—

Kelly: [laugh]. Yes.

Corey: —kind of like the startup approach to InfoSec. It's like, “It'll be fine. What's the worst that happens?” And you're super nimble, super flexible, until suddenly, oops, now I really wish I'd done things differently.

Kelly: Well, there's a reason why I don't say rollerblade naked, which other than it being rather visceral, what you described is what I've called YOLOSec before, which is not what you want to do. Because the problem when you think about it from a resilience perspective, again, is you want to be able to recover from failure and adapt. Sure, you can oftentimes move quickly, but you're probably going to erode software quality over time, so to a certain point, there's going to be some big incident, and suddenly, you aren't fast anymore, you're actually pretty slow. So, there's this, kind of, happy medium where you have enough, I would like security by design—we can talk about that a bit if you want—where you have enough of the security by design baked in and you can think of it as guardrails that you're able to withstand and recover from any failure. But yeah, going naked, that's a recipe for not being able to rollerblade, like, ever again, potentially [laugh].

Corey: I think, on some level, that the correct dialing in of security posture is going to come down to context, in almost every case. I'm building something in my spare time in the off hours does not need the same security posture—mostly—as we are a bank. It feels like there's a very wide gulf between those two extremes. Unfortunately, I find that there's a certain tone-deafness coming from a lot of the security industry around oh, everyone must have security as their number one thing, ever. I mean, with my clients who I fixed their AWS bills, I have to care about security contractually, but the secrets that I hold are boring: how much money certain companies pay another very large company. Yes, I'll get sued into oblivion if that leaks, but nobody dies. Nobody is having their money stolen as a result. It's slightly embarrassing in the tech press for a cycle and then it's over and done with.
That's not the same thing as a brief stint I did running tech ops at Grindr ten years ago where, leak that database and people will die. There's a strong difference between those threat models, and on some level, being able to act accordingly has been one of the more eye-opening approaches to increasing velocity in my experience. Does that align with the thesis of your book, since my copy has not yet arrived for this recording?

Kelly: Yes. The book, I am not afraid to say it depends on the book, and you're right, it depends on context. I actually talk about this resilience potion recipe that you can check out if you want, these ingredients so we can sustain resilience. A key one is defining your critical functions, just what is your system's reason for existence, and that is what you want to make sure it can recover and still operate under adverse conditions, like you said. Another example I give all the time is most SaaS apps have some sort of reporting functionality. Guess what? That's not mission-critical. You don't need the utmost security on that, for the most part. But if it's processing transactions, yeah, probably you want to invest more security there. So yes, I couldn't agree more that it's context-dependent and oh, my God, does the security industry ignore that so much of the time, and it's been my gripe for, I feel like as long as I've been in the industry.

Corey: I mean, there was a great talk that Netflix gave years ago where they mentioned in passing, that all developers have root in production. And that's awesome and the person next to him was super excited and I looked at their badge, and holy hell, they worked at an actual bank. That seems like a bad plan. But talking to the Netflix speaker after the fact, Dave Hahn, something that I found that was extraordinarily insightful, was that, yeah, well we just isolate off the PCI environment so the rest and sensitive data lives in its own compartmentalized area. So, at that point, yeah, you're not going to be able to break much in that scenario. It's like, that would have been helpful context to put in talk. Which I'm sure he did, but my attention span had tripped out and I missed that. But that's, on some level, constraining blast radius and not having compliance and regulatory issues extending to every corner of your environment really frees you up to do things appropriately. But there are some things where you do need to care about this stuff, regardless of how small the surface area is.

Kelly: Agreed. And I introduced the concept of the effort investment portfolio in the book, which is basically, that is where does it matter to invest effort and where can you kind of like, maybe save some resources up. I think one thing you touched on, though, is, we're really talking about isolation and I actually think people don't think about isolation in as detailed or maybe as expansively as they could. Because we want both temporal and logical and spatial isolation. What you talked about is, yeah, there are some cases where you want to isolate data, you want to isolate certain subsystems, and that could be containers, it could also be AWS security groups. It could take a bunch of different forms, it could be something like RLBox in WebAssembly land.
But I think that's something that I really try to highlight in the book is, there's actually a huge opportunity for security engineers starting from the design of a system to really think about how can we infuse different forms of isolation to sustain resilience.

Corey: It's interesting that you use the word investment. When fixing AWS bills for a living, I've learned over the last almost seven years now of doing this that cost and architecture and cloud are fundamentally the same thing. And resilience is something that comes with a very real cost, particularly when you start looking at what the architectural choices are. And one of the big reasons that I only ever work on a fixed-fee basis is because if I'm charging for a percentage of savings or something, it inspires me to say really uncomfortable things like, “Backups are for cowards.” And, “When was the last time you saw an entire AWS availability zone go down for so long that it mattered? You don't need to worry about that.” And it does cut off an awful lot of cost issues, at the price of making the environment more fragile. That's where one of the context thing starts to come in. I mean, in many cases, if AWS is having a bad day in a given region, well does your business need that workload to be functional? For my newsletter, I have a publication system that's single-homed out of the Oregon region. If that whole thing goes down for multiple days, I'm writing that week's issue by hand because I'm going to have something different to talk about anyway. For me, there is no value in making that investment. But for companies, there absolutely is, but there's also seems to be a lack of awareness around, how much is a reasonable investment in that area when do you start making that investment? And most critically, when do you stop?

Kelly: I think that's a good point, and luckily, what's on my side is the fact that there's a lot of just profligate spending in cybersecurity and [laugh] that's really what I'm focused on is, how can we spend those investments better? And I actually think there's an opportunity in many cases to ditch a ton of cybersecurity tools and focus more on some of the stuff he talked about. I agree, by the way that I've seen some threat models where it's like, well, AWS, all regions go down. I'm like, at that point, we have, like, a severe, bigger-than-whatever-you're-thinking-about problem, right?

Corey: Right. So, does your business continuity plan account for every one of your staff suddenly quitting on the spot because there's a whole bunch of companies with very expensive consulting, like, problems that I'm going to go work for a week and then buy a house in cash. It's one of those areas where, yeah, people are not going to care about your environment more than they are about their families and other things that are going on. Plan accordingly. People tend to get so carried away with these things with tabletop planning exercises. And then of course, they forget little things like I overwrote the database by dropping the wrong thing. Turns out that was production. [laugh]. Remembering for [a me 00:10:00] there.

Kelly: Precisely. And a lot of the chaos experiments that I talk about in the book are a lot of those, like, let's validate some of those basics, right? That's actually some of the best investments you can make.
Like, if you do have backups, I can totally see your argument about backups are for cowards, but if you do have them, like, maybe you conduct experiments to make sure that they're available when you need them, and the same thing, even on the [unintelligible 00:10:21] side—

Corey: No one cares about backups, but everyone really cares about restores, suddenly, right after—

Kelly: Yeah.

Corey: —they really should have cared about backups.

Kelly: Exactly. So, I think it's looking at those experiments where it's like, okay, you have these basic assumptions in place that you assume to be invariants or assume that they're going to bail you out if something goes wrong. Let's just verify. That's a great place to start because I can tell you—I know you've been to the RSA hall floor—how many cybersecurity teams are actually assessing the efficacy and actually experimenting to see if those tools really help them during incidents. It's pretty few.

Corey: Oh, vendors do not want to do those analyses. They don't want you to do those analyses, either, and if you do, for God's sakes, shut up about it. They're trying to sell things here, mostly firewalls.

Kelly: Yeah, cybersecurity vendors aren't necessarily happy about my book and what I talk about because I have almost this ruthless focus on evidence and [unintelligible 00:11:08] cybersecurity vendors kind of thrive on a lack of evidence. So.

Corey: There's so much fear, uncertainty, and doubt in that space and I do feel for them. It's a hard market to sell in without having to talk about here's the thing that you're defending against. In my case, it's easy to sell the AWS bill is high because if I don't have to explain why more or less setting money on fire as a bad thing, I don't really know what to tell you. I'm going to go look for a slightly different customer profile. That's not really how it works in security, I'm sure there are better go-to-market approaches, but they're hard to find, at least, ones that work holistically.

Kelly: There are. And one of my priorities with the book was to really enumerate how many opportunities there are to take software engineering practices that people already know, let's say something like type systems even, and how those can actually help sustain resilience. Even things like integration testing or infrastructure as code, there are a lot of opportunities just to extend what we already do for systems reliability to sustain resilience against things that aren't attacks and just make sure that, you know, we cover a few of those cases as well. A lot of it should be really natural to software engineering teams. Again, security vendors don't like that because it turns out software engineering teams don't particularly like security vendors.

Corey: I hadn't noticed that. I do wonder, though, for those who are unaware, chaos engineering started off as breaking things on purpose, which I feel like one person had a really good story and thought about it super quickly when they were about to get fired. Like, “No, no, it's called Chaos Engineering.” Good for them. It's now a well-regarded discipline. But I've always heard of it in the context of reliability of, “Oh, you think your site is going to work if the database falls over? Let's push it over and see what happens.” How does that manifest in a security context?

Kelly: So, I will clarify, I think that's a slight misconception. It's really about fixing things in production, and that's the end goal. I think we should not break things just to break them, right?
But I'll give a simple example, which I know it's based on what Aaron Rinehart conducted at UnitedHealth Group, which is, okay, let's inject a misconfigured port as an experiment and see what happens, end-to-end. In their case, the firewall only detected the misconfigured port 60% of the time, so 60% of the time, it works every time. But it was actually the cloud, the very common, like, Cloud configuration management tool that caught the change and alerted responders. So, it's that kind of thing where we're still trying to verify those assumptions that we have about our systems and how they behave, again, end-to-end. In a lot of cases, again, with security tools, they are not behaving as we expect. But I still argue security is just a subset of software quality, so if we're experimenting to verify, again, our assumptions and observe system behavior, we're benefiting software quality, and security is just a subset of that. Think about C code, right? It's not like there's, like, a healthy memory corruption, so it's bad for both the quality and security reason.

Corey: One problem that I've had in the security space for a while is—let's [unintelligible 00:14:05] on this to AWS for a second because that is the area in which I spend the most of my time, which probably explains a lot about my personality challenges. But the problem that I keep smacking into is if I go ahead and configure everything the way that I should according to best practices and the rest, I wind up with a firehose torrent of information in terms of CloudTrail logs, et cetera. And it's expensive in its own right. But then to sort through it or to do a lot of things in security, there are basically two options. I can either buy a vendor's product, which generally tends to start around $12,000 a year and goes up rapidly from there on my current $6,000 a year bill, so okay, twice as much as the infrastructure for security monitoring. Okay. Or alternately, find a bunch of different random scripts and tools on GitHub of wildly diverging quality and sort of hope for the best on that. It feels like there's nothing in between. And the reason I care about this is not because I'm cheap but because when you have an individual learner who is either a student or a career switcher or someone just trying to experiment with this, you want them to begin as you want them to go on, and things that are no money for an enterprise are all the money to them. They're going to learn to work with the tools that they can afford. That feels like it's a big security swing and a miss. Do you agree or disagree? What's the nuance I'm missing here?

Kelly: No, I don't think there's nuance you're missing. I think security observability, for one, isn't a buzzword that particularly exists. I've been trying to make it a thing, but I'm solely one individual screaming into the void. But observability just hasn't been a thing. We haven't really focused on, okay, so what, like, we get data and what do we do with it? And I think, again, from a software engineering perspective, I think there's a lot we can do. One, we can just avoid duplicating efforts. We can treat observability, again, of any sort of issue as similar, whether that's an attack or a performance issue.
I think this is another place where security, or any sort of chaos experiment, shines though because if you have an idea of here's an adverse scenario we care about, you can actually see how does it manifest in the logs and you can start to figure out, like, what signals do we actually need to be looking for, what signals mattered to be able to narrow it down. Which again, it involves time and effort, but also, I can attest when you're buying the security vendor tool and, in theory, absolving some of that time and effort, it's maybe, maybe not, because it can be hard to understand what the outcomes are or what the outputs are from the tool and it can also be very difficult to tune it and to be able to explain some of the outputs. It's kind of like trading upfront effort versus long-term overall overhead if that makes sense.

Corey: It does. On that note, the title of your book includes the magic key phrase ‘sustaining resilience.' I have found that security effort and investment tends to resemble a fire drill in—

Kelly: [laugh].

Corey: —an awful lot of places, where, “We care very much about security,” says the company, right after they very clearly failed to care about security, and I know this because I'm reading getting an email about a breach that they've just sent me. And then there's a whole bunch of running around and hair-on-fire moments. But then there's a new shiny that always comes up, a new strategic priority, and it falls to the wayside again. What do you see that drives that sustained effort and focus on resilience in a security context?

Kelly: I think it's really making sure you have a learning culture, which sounds very [unintelligible 00:17:30], but things again, like, experiments can help just because when you do simulate those adverse scenarios and you see how your system behaves, it's almost like running an incident and you can use that as very fresh, kind of, like collective memory. And I even strongly recommend starting off with prior incidents in simulating those, just to see like, hey, did the improvements we make actually help? If they didn't, that can be kind of another fire under the butt, so to speak, to continue investing. So, definitely in practice—and there's some case studies in the book—it can be really helpful just to kind of like sustain that memory and sustain that learning and keep things feeling a bit fresh. It's almost like prodding the nervous system a little, just so it doesn't go back to that complacent and convenient feeling.

Corey: It's one of the hard problems because—I'm sure I'm going to get castigated for this by some of the listeners—but computers are easy, particularly compared to the people. There are deterministic ways to solve almost any computer problem, but people are always going to be a little bit different, and getting them to perform the same way today that they did yesterday is an exercise in frustration. Changing the culture, changing the approach and the attitude that people take toward a lot of these things feels, from my perspective, like, something of an impossible job. Cultural transformations are things that everyone talks about, but it's rare to see them succeed.

Kelly: Yes, and that's actually something that I very strongly weaved throughout the book is that if your security solutions rely on human behavior, they're going to fail. We want to either reduce hazards or eliminate hazards by design as much as possible. So, my view is very much again, like, can you make processes more repeatable? That's going to help security.
I definitely do not think that if anyone takes away from my book that they need to have, like, a thousand hours of training to change hearts and minds, then they have completely misunderstood most of the book. The idea is very much like, what are practices that we want for other outcomes anyway—again, reliability or faster time to market—and how can we harness those to also be improving resilience or security at the same time? It's very much trying to think about those opportunities rather than, you know, trying to drill into people's heads, like, “Thou shalt not,” or, “Thou shall.”

Corey: Way back in 2018, you gave a keynote at some conference or another and you built the entire thing on the story of Jurassic Park, specifically Ian Malcolm as one of your favorite fictional heroes, and you tied it into security in a bunch of different ways. You hadn't written this book then unless the authorship process is way longer than I think it is. So, I'm curious to get your take on what Jurassic Park can teach us about software security.

Kelly: Yes, so I talk about Jurassic Park as a reference throughout the book, frequently. I've loved that book since I was a very young child. Jurassic Park is a great example of a complex system gone wrong because you can't point to any one thing. Like there's Dennis Nedry, you know, messing up the power system, but then there's also the software was looking for a very specific count of dinosaurs and they didn't anticipate there could be more in the count. Like, there are so many different factors that influenced it, you can't actually blame just, like, human error or point fingers at one thing. That's a beautiful example of how things go wrong in our software systems because like you said, there's this human element and then there's also how the humans interact and how the software components interact. But with Jurassic Park, too, I think the great thing is dinosaurs are going to do dinosaur things like eating people, and there are also equivalents in software, like C code. C code is going to do C code things, right? It's not a memory safe language, so we shouldn't be surprised when something goes wrong. We need to prepare accordingly.

Corey: “How could this happen? Again?” Yeah.

Kelly: Right. At a certain point, it's like, there's probably no way to sufficiently introduce isolation for dinosaurs unless you put them in a bunker where no one can see them, and it's the same thing sometimes with things like C code. There's just no amount of effort you can invest, and you're just kind of investing for a really unclear and generally not fortuitous outcome. So, I like it as kind of this analogy to think about, okay, where do our effort investments make sense and where is it sometimes like, we really just do need to refactor because we're dealing with dinosaurs here.

Corey: When I was a kid, that was one of my favorite books, too. The problem is, I didn't realize I was getting a glimpse of my future at a number of crappy startups that I worked at. Because you have John Hammond, who was the owner of the park talking constantly about how, “We spared no expense,” but then you look at what actually happened and he spared every frickin expense. You have one IT person who is so criminally underpaid that smuggling dinosaur embryos off the island becomes a viable strategy for this. He wound up, “Oh, we couldn't find the right DNA, so we're just going to, like, splice some other random stuff in there.
It'll be fine.” Then you have the massive overconfidence because it sounds very much like he had this almost Muskian desire to fire anyone who disagreed with him, and yeah, there was a certain lack of investment that could have been made, despite loud protestations to the contrary. I'd say that he is the root cause, he is the proximate reason for the entire failure of the park. But I'm willing to entertain disagreement on that point.

Kelly: I think there are other individuals, like Dr. Wu, if you recall, like, deciding to do the frog DNA and not thinking that maybe something could go wrong. I think there was a lot of overconfidence, which you're right, we do see a lot in software. So, I think that's actually another very important lesson is that incentives matter and incentives are very hard to change, kind of like what you talked about earlier. It doesn't mean that we shouldn't include incentives in our threat model. So like, in the book I talked about, our threat models should include things like maybe yeah, people are underpaid or there is a ton of pressure to deliver things quickly or, you know, do things as cheaply as possible. That should be just as much of our threat models as all of the technical stuff too.

Corey: I think that there's a lot that was in that movie that was flat-out wrong. For example, one of the kids—I forget her name; it's been a long time—was logging in and said, “Oh, this is Unix. I know Unix.” And having learned Unix as my first basically professional operating system, “No, you don't. No one knows Unix. They get very confused at some point, the question is, just how far down what rabbit hole it is.” I feel so sorry for that kid. I hope she wound up seeking therapy when she was older to realize that, no, you don't actually know Unix. It's not that you're bad at computers, it's that Unix is user-hostile, actively so. Like, the raptors, like, that's the better metaphor when everything winds up shaking out.

Kelly: Yeah. I don't disagree with that. The movie definitely takes many liberties. I think what's interesting, though, is that Michael Crichton, specifically, when he talks about writing the book—I don't know how many people know this—dinosaurs were just a mechanism. He knew people would want to read it in airports. What he cared about was communicating really the danger of complex systems and how if you don't respect them and respect that interactivity and that it can baffle and surprise us, like, things will go wrong. So, I actually find it kind of beautiful in a way that the dinosaurs were almost like an afterthought. What he really cared about was exactly what we deal with all the time in software, is when things go wrong with complexity.

Corey: Like one of his other books, Airframe, talked about an air disaster. There's a bunch of contributing factors in the rest, and for some reason, that did not receive the wild acclaim that Jurassic Park did to become a cultural phenomenon that we're still talking about, what, 30 years later.

Kelly: Right. Dinosaurs are very compelling.

Corey: They really are. I have to ask though—this is the joy of having a kid who is almost six—what is your favorite dinosaur? Not a question most people get asked very often, but I am going to trot that one out.

Kelly: No. Oh, that is such a good question. Maybe a Deinonychus.

Corey: Oh, because they get so angry they spit and kill people? That's amazing.

Kelly: Yeah.
And I like that, kind of like, nimble, smarter one, and also the fact that most of the smaller ones allegedly had feathers, which I just love this idea of, like, feather-ful murder machines. I have the classic, like, nerd kid syndrome, though, where I read all these dinosaur names as a kid and I've never pronounced them out loud. So, I'm sure there are others—

Corey: Yep.

Kelly: —that I would just word salad. But honestly, it's hard to go wrong with choosing a favorite dinosaur.

Corey: Oh, yeah. I'm sure some paleontologist is sitting out there in the field on the dig somewhere listening to this podcast, just getting very angry at our pronunciation and things. But for God's sake, I call the database Postgres-squeal. Get in line. There's a lot of that out there where looking at a complex system failures and different contributing factors and the rest makes stuff—that's what makes things interesting. I think that there's this the idea of a root cause is almost always incorrect. It's not, “Okay, who tripped over the buried landmine,” is not the interesting question. It's, “Who buried the thing?” What were all the things that wound up contributing to this? And you can't even frame it that way in the blaming context, just because you start doing that and people clam up, and good luck figuring out what really happened.

Kelly: Exactly. That's so much of what the cybersecurity industry is focused on is how do we assign blame? And it's, you know, the marketing person clicked on a link. And it's like, they do that thousands of times, like a month, and the one time, suddenly, they were stupid for doing it? That doesn't sound right. So, I'm a big fan of, yes, vanquishing root cause, thinking about contributing factors, and in particular, in any sort of incident review, you have to think about, was there a design or process problem? You can't just think about the human behavior; you have to think about where are the opportunities for us to design things better, to make this secure way more of the default way.

Corey: When you talk about resilience and reliability and big, notable outages, most forward-thinking companies are going to go and do a variety of incident reviews and disclosures around everything that happened to it, depending upon levels of trust and whether you're NDA'ed or not, and how much gets public is going to vary from place to place. But from a security perspective, that feels like the sort of thing that companies will clam up about and never say a word.

Kelly: Yes.

Corey: Because I can wind up pouring a couple of drinks into people and get the real story of outages, or the AWS bill, but security stuff, they start to wonder if I'm a state actor, on some level. When you were building all of this, how did you wind up getting people to talk candidly and forthrightly about issues that if it became tied to them that they were talking to this in public would almost certainly have negative career impact for them?

Kelly: Yes, so that's almost like a trade secret, I feel like. A lot of it is yes, over the years talking with people over, generally at a conference where you know, things are tipsy. I never want to betray confidentiality, to be clear, but certainly pattern-matching across people's stories.

Corey: Yeah, we're both in positions where if even the hint of they can't be trusted enters the ecosystem, I think both of our careers explode and never recover. Like it's—

Kelly: Exactly.

Corey: —yeah. Oh, yeah. They play fast and loose with secrets is never the reputation you want as a professional.

Kelly: No.
No, definitely not. So, it's much more pattern matching and trying to generalize. But again, a lot of what can go wrong is not that different when you think about a developer being really tired and making a bunch of mistakes versus an attacker. A lot of times they're very much the same, so luckily there's commonality there. I do wish the security industry was more forthright and less clandestine because frankly, all of the public postmortems that are out there about performance issues are just such, such a boon for everyone else to improve what they're doing. So, that's a change I wish would happen.

Corey: So, I have to ask, given that you talk about security, chaos engineering, and resilience—and of course, software and systems—all in the title of the O'Reilly book, who is the target audience for this? Is it folks who have the word security featured three times in their job title? Is it folks who are new to the space? What is your target audience start and stop?

Kelly: Yes, so I have kept it pretty broad and it's anyone who works with software, but I'll talk about the software engineering audience because that is, honestly, probably out of anyone who I would love to read the book the most because I firmly believe that there's so much that software engineering teams can do to sustain resilience and security and they don't have to be security experts. So, I've tried to demystify security, make it much less arcane, even down to, like, how attackers, you know, they have their own development lifecycle. I try to demystify that, too. So, it's very much for any team, especially, like, platform engineering teams, SREs, to think about, hey, what are some of the things maybe I'm already doing that I can extend to cover, you know, the security cases as well? So, I would love for every software engineer to check it out to see, like, hey, what are the opportunities for me to just do things slightly differently and have these great security outcomes?

Corey: I really want to thank you for taking the time to talk with me about how you view these things. If people want to learn more, where's the best place for them to find you?

Kelly: Yes, I have all of the social media which is increasingly fragmented, [laugh] I feel like, but I also have my personal site, kellyshortridge.com. The official book site is securitychaoseng.com as well. But otherwise, find me on LinkedIn, Twitter, [Mastodon 00:30:22], Bluesky. I'm probably blanking on the others. There's probably already a new one while we've spoken.

Corey: Blue-ski is how I insist on pronouncing it as well, while we're talking about—

Kelly: Blue-ski?

Corey: Funhouse pronunciation on things.

Kelly: I like it.

Corey: Excellent. And we will, of course, put links to all of those things in the [show notes 00:30:37]. Thank you so much for being so generous with your time. I really appreciate it.

Kelly: Thank you for having me and being a fellow dinosaur nerd.

Corey: [laugh]. Kelly Shortridge, Senior Principal Engineer at Fastly. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment about how our choice of dinosaurs is incorrect, then put the computer away and struggle to figure out how to open a door.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group.
We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Traceroute
The Ancient as Modern, Again

Traceroute

Play Episode Listen Later May 25, 2023 36:43


Grace Ewura-Esi returns from a trip to Ghana, West Africa, with a new perspective on how technology helps us not only make new discoveries but gives old discoveries a new perspective. In this special episode featuring all four hosts in a fascinating discussion, Grace presents examples like Adinkra, the symbol-based language of the Ghana Empire, which is a form of communication based on various observations of and associations between humans and the objects they use, not entirely dissimilar to the block code that software engineers use today.

In addition, with the assistance of machine learning and artificial intelligence, ancient cultures are creating new visual representations of ancient gods for whom there were no depictions that lasted over the centuries. This same AI may even be used to help other nations, cultures, and tribes reconstruct missing portions of ancient languages and lost artifacts. It's an episode that's part mystery, part paradigm shift, and part digital archeology. As Grace puts it, “it's the ancient as modern, again.”

Additional Resources:

  • Connect with Shweta Saraf: LinkedIn or Twitter.
  • Connect with Grace Andrews: LinkedIn or Twitter.
  • Connect with Amy Tobey: LinkedIn or Twitter
  • Connect with Fen Aldrich: LinkedIn or Twitter.
  • Visit Origins.dev for more information

Enjoyed This Episode? Post a review and share it! If you enjoyed tuning in, then please leave us a review. We'd also appreciate it if you would share the podcast with your friends and colleagues, as you get to know the people and technologies at the center of our digital world.

Traceroute is a podcast from Equinix, produced by Stories Bureau. This episode was produced by Grace Ewura-Esi, with help from John Taylor and Mathr de Leon. It was edited by Joshua Ramsey and mixed by Jeremy Tuttle and Tim Balint, with additional editing and sound design by Mathr de Leon. Our theme song was composed by Ty Gibbons.

Screaming in the Cloud
Remote Versus Local Development with Mike Brevoort

Screaming in the Cloud

Play Episode Listen Later May 23, 2023 36:51


Mike Brevoort, Chief Product Officer at Gitpod, joins Corey on Screaming in the Cloud to discuss all the intricacies of remote development and how Gitpod is simplifying the process. Mike explains why he feels the infinite resources cloud provides can be overlooked when discussing remote versus local development environments, and how simplifying build abstractions is a fantastic goal, but that focusing on the tools you use in a build abstraction in the meantime can be valuable. Corey and Mike also dive into the security concerns that come with remote development, and Mike reveals the upcoming plans for Gitpod's conference, CDE Universe.

About Mike: Mike has a passion for empowering people to be creative and work together more effectively. He is the Chief Product Officer at Gitpod striving to remove the friction and drudgery from software development through Cloud Developer Environments. He spent the previous four years at Slack where he created Workflow Builder and “Platform 2.0” after his company Missions was acquired by Slack in 2018. Mike lives in Denver, Colorado and enjoys cycling, hiking and being outdoors.

Links Referenced:

  • Gitpod: https://www.gitpod.io/
  • CDE Universe: https://cdeuniverse.com/

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: It's easy to **BEEP** up on AWS. Especially when you're managing your cloud environment on your own! Mission Cloud un **BEEP**s your apps and servers. Whatever you need in AWS, we can do it. Head to missioncloud.com for the AWS expertise you need.

Corey: Have you listened to the new season of Traceroute yet? Traceroute is a tech podcast that peels back the layers of the stack to tell the real, human stories about how the inner workings of our digital world affect our lives in ways you may have never thought of before. Listen and follow Traceroute on your favorite platform, or learn more about Traceroute at origins.dev. My thanks to them for sponsoring this ridiculous podcast.

Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. I have had loud, angry, and admittedly at times uninformed opinions about so many things over the past few years, but something that predates that a lot is my impression on the idea of using remote systems for development work as opposed to doing local dev, and that extends to build and the rest. And my guest today here to argue with me about some of it—or agree; we'll find out—is Mike Brevoort, Chief Product Officer at Gitpod, which I will henceforth be mispronouncing as JIT-pod because that is the type of jerk I am. Mike, thank you for joining me.

Mike: Thank you for insulting my company. I appreciate it.

Corey: No, by all means, it's what we do here.

Mike: [laugh].

Corey: So, you clearly have opinions on the idea of remote versus local development that—I am using the word remote development; I know you folks like to use the word cloud, in place of remote, but I'm curious to figure out is, is that just the zeitgeist that has shifted? Do you have a belief that it should be in particular places, done in certain ways, et cetera? Where do your opinion on this start and stop?

Mike: I think that—I mean, remote is accurate, an accurate description.
I don't like to emphasize the word remote because I don't think it's important that it's remote or local. I think that the term cloud connotes different values around the elasticity of environments and the resources that are more than what you might have on your local machine versus a remote machine. It's not so much whether the one machine is local or remote as it is that there are infinite numbers of resources that you can develop across in the cloud. That's why we tend to prefer the term cloud development environments.Corey: From my perspective, I've been spending too many years now living in basically hotels and airports. And when I was doing that, for a long time, the only computer I brought with me was my iPad Pro. That used to be a little bit on the challenging side and these days, that's gotten capable enough where it's no longer interesting in isolation. But there's no local development environment that is worth basically anything on that. So, I've been SSHing into things and using VI as my development environment for many years.When I started off as a grumpy Unix sysadmin, there was something reassuring about the latest state of whatever it is I'm working on living in a data center somewhere rather than on a laptop I'm about to leave behind in a coffee shop because I'm careless. So, there's a definite value and sense that I am doing something virtuous, historically. But it didn't occur to me till I started talking to people about this just how contentious the idea was. People would love to raise all kinds of fun objections to this, like, “Oh, well, what about when you're on a plane and need to do work?” It's, well, I spend an awful lot of time on planes and that is not a limiting factor in me writing the terrible nonsense that I will charitably call code, in my case. I just don't find that that idea holds up anywhere. The world has become so increasingly interconnected that that seems unlikely. But I do live in San Francisco, so here, every internet is generally pretty decent; not every place is. What are your thoughts?Mike: I agree. I mean, I think one thing is, I would just like not to think about it, whether I can or can't develop because I'm connected or not. And I think that we tend to be in a world where that is more so the case. And I think a lot of times when you're not connected, you become reconnected soon, like if your connection is not reliable or if you're going in and out of connectivity issues. And when you're trying to work on a local laptop and you're connecting and disconnecting, it's not like we develop these days with everything just isolated on our local laptop—especially as we talk about cloud a lot on this podcast, and a lot of apps now go way beyond just I'm running a process on my machine and I'm connecting to data on my machine.There are local emulators you could use for some of these services, but most of them are inferior. And if you're using SQS or any other, like, cloud-based service, you're usually, as a developer, connecting to some version of that, and if you're disconnected anyway, you're not productive either. And so, I find that it's just an irrelevant conversation in this new world. The way we've developed traditionally—this view of I need to pile everything onto my laptop to be able to develop and be productive—has not, like, followed along with the trend that moved into the cloud.Corey: Right.
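A minimal sketch of the emulator trade-off Mike describes: the same AWS SDK code can point at a local stand-in or at the real service, but the stand-in is only ever an approximation. The endpoint and port below assume a LocalStack-style emulator and are illustrative only, not anything Gitpod-specific.

    import boto3

    def sqs_client(use_local_emulator: bool):
        if use_local_emulator:
            # Assumption: a LocalStack-style emulator listening locally on port 4566,
            # with dummy AWS credentials configured (the emulator does not verify them).
            return boto3.client("sqs", region_name="us-east-1",
                                endpoint_url="http://localhost:4566")
        # Otherwise talk to the real service (requires real credentials).
        return boto3.client("sqs", region_name="us-east-1")

    queue_url = sqs_client(use_local_emulator=True).create_queue(
        QueueName="dev-scratch")["QueueUrl"]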
The big problem for a long time has been, how do I make this Mac or Windows laptop look a lot like a Linux EC2 instance? And there have been a bunch of challenges and incompatibility issues and the rest, and from my perspective, I like to develop in an environment that at least vaguely resembles the production environment it's going to run in, which in AWS's case, of course, comes down to expensive. Bu-dum-tss.Mike: Yeah, it's a really big challenge. It's been a challenge, right? When you've worked with coworkers that were on a Windows machine and you were on a Mac machine, and you had the one person on their Linux machine forever, and we all struggled with trying to mimic these development environments that were representative, ultimately, of what we would run in production. And if you're counting costs, we can count the cost of those cloud resources, we can count the cost of those laptops, but we also need to count the cost of the people who are using those laptops and how inefficient they are and how much churn they have, and how… I don't know—for years of my career, someone would show up every morning to the stand-up meeting and say, “Well, I wasted all afternoon yesterday trying to work out my, you know, issues with my development environment,” and, “I hope I get that sorted out later today and I hope someone can help me.”And so, I think cost is one thing. I think that there are a lot of inconsistencies that lead to a lot of inefficiencies and churn. And I think that, regardless of where you're developing, the more that you can make your environments consistent and sound—not just for you, but for your whole team—and have those be more representative of what you are running in production, the better.
Like if you're using Python, it's, like, really a pain to maintain isolation across projects and not have—like, your environment becomes, like, one big bucket of things on your laptop and it's very easy to get that into a state where things aren't working, and then you're struggling. There's no big reset on your laptop. I mean, there is, but it takes—it's a full reset of everything that you have.And I think the thing that's interesting to me about cloud development environments is I could spin one of these up, I could trash it to all hell and just throw it away and get another one. And I could get another one of those from a base which has been tuned for whatever project or technology I'm working on. So, I could take—you know, do the effort to pre-set-up environments, one that is set up with all of my, like, Python tooling, and another one that's set up with all my, like, Go or Rust tooling, or our front-end development, even as a base repo for what I tend to do or might tend to experiment with. What we find is that, whether you're working alone or you're working with coworkers, setting up a project and all the resources and the modules and the libraries and the dependencies that you have—like, someone has to do that work to wire that up together. And the fact that you could just get an environment, and get another one and another one—we use this analogy of, like, tissue boxes where, like, you should just be able to pull a new dev environment out of a tissue box and use it and throw it away, and pull as many tissues out of the box as you want. And they should be, like, cheap and ephemeral, and they shouldn't be long-lived because they shouldn't be able to drift.And whether you're working alone or you're working in a team, it's the same value. The fact that, like, I could pull one of these out and have it—I'm confident in what I got. Like for example, ideally, you would just start a dev environment, it's available instantly, and you're ready to code. You're in this project with—and maybe it's a project you've never developed on. Maybe it's an open-source project.This is where I think it really improves the sort of equitability of being able to develop, whether it's in open-source, whether it's inner-source in companies: being able to approach any project with a click of a button and get the same environment that the tech lead on the project who started it five years ago has, and then I don't need to worry about that and I get the same environment. And I think that's the value. And so, whether you're an individual or you're on a team, you want to be able to experiment and thrash and do things and be able to throw it away and start over again, and not have to—like for example, maybe you're doing that on your machine and you're working on this thing and then you actually have to do some real work, and now you've done something that conflicts with the thing that you're working on and you're just kind of caught in this tangled mess, where, like, you should just be able to leave that experiment there and go work on the thing you need to work on. And why can't you have multiples of these things at any given time?Corey: Right. One of the things I loved about EC2 dev environments has been that I can just spin stuff up and okay, great, it's time for a new project. Spin up another one and turn it off when I'm done using it—which is the lie we always tell ourselves in cloud as we get charged for things we forget to turn off. But then, okay, I need an Intel box one day. Done.
Great, awesome. I don't have any of those lying around here anymore but clickety, clickety, and now I do.It's nice being able to have that flexibility, but it's also sometimes disconcerting when I'm trying to figure out what machine I was on when I was building things and the rest, and having unified stories around this becomes super helpful. I'm also finding that my overpowered desktop is far more cost-efficient when I need to compile something challenging, as opposed to finding a big, beefy, EC2 box for that thing as well. So, much of the time, what my remote system is doing is sitting there bored. Even when I'm developing on it, it doesn't take a lot of modern computer resources to basically handle a text editor. Unless it's Emacs, in which case, that's neither here nor there.Mike: [laugh]. I think that the thing that becomes costly, especially when using cloud development environments, is when you have to continue to run them even when you're not using them for the sake of convenience because you're not done with it, you're in the middle of doing some work and it still has to run or you forget to shut it off. If you are going to just spin up a really beefy EC2 instance for an hour to do that big compile and it costs you 78 cents. That's one thing. I mean, I guess that adds up over time and yes, if you've already bought that Mac Studio that's sitting under your desk, humming, it's going to be more cost-efficient to use that thing.But there's, like, an element of convenience here that, like, what if I haven't bought the Mac Studio, but I still need to do that big beefy compilation? And maybe it's not on a project I work on every single day; maybe it's the one that I'm just trying to help out with or just starting to contribute to. And so, I think that we need to get better about, and something that we're very focused on at JIT-pod, is—Gitpod—is—Corey: [laugh]. I'm going to get you in trouble at this rate.Mike: —[laugh]—is really to optimize that underlying runtime environment so that we can optimize the resources that you're using only when you're using it, but also provide a great user experience. Which is, for me, as someone who's responsible for the product at Gitpod, the thing I want to get to is that you never have to think about a machine. You're not thinking about this dev environment as something that lives somewhere, that you're paying for, that there's a meter spinning that if you forget it, that you're like, ah, it's going to cost me a lot of money, that I have to worry about ever losing it. And really, I just want to be able to get a new environment, have one, use it, come back to it when I need it, have it not cost me a lot of money, and be able to have five or ten of those at a time because I'm not as worried about what it's going to cost me. And I'm sure it'll cost something, but the convenience factor of being able to get one instantly and have it and not have to worry about it ultimately saves me a lot of time and aggravation and improves my ability to focus and get work done.And right now, we're still in this mode where we're still thinking about, is it on my laptop? Is it remote? Is it on this EC2 instance or that EC2 instance? Or is this thing started or stopped? 
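Mike's “tissue box” analogy can be sketched in miniature. This is a hypothetical illustration, not how Gitpod is implemented: a disposable, isolated Python environment created for one experiment and thrown away afterward, so nothing drifts or lingers (paths assume a POSIX system).

    import shutil, subprocess, sys, tempfile

    def throwaway_env(packages):
        # Pull a fresh "tissue" out of the box: an isolated environment
        # that exists only for the duration of one experiment.
        env_dir = tempfile.mkdtemp(prefix="scratch-env-")
        subprocess.run([sys.executable, "-m", "venv", env_dir], check=True)
        subprocess.run([f"{env_dir}/bin/pip", "install", *packages], check=True)
        return env_dir

    env = throwaway_env(["requests"])
    # ...experiment inside env...
    shutil.rmtree(env)  # throw the tissue away: no drift, no cleanup debt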
And I think we need to move beyond that and be able to just think of these things as development environments that I use and need, that are there when I want to work on them, and that I don't have to tend to like cattle.Corey: Speaking of tending large things in herds—I guess that's sort of the most tortured analogy slash segue I've come up with recently—you folks have a conference coming up soon in San Francisco. What's the deal with that? And I'll point out, it's all on-site, locally, not in the cloud. So, hmm…Mike: Yeah, so we have a local conference environment, a local conference that we're hosting in San Francisco called CDE Universe on June 1st and 2nd, and we are assembling all the thought leaders in the industry who want to get together and talk about where not just cloud development is going, but really where development is going. And so, there's us, and there's a lot of companies that have done this themselves. Like, before I joined Gitpod, I was at Slack for four years and I got to see the transition to, sort of, remote development hosted on EC2 instances and how that really empowered our team of hundreds of engineers to be able to contribute and, like, work together better, more efficiently, to run this giant app that you can't run just alone on your laptop. And so, Slack is going to be there; they're going to be talking about their transition to cloud development. The Uber team is going to be there, and there's going to be some other companies.So, Nathan, who's building Zed—he was the one that originally built Atom at GitHub and is now building Zed, which is a new IDE—is going to be there. And I can't mention all the speakers, but there's going to be a lot of people that are really looking at how do we drive forward development and development environments. And that experience can get a lot better. So, if you're interested in that, if you're going to be in San Francisco on June 1st and 2nd and want to talk to these people, learn from them, and help us drive this vision forward for just a better development experience, come hang out with us.Corey: I'm a big fan of collaborating with folks and figuring out what tricks and tips they've picked up along the way. And this is coming from the perspective of someone who acts as a solo developer in many cases. But it always drove me a little nuts when you see people spending weeks of their lives configuring their text editor—VIM in my case because I'm no better than these people; I am one of them—and getting it all set up and dialed in. It's, how much productivity are you gaining versus how much time are you spending getting there?And then when all was said and done a few years ago, I found myself switching to VS Code for most of what I do—because it's great—and suddenly the world's shifting on its axis again. At some point, you want to get away from focusing on productivity on an individualized basis. Now, the rules change when you're talking about large teams where everyone needs a copy of this running locally or in their dev environment, wherever that happens to be. And you're right, often the first two weeks of a new software engineering job are, you're now responsible for updating the onboarding docs because it's been ten minutes since the last time someone went through it. And oh, the versions bumped again of what we would have [unintelligible 00:16:44] brew install on a Mac and suddenly everything's broken. Yay.
I don't miss those days.Mike: Yeah, the new, like, ARM-based Macs came out and then you were—now all of a sudden, all your builds are broken. We hear that a lot.Corey: Oh, what I love now is that, in many cases, I'm still in a process of, okay, I'm developing locally on an ARM-based Mac and I'm deploying it to a Graviton2-based Lambda or instance, but the CI/CD builder is going to run on Intel, so it's one of those, what is going on here? Like, there's a toolchain lag around embracing ARM as an architecture. That's mostly been taken care of as things have evolved, but it's gotten pretty amusing at some points, just how quickly that baseline architecture has shifted for some workloads. And for some companies.Mike: Yeah, and things just seem to be getting more [laugh] and more complicated, not less complicated, and so I think the more that we can—Corey: Oh, you noticed?Mike: Try to simplify build abstractions [laugh], you know, the better. But I think in those cases—I think it's actually good for people to struggle with setting up their environment sometimes, with caring about the tools that they use and their experience developing. I think there has to be some ROI with that. If it's like a chronic thing that you have to continue to try to fix and make better, it's one thing, but if you spend a whole day improving the tools that you use to make you a better developer later, I think there's a ton of value in that. I think we should care a lot about the tools we use.However, that's not something we want to do every day. I mean, ultimately, I know I don't build software for the sake of building software. I want to create something. I want to create some value, some change in the world. There's some product ultimately that I'm trying to build.And, you know, early on, I did a lot of work in my career on, like, workflow-type builders and visual builders, and I had this incorrect assumption somewhere along the way—and this came around, like, sort of the maker movement, when everybody was talking about how everybody should learn how to code—and I made this assumption that everybody really wants to create; everybody wants to be a creator, and if given the opportunity, they will. And I think what I finally learned is that, actually, most people don't like to create. A lot of people just want to be served; like, they just want to consume and they don't want the hassle of it. Some people do, if they have the opportunity and the skillsets, too, but it's also similar to, like, if I'm a professional developer, I need to get my work done. I'm not measured on how well my local tooling is set up; I'm sort of measured on my output and the impact that I have in the organization.I tend to think about, like, chefs. If I'm a chef and I work 60 hours in a restaurant, 70 hours in a restaurant, the last thing I want to do is come home and cook myself a meal. And most of the chefs I know actually don't have really nice kitchens at home. They, like, tend to—they want other people to cook for them. And so, I think, like, there's a place in a professional setting where you just need to get the work done and you don't want to worry about all the meta things and the time that you could waste on it.And so, I feel like there's a happy medium there. I think it's good for people to care about the tools that they use, the environment that they develop in, to really care for that and to curate it and make it better, but there's got to be some ROI and it's got to have value to you. You have to enjoy that.
Otherwise, you know, what's the point of it in the first place?Corey: One thing that I used to think about was that if you're working in regulated industries, as I tended to a fair bit, there's something very nice about not having any of the data or IP or anything like that locally. Your laptop effectively just becomes a thin client to something that's already controlled by the existing security and compliance apparatus. That's very nice, where suddenly it's all, someone steals my iPad, or I drop it into the bay: it's locked, it's encrypted. Cool, I go to the store, get myself a new one, restore a backup from iCloud, and I'm up and running again in a very short period of time as if nothing had ever changed. Whereas when I was doing a lot of local development and had bad hard drive issues in the earlier part of my career, well, there goes that month.Mike: Yeah, it's a really good point. I think that we're all walking around with these laptops with really sensitive IP on them, and those are in bars and restaurants. And maybe your drives are encrypted, but there are a lot of additional risks, including, you know, everything that is going over the network when I'm in a local coffee shop, and, you know, the latest vulnerability if I'm behind on an update I have to do on my Mac. There's actually a lot of risk in having all that just sort of thrown to the wind and spread across the world, and there's a lot of value in having that in a very safe place. And what we've even found at Gitpod now—like, the latest product we're working on is one that we call Gitpod Dedicated, which gives you the ability to run inside your own cloud perimeter. And we're doing that on AWS first, and so we can set up and manage an installation of Gitpod inside your own AWS account.And the reason that became important to us is that a lot of companies, a lot of our customers, treat their source code as their most sensitive intellectual property. And they won't allow it to leave their perimeter; like, they may run in AWS, but they have this concept of, sort of like, our perimeter, and you're either inside of that or outside of it. And I think this speaks a little bit to a blog post that you wrote a few months ago about the lagging adoption of remote development environments. I think one of those aspects is, sort of, convenience and the user experience, but the other is that you can't use them very well with your stack and all the tools and resources that you need to use if they're not running, sort of, close within your perimeter. And so, you know, we're finding that companies have this need to be able to have greater control, and now with the, sort of, trends around, like, coding assistants and generative AI, it's even the perfect storm: not only am I, like, sending my source code from my editor out into some [LM 00:22:36], but I also have the risk of an LLM that might be compromised, that's injecting code that I'm committing on my behalf, that may be introducing vulnerabilities.
And so, I think, like, getting that off to a secure space that is consistent and sound and can be monitored, to be kept up-to-date, I think it has the ability to, sort of, greatly increase a customer's security posture.Corey: While we're here kicking the beehive, for lack of a better term, your support for multiple editors in Gitpod the product: I assumed that most people would go with VS Code because I tend to see it everywhere, and I couldn't help but notice that neither VI nor Emacs is one of the options, the last time I checked. What are you seeing as far as popularity contests go? And that might be a dangerous question because I'm not suggesting you alienate many of the other vendors who are available, but in the world I live in, it's pretty clear where the zeitgeist of my subculture is going.Mike: Yeah, I mean, VS Code is definitely the most popular IDE. The majority of people that use Gitpod—and especially we have a, like, a pretty heavy free usage tier—use it in the browser, just for the convenience of having that in the browser and having many environments in the browser. We tend to find more professional developers use VS Code desktop or the JetBrains suite of IDEs.Corey: Yeah, JetBrains I'm seeing a fair bit of in a bunch of different ways and I think that's actually most of what your other options are. I feel like people have either gone down the JetBrains path or they haven't, and it seems like it's very—people who are into it are really into it and people who are not just never touch it.Mike: Yeah, and we want to provide the options for people to use the tools that they want to use and feel comfortable on. And we also want to provide a platform for the next generation of IDEs to be able to build on, and to be able to support this concept of cloud or remote development more natively. So, like I mentioned, Nathan Sobo at Zed—I met up with him last week; I'm in Denver, he's in Boulder—and we were talking about this, and he's interested in Zed working in the browser, and he's talked about this publicly. And for us, it's really interesting because, like, IDEs working in the browser are, like, a really great convenience. It's not the perfect way to work, necessarily, in all circumstances.There are some challenges with, like, all this tab sprawl and stuff, but it gives us the opportunity, if we can make Zed work really well for Gitpod—or anybody else building an IDE—for that to work in the browser. Ultimately, what we want is that if you want to use a terminal, we want to create a great experience for you for that. And so, we're working on this ability in Gitpod to be able to effectively, like, bring your own IDE, if you're building on that, and to be able to offer it and distribute it on Gitpod—to be able to create a new developer tool and make it so that anybody in their Gitpod workspace can launch that as part of their workspace, part of their tool. And we want to see developer tools and IDEs flourish on top of this platform that is cloud development because we want to give people choice. Like, at Gitpod, we're not building our own IDE anymore.The team started to. They created Theia, which was one of the original, sort of, cloud web-based IDEs, and that has now been handed over to the Eclipse Foundation. But we moved to VS Code because we found that that's where the ecosystem was. That's where our users were, and our customers, and what they wanted to use.
But we want to expand beyond that and give people the ability to choose, not only the options that are available today but the options that should be available in the future. And we think that choice is really important.Corey: When you see people kicking the tires on Gitpod for the first time, where does the bulk of their hesitancy come from? Like, what is it where—people, in my experience, don't love to embrace change. So, it's always this thing, “This thing sucks,” is sort of the default response to anything that requires them to change their philosophy on something. So okay, great. That is a thing that happens. We'll see what people say or do. But are they basing it on anything beyond just familiarity and comfort with the old way of doing things, or are there certain areas that you're finding the new customers are having a hard time wrapping their heads around?Mike: There's a couple of things. I think one thing is just habit. People have habits and preferences, which are really valuable because it's the way that they've learned to be successful in their careers and the way that they expect things. Sometimes people have these preferences that are fairly well ingrained that maybe are irrational or rational. And so, one thing is just people's force of habit.And then getting used to this idea that if it's not on my laptop, it means—like what you mentioned before, it's always what-ifs of, like, “What if I'm on a plane?” Or like, “What if I'm at the airport in a hurricane?” “What if I'm on a train with a spotty internet connection?” And so, there's all these sort of what-if situations. And once people get past that and they start actually using Gitpod and trying to set their projects up, the other limiting factor we have is just connectivity.And that's, like, connectivity to the other resources that you use to develop. So, whether that's, you know, package or module repositories, or some internal services, or a database that might be running behind a firewall, it's like getting connectivity to those things. And that's where the dedicated deployment model that I talked about—running inside of your perimeter, on a network they have control over—kind of helps, and that's why we're trying to overcome that. Or if you're using our SaaS product, using something like Tailscale or a more modern VPN that way. But those are the two main things.It's like familiarity, this comfort with how to work, sort of, in this new world, and not having this level of comfort of, like, it's running on this thing I can hold, as well as connectivity. And then there is some cost associated with people now paying for this infrastructure they didn't have to pay for before. And I think it's a, you know, it's a mistake to say that we're going to offset the cost of laptops. Like, that shouldn't be how you justify a cloud development environment. Like—Corey: Yeah, I feel like people are not requesting under-specced laptops much these days anymore.Mike: It's just like, I want to use a good laptop; I want to use a really nice laptop with good hardware, and that shouldn't be the cost. The proposition shouldn't be, like, “Save a thousand dollars on every developer's laptop by moving this off to the cloud.” It's really the time savings. It's the focus. It's the, you know, removing all of that drift and creating these consistent environments that are more secure, and effectively, like, automating your development environment so that it's the same for everybody.But that's the—I think habits are the big thing.
And there is, you know—I talked about this a little bit—that element of, like, we still have this concept of, like, I have this environment and I start it and it's there, and I pay for it while it's there and I have to clean it up or I have to make sure it's stopped. I think that still exists and it creates a lot of, sort of, cognitive overhead of things that I have to manage that I didn't have to manage before. And I think that we have to—Gitpod needs to be better there, and so does everybody else in the industry—about removing that completely. One of the things that I really love that I learned from, like, Stewart Butterfield when I was at Slack was this concept he always brought up called the convenience threshold.And it was just the idea that when a certain threshold of convenience is met, people's behavior suddenly changes. And as we thought about products and, like, the availability of features, it really drove how we thought about, you know, adoption, or, like, what is the threshold, what would it take? And, like, a good example of this is even, like, the way we just use credit cards now or debit cards to pay for things all the time, where we used to carry cash. And in the beginning, when it was kind of novel that you could use a credit card to pay for things, like even pay for gas, you always had to have cash because you didn't know if it'd be accepted. And so, you still had to have cash, you still had to have it on hand, you still had to get it from the ATM, you still had to worry about, like, what if I get there and they don't accept my cards, and how much money is it going to be, so I need to make sure I have enough of it.But the convenience of having this card where I don't have to carry cash is I don't have to worry about that anymore, as long as I have money in my bank account. And it wasn't until those cards were accepted more broadly that I could actually rely on having that card and not having the cash. It's similar when it comes to cloud development environments. It needs to be more convenient than my local development environment. It's kind of like early on—I remember when laptops became more common, I was used to developing on a desktop, and people were like, nobody's ever going to develop on a laptop, it's not powerful enough, the battery runs out, and when you close the lid and open the lid, it used to take, like, five minutes before, like, it would resume and unhibernate and stuff, and it was amazing when you could just close it and open it and get back to where you were.But like, that was the case where, like, laptops weren't as convenient as desktops, which were always plugged in, powered on; you could leave them and effectively just come back and sit down and pick up where you left off. And so, I think that this is another moment where we need to make these cloud development environments more convenient to be able to use and ultimately better. And part of that convenience is to make it so that you don't have to think about all these parts of them: whether they're running, not running, how much they cost, whether you're going to be there [unintelligible 00:31:35] or lose their data.
Like, that should be the value of it: that I don't have to think about any of that stuff.Corey: So, my last question for you is, when you take a look at people who have migrated to using Gitpod, specifically from the corporate perspective, what are their realizations after the fact—I mean, assuming they still take your phone calls because that's sort of feedback of a different sort—but what have they realized has worked well? What keeps them happy and coming back and taking your calls?Mike: Yeah, our customers can focus on their business instead of focusing on all the issues that they have with configuring development environments and everything that could go wrong. And so, a good example of this is a customer we have, Quizlet: Quizlet saw a 45-point increase in developer satisfaction and a 60% reduction in incidents, and the time that it takes to onboard new engineers went down to ten minutes. So, we have some customers that we talk to that come to us and say, “It takes us 20 days to onboard an engineer because of all the access they need and everything you need to set up and credentials and things, and now we could boil that down to a button click.” And that's the thing that we tend to hear from people: that, like, they just don't have to worry about this anymore and they tend to be able to focus on their business and what the developers are actually trying to do, which is build their product.And in Quizlet's example, it was really cool to see them mentioned in one of the recent OpenAI announcements around GPT-4 and plugins—they were one of the early customers that built ChatGPT plugins—and they mentioned that they were sharing a lot of Gitpod URLs around when we reached out to congratulate them. And the thing that was great about that, for us, is, like, they were talking about their business and what they were developing and how they were being successful. And we'd rather see Gitpod in your development environment just sort of disappear into the background. We'd actually like to not hear from customers because it's just working so well for them. So, that's what we found: customers are just able to get to this point where they can just focus on their business and focus on what they're trying to develop and focus on making their customers successful, and not have to worry about infrastructure for development.
And I hope to see you there at our conference. You should come. Consider this an invite for June 1st and 2nd in San Francisco at CDE Universe.Corey: Of course. And we will put links to this in the [show notes 00:34:53]. Thank you so much for being so generous with your time. I appreciate it.Mike: Thanks, Corey. That was really fun.Corey: Mike Brevoort, Chief Product Officer at Gitpod. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment detailing exactly why cloud development is not the future, but then lose your content halfway through because your hard drive crashed.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Screaming in the Cloud
Simplifying Cloud Migration Strategy at Tidal with David Colebatch

Screaming in the Cloud

Play Episode Listen Later May 16, 2023 32:39


David Colebatch, CEO at Tidal.cloud, joins Corey on Screaming in the Cloud to discuss how Tidal is demystifying cloud migration strategy. David and Corey discuss the pros and cons of a hybrid cloud migration strategy, and David reveals the approach that Tidal takes to ensure they're setting their customers up for success. David also discusses the human element to cloud migration initiatives, and how to overcome roadblocks when handling the people side of migrations. Corey and David also expand on all the capabilities cloud migration unlocks, and David explains how that translates to a distributed product team approach.About DavidDavid is the CEO & Founder of Tidal.  Tidal is empowering businesses to transform from traditional on-premises IT-run organizations to lean-agile-cloud powered machines.Links Referenced: Tidal.cloud: https://tidal.cloud Twitter: https://twitter.com/dcolebatch LinkedIn: https://www.linkedin.com/in/davidcolebatch/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey:  LANs of the late 90's and early 2000's were a magical place to learn about computers, hang out with your friends, and do cool stuff like share files, run websites & game servers, and occasionally bring the whole thing down with some ill-conceived software or network configuration. That's not how things are done anymore, but what if we could have a 90's style LAN experience along with the best parts of the 21st century internet? (Most of which are very hard to find these days.) Tailscale thinks we can, and I'm inclined to agree. With Tailscale I can use trusted identity providers like Google, or Okta, or GitHub to authenticate users, and automatically generate & rotate keys to authenticate devices I've added to my network. I can also share access to those devices with friends and teammates, or tag devices to give my team broader access. And that's the magic of it, your data is protected by the simple yet powerful social dynamics of small groups that you trust.Try now - it's free forever for personal use. I've been using it for almost two years personally, and am moderately annoyed that they haven't attempted to charge me for what's become an essential-to-my-workflow service.Corey: Have you listened to the new season of Traceroute yet? Traceroute is a tech podcast that peels back the layers of the stack to tell the real, human stories about how the inner workings of our digital world affect our lives in ways you may have never thought of before. Listen and follow Traceroute on your favorite platform, or learn more about Traceroute at origins.dev. My thanks to them for sponsoring this ridiculous podcast. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while at The Duckbill Group, I like to branch out and try something a little bit different before getting smashed vocally, right back into the box I find myself in for a variety of excellent reasons. One of these areas has been for a while, the idea of working with migrations on getting folks into cloud. There's a lot of cost impact to it, but there's also a lot of things that I generally consider to be unpleasant nonsense with which to deal. 
My guest today sort of takes a different philosophy to this. David Colebatch is the CEO and founder of Tidal.cloud. David, thank you for joining me.David: Oh, thanks for having me, Corey.Corey: Now, cloud migrations tend to be something that is, I want to say contentious, and for good reason. You have all the cloud providers who are ranting that cloud is the way and the light, as if they've just found religion, and yeah, the fact that it basically turns into a money-printing machine for them has nothing to do with their newfound advocacy for this approach. Now, I do understand that we all have positions that we come from that shape our perspective. You do run and did found a cloud migration company. What's your take on it? Is this as big as the cloud providers say it is, is it overhyped, or is it underhyped?David: I think it's probably in the middle of this stage of the hype cycle. But the reason that that Tidal exists and why I founded it was that many customers were approaching cloud just for cloud's sake, you know, and they were looking at cloud as a place to park VMs. And our philosophy as software engineers at Tidal is that customers were missing out on all the new capabilities that cloud provided, you know, cloud is a new paradigm in compute. And so, our take on it is the customer should not look at cloud as a place to migrate to, but rather as a place to transform to and embrace all the new capabilities that are on offer.Corey: I've been saying for a while that if you sit there and run a total cost analysis for going down the path of a cloud migration, you will not save money in the short term, call it five years or whatnot. So, if you're migrating to the cloud specifically to save money, in the common case, it should be for a capability story, not because it's going to save you money off of what you're currently doing in the data center. Agree, disagree, or it's complicated?David: It's complicated, but you're right in one case: you need to work backwards from the outcomes, I think that much is pretty simple and clear, but many teams overlook that. And again, when you look at cloud for the sake of cloud, you generally do overlook that. But when we work with customers and they log into to our platform, what we find is that they're often articulating their intent as I want to improve business agility, I want to improve staff productivity, and it's less about just moving workloads to the cloud. Anyone can run a VM somewhere. And so, I think, when we work backwards from what the customer is trying to achieve and we look at TCO holistically, not just about how much a computer costs to run and operate in a colo facility, look at it holistically from a staff productivity perspective as well, then the business case for cloud becomes very profound.Corey: I've been saying for a while that I can make a good-faith Total Cost of Ownership analysis—or TCO analysis—in either direction, so tell me what outcome you want and I can come up with a very good-faith effort answer that gives you what you want. I don't think I've seen too many TCO analyses, especially around cloud migrations, that were not justification exercises. They were very rarely open questions. It was, we've decided what we want to do. Now, let's build a business case to do that thing. Agree, disagree?David: [laugh]. Agree. I've seen that. 
Yeah. We, again, like to understand the true picture of total cost of ownership on-premises first, and many customers—depending on who you're engaging with, but often on the IT side—might actually shield a few of those costs, or they might just not know them. And I'm talking about things like the facilities, insurance costs, utility bills, and things like that, that might not bubble up.We need to get all those cards on the table in order to conduct a full TCO analysis. And then on the cloud side, we need to look at multiple scenarios per workload. So, we want to understand that lift-and-shift base case that many people come from, but also that transformative migration case which says, I might be running in a server-ful architecture today on-premises, but based on the source code and database analysis that we've done, we can see an easy lift to things like Lambda and serverless frameworks on the cloud. And so, when you take that transformative approach, you may spend some time upfront doing that transformation, or if it's a tight fit, it might be really easy; it might actually be faster than reverse-engineering firewall rules and doing a lift-and-shift. And in that case, you can save up to 97% in annual OPEX, which is a huge savings, of course.Corey: You said the magic words, lift-and-shift, which means all right, the gloves come off. Let's have this conversation.David: Oh yeah.Corey: I work on AWS bills for a living. Cloud cost and architecture are fundamentally the same thing, and when I start looking at a company's monthly bill, I can start to see the architectural patterns emerge with no further information than what's shown in the exploded bill view, at least at a high level. It starts to be indicative of different things. And you can generally tell, on some level, when companies have come from a data center environment or at least a data center mentality, in what they've built. And I've talked to a number of companies where they have effectively completely lifted their data center into the cloud and the only real change that they have gotten in terms of value for it has been that machines are going down a lot less because the hard drive failed and they were really bad at replacing hard drives.Now, for companies in that position who have that challenge, yeah, the value is there and it's apparent because I promise, whoever you are, the cloud providers are better at replacing failed hard drives than you are, full stop. And if that's the value proposition you want, great, but it also feels like that is just scratching the surface of what the benefit of cloud providers can be.David: Absolutely. I mean, we look at cloud as a way to unlock new ways of working, and it's totally aligned with the new distributed product team approach that many enterprises are pursuing. You know, the rise of Agile and DevOps has sort of facilitated this movement away from single choke points of IT service delivery, like we used to have with ITIL, into much more modern ways of working. And so, I imagine when you're looking at those cloud bills, you might see a whole host of workloads centered in one or two accounts, like they've just replicated a data center into one or two accounts and lifted-and-shifted a bunch of EC2 into it. And yeah, that is not the most ideal architectural pattern to follow in the cloud.
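To make the per-workload scenario comparison David describes concrete, here is a toy calculation; the 97% figure above is David's claim, while every input below is made up purely for illustration.

    def annual_opex(monthly_infra, monthly_ops_hours, hourly_rate=95.0):
        # Annual operating cost: infrastructure plus the people time spent
        # patching, scaling, and babysitting the workload.
        return 12 * (monthly_infra + monthly_ops_hours * hourly_rate)

    lift_and_shift = annual_opex(monthly_infra=2400.0, monthly_ops_hours=20)
    serverless_refactor = annual_opex(monthly_infra=75.0, monthly_ops_hours=2)
    print(f"illustrative savings: {1 - serverless_refactor / lift_and_shift:.0%}")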
If you're working backwards from, “I want to improve staff productivity; I want to improve business agility,” you need to do things like limit your blast radius and have a multi-account strategy that supports that.Corey: We've seen this as well in born-in-the-cloud companies, too, because for a long time, that was AWS's guidance: put everything in a single AWS account. The end. And then just, you know, get good with IAM issues. Like, “Well okay, I found that developer environments impacted production.” Then, “Sounds like a skill issue.”Great, but then you also have things that cannot be allocated, like service quotas. When you have something in development run amok and exhaust service quotas for the number of EC2 get-instance-info requests, suddenly, load balancers don't anymore and auto-scaling is kind of aspirational when everything explodes on you. It's the right path, but very often, people got there through following the best advice that AWS offers. I am in the middle of a migration myself, from the quote-unquote, “legacy” AWS account I built a bunch of stuff in back in 2016 into its own dedicated account, and honestly, it's about as challenging as some data center moves that I've done historically.David: Oh, absolutely. I mean, the cobwebs build up over time and you have a lot of dependencies on services you completely forget about.Corey: “How do I move this S3 bucket to another account?” “That's the neat part. You don't.”David: [laugh]. We shouldn't just limit that to AWS. I mean, the other cloud providers have similar issues to deal with through their older cloud adoption frameworks, which are now playing out. And some of those guidance points were due to technology limitations in the underlying platform, too, and so, you know, at the time, that was the best way to go to cloud. But as customers have demanded more agility and more control over their blast radiuses, and enabling self-service teams, this has forced everyone to sort of come along and embrace this multi-account strategy. Where the challenge is, with a lot of our enterprise clients, and especially in the public—Corey: Embrace it or you'll be made to embrace it.David: Yeah [laugh]. We see, with both our enterprise accounts that were early adopters—they certainly have that issue with too much concentration on one or two accounts—but public sector accounts as well, which we're seeing a lot of momentum in: they come from a place where they're heavily regulated and follow heavy architectural standards which dictate some of these things. And so, in order for those clients to be successful in the cloud, they have to have real leadership and real champions that are able to, sort of, forge through some of those issues and break outside of the mold in order to demonstrate success.Corey: On some level, when I see a lift that failed to shift, it's an intentional choice in some cases where the company has decided to improve their data center environment at the cost of their cloud environment. And it feels, on some level, like it's a transitional step, but then the question that I always have is, was this the grand plan? So, I guess my question for you is, when you see a company that has some workloads in a data center and some living in the cloud provider, in what most people call hybrid, is that outcome intentional or is it accidental, where midway through, they realize that some workloads are super hard to migrate?
They have a mainframe and there is no AWS/400 available for their use, so they're going to give up halfway, declare victory, and yep, we're hybrid now. How did they get there?David: I think it's intentional, quite often—they see hybrid cloud as a stepping stone to going full cloud. And this just comes down to project scoping and governance, too. So, many leaders will draw a ring around the workloads that are easy to migrate, and they'll claim success at the end of that and move on to another job quite often. But the visionary leaders will actually chart a course that has a hundred percent adoption, full data center closure, off the mainframe, off AS/400—you know, refactored, usually—but they'll chart that course at a rate of change that the organization can accept. Because, you know, cloud being a new paradigm, cloud requiring new ways of working, they can't just ram that kind of change through in their enterprise in one or two years; they really need to make sure that it's being absorbed and adopted and embraced by the teams and not alienating the whole company as they go through. And so, I do see it as intentional, but that stepping stone that many companies take is also an okay thing in my mind.Corey: And to be clear, I should bound what I'm saying: I'm talking about this from a platonic ideal perspective. I am not suggesting that, “Oh, this thing that you built at your company is crappy,” I mean, any more so than anything else is. I've never yet seen any infrastructure that the people running it would step back and say, “This is amazing and perfect.” Everyone thinks it's a burning dumpster fire of sadness and regret, and I'm not entirely sure that they're wrong.I mean, designing an architecture—cloud or otherwise—on a whiteboard is relatively straightforward, for a junior employee, even. The problem is most people don't get to start from scratch and build that thing. There's existing stuff that needs to be migrated in, and most of us don't get the luxury of taking two years of downtime for that service while we wind up rebuilding it from scratch. So, it's one of those how-do-you-rebuild-a-car-without-taking-it-off-the-highway type of questions.David: Well, you want to have a phased migration approach, quite often. Your business can't stop and start because you're doing a migration, so you want to build momentum with the early adopters that are easy to migrate and don't require big interruptions to business. And then for those mission-critical workloads that do need to migrate—and you mentioned mainframe and AS/400 before—they might be areas where you introduce, like, a strangler fig pattern: you know, draw a ring around it, start replicating some services into cloud, and then phase that migration over a year or two, depending on your timeline and scale. And so, we're very pragmatic in this business—we want to make sure we're doing everything for the right reasons, for the business-led reasons, and fitting in migrations around business objectives and strategies is super critical to success.Corey: What I'm curious about is when we talk about migrations—in fact, when I invited you on the show, it was like, well, Tidal migrations—one thing I love about calling it that for the domain, in some cases, as well as other things, is, “Huh, says right on the tin what it is. Awesome.” But it's migrations, which I assumed to be, you know, from data centers into cloud. That's great.
But then you've got the question of, is that what your work looks like? Is it migrations in the other direction? Is cloud repatriation a thing that people are doing, and no one ever bothered to demonstrate that to me? Is it cloud to cloud? What are you migrating from and to?David: Well, that's great. And we actually dropped migrations from the name.Corey: Oh, my apologies. Events, once again, outpace me.David: Tidal.cloud is our URL, and essentially, Corey, the business of migration is something that's only becoming increasingly frequent. Customers are not just migrating from on-premises data centers to cloud; they're also migrating in between their cloud accounts like you are, but also from one cloud provider to another. And our business hypothesis here at Tidal is that that innovation cycle is continuing to shrink, and so whereas when I was in the data center automation business, we used to have a 10- to 15-year investment cycle, now customers have embraced continuous delivery of their applications, and so there's this huge shift of investment horizons, bringing it down to almost an annual event for many of the applications that we touch.Corey: You are in fact correct. Tidal.cloud does have a banner at the top that says, “Tidal Migrations is now Tidal.” Yep, you're correct, not that I'm here to, like, incorrect you on the name of your own company, for God's sake. That's a new level of mansplaining I dare not delve into.But it does say, “Migration made modern,” right at the top, which is great because there's a sense that I've always had that lift-and-shift is poo-pooed as a bad approach to migrating, but I've done it other ways and it becomes disastrous. I've always liked the approach of take something in a data center, migrate it into cloud, in the process changing as few things as possible, and then just get it stable and working there, and step two becomes the transformation. Because if you try and transform while it moves, yeah, that gets you a little closer to the outcome in theory, but when things don't work right—and they're computers; let's not kid ourselves, nothing works right—it's a question now of, was it my changes? Is it the cloud environment? Is there an unknown dependency that assumes things in the data center that are not true in cloud? It becomes very hard to track down the why of these things.David: There's no one-size-fits-all for migration. It's why we have the seven-hour assessment capabilities. You know, if one application, like you've just talked about, that one application might be better to lift and shift than modernize, there might be real business reasons for doing that. But what we've seen over the years is the customers generally have one migration budget. Now, IT gets one migration budget and they get to the end of a job in a lift-and-shift scenario, and the business says, “Well, what changed? Nothing, my apps still run the same, I don't notice any new capabilities.” And IT then says, “Yeah, yeah. Now, we need the modernization budget to finish.” And they said, “No, no, no. We've just given you a bunch of money. You're not getting any more.”And so, that's where, quite often, the migration as a lift-and-shift kind of stalls and you see an exodus of talent out of those organizations—people leave to go on to the next migration project elsewhere—and that organization really didn't embrace any of the cloud-native changes that were required.
We'd like to really say that—and you saw this on our header—that migrations made modern, we'd like to dispel the myth that you can either migrate or modernize. It's really not an either/or. There's a full spectrum of R methods—rehost, replatform, refactor—in the middle there. And when we work backwards from customers, we want to understand their core objectives for going to cloud, their intent, their “Why cloud?” We want to understand how it aligns on the cloud value framework: so business agility gains, staff productivity gains, total cost of ownership is important, of course. And then for each of their application workloads, choose the right 6R based on those business outcomes. And it can seem like a complicated or comprehensive problem, but if you automate it like we do, you can get very consistent results very quickly. And that's really the accelerant that we give customers to accelerate their migration to cloud.
Corey: One thing that I've noticed—and maybe this makes me cynical—but when I see companies doing lift-and-shift, often they will neglect to do the shift portion of it. Because there's a compelling reason to do a migration to get out of a data center and into a cloud, and often that is a data center contract expiry coming up. But companies are very rarely going to invest the time, energy, and money—which all become the same thing, effectively, at company scale—in refactoring existing applications if they're not already broken. I see that all the time in my work; I don't very often make recommendations to folks of the form, “Oh, just migrate this entire application to serverless and you'll save 80% or more on it.” And it's, “That's great, but that's 18 months' worth of work and it doesn't actually get us closer to our business milestones, so yeah, we're not going to do that.” Cost directly is very rarely a compelling reason to make a migration, but when you're rebuilding something for business purposes, factoring cost concerns into it seems to be a much better way to gain adoption and traction of those ideals.
David: Yeah, yeah. Counterpoint on that: when we look at a portfolio of applications, like, hundreds or thousands of applications in an enterprise, and we do this type of analysis on them with the customers, what we've learned is that they may refactor and replatform ten, 20% of their workloads, they may rehost 40%, and they'll often turn off the rest: retire them, not migrate them. And many of our enterprise customers that we've spoken to have gone through rationalizations as they've gone to cloud and saved, you know, 59%, just turned off that 59% of an infrastructure, and the apps that they do end up refactoring and modernizing are the ones where either there's a very easy path for them (like, the code is super compatible and written in a way that's fitting with Lambda, and so they've done that) or they've got, like you said, business needs coming up. So, the business is already investigating making some changes to the application; they already want to embrace CI/CD pipelines where they haven't today. And for those applications, what we see teams doing is actually building new in the cloud and then managing that as an application migration, like, cutting over that. But in the scheme of an entire portfolio of hundreds or thousands of applications, that might be 5, 10, 20% of the portfolio. It won't be all of them.
And that's what we say: there's a full spectrum of migration methods, and we want to make sure we apply the right ones to each workload.
Corey: Yeah, I want to be clear that there are different personas. I find that most of my customers tend to fall into two buckets. The first is that you have the born-in-the-cloud SaaS companies, and that's the world I come from, where you have basically one workload that's 80% of your application spend, your revenue, et cetera. Like, they are not a customer, but take Datadog as an example. Like, the Datadog monitoring application suite would be a good example of this, and then you have a bunch of longtail stuff. Conversely, you've got a large enterprise that might be spending $100 million or so every year, but their largest single application is a couple million bucks because it just has thousands upon thousands of them. And at that point, it becomes much more of a central IT planning problem. In one of those use cases, spending significant effort refactoring and rebuilding things, from an optimization perspective, can pay dividends. In other cases, it tends not to work in quite the same way, just because the economies of scale aren't there. Do you find that most of your customers fall into one of those two buckets? Do you take a different view of the world? How do you see the market?
David: Same view, we do. Enterprise customers are generally the areas that we find the most fit with, versus the ISVs, you know, that have one or two primary applications. Born in the cloud, they don't need to do portfolio assessments. And with the enterprise customers, the central IT bit used to be a blocker and impediment for cloud. We're increasingly seeing more interest from central IT, who is trying to lead their organization to cloud, which is great; that's a great sign. But in the past, it had been more of a business-led conversation, where one business unit within an enterprise wants to branch away from central IT, and so they take it upon themselves to do an application assessment, they take it upon themselves to get their own cloud accounts: you know, a shadow IT move, in a way. And that had a lot of success, because the business would always tie it back to business outcomes that they were trying to achieve. Now, with central IT doing mass migration, mass portfolio assessment, this does require them to engage deeply with the business areas, and sometimes we're seeing that happening for the very first time. There's no longer IT at the end of a chain, but rather it's a joint partnership as they go to cloud, which is really cool to see.
Corey: When I go to Tidal.cloud, you have a gif—yes, that's how it's pronounced, I'm not going to take debates on that matter—but you have a gif at the top of your site showing a command-line tool that runs an analyze command on an application. What are you looking at to establish an application or workload's suitability for migration? Because I have opinions on this, but you have, you know, a business around this, and I'm not going to assume that my strongly-held opinions informed by several weeks of work are going to trump, you know, the thing that your entire company is built around.
David: Thanks, Corey. Yeah, you're looking at our command-line utilities there. It's an accompanying part of our product suite. We have a web application, and the command-line utilities are what customers use behind their firewall to analyze their applications.
The data points that we look at are infrastructure, as you can imagine: you might plug into VMware and discover VMs that are running; we'll look for non-x86 workloads on the network. So, infrastructure is sort of bread and butter; everyone does that. Where Tidal differentiates is going up the stack: analyzing source code, analyzing database technologies, and looking at the schema usage within your on-premises database, for example—which features and functionality you're using, and then how that fits to more cloud-native database offerings. And then we'll look at the technology age as well. And when you combine all of those technology factors together, we sort of form a view of what the migration difficulty to cloud will be on various migration outcomes, be it rehost, replatform, or refactor. The other thing that we add there is on the business side and the business intent. So, we want to understand from leadership what their intent is with cloud, and there are some levers they pull in the Tidal platform there. But then we also want to understand from each application owner how they think about their applications, what the value of those applications is to them, and what their forward-looking plans are. We capture all these things in our tool, we then run it through our recommendation engine, and that's how we come up with a bespoke migration plan per client.
Corey: One of the challenges I have in the cost arena with a lot of these tools is the, “oh, we're going to look at your infrastructure-as-code situation and see what that's going to cost you for a given change.” It's like, sure, that's not hard from a baseline of I want to spin up ten more EC2 instances. Yes, that is the tricky part of cloud economics known as basic arithmetic. The problem where I see it is that okay, and then they're going to run Kubernetes, which has no sense of zone affinity, so it's going to wind up putting nondeterministic amounts of traffic across an AZ boundary, and that's going to spike data transfer in some use cases, but none of these tools have any conception as to what those workloads look like. Now, that's purely a cost perspective, but it has architectural implications. Do you factor things like that in when you move up the stack?
David: Absolutely. And really understanding, on a Tidal inventory basis, what the intent is of each of those workloads really does help you, from a cloud economics basis, to work out how much is reasonable in terms of cloud costs. So, for example, in Tidal, if you're doing app assessment, you're capturing any revenue to the business that it generates, any staff productivity that it creates. And so, you've got the income side of that application workload. When you map that to on-premises costs and then later to cloud costs, your FinOps job becomes a lot easier, because now you have the business context of those workloads too.
Corey: So, one of the things that I have found is that you can judge the actual success of a project by how many people who work at the company claimed credit for it on LinkedIn, whereas conversely, when things don't work out super well, it's sort of a crickets moment. I'm curious as to your perspective on whether there is such a thing as a migration failure, or is it simply a, “Oh, we're going to iterate on this in a new direction.
We've replaced a failing part, which turned out, from our perspective, to be our CIO, but we have a new one who's going to move us into cloud in the proper time and space.” We go through more of those things than some people do underwear. My God. But is there such a thing as a failed cloud migration?
David: There absolutely is. And I get your point that success has many fathers. You know, when clients have brought us in for that success party at the end, you don't recognize everybody there. But you know, failure can be, you know, you've missed on time, scope, or budget, and by those measures, I think 76% of IT projects were failing in 2018, when we ran those numbers. So absolutely, by those metrics, there are failed cloud migrations. What tends to happen is people claim success on the workloads that did migrate. They may then kick the organizational change bit out into a new project scope. So, we've had many customers who viewed the cloud migration as a lift-and-shift exercise and failed to execute on the organizational change, and then months later realized, oh, that is important in order for my day-two operations to really hum, and so then have embarked on that under a separate initiative. So, there's certainly a lot of rescoping that goes on with these things. And what we like to make sure we're teaching people—and we do this for free—is those lessons learned and pitfalls with cloud early on, because we don't want to see all those headlines of failed projects out there; we want to make sure that customers are armed with: here are the things you should consider to execute on as you go to cloud.
Corey: Do you ever run an analysis on a workload when a customer is asking, “So, how should we go about migrating this?” And your answer is, “You should absolutely not”?
David: Well, all applications can go to cloud; it's just a matter of how much elbow grease you want to put into it. And so, the absolutely-not call comes from when that app doesn't provide any utility to the business, or maybe it has a useful life of six more months and the data center is going to be alive for seven. So, that's when those types of judgment calls come in. Other times we've seen, you know, there's already a replacement initiative underway by the business. IT wasn't aware of it, but through our process and methodology, they engaged with the business for the first time and learned about it. And so, that helps them avoid needing to migrate workloads, because the business is already moving to Salesforce, for example.
Corey: I imagine you're also relatively used to the sinking realization that customers often have when they're used to data center thinking and you ask them a question like, “How many gigabytes a month does your application server send back and forth to your database server?” And their response, very reasonably, is, “Why on earth would I know the answer to that quest—oh, God. You mean, that's how it bills?” It's the sense that everything is different in cloud, sometimes subtly, sometimes massively. But it's a different way of thinking. So, I guess my last real big question for you on this is: moving technology is relatively straightforward, but migrating people is very challenging.
How do you find that the people and the processes that have grown up in data center environments—people whose identities are inextricably linked to the technology they work on—handle being faced with the idea that it is now time to pick up and move these things into an environment where things that were incredibly valuable guardrails in a data center no longer serve you well?
David: Yeah. The people side of cloud migration is the more challenging part. It's actually one of the reasons we introduced a service offering around people change management. The general strategy is sort of the Kotter change process of creating that guiding coalition: the people who want to do something different. Get them outside of IT, reporting out to the executives directly, so they're unencumbered by the traditional processes. And once they start to demonstrate some success with a new way of working, a new paradigm, you kind of sell that back into the organization in order to drive that change. It's getting a lot easier to position those organizational change aspects with customers. There are enough horror stories out there of people that did not take that approach. And quite rightly; I mean, it's tough to imagine, as a customer: if I'm applying my legacy processes to cloud migration, why would I expect to get anything but a legacy result? You know, and most of the customers that we talk to that are going to cloud want a transformational outcome; they want more business agility and greater staff productivity, and so they need to recognize that that doesn't come without change to people and change to the organization. It doesn't mean you have to change the people out individually, but upskilling, changing the way we work, those types of things, are really important to invest in, and I'd say even more so than the technology aspects of any cloud migration.
Corey: David, I really want to thank you for taking the time to talk to me about something that is, I'd say, near and dear to my heart, except I'm trying desperately not to deal with it more than I absolutely have to. If people want to learn more, where's the best place for them to find you?
David: Sure. I mean, tidalcloud.com is our website. I'm also on Twitter @dcolebatch. I like to tweet there a little bit, increasingly these days. I'm not on Bluesky yet, though, so I won't see you there. And also on LinkedIn, of course.
Corey: And we will, of course, put links to that in the [show notes 00:29:57]. Thank you so much for your time. I really appreciate it.
David: Thanks, Corey. Great to be here.
Corey: David Colebatch, CEO and founder of Tidal.cloud. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that you will then struggle to migrate to a different podcast platform of your choice.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Traceroute
The Kids Are Alright

Traceroute

Play Episode Listen Later May 11, 2023 31:34


How do we prepare our kids for jobs that don't exist? Studies show that technology is progressing at such a rapid pace that up to 85% of the jobs that will be available in 2040 have not been created yet. Will AI, ML, and hardware advancements create a society where careers we take for granted today won't exist in the future? In this episode featuring hosts Grace Ewura-Esi and Amy Tobey, Producer John Taylor puts a personal face on this idea through his 13-year-old daughter, Ella, who wants to be a chef when she grows up. Together, they explore this issue with Executive Chef-turned-Dell Computer Advocate Tim Banks, as well as employment attorney Michael Lotito, whose Emma Coalition seeks solutions to TIDE, the Technologically Induced Displacement of Employment. Between trips to fully-automated restaurants and the latest advancements in 3D food replication, we discover that Gen Z's humanity may be their biggest asset in tomorrow's job market.
Additional Resources
Connect with Amy Tobey: LinkedIn or Twitter
Connect with Grace Andrews: LinkedIn or Twitter
Connect with John Taylor: LinkedIn
Connect with Alexander Kolchinsky: LinkedIn
Connect with Michael Lotito: LinkedIn
Connect with Tim Banks: LinkedIn
Visit Origins.dev for more information
Enjoyed This Episode?
If you did, be sure to follow and share it with your friends! Post a review and share it! If you enjoyed tuning in, then please leave us a review. We'd also appreciate it if you would share the podcast with your friends and colleagues, as you get to know the people and technologies at the center of our digital world.
Traceroute is a podcast from Equinix and is a production of Stories Bureau. This episode was produced by John Taylor with help from Tim Balint and Cat Bagsic. It was edited by Joshua Ramsey and mixed by Jeremy Tuttle, with additional editing and sound design by Mathr de Leon. Our theme song was composed by Ty Gibbons.

Screaming in the Cloud
Cutting Costs in Cloud with Everett Berry

Screaming in the Cloud

Play Episode Listen Later May 9, 2023 31:59


Everett Berry, Growth and Open Source at Vantage, joins Corey at Screaming in the Cloud to discuss the complex world of cloud costs. Everett describes how Vantage takes a broad approach to understanding and cutting cloud costs across a number of different providers, and reveals which providers he feels generate large costs quickly. Everett also explains some of his best practices for cutting costs on cloud providers, and explores what he feels the impact of AI will be on cloud providers. Corey and Everett also discuss the pros and cons of AWS savings plans, why AWS can't be counted out when it comes to AI, and why there seems to be such a delay in upgrading instances despite the cost savings.
About Everett
Everett is the maintainer of ec2instances.info at Vantage. He also writes about cloud infrastructure and analyzes cloud spend. Prior to Vantage, Everett was a developer advocate at Arctype, a collaborative SQL client acquired by ClickHouse. Before that, Everett was cofounder and CTO of Perceive, a computer vision company. In his spare time he enjoys playing golf, reading sci-fi, and scrolling Twitter.
Links Referenced:
Vantage: https://www.vantage.sh/
Vantage Cloud Cost Report: https://www.vantage.sh/cloud-cost-report
Everett Berry Twitter: https://twitter.com/retttx
Vantage Twitter: https://twitter.com/JoinVantage
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: LANs of the late '90s and early 2000s were a magical place to learn about computers, hang out with your friends, and do cool stuff like share files, run websites & game servers, and occasionally bring the whole thing down with some ill-conceived software or network configuration. That's not how things are done anymore, but what if we could have a '90s-style LAN experience along with the best parts of the 21st-century internet? (Most of which are very hard to find these days.) Tailscale thinks we can, and I'm inclined to agree. With Tailscale I can use trusted identity providers like Google, or Okta, or GitHub to authenticate users, and automatically generate & rotate keys to authenticate devices I've added to my network. I can also share access to those devices with friends and teammates, or tag devices to give my team broader access. And that's the magic of it: your data is protected by the simple yet powerful social dynamics of small groups that you trust. Try now; it's free forever for personal use. I've been using it for almost two years personally, and am moderately annoyed that they haven't attempted to charge me for what's become an essential-to-my-workflow service.
Corey: Have you listened to the new season of Traceroute yet? Traceroute is a tech podcast that peels back the layers of the stack to tell the real, human stories about how the inner workings of our digital world affect our lives in ways you may have never thought of before. Listen and follow Traceroute on your favorite platform, or learn more about Traceroute at origins.dev. My thanks to them for sponsoring this ridiculous podcast.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This seems like an opportune moment to take a step back and look at the overall trend in cloud—specifically AWS—spending.
And who better to do that than this week's guest, Everett Berry, who does Growth and Open Source over at Vantage. And they've just released the Vantage Cloud Cost Report for Q1 of 2023. Everett, thank you for joining me.
Everett: Thanks for having me, Corey.
Corey: I enjoy playing slap and tickle with AWS bills because I am broken in exactly that kind of way where this is the thing I'm going to do with my time and energy and career. It's rare to find people who are, I guess, similarly afflicted. So, it's great to wind up talking to you, first off.
Everett: Yeah, great to be with you as well. Last Week in AWS and, in particular, your Twitter account are things that we follow religiously at Vantage.
Corey: Uh-oh [laugh]. So, I want to be clear, because I'm sure someone's thinking it out there: wait, Vantage does cloud cost optimization as a service? Isn't that what I do? Aren't we competitors? And the answer that I have to that is: not by any definition that I've ever seen that was even halfway sensible. If SaaS could do the kind of bespoke consulting engagements that I do, we would not sell bespoke consulting engagements, because it's easier to click button: receive software. And I also will point out that we tend to work once customers are at a certain point of scale that in many cases is a bit prohibitive for folks who are just now trying to understand what the heck's going on the first time finance has some very pointed questions about the AWS bill. That's how I see it from my perspective, anyway. Agree? Disagree?
Everett: Yeah, I agree with that. I think the product solution, the system of record that companies need when they're dealing with cloud costs, ends up being a different service than the one that you guys provide. And I think actually the two work in concert very well, where you establish a cloud cost optimization practice and then you keep it in place via software and via sort of the various reporting tools that Vantage provides. So, I completely agree with you. In fact, in the hundreds of customers and deals that Vantage has worked on, I don't think we have ever come up against Duckbill Group. So, that tells you everything you need to know in that regard.
Corey: Yeah. And what's interesting about this is that you have a different scale of visibility into the environment. We wind up dealing with a certain profile, or a couple of profiles, in our customer base. We work with dozens of companies a year; you work with hundreds. And that's bigger numbers, of course, but also in many cases at different segments of the industry. I also am somewhat fond of saying that Vantage is more focused on going broad in ways where we tend to focus on going exclusively deep. We do AWS; the end. You folks do a number of different cloud providers, you do Datadog cost visibility. I've lost track of all the different services that you wind up tracking costs for.
Everett: Yeah, that's right. We just launched our 11th provider, which was OpenAI, and for the first time in this report, we're actually breaking out data among the different clouds and we're comparing services across AWS, Google, and Azure. And I think it's a bit of a milestone for us, because we started on AWS, where I think the cost problem is the most acute, if you will, and we've hit a point now across Azure and Google where we actually have enough data to say some interesting things about how those clouds work.
But in general, we have this term, single pane of glass, which is the idea that you use 5, 6, 7 services and you want to bundle all those costs into one report.
Corey: Yeah. And that is something that we see in many cases where customers are taking a more holistic look at things. But, on some level, when people asked me in the early days, “Oh, do you focus on Google bills, too,” or Azure bills, it was, “Well, not yet. Let's take a look.” And what I was seeing was, they're spending, you know, millions or hundreds of millions, in some cases, on AWS, and oh, yeah, here's, like, a $300,000 thing we're running over on GCP as a proof-of-concept or some bizdev thing. And it's… yeah, why don't we focus on the big numbers first? The true secret of cloud economics is, you know, big numbers first rather than alphabetical, but don't tell anyone I told you that.
Everett: It's pretty interesting you say that because, you know, in this graph where we break down costs across providers, you can really see that effect on Google and Azure. So, for example, the number three spending category on Google is BigQuery, and I think many people would say BigQuery is kind of the jewel of the Google Cloud empire. Similarly for Azure, we actually found Databricks showing up as a top-ten service. Compare that to AWS, where you just see a very routine, you know, compute, database, storage, monitoring, bandwidth, down the line. AWS still is the king of costs, if you will, in terms of, like, just running classic compute workloads, and the other services are a little bit more bespoke, which has been something interesting to see play out in our data.
Corey: One thing that I've heard that's fascinating to me is that I've now heard from multiple Fortune 500 companies where the Datadog bill is now a board-level concern, given the size and scale of it. And for fun, I once modeled out all the instance-based pricing models that they have for the suite of services they offer; at the time it was three or four hundred dollars a month per instance to run everything that they've got, which, you know, when you look at the instances that I have costing, you know, 15, 20 bucks a month in some cases, hmm, seems a little out of whack. And I can absolutely see that turning into an unbounded growth problem in kind of the same way. I just… I don't need to conquer the world. I'm not VC-backed. I am perfectly content at the scale that I'm at—
Everett: [laugh].
Corey: —with the focus on the problems that I'm focused on.
Everett: Yeah, Datadog has been fascinating. It's been one of our fastest-growing providers in sort of the ‘others' category that we've launched. And I think the thing with Datadog that is interesting is you have this phrase, cloud costs are all about cloud architecture, and I think that's more true on Datadog than a lot of other services, because if you have a model where you have, you know, thousands of hosts, and then you add on one of Datadog's 20 services which charges per host, suddenly your cloud bill has grown exponentially compared to probably the thing that you were after. And a similar thing happens—actually, my favorite Datadog cost recommendation is, when you have multiple endpoints, and you have sort of multiple query parameters for those endpoints, you end up in this cardinality situation where suddenly Datadog is tracking, again, like, exponentially increasing numbers of data points, which it's then charging to you on a usage-based model.
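To put toy numbers on that cardinality effect: Datadog bills custom metrics by distinct time series, where each unique combination of tag values (and reporting host) on a metric is its own billable series. Every figure in this sketch is made up for illustration:

```python
# Back-of-the-envelope only; all of these counts are hypothetical.
# Each unique combination of tag values on a custom metric becomes its
# own billable time series, so the counts multiply.
endpoints = 50        # tag: endpoint path
status_codes = 8      # tag: HTTP status code
customer_tiers = 6    # tag: plan tier
hosts = 200           # hosts emitting the metric

series = endpoints * status_codes * customer_tiers * hosts
print(f"{series:,} billable time series for one custom metric")  # 480,000
```

Adding one more modest-cardinality tag to a metric like this multiplies the series count again, which is why the bill can grow far faster than the traffic it measures.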
And so, Datadog is great partners with AWS, and I think it's no surprise, because the two of them actually sort of go hand-in-hand in terms of the way that they… I don't want to say take ad—
Corey: Extract revenue?
Everett: Yeah, extract revenue. That's a good term. And, you know, you might say a similar thing about Snowflake, possibly, and the way that they do things. Like, oh, the, you know, warehouse has to be on for one minute, minimum, no matter how long the query runs, and various architectural decisions that these folks make that, if you were building a cost-optimized version of the service, you would probably go in the other direction on.
Corey: One thing that I'm also seeing, too, is that I can look at the AWS bill—and just billing data alone—and then say, “Okay, you're using Datadog, aren't you?” Like, “How did you know that?” Like, well, first, most people are; secondly, CloudWatch is your number two largest service spend right now. And it's the downstream effect of hammering all the endpoints with all of the systems. And is that data you're actually using? Probably not, in some cases. It's, everyone turns on all the Datadog integrations the first time and then never goes back to reset them.
Everett: Yeah, I think we have this set of advice that we give Datadog folks, and a lot of it is just, like, turn down the ingestion volume on your logs. Most likely, logs from 30 days ago that are correlated with some new services that you spun up—like you just talked about—are potentially not relevant anymore for the kind of day-to-day cadence that you want to get into with your cloud spending. So yeah, I mean, I imagine when you're talking to customers, they're bringing up sort of, like, this interesting distinction where you may end up in a meeting room with the actual engineering team looking at the actual YAML configuration of the Datadog script, just to get a sense of, like, well, what are the buttons I can press here? And so, that's… yeah, I mean, that's one reason cloud costs are a pretty interesting world: on the surface level, you may end up buying some RIs or savings plans, but then when you really get into saving money, you end up actually changing the knobs on the services that you're talking about.
Corey: That's always a fun thing when we talk to people in our sales process. It's been sort of, “Are you just going to come in and tell us to buy savings plans or reserved instances?” Because the answer to that used to be, “No, that's ridiculous. That's not what we do.” But then we get into environments and find they haven't bought any of those things in 18 months—and it's well… okay, that's step two. Step one is: what are you using that you shouldn't be? Like, basically measure first, then cut, as opposed to going the other direction and then having to back your way into stuff. Doesn't go well.
Everett: Yeah. One of the things that you were discussing last year that I thought was pretty interesting was the gp3 volumes that are now available for RDS, and how those volumes, while they offer a nice discount and a nice bump in price-to-performance on EC2, actually don't offer any of that on RDS except for specific workloads. And so, I think that's the kind of thing where, as you're working with folks, as Vantage is working with people, the discussion ends up in these sort of nuanced niche areas, and that's why I think, like, these reports, hopefully, are helping people get a sense of, like, well, what's normal in my architecture, or where am I sort of out of bounds?
Oh, the fact that I'm spending most of my bill on NAT gateways and bandwidth egress? Well, that's not normal. That would not be typical of what your normal AWS user is doing.
Corey: Right. “Am I normal?” is always one of the first questions people love to ask. And it comes in different forms. But it's benchmarking. It's, okay, how much should it cost us to service a thousand monthly active users? It's like, there's no good way to say that across the board for everyone.
Everett: Yeah. I like the model of getting into the actual unit costs. I have this sort of vision in my head of, you know, if I'm Uber and I'm reporting metrics to the public stock market, I'm actually reporting a cost to serve a rider, a cost to deliver an Uber Eats meal, in terms of my cloud spend. And that sort of data is just ridiculously hard to get to today. I think it's what we're working towards with Vantage, and I think it's something that, with these Cloud Cost Reports, we're hoping to get into over time, where we're actually helping companies think about, well, okay, within my cloud spend, it's not just what I'm spending on these different services; there's also an idea of how much of my cost to deliver my service should be realized by my cloud spending.
Corey: And then people have the uncomfortable realization that, wait, my bill is less a function of the number of customers I have and more the number of engineers I've hired. What's going on with that?
Everett: [laugh]. Yeah, it is interesting to me just how many people end up being involved in this problem at a company. But to your earlier point, the cloud spending discussion has really ramped up over the past year. And I think, hopefully, we are going to be able to converge on a place where we are realizing the promise of the cloud, if you will, which is that it's actually cheaper. And I think what these reports show so far is, like, we've still got a long ways to go for that.
Corey: One thing that I think is opportune about the timing of this recording is that, as of last week, Amazon wound up announcing their earnings. And Andy Jassy has started getting on the earnings calls, which is how you know it's bad, because the CEO of Amazon never deigned to show up on those things before. And he said that a lot of AWS employees are focused on, and spending their time on, helping customers lower their AWS bills. And I'm listening to this going, “Oh, they must be talking to different customers than the ones that I'm talking to.” Are you seeing a lot of Amazonian involvement in reducing AWS bills? Because I'm not, and I'm wondering where these people are hiding.
Everett: So, we do see one thing, which is reps pushing savings plans on customers, which, in general, is great. It's kind of good for everybody: it locks people into longer-term spend on Amazon, it gets them a lower rate, and savings plans have some interesting functionality where they can be automatically applied to the area where they offer the most discount. And so, those things are all positive. I will say with Vantage, we're a cloud cost optimization company, of course, and so when folks talk to us, they often already have talked to their AWS rep.
And the classic scenario is that the rep passes over a large spreadsheet of options and ways to reduce costs, but for the company, that spreadsheet may end up being quite a ways away from the point where they actually realize cost savings. And ultimately, the people that are working on cloud cost optimization for Amazon are account reps who are comped by how much cloud spending their accounts are using on Amazon. And so, at the end of the day, some of the, I would say, most hard-hitting optimizations that you work on, that we work on, end up hitting areas where they do actually reduce the bill, which ends up being not in the account manager's favor. And so, it's a real chicken-and-egg game, except that savings plans are one area where I think everybody can kind of work together.
Corey: I have found that… in fairness, there is some defense for Amazon in this, but their cost-cutting approach has been rightsizing instances, buy some savings plans, and we are completely out of ideas. Wait, can you switch to Graviton and/or move to serverless? And I used to make fun of them for this, but honestly that is some of the only advice that works across the board, irrespective, in most cases, of what a customer is doing. Everything else is nuanced and it depends. That's why, in some cases, I find that I'm advising customers to spend more money on certain things. Like, the reason that I don't charge a percentage of savings, in part, is because otherwise I'm incentivized to say things like, “Backups? What are you, some kind of coward? Get rid of them.” And that doesn't seem like it's going to be in the customer's interest every time. And as soon as you start down that path, it starts getting a little weird. But people have asked me, what if my customers reach out to their account teams instead of talking to us? And it's: we do bespoke consulting engagements; I do not believe that we have ever had a client who did not first reach out to their account team. If the account teams were capable of doing this at the level that worked for customers, I would have to be doing something else with my business. It is not something that we are seeing hit customers in a way that is effective, and certainly not at scale. You said—and you were right on this—that there's an element here of account managers doing this stuff, there's an [unintelligible 00:15:54] incentive issue in part, but it's also: quality is extraordinarily uneven when it comes to these things, because it is its own niche and a lot of people focus in different areas in different ways.
Everett: Yeah. And to the areas that you brought up in terms of general advice that's given, we actually have some data on this in this report. In particular, Graviton: this is something we've been tracking the whole time we've been doing these reports, which is the past three quarters, and we actually are seeing Graviton adoption start to increase more rapidly than it was before. And so, for this last quarter, Q1, we're seeing 5% of the costs that we're measuring on EC2 coming from Graviton, which is up from, I want to say, 2% the previous quarter and, like, less than 1% the quarter before. The previous quarter, we also reported that Lambda costs are now majority on ARM among the Vantage customer base. And that one makes some sense to me, just because in most cases with Lambda, it's a flip of a switch.
And then, to your archival point on backups, something we report on in this one is that intelligent tiering—which we saw, like, really make an impact for folks towards the end of last year—the numbers for that were flat quarter over quarter. And so, what I mean by that is, we reported that, I think, like, two-thirds of our S3 costs are still in the standard storage tier, which is the most expensive tier. And folks have enabled S3 intelligent tiering, which moves your data to progressively cheaper tiers, but we haven't seen that increase this quarter. So, it's the same number as it was last quarter. And I think that speaks to what you're talking about with a ceiling on some cost optimization techniques, where it's like, you're not just going to get rid of all your backups; you're not just going to get rid of your, you know, Amazon WorkSpaces archived desktop snapshots that you need for some HIPAA compliance reason. Those things have an upper limit, and so that's where, when the AWS rep comes in, as they go through the list of top spending categories, the recommendations they can give start to provide diminishing returns.
Corey: I also think this is sort of a law-of-large-numbers issue. When you start seeing a drop-off in the growth rate of large cloud providers, like, there's a problem, in that there are only so many exabyte-scale workloads that can be moved into the cloud inside of a given quarter. You're not going to see the same unbounded, infinite growth that you would expect mathematically. And people lose their minds when they start to see those things pointed out, but the blame that, oh, that's caused by cost optimization efforts—with respect, bullshit it is. I have seen customers devote significant efforts to reducing their AWS bills, and it takes massive amounts of work, and even then they don't always succeed in getting there. It gets better, but they still wind up, a year later, having spent more on a month-by-month basis than they did when they started. Sure, they understand it better, and it's organic growth that's driving it, and they've solved the low-hanging-fruit problem, but there is a challenge in acting as a boundary for what is, in effect, an unbounded growth problem.
Everett: Yeah. And speaking to growth, I thought Microsoft had the most interesting take on where things could happen next quarter, and that, of course, is AI. And so, of their guidance of 26 or 27% growth for Q2 cloud revenue, they attributed 1% of that to AI. And I think Amazon is really trying to be in the room for those discussions when a large enterprise is talking about AI workloads, because it's one of the few remaining cloud workloads that, if it's not in the cloud already, is generating potentially massive amounts of growth for these guys. And so, I'm not really sure if I believe the 1% number. I think Microsoft may be having some fun with the fact that, of course, OpenAI is paying them for acting as a cloud provider for ChatGPT and further APIs, but I do think that AWS, although they were maybe a little slow to the game, did, to their credit, launch a number of AI services, and I'm excited to see if that contributes to the costs that we're measuring next quarter. We did measure, for the first time, a sudden increase on those new [Inf1 00:20:17] EC2 instances, which are optimized for machine learning.
And I think if AWS can have success moving customers to those the way they have with Graviton, then that's going to be a very healthy area of growth for them.
Corey: I'll also say that it's pretty clear to me that Amazon does not know what it's doing in its world of machine-learning-powered services. I use Azure for the [unintelligible 00:20:44] clients I built, originally for Twitter, then for Mastodon—I'm sure Bluesky is coming—but the problem that I'm seeing there is, across the board, start to finish, that there is no cohesive story from the AWS side of “here's a picture, tell me what's in it, and if it's words, describe it to me.” That's a single API call when we go to Azure. And the more that Amazon talks about something, I find, the less effective they're being in that space. And they will not stop talking about machine learning. Yes, they have instances that are powered by GPUs; that's awesome. But they're an infrastructure provider, and moving up the stack is not in their DNA. But that's where all the interest and excitement and discussion is going to be, increasingly, in the AI space. Good luck.
Everett: I think it might be something similar to what you've talked about before with all the options to run containers on AWS. I think they today have a bit of a grab bag of services, and they may actually be looking forward to the fact that there are these truly foundational models which let you do a number of tasks, and so they may not need to rely so much on, you know, Amazon Polly and Amazon Rekognition and sort of these task-specific services, which, to date, I'm not really sure of the takeoff rates on. We have this cloud costs leaderboard, and I don't think you would find them in the top 50 of AWS services. But we'll see what happens with that. AWS, I think, ends up being surprisingly good at sticking with it. I think our view is that they probably have the most customer spend on Kubernetes of any major cloud, even though you might say Google at first had the lead on Kubernetes and maybe should have done more with GKE. But to date, I would kind of agree with your take on AI services, and I think Azure is… it's Azure's to lose for the moment.
Corey: I would agree. I think the future of the cloud is largely Azure's to lose, and it has been for a while, just because they get user experience, they get how to talk to enterprises. I just… I wish they would get security a little bit more effectively, and failing that, communicate with their customers about security more effectively. But it's hard for a leopard to change its spots. Microsoft, though, has demonstrated an ability to change their nature multiple times, in ways that I would have bet were impossible. So, I just want to see them do it again. It's about time.
Everett: Yeah, it's been interesting building on Azure for the past year or so. I wrote a post recently about, kind of, accessing billing data across the different providers, and it's interesting in that every cloud provider is unique in the way that it simply provides an external endpoint for downloading your billing data. Azure is probably one of the easiest integrations; it's just a REST API. However, behind that REST API are, like, years and years of different ways to pay Microsoft: are you on a pay-as-you-go plan, are you on an Azure enterprise plan?
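For flavor, here is roughly what pulling that kind of billing data looks like programmatically on the AWS side of the fence, since AWS is the provider this report centers on. This is a minimal sketch using Cost Explorer's get_cost_and_usage; the dates and the $100 threshold are arbitrary, and it assumes credentials with Cost Explorer access are already configured:

```python
import boto3

# Cost Explorer lives in us-east-1 regardless of where your workloads run.
ce = boto3.client("ce", region_name="us-east-1")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-04-01"},  # illustrative Q1 window
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],  # spend broken down by service
)

for period in resp["ResultsByTime"]:
    for group in period["Groups"]:
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if cost > 100:  # arbitrary floor just to keep the output readable
            print(period["TimePeriod"]["Start"], group["Keys"][0], round(cost, 2))
```

Azure's Cost Management query endpoint and Google's billing export are the analogous hooks on the other providers, each with its own shape, which is part of the integration pain being described here.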
So, there's all this sort of organizational complexity hidden behind Azure, and I think sometimes it rears its ugly head in a way that stringing together services on Amazon may not, even if that's still a bear in and of itself, if you will.
Corey: Any other surprises that you found in the Cloud Cost Report? I mean, looking through it, it seems directionally aligned with what I see in my environments with customers. Like, for example, you're not going to see Kubernetes showing up as a line item on any of these things just because—
Everett: Yeah.
Corey: —that is indistinguishable from a billing perspective when we're looking at EC2 spend versus control plane spend. I don't tend to [find 00:24:04] too much that's shocking me. My numbers are, of course, different percentage-wise, but surprise, surprise: different companies doing different things do different percentages. I'm sure only AWS knows for sure.
Everett: Yeah, I think the biggest surprise was just that—and this could very well just be the measurement method—I really expected to see AI services driving more costs, whether it was GPU instances, or AI-specific services (which we actually didn't report on at all, just because they weren't material), or just any indication that AI was a real driver of cloud spending. But I think what you see instead is sort of the same old folks at the top, and if you look at the breakdown of services across providers, that's, you know, compute, database, storage, bandwidth, monitoring. And if you look at our AI costs as a percentage of EC2 costs, it's relatively flat, quarter over quarter. So, I would have thought that would have shown up in some way in our data, and we really didn't see it.
Corey: It feels like there's a law-of-large-numbers thing. Everyone's talking about it. It's very hype right now—
Everett: Yeah.
Corey: But it's also—you talk to these companies, like, “Okay, we have four exabytes of data that we're storing and we have a couple hundred thousand instances at any given point in time, so yeah, we're going to start spending $100,000 a month on our AI adventures and experiments.” It's like, that's just noise and froth in the bill, comparatively.
Everett: Exactly, yeah. And so, on Microsoft's thought about AI driving a lot of growth in the coming quarters: we'll see how that plays out, basically. The one other thing I would point to—and this is probably not surprising, maybe, for you, having been in the infrastructure world and seeing a lot of this—is just how long it takes companies to upgrade their instance cycles. We're clocking in at almost three years since the C6 series instances were released, and we're just now seeing C6 and R6 start to edge above 10% of our compute usage. I actually wonder if that's just the stranglehold that Intel has on cloud computing workloads, because it was only last year, around re:Invent, that the C6in and the Intel versions of the C6 series instances were released. So, I do think, in general, there's supposed to be a price-to-performance benefit of upgrading your instances, and so sometimes it surprises me to see how long it takes companies to get around to doing that.
Corey: Generation 6 to 7 is also 6% more expensive in my sampling.
Everett: Right. That's right. I think Amazon has some work to do to actually make that price-to-performance argument, sort of the way that we were discussing with gp2 versus gp3 volumes.
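On that gp2-versus-gp3 note: for plain EC2 volumes, the conversion itself is a single API call, since Elastic Volumes modifies the volume in place without detaching it or taking downtime. A minimal sketch (the volume ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# vol-0123456789abcdef0 is a placeholder. Elastic Volumes converts the
# volume in place; gp3 includes a 3,000 IOPS / 125 MiB/s baseline, so only
# raise these values if the old gp2 volume actually burst above them.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",
    VolumeType="gp3",
    Iops=3000,
    Throughput=125,
)
```

The RDS caveat discussed earlier still applies: the same storage-type change on a database instance does not necessarily deliver the same price-to-performance win, which is exactly why the nuance matters.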
But yeah, I mean, other than that, I think, in general, my view is that we're past the worst of it, if you will, for cloud spending. Q4 was sort of a real letdown, I think, in terms of the data we had and the earnings that these cloud providers had, and I think Q1 is actually everyone looking forward to perhaps what we call out at the beginning of the report, which is a return to normal spend patterns across the cloud.
Corey: I think that it's going to be an interesting case. One thing that I'm seeing that might very well explain some of the reluctance to upgrade EC2 instances has been that a lot of those EC2 instances are databases. And once those things are up and running and working, people are hesitant to do too much with them. One of the [unintelligible 00:27:29] roads that I've seen of their savings plan approach is that you can migrate EC2 spend to Fargate to Lambda—and that's great—but not RDS. You're effectively leaving a giant pile of money on the table if you've made a three-year purchase commitment on these things. So, all right, we're not going to be in any rush to migrate to those things, which I think is AWS getting in its own way.
Everett: That's exactly right. When we encounter customers that have a large amount of database spend, the most cost-effective option is almost always basically bare-metal EC2, even with the overhead of managing the backup-restore scalability of those things. So, in some ways, that's a good thing, because it means that you can then take advantage of the, kind of, heavy committed-use options on EC2, but of course, in other ways, it's a bit of a letdown because, in the ideal case, RDS would scale with the level of workloads and the economics would make more sense, but it seems that is really not the case.
Corey: I really want to thank you for taking the time to come on the show and talk to me. I'll include a link in the [show notes 00:28:37] to the Cost Report. One thing I appreciate is the fact that it doesn't have one of those gates in front of it of: your email address, and what country you're in, and how can our salespeople best bother you. It's just, here's a link to the PDF. The end. So, thanks for that; it's appreciated. Where else can people go to find you?
Everett: So, I'm on Twitter, talking about cloud infrastructure and AI. I'm @retttx, that's R-E-T-T-T-X. And then, of course, Vantage also did quick hot takes on this report with a series of graphs and explainers in a Twitter thread, and that's @JoinVantage.
Corey: And we will, of course, put links to that in the [show notes 00:29:15]. Thank you so much for your time. I appreciate it.
Everett: Thanks, Corey. Great to chat.
Corey: Everett Berry, growth in open-source at Vantage. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that will increase its vitriol generation over generation by approximately 6%.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Traceroute
When the Lights Go Out

Traceroute

Play Episode Listen Later Apr 27, 2023 37:02


How do we make technology that lasts? In this episode, Grace Ewura-Esi and Shweta Saraf join Producer John Taylor as he talks with two cutting-edge technologists who are trying to extend the life of the hardware infrastructure around us. From a cell phone tower that can be installed on your roof (and repaired just as easily), to a clock that is built to last ten thousand years, we uncover the common threads that run through technology that's built to last. Woven in this framework is the story of Sandra Rodríguez Cotto, who worked tirelessly to restore civilization—as well as hope itself—to the island of Puerto Rico with the help of the only piece of hardware infrastructure that withstood the powerful forces of Hurricane Maria in 2017.
Additional Resources
Connect with Shweta Saraf: LinkedIn or Twitter
Connect with Grace Ewura-Esi: LinkedIn or Twitter
Connect with Alexander Rose of The Long Now Foundation: LinkedIn
Connect with Dr. Matt Johnson: LinkedIn
Connect with Sandra Rodríguez Cotto: Twitter
Visit Origins.dev for more information
Enjoyed This Episode?
If you did, be sure to follow and share it with your friends! Post a review and share it! If you enjoyed tuning in, then leave us a review. You can also share this with your friends and colleagues!
Traceroute is a podcast from Equinix and is a production of Stories Bureau. This episode was produced by John Taylor with help from Tim Balint and Cat Bagsic. It was edited by Joshua Ramsey and mixed by Jeremy Tuttle, with additional editing and sound design by Mathr de Leon. Our theme song was composed by Ty Gibbons.

Traceroute
Meet Fen Aldrich

Traceroute

Play Episode Listen Later Apr 20, 2023 12:02


In our minisode finale, Equinix Technical Storyteller Grace Ewura-Esi introduces our third new co-host for Season 2, Fen Aldrich, Developer Advocate for Equinix. In a compelling conversation, the two hosts reveal their passion for “digital anthropology,” and the topics they want to cover in the new season of Traceroute.
Additional Resources
Connect with Fen Aldrich: LinkedIn or Twitter
Connect with Grace Ewura-Esi: LinkedIn or Twitter
Visit Origins.dev for more information
Enjoyed This Episode?
If you did, be sure to follow and share it with your friends! Post a review and share it! If you enjoyed tuning in, then leave us a review. You can also share this with your friends and colleagues! Introduce them to the people and organizations who played a role in inventing the internet. For more episode updates, tune in on Apple Podcasts, Spotify, and wherever you get your podcasts.

Traceroute
Meet Shweta Saraf

Traceroute

Play Episode Listen Later Apr 20, 2023 13:30


In the second of three Traceroute minisodes, Technical Storyteller Grace Ewura-Esi introduces a new co-host for Season 2, Shweta Saraf, Director of Platform Networking at Netflix. In a brief but compelling conversation, the two hosts reveal more about themselves, their roles, and their unique perspectives on the central theme of Season 2: the humanity behind the hardware.
Additional Resources
Connect with Shweta Saraf: LinkedIn or Twitter
Connect with Grace Ewura-Esi: LinkedIn or Twitter
Visit Origins.dev for more information
Enjoyed This Episode?
If you did, be sure to subscribe and share it with your friends! Post a review and share it! If you enjoyed tuning in, then leave us a review. You can also share this with your friends and colleagues! Introduce them to the people and organizations who played a role in inventing the internet. For more episode updates, tune in on Apple Podcasts, Spotify, and wherever you get your podcasts.

Traceroute
Meet Amy Tobey

Traceroute

Play Episode Listen Later Apr 20, 2023 9:55


In the first of three Traceroute minisodes, Technical Storyteller Grace Ewura-Esi introduces a new co-host for Season 2, Amy Tobey, Senior Principal Engineer at Equinix. In an insightful conversation, the two hosts reveal more about themselves, their roles, and the stories they're looking forward to telling on the new season of Traceroute.
Additional Resources
Connect with Amy Tobey: LinkedIn or Twitter
Connect with Grace Ewura-Esi: LinkedIn or Twitter
Visit Origins.dev for more information
Enjoyed This Episode?
If you did, be sure to follow and share it with your friends! Post a review and share it! If you enjoyed tuning in, then leave us a review. You can also share this with your friends and colleagues! Introduce them to the people and organizations who played a role in inventing the internet. For more episode updates, tune in on Apple Podcasts, Spotify, and wherever you get your podcasts.

Traceroute
Traceroute Season 2

Traceroute

Play Episode Listen Later Apr 6, 2023 1:48


Traceroute is back! The award-winning podcast about the inner workings of our digital world returns with new episodes, new co-hosts, and fascinating new stories. This season, each episode will peel back the layers of the stack to find the humanity behind the hardware: stories that reveal not just the origins of our technology, but hardware's very real effect on human lives. Each week, Traceroute brings you a fascinating new story, with a fresh perspective from some of the most intriguing technologists of our time. Additional Resources: Equinix, Origins.dev. Enjoyed This Episode? If you did, be sure to subscribe and share it with your friends! Post a review and share it! If you enjoyed tuning in, then leave us a review. You can also share this with your friends and colleagues! Introduce them to the people and organizations who played a role in inventing the internet. Want to learn more? Head on over to Equinix Metal. Have any questions? You can contact us through our website. For more episode updates, tune in on Apple Podcasts, Spotify, and wherever you get your podcasts.

PING
Reverse Traceroute: It's just traceroute, but the other direction

PING

Play Episode Listen Later Mar 29, 2023 39:17


In this episode of PING, Dr Rolf Winter, Professor of Data Communications at Augsburg University of Applied Sciences, discusses his work on ‘reverse traceroute', an approach that uses the well-known traceroute mechanism but drives it from the other end. The inherent problem with traceroute and its related diagnostics is that it only informs you about the path outwards, from your address to the other end. Reverse traceroute is an attempt to ‘mechanize' the reverse path information, using proposed new codepoints in the Internet Control Message Protocol (ICMP). Rolf discusses this approach, some of the logistical issues with attempting to modify an established protocol like ICMP, and measurements of how the proposed new codepoints are treated in the wild. Read more about Professor Winter's work on the APNIC Blog: Troubleshooting ‘the other half'. Watch his presentation at DENOG 14, or visit his GitHub code repository.
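To make the "outward path only" limitation concrete, here is a minimal sketch of the forward mechanism that reverse traceroute inverts. This is our illustration, not code from the episode: it assumes a Unix-like host, root privileges (for the raw ICMP socket), and the conventional UDP probe port; real tools also match each reply to its probe, which this sketch skips.

```python
import socket

def traceroute(dest, max_hops=30, port=33434, timeout=2.0):
    # Classic forward traceroute: send UDP probes with a rising TTL.
    # Each router that decrements the TTL to zero replies with an ICMP
    # Time Exceeded message, revealing one hop of the outward path.
    dest_addr = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        # Raw ICMP socket to catch the replies (requires root).
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.getprotobyname("icmp"))
        recv.settimeout(timeout)
        # UDP socket for the probe, with the TTL forced down.
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        try:
            send.sendto(b"", (dest_addr, port))
            _, addr = recv.recvfrom(512)  # router or destination answers
            hop = addr[0]
        except socket.timeout:
            hop = "*"
        finally:
            send.close()
            recv.close()
        print(f"{ttl:2d}  {hop}")
        if hop == dest_addr:  # ICMP Port Unreachable: probe arrived
            break

traceroute("example.com")  # illustrative target
```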

Internet Explorers
IE04 Puttgarden, Rostock ⭤

Internet Explorers

Play Episode Listen Later Dec 23, 2022 27:08


Traceroute to Helsinki. On the sunny island of Fehmarn, a border shop the size of a football pitch lures busloads of people in to buy alcohol, while the Internet Explorers marvel at internet-cable signage that is determined to be seen, even from a distance. Another spectacle awaits 30 nautical miles to the southeast. On the Baltic beach at Rostock hides what may be Finland's most important container. A soulful season finale with low latency. Bordershop.com Zdnet: Neues Unterseekabel verbindet Deutschland und Finnland Heise: Ostsee-Highway Zdnet: Finland's 'safe harbour for data' becomes reality with funding for Sweden-free cable Internet Explorers Internet Explorers on Steady

Without Your Head
Without Your Head - Johannes Grenzfurthner director of RAZZENNEST & MASKING THRESHOLD

Without Your Head

Play Episode Listen Later Nov 20, 2022 83:35


Without Your Head Horror Video-Podcast interview with Johannes Grenzfurthner, director of RAZZENNEST and MASKING THRESHOLD! Topics discussed: - Razzennest - Masking Threshold - what is horror? - Nightmares Film Festival in Ohio - nerd culture and how it's changed over the years - his nerd documentary Traceroute - much more!!! Hosted by "Nasty" Neal Theme by "The Tomb of Nick Cage" https://thetombofnickcage.com Closing track by Music of the Month "Beg 4 Me" by Miss Cherry Delight http://misscherrydelight.com --- Send in a voice message: https://anchor.fm/withoutyourhead/message Support this podcast: https://anchor.fm/withoutyourhead/support

Configuration Examples with KevTechify for the Cisco Certified Network Associate (CCNA)
Use Ping and Traceroute to Test Network Connectivity - ICMP - Configuration Examples for Introduction to Networks - CCNA - KevTechify | podcast 22

Configuration Examples with KevTechify for the Cisco Certified Network Associate (CCNA)

Play Episode Listen Later Jul 13, 2022 38:55


In this episode we are going to work through the lab Use Ping and Traceroute to Test Network Connectivity. There are connectivity issues in this activity. In addition to gathering and documenting information about the network, we will locate the problems and implement acceptable solutions to restore connectivity. We will be discussing Test and Restore IPv4 Connectivity and finally Test and Restore IPv6 Connectivity. Thank you so much for watching this episode of my series on Configuration Examples for the Cisco Certified Network Associate (CCNA). Once again, I'm Kevin and this is KevTechify. Let's get this adventure started. All my details and contact information can be found on my website, https://KevTechify.com YouTube Channel: https://YouTube.com/KevTechify ------------------------------------------------------- Cisco Certified Network Associate (CCNA) Configuration Examples for Introduction to Networks v1 (ITN) ICMP Lab 13.2.7 - Use Ping and Traceroute to Test Network Connectivity Pod Number: 22 Season: 1 ------------------------------------------------------- Equipment I like. Home Lab ►► https://kit.co/KevTechify/home-lab Networking Tools ►► https://kit.co/KevTechify/networking-tools Studio Equipment ►► https://kit.co/KevTechify/studio-equipment
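Outside of the lab environment, the same "test connectivity over both address families" check can be scripted. A minimal sketch of ours, not part of the lab: it uses a TCP handshake as a stand-in for ping, and the hostname and port are arbitrary examples.

```python
import socket

def reachable(host, family, port=80, timeout=3.0):
    # Resolve the name for the requested address family and attempt a
    # TCP handshake; success is a reasonable "connectivity OK" signal.
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
        fam, socktype, proto, _, sockaddr = infos[0]
        with socket.socket(fam, socktype, proto) as s:
            s.settimeout(timeout)
            s.connect(sockaddr)
        return True
    except OSError:
        return False

# Report the "test" half for IPv4 and IPv6; restoring is the lab's job.
for fam, label in [(socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")]:
    status = "OK" if reachable("example.com", fam) else "unreachable"
    print(f"{label}: {status}")
```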

BSD Now
458: Traceroute interpretation

BSD Now

Play Episode Listen Later Jun 9, 2022 48:41


Fundamentals of the FreeBSD Shell, Spammers in the Public Cloud, locking user accounts properly, Overgrowth on NetBSD, moreutils, ctwm & spleen, interpreting a traceroute, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines Fundamentals of the FreeBSD Shell (https://klarasystems.com/articles/interacting-with-freebsd-learning-the-fundamentals-of-the-freebsd-shell-2/) Spammers in the Public Cloud, Protected by SPF; Intensified Password Groping Still Ongoing; Spamware Hawked to Spamtraps (https://bsdly.blogspot.com/2022/04/spammers-in-public-cloud-protected-by.html) News Roundup A cautionary tale about locking Linux & FreeBSD user accounts (https://www.cyberciti.biz/networking/a-cautionary-tale-about-locking-linux-freebsd-user-accounts/) Overgrowth runs on NetBSD (https://www.reddit.com/r/openbsd_gaming/comments/ucgavg/i_was_able_to_build_overgrowth_on_netbsd/) moreutils (https://joeyh.name/code/moreutils/) NetBSD, CTWM, and Spleen (https://www.cambus.net/netbsd-ctwm-and-spleen/) How to properly interpret a traceroute or mtr (https://phil.lavin.me.uk/2022/03/how-to-properly-interpret-a-traceroute-or-mtr/) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Let's talk a bit about some of the events happening this year: BSDCan is virtual this weekend, EMF Camp is this weekend too and in person, MCH is this summer, and EuroBSDCon is in September. How were the Postgres conferences, Benedict? Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
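On the "interpreting a traceroute" item: the linked article's core point is that a latency spike at a single hop often just reflects that router de-prioritizing ICMP generation, and only matters if the increase persists to every later hop. A rough sketch for eyeballing per-hop RTTs from the system traceroute (our illustration; the target host and single-probe flag are arbitrary choices):

```python
import re
import subprocess

def hop_latencies(host):
    # Run the system traceroute with one probe per hop and pull the
    # RTT (in ms) from each hop line; None marks a silent hop ("*").
    out = subprocess.run(["traceroute", "-q", "1", host],
                         capture_output=True, text=True, check=True).stdout
    hops = []
    for line in out.splitlines():
        if not re.match(r"\s*\d+\s", line):
            continue  # skip the header or any continuation lines
        m = re.search(r"([\d.]+) ms", line)
        hops.append(float(m.group(1)) if m else None)
    return hops

for i, ms in enumerate(hop_latencies("example.com"), start=1):
    print(f"hop {i:2d}: {'*' if ms is None else f'{ms:.1f} ms'}")
```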

Introduction to Networks with KevTechify on the Cisco Certified Network Associate (CCNA)
Verify Connectivity - Build a Small Network - Introduction to Networks - CCNA - KevTechify | Podcast 89

Introduction to Networks with KevTechify on the Cisco Certified Network Associate (CCNA)

Play Episode Listen Later May 14, 2022 18:41


In this episode we are going to look at Verify Connectivity. We will be discussing Verify Connectivity with Ping, Extended Ping, Verify Connectivity with Traceroute, Extended Traceroute, and Network Baseline. Thank you so much for listening to this episode of my series on Introduction to Networks for the Cisco Certified Network Associate (CCNA). Once again, I'm Kevin and this is KevTechify. Let's get this adventure started. All my details and contact information can be found on my website, https://KevTechify.com ------------------------------------------------------- Cisco Certified Network Associate (CCNA) Introduction to Networks v1 (ITN) Episode 17 - Build a Small Network Part D - Verify Connectivity Podcast Number: 89 ------------------------------------------------------- Equipment I like. Home Lab ►► https://kit.co/KevTechify/home-lab Networking Tools ►► https://kit.co/KevTechify/networking-tools Studio Equipment ►► https://kit.co/KevTechify/studio-equipment

Manufacturing Hub
Ep. 59 - [Josh Varghese] IT OT Networks, Machine Network Modernization, Network Traffic, WireShark

Manufacturing Hub

Play Episode Listen Later May 6, 2022 89:38


Guest Bio: Josh Varghese is passionate about technology. Yes, passionate. He compares datasheets like baseball cards, can rattle off networking acronyms like rap lyrics, and listens to ICS security podcasts (yes, there are multiple) in his car. Josh established Traceroute in November 2017 to bring his passion, extensive industrial networking experience, and specialized skillset to the underserved, growing and evolving industrial/OT market. Before starting his own company, Josh was technical lead at Industrial Networking Solutions for almost ten years, where he built the technical support and application engineering department. In that role, Josh was able to work with an impressive list of clients and vendors on a wide range of projects that give him unique insight into the industrial/OT market. Josh also served on multiple vendor advisory councils and worked closely with numerous vendors to provide product feedback and advocate for customer solutions, and he continues many of those relationships at Traceroute. Before joining Industrial Networking Solutions, Josh was an instrumentation and control specialist at Camp Dresser and McKee (now CDM Smith). He designed and implemented SCADA systems for municipal water customers, including several he still works with today. Josh has a Bachelor of Science in electrical engineering from The University of Texas and holds multiple networking and vendor certifications. When he's not working, he spends every moment he can with his wife and three young kids. Main Discussion Points: - Understanding the IT/OT Networks in Manufacturing. - Understanding the differences between Managed and Unmanaged Switches. - Debugging Networking Issues in Manufacturing. Theme: Equipment Modernization. Manufacturing Hub Episode 59. Slow quotes, no documentation, horrible communication, and shoddy support? Does this sound like your current systems integrator or retrofit vendor? Look no further than Envision Automation & Controls. Envision Automation & Controls addresses these problems as they provide accurate quotes in record time (1-3 days) for most projects, as well as world-class documentation and support. You can expect quality in everything they do from discovery to delivery. Ray says "Envision hit the ground running on our first project together. The rapid quotes, documentation and clear communication are what make it easy for me to keep choosing Envision Automation & Controls." Please visit www.envisions.io for more information or to get a rapid quote. Email them directly at sales@envisions.io or give them a call at 812-618-5089. Its mission is to bring automation & controls of the future into the present—one solution at a time. Their motto: "You envision it... we build it!" Recommended Materials Traceroute Blog Traceroute Assessments | https://www.traceroutellc.com/s/Networking-Fundamentals-Assessment.pdf / https://www.traceroutellc.com/s/Network-Training-Advanced-Assessment.pdf Non-Industrial Networking Barry Baker/Black Shirt Networking Connect with Us Josh Varghese Vlad Romanov Dave Griffith Manufacturing Hub Let Us Know What You Think: If you enjoyed the show, it would mean the world to us if you could leave us a review: https://podcasts.apple.com/us/podcast/manufacturing-hub/id1546805573 #manufacturing #automation #industrialnetworking #otnetworking
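Since the Wireshark discussion comes down to "look at the actual traffic before blaming the PLC or the switch," here is a minimal capture sketch. The Scapy library is our choice for illustration, not something from the episode, and the capture filter is an assumption:

```python
from scapy.all import sniff  # pip install scapy; capturing needs root/admin

def show(pkt):
    # Print a one-line, Wireshark-style summary per packet.
    print(pkt.summary())

# Watch for broadcast/multicast chatter, a common culprit on flat
# industrial networks built around unmanaged switches.
sniff(filter="broadcast or multicast", prn=show, count=20)
```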

Introduction to Networks with KevTechify on the Cisco Certified Network Associate (CCNA)
Ping and Traceroute Tests - ICMP - Introduction to Networks - CCNA - KevTechify | Podcast 69

Introduction to Networks with KevTechify on the Cisco Certified Network Associate (CCNA)

Play Episode Listen Later Apr 24, 2022 13:13


In this episode we are going to look at Ping and Traceroute Tests. We will be discussing Ping - Test Connectivity, Ping the Loopback, Ping the Default Gateway, Ping a Remote Host, Traceroute - Test the Path, Round-Trip Time (RTT), and finally IPv4 TTL and IPv6 Hop Limit. Thank you so much for listening to this episode of my series on Introduction to Networks for the Cisco Certified Network Associate (CCNA). Once again, I'm Kevin and this is KevTechify. Let's get this adventure started. All my details and contact information can be found on my website, https://KevTechify.com ------------------------------------------------------- Cisco Certified Network Associate (CCNA) Introduction to Networks v1 (ITN) Episode 13 - ICMP Part B - Ping and Traceroute Tests Podcast Number: 69 ------------------------------------------------------- Equipment I like. Home Lab ►► https://kit.co/KevTechify/home-lab Networking Tools ►► https://kit.co/KevTechify/networking-tools Studio Equipment ►► https://kit.co/KevTechify/studio-equipment
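As a rough illustration of the round-trip idea behind ping's RTT numbers, here is a sketch that times a TCP handshake instead of an ICMP echo, so no raw-socket privileges are needed. The host and port are arbitrary stand-ins, and this is our example rather than anything from the episode:

```python
import socket
import time

def tcp_rtt_ms(host, port=443, timeout=2.0):
    # Time a TCP handshake: one SYN out and one SYN/ACK back is roughly
    # the same round trip that ping measures with ICMP echo.
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

for _ in range(4):  # ping-style: four probes
    print(f"rtt = {tcp_rtt_ms('example.com'):.1f} ms")
```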

Traceroute
Episode 7: Compute

Traceroute

Play Episode Listen Later Mar 24, 2022 30:15


The invisible bones holding up the Internet are its hardware. One of the most prominent benefits we are reaping from hardware innovations is cloud services. And as you may have guessed, the cloud isn't actually just somewhere up in space: physical data centers are necessary to keep it up and running. In this episode of Traceroute, we take a closer look at hardware and why its advancement is crucial to the development of the internet. We discuss the importance and benefits of optimizing hardware to suit the needs of software. Joined by our guests Amir Michael, Rose Schooler, and Ken Patchett, we explore the synergy of software and hardware in data center services and its effects on the connected world. Episode Highlights The Important Relationship Between Hardware and Software Efficiency depends on understanding how software uses hardware and vice versa. Software consumes energy just like hardware, depending on the way it's written. People want software and hardware “out of sight/out of mind,” but hardware is increasing in visibility due to data centers and the cloud. As the internet grows, so does the need for better hardware. Amir Michael: “There are thousands of people at large companies that are driving not only the design of the hardware, but the supply chains behind them as well. And if you just look at the financial reporting from these companies, they spend billions and billions of dollars on infrastructure.” The Building Blocks Of Getting Online Intel started in 1968, specializing in bulky but efficient memory chips. Now they lay transistors on top of atoms. Microprocessors are in every device now, from cell phones to servers to routers, making foundational microprocessor capability critical. The biggest breakthrough came when Intel was able to use their infrastructure to support networking, and could then scale up to data centers and cloud architecture. This began the transformation of networking, with storage moving from big fixed-function hardware over to software-defined. More growth in hardware is on the horizon with things like Artificial Intelligence, 5G, and edge computing. The Birth Of The Cloud The “Metal Rush” of the early 2000s saw companies like Google and Yahoo building their own data centers. For smaller companies, this infrastructure development didn't make sense. Small businesses turned to companies like Amazon, which had server resources to spare, and the cloud was born. Data centers have scaled in size, but now the need is to optimize efficiency. More and more, hardware is now tailored for specific software applications. Unlike software, developing hardware requires a longer production schedule and a more consistent supply chain, which can be difficult. The next step is density, where more computing power is packed into less space but with greater efficiencies. Amir Michael: “You know, no one really goes into a bank anymore. Everything's just done over the network, over these cloud resources, today. It's how we've become accustomed to getting a lot of work done today. And so you need all that infrastructure to drive that. And I think it's just going to become more and more so in the future as well.”
The Nuts & Bolts Of Data Centers The cloud is simply a combination of data centers of various sizes across the globe that are all connected through a network. The first data centers relied on redundancy and stability, so they were built like bomb shelters with backup systems. Data centers started redesigning hardware to optimize it for different uses, depending on who's renting the server space. Open compute is the next phase for data centers, where engineers figure out how to get bigger, better, faster and more resilient with existing servers and components. Ken Patchett: “Data and the usage of data has become much like a microwave in a home, it is simply required, is expected. Most people don't look for it, they don't need it,...

Traceroute
Episode 6: Sustainability

Traceroute

Play Episode Listen Later Mar 17, 2022 33:49


Technology is a staple part of our lives. Its continuous growth has improved the world in countless ways. But what most people don't know is the environmental impact of something as mundane as streaming a video. In this episode, we discuss the impacts of data storage, technology, and the Internet on our world. Ali Fenn, David Mytton, and Jonathan Koomey share their insights on investing in sustainability and transitioning to more efficient energy sources. The key to global sustainability lies in the hands of the data storage and technology industries. They need to find greener, more sustainable alternatives. If you want to learn about the Internet's environmental impacts and know how you can contribute to investing in sustainability, then this episode of the Traceroute podcast is for you. Episode Highlights [01:23] Areas For Infrastructure Sustainability: The demand for increased data storage grows globally and daily. Data centers need more compact and more efficient transistors to decrease their harmful effects on the environment while still providing good service. Ali Fenn, the president of ITRenew, says we should focus on energy, materials, and the manufacturing process for infrastructure sustainability. Spewing a ton of waste on the back end is also alarming. It's vital to consider environmental sustainability for the future of the Internet infrastructure industry. Ali Fenn: “The manufacturing process has this huge carbon impact. So let's think about a less wasteful, less linear stream, and let's at least maximize the value we can get out of all that stuff.” [04:53] Investing in Sustainability by Reusing Materials: Ali didn't think much about the environmental impact of technology infrastructure until she worked at ITRenew, which promotes the reuse of data center hardware. The demand for infrastructure is spurred by hyperscalers, like Google and Facebook. Open hardware is becoming the norm, maximizing the value and longevity of hardware through repurposing and reusing. Open hardware allows ITRenew to grow, buyers to get quality equipment, and hyperscalers to improve their sustainability. A circular economy is about deferring new manufacturing from a carbon perspective without sacrificing quality. Tune in to the full episode to hear Ali's analogy about reusing materials using second-hand cars. [10:23] Data Center Energy Consumption: Other concerns for investing in sustainability include electricity, materials, and water consumption. The primary resource for Internet usage is electricity. The rapid growth of technology and the Internet leads to colossal consumption of our natural resources and poses a significant threat to the environment. Estimates of total data center energy consumption range from 200 to 500 terawatt-hours. Data centers are more efficient now, and the world is transitioning to cloud computing. [14:48] Three Steps for Greener Data Centers: While data centers have made impressive steps in reducing their carbon impact, there are three steps they can take to become greener. The first step is to offset all the carbon they emit through electricity generation. Next, match all electricity usage with 100% renewables. Although this is a good step, it may not be sufficient, as data centers still require a local electricity grid. Lastly, use 100% clean energy through power-purchase agreements to gain renewable electricity sources. Governments can encourage companies to move in this direction.
[16:33] Switching to Efficient Infrastructures: David Mytton: “Improvements in their facilities mean that they are able to invest in efficiencies.” Many companies are moving in this direction to save money and commit to social and corporate responsibility. Scale still matters in this situation. With sustainability in mind, these companies benefit from their scale and can invest in new programs. Investing in efficient infrastructure may not be affordable for smaller...

David Bombal
#360: Traceroute explained // Featuring Elon Musk // Demo with Windows, Linux, macOS

David Bombal

Play Episode Listen Later Mar 10, 2022 22:35


Does Elon Musk actually understand how the Internet works? Can he explain traceroute and tracert properly? Well... let's see... I'll demonstrate how multiple operating systems (Windows 11, macOS, Linux) use traceroute. There are differences, including the fact that Windows uses ICMP while macOS and Linux use UDP and ICMP. Full Elon Musk Interview: https://youtu.be/jvGnw1sHh9M // MENU // 0:00 ▶️ Introduction 0:08 ▶️ Elon Musk Babylon Bee interview video 1:11 ▶️ How trace route works 1:40 ▶️ What is ping? 1:48 ▶️ Internet Control Message Protocol (ICMP) 2:32 ▶️ How trace route (tracert) works on Windows 3:50 ▶️ What is a router? 4:10 ▶️ Wireshark packet captures 5:21 ▶️ Time To Live (TTL) 10:18 ▶️ Domain lookup using Whois 10:55 ▶️ Time To Live (TTL) (cont'd) 12:10 ▶️ Trace route phone application 13:43 ▶️ Submarine cable map 15:22 ▶️ Traceroute on MacOS 18:34 ▶️ UDP explanation 19:56 ▶️ Traceroute on Linux 21:42 ▶️ Conclusion // iPhone App I used // Name: Network Analyzer Link: https://apps.apple.com/us/app/network... // SOCIAL // Discord: https://discord.com/invite/usKSyzb Twitter: https://www.twitter.com/davidbombal Instagram: https://www.instagram.com/davidbombal LinkedIn: https://www.linkedin.com/in/davidbombal Facebook: https://www.facebook.com/davidbombal.co TikTok: http://tiktok.com/@davidbombal YouTube: https://www.youtube.com/davidbombal // MY STUFF // Monitor: https://amzn.to/3yyF74Y More stuff: https://www.amazon.com/shop/davidbombal // SPONSORS // Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com
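One stop on the menu above is the Whois lookup. WHOIS is just a line-based text exchange over TCP port 43 (RFC 3912), so a client fits in a few lines. A minimal sketch of ours, not from the video; the server and domain are illustrative choices:

```python
import socket

def whois(domain, server="whois.iana.org"):
    # RFC 3912: open TCP/43, send the query plus CRLF, read until EOF.
    with socket.create_connection((server, 43), timeout=5.0) as s:
        s.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

# IANA's server replies with a referral to the registry for the TLD.
print(whois("example.com"))
```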

Traceroute
Episode 5: Open Source

Traceroute

Play Episode Listen Later Mar 10, 2022 33:36


There is tension between the digital and the physical development spaces. As the world becomes more digital, the distance between software and hardware widens. Only a few people are attempting to bridge the gap. Unspoken competition, gatekeeping, differences in perspective — these reasons and more push experts from the software and hardware spaces apart. But open source is the key to furthering collaboration and innovation in technology development. In this episode of Traceroute, we look deeper into the digital space and how it intrinsically connects to physical hardware. Joining us today are open-source advocates Jon Masters and Brian Fox. They share with us their insights on hardware and software proprietary rights. They also provide context on open-source technology and how vital open source is for innovation and increasing opportunities. If you are someone looking to explore open source technology, then this episode of the Traceroute podcast might be perfect for you! Episode Highlights [1:50] Behind The Scenes In The Digital Space: The utilities we use daily — like water and electric appliances — are built to meet exacting standards to ensure user-friendliness. Similarly, tech companies build digital infrastructures that most computer users can easily utilize. Jon Masters: “We build very boring, elaborate standards so that the average user, if they don't want to, doesn't have to understand every layer of what's going on.” [03:30] How Open Source Ties Software And Hardware Together: Many people in the tech space tend to focus on either the physical or digital aspects of technology. Not being able to grasp the hardware that supports software can be a lost opportunity. Knowing the hardware that goes with your software, and how they intertwine, can bring many opportunities. The software industry, especially the internet, requires a durable physical backbone. Likewise, hardware can only evolve with new software developments. The reawakening to hardware development mirrors the early stages of the open-source software space. Jon Masters: “If you look at where the industry is going right now, hardware and software, they were always important counterparts to one another.” [06:34] The Definition of Software: Software is a symbolic way of writing ideas. Similar to the English language, it employs semantics to express the developer's collection of ideas. Software technology aims to develop a space that allows computers to perform several tasks simultaneously. To achieve that kind of computing platform, processors have to undergo time slicing. An operating system manages the software that runs on a computer, as well as access to hardware devices. Essentially, it serves as the interface between humans and hardware. [08:47] The Beginning Of The Open Source Movement: Back in the day, students and academics wrote a great deal of code. They shared this code in an effort to further the science. However, the rise of proprietary software ended the open collaboration system of the early days. Not everyone was on board with proprietary software — thus giving birth to the idea of open and free software. Brian Fox: “I'm working on a vision detection system, and I want the other guy who was working on it to also be able to enhance it in the direction that he cares about or that she cares about. And it shouldn't stop me. That way, we can share and collaborate, and the entire science moves up.” [09:37] Free vs. Open Source Software: Both free software and open-source software advocate public access to code. However, the idea behind these software types comes from different places of understanding. Free software does not contain any license that prevents it from being shared across different users. The open source software movement is rooted in an ethical understanding that formulas should not be restricted. Anyone can join the open source...

Traceroute
Episode 1: Interconnection

Traceroute

Play Episode Listen Later Feb 24, 2022 36:34


The invention of the internet can be traced to its origins in military and academic use. Since then, we've made huge leaps in communication and interconnectivity. Greater interconnectivity has changed the game for building networks between people. The projects that began in 1966 have fundamentally altered communication practices all over the world. In the first episode of Traceroute, we go back to the start of the Cold War. What was the initial purpose of computer networking? How has it changed over time? We'll answer these questions with insights from Jay Adelson, Sharon Weinberger, John Morris, and Peter Van Camp. In this episode, we'll discover how the very nature of digital communication evolved and continues to evolve today. One major contribution to the interconnectivity we enjoy today is the neutral exchange framework spearheaded by Equinix. Episode Highlights [02:46] DARPA and Improving Interconnectivity: The Defense Advanced Research Projects Agency was created in response to the panic caused by the Soviet Union's Sputnik, the first artificial satellite in the world. DARPA had a broad mandate to take on research projects as directed by the Secretary of Defense. It tried to create new technologies to keep the Pentagon and the military ahead of the Soviets. DARPA's priorities were space and defense research. However, it also had to consider effective communication and improving interconnectivity. [04:24] The Birth of ARPANET: One of the research projects funded by DARPA was ARPANET. The concept of computer networking was new, but it improved interconnectivity within the organization. In the early days of computers, DARPA hired J.C.R. Licklider. He became fundamental to inventing the internet. Sharon Weinberger: “He sort of looked ahead and said, the way that we work with computers is going to fundamentally change our society.” Their proposal became a prototype. 1969 saw the first instance of two computers being connected, and the first message delivered over ARPANET was sent. It was a struggle to convince people of the benefits of greater interconnectivity. The project's funding was almost cut due to lack of support. [07:41] Interconnecting People: More people realized that having interconnected systems had applications outside military use. The internet left DARPA's hands in the 90s, becoming commercially viable and consumer-friendly. But we can't overlook its military legacy. J.C.R. Licklider's hand in inventing the internet also cannot be overstated. ARPANET is an example of a successful collaboration between the government and private sector. [09:36] Traffic in the Open Web: John Morris: “Back in the '80s, commercial communications were prohibited on the internet. The internet was only for government and academic communication.” The internet's evolution to how we know it today started when it was decentralized from government control. Connection points soon became congested and created traffic in physical telecommunication networks. More importantly, opportunities online led to commercial growth and the need for regulation. [13:07] The Telecommunications Act of 1996: The main focus of the legislation was to generate competition among phone companies. It also created an opportunity for CLECs (competitive local exchange carriers). They could deliver better connectivity and services to a user through higher-speed internet. This development led to the birth of broadband internet. It also increased the need for physical connection points to maintain efficient interconnectivity between devices. The '96 Telecommunications Act enabled private organizations separate from phone companies to run exchange points. Competition between phone companies produced the neutral exchange points that laid the groundwork for the internet today. [16:06] A Faster, Decentralized Internet: Cable companies entering the competition for providing internet access opened the debate for open...

Traceroute
Episode 3: Networks

Traceroute

Play Episode Listen Later Feb 24, 2022 35:10


When we open web browsers and streaming services, we expect them to work seamlessly, without interruptions. Sounds basic enough, right? But have you considered how much data goes over your local network? Now imagine all the computers communicating worldwide! It took years for internet service providers to make the internet work the way it does today. Without the physical infrastructure underpinning our networks, connecting computers the way they are connected now would have been impossible. In this episode, Dave Temkin, Ingrid Burrington, Jack Waters, and Andrew Blum join us to discuss how the internet works. They detail the hidden infrastructure involved in getting computers connected around the world. Contrary to what digital natives might think, your connection to the World Wide Web isn't 100% wireless. They also discuss the rise of Netflix and the need for an interconnected and open global network. If you want to understand the massive network of physical infrastructure required to connect computers worldwide, then this episode of the Traceroute podcast is for you. Episode Highlights [01:15] Netflix's Goal and Challenge: Dave Temkin: “We always knew that streaming was going to be the future. It's not a coincidence that the company was called Netflix; the intention was always to deliver it over the network. We just needed to feel that the network was ready.” Netflix, the global streaming service that allows uninterrupted streaming, took years to build. The infrastructure needed to be scalable to a point where it can serve millions of users without breaking the internet. The key to solving this data transmission challenge is networks. [3:12] What is a Network? Networks are overlapping and interconnecting things. These can be virtually or physically tied together. The networks that let the internet work require the support of physical infrastructure. Acknowledging this fact helps us understand that the internet is a public resource. People don't see internet infrastructure as public works. Network infrastructure includes data centers, towers, and all the wires, cables, and fibers that connect them. [5:47] How the Network Market Grew: After the government relaxed regulations in the 1990s, there was a big wave of infrastructure development. For example, Williams, an oil and gas company, built fiber networks using their non-operational oil and gas pipelines. Developers built many fiber networks far beyond the demand of the time. Many of these infrastructures are still in use today. [6:58] Interconnection and Resiliency of Networks: Most people will only think about their own network. In reality, a larger computer network of interconnected cables is the basis of how the internet works. Interconnectivity forms the basis of maintaining a stable internet connection. Hundreds of interconnected cables ensure that computer networks are durable and resilient. Ingrid Burrington: “There is a resiliency built into the way that Internet networks function in that it's not just like one single cable that gets cut and everyone loses their internet access.” [8:18] Level 3's Legacy: Physical linkages are necessary to make the internet work. Many people don't think about this equipment. For Level 3, internet infrastructure needed to be built from scratch but still have the space for upgrades. The company built 16,500 miles of network in the United States and 3,500 miles in Europe in 30 months. Before this network was constructed, the internet ran largely on the legacy of the telephone network. The demand for the networks Level 3 built did not surface until the late 2000s. While they missed the timing, their legacy remains. [14:38] How The Internet Has Changed: The emergence of smartphones helped dramatically change the internet's landscape. We now favor the cloud, triggering the need for hybrid cloud providers. Jack Waters: “I do think it is probably...

Traceroute
Look Out for Traceroute

Traceroute

Play Episode Listen Later Feb 7, 2022 2:57


Let's peer into the halls of history and explore all the stuff that makes up the internet... past, present, and future. Look out for Traceroute, coming February 24th, 2022.

BSD Now
421: ZFS eats CPU

BSD Now

Play Episode Listen Later Sep 23, 2021 50:42


Useless use of GNU, Meet the 2021 FreeBSD GSoC Students, a historical note on Unix portability, a vm86-based Venix emulator, ZFS Mysteriously Eating CPU, traceroute gets a speed boost, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Headlines Useless use of GNU (https://jmmv.dev/2021/08/useless-use-of-gnu.html) Meet the 2021 FreeBSD Google Summer of Code Students (https://freebsdfoundation.org/blog/meet-the-2021-freebsd-google-summer-of-code-students/) News Roundup Large Unix programs were historically not all that portable between Unixes (https://utcc.utoronto.ca/~cks/space/blog/unix/ProgramsVsPortability) References this article: I'm not sure that UNIX won (https://rubenerd.com/im-not-sure-that-unix-won/) A new path: vm86-based venix emulator (http://bsdimp.blogspot.com/2021/08/a-new-path-vm86-based-venix-emulator.html) ZFS Is Mysteriously Eating My CPU (http://www.brendangregg.com/blog/2021-09-06/zfs-is-mysteriously-eating-my-cpu.html) traceroute(8) gets speed boost (http://undeadly.org/cgi?action=article;sid=20210903094704) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Al - TransAtlantic Cables (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/421/feedback/Al%20-%20TransAtlantic%20Cables.md) Christopher - NVMe (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/421/feedback/Christopher%20-%20NVMe.md) JohnnyK - Vivaldi (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/421/feedback/JohnnyK%20-%20Vivaldi.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)

SPICYDOG's TechTalks
SPICYDOG's TechTalks EP 82 - Traceroute

SPICYDOG's TechTalks

Play Episode Listen Later May 19, 2021 27:45


A chat about Traceroute, the tool that shows which hops our Internet packets pass through on the way to a destination. It's great for debugging the network when problems come up. Let's take a look at how it works under the hood.
