Podcasts about the Jolt Award

  • 11 PODCASTS
  • 15 EPISODES
  • 46m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • LATEST: Nov 19, 2024

POPULARITY (2017-2024)


Latest podcast episodes about the Jolt Award

Avanscoperta - Interviews with experts
Gojko Adzic - Lizard Optimization (Avanscoperta Meetup)

Nov 19, 2024 · 28:33


Gojko Adzic - Lizard Optimization

Look who's back! Gojko Adzic will present his new book, Lizard Optimization: Unlock product growth by engaging long-tail users.

As we read on the back cover: "Lizard Optimization is a technique for designing product development experiments by engaging long-tail users that seem to follow some unexplainable 'lizard' logic. It can help you understand your audience better and improve your products. The method is based on the author's experience managing an online software product that grew explosively from November 2021 to November 2022. The key user metric, tracking when people are getting value from the product, increased by more than 500 times in those 12 months (times, not percent). This happened after a period of unremarkable growth and a slow decline. Looking back, the key factor in reversing the decline and unlocking exponential growth was a counter-intuitive approach to engaging users. This book is a summary of the key lessons from that crazy growth phase, synthesized into a simple process that you can apply to improve your products, and help you unlock growth, reduce churn and increase revenue."

And as usual, plenty of space for your questions!

Tech Lead Journal
#131 - Data Essentials in Software Architecture - Pramod Sadalage

May 1, 2023 · 60:09


"The notion of transaction, consistency, and ACID compliance are many times tech-imposed. It should be the business that makes the decision. We as technologists should not make that decision."

Pramod Sadalage is a Director at ThoughtWorks and the co-author of the Jolt Award-winning "Refactoring Databases". In this episode, we discussed data essentials in software architecture. Pramod started by explaining why dealing with data is hard in software architecture and which data-related concerns we should think about when making architecture decisions. He then shared the thought process for choosing the right database for our purpose and offered insights on the data modeling differences between SQL and NoSQL. Pramod also touched on the important considerations in managing transactions and the trade-offs between ACID and eventual consistency. Towards the end, Pramod shared practical, step-by-step advice on how to split a monolithic database through database refactoring.

Listen out for: Career Journey - [00:04:23] | Data is Hard - [00:15:57] | Data-Related Architecture Concerns - [00:18:36] | Choosing the Right Database - [00:24:19] | Data Modeling in SQL vs NoSQL - [00:30:28] | Managing Transactions - [00:37:31] | Tradeoff Between ACID & Eventual Consistency - [00:44:06] | Refactoring Database - [00:46:58] | 3 Tech Lead Wisdom - [00:54:58]

_____

Pramod Sadalage's Bio
Pramod Sadalage is Director at ThoughtWorks, where he enjoys the rare role of bridging the divide between database professionals and application developers. In the early 00s he developed techniques to allow relational databases to be designed in an evolutionary manner based on version-controlled schema migrations. He is co-author of Software Architecture: The Hard Parts: Modern Trade-Off Analyses for Distributed Architectures, co-author of Building Evolutionary Architectures: Automated Software Governance, co-author of Refactoring Databases: Evolutionary Database Design, co-author of NoSQL Distilled: A Brief Guide to the Emerging World of Polyglot Persistence, and author of Recipes for Continuous Database Integration, and he continues to speak and write about the insights he and his clients learn.

Follow Pramod Sadalage: Twitter – @pramodsadalage | LinkedIn – linkedin.com/in/pramodsadalage | Website – sadalage.com | Database Refactoring – databaserefactoring.com | DevOps for DBA – devopsfordba.com | Agile Data – agiledata.org

_____

Our Sponsors
Are you looking for new cool swag? Tech Lead Journal now offers swag that you can purchase online. The swag is printed on demand based on your preference and will be delivered safely to you anywhere in the world where shipping is available. Check out all the cool swag available by visiting techleadjournal.dev/shop. And don't forget to show it off once you receive it.

Like this episode? Show notes & transcript: techleadjournal.dev/episodes/131. Follow @techleadjournal on LinkedIn, Twitter, and Instagram. Buy me a coffee or become a patron.

Tech Lead Journal
#89 - Code That Fits in Your Head - Mark Seemann

May 23, 2022 · 54:46


"The goal of software is often to sustain an organization. An organization invests in software in order to achieve some goal and hopefully to sustain itself in helping it achieve that goal."

Mark Seemann is an acclaimed author, international speaker, and a highly experienced developer. In this episode, Mark shared insights from his latest book, "Code That Fits in Your Head", on how to write sustainable software and manage software complexity. Mark first shared why he wrote this book and explained why software development is hard. He also pointed out the difference between software engineering and other physical engineering disciplines, especially in their sets of constraints. Mark then explained the importance of writing sustainable software and shared the perspective that code is a liability instead of an asset. Towards the end, Mark shared the Rule of 7 as a guideline for managing code complexity and a few practices we can use to build sustainable software, such as checklists, vertical slices, x-driven development, and command query separation.

Listen out for: Career Journey - [00:06:26] | Code That Fits in Your Head - [00:07:49] | Software Development is Hard - [00:10:55] | Software Engineering vs Physical Engineering - [00:15:01] | Sustainable Software - [00:17:58] | Code is a Liability - [00:19:55] | Rule of 7 - [00:22:43] | Checklist - [00:31:23] | Vertical Slice - [00:35:52] | X-Driven Development - [00:39:47] | Command Query Separation - [00:45:07] | 3 Tech Lead Wisdom - [00:49:38]

_____

Mark Seemann's Bio
Mark Seemann is a bad economist who's found a second career as a programmer, and he has worked as a web and enterprise developer since the late 1990s. As a young man, Mark wanted to become a rockstar, but unfortunately had neither the talent nor the looks – later, however, he became a Certified Rockstar Developer. He has also written a Jolt Award-winning book about Dependency Injection, given more than 100 international conference talks, and authored video courses for both Pluralsight and Clean Coders. He has regularly published blog posts since 2006. He lives in Copenhagen with his wife and two children.

Follow Mark: Website – https://blog.ploeh.dk | Twitter – @ploeh | LinkedIn – https://www.linkedin.com/in/ploeh

Our Sponsor
Today's episode is proudly sponsored by Skills Matter, the global community and events platform for software professionals. Skills Matter is an easier way for technologists to grow their careers by connecting you and your peers with best-in-class tech industry experts and communities. You get on-demand access to their latest content and thought leadership insights, as well as an exciting schedule of tech events running across all time zones. Head on over to skillsmatter.com to become part of the tech community that matters most to you - it's free to join and easy to keep up with the latest tech trends.

Like this episode? Subscribe on your favorite podcast app and submit your feedback. Follow @techleadjournal on LinkedIn, Twitter, and Instagram. Pledge your support by becoming a patron. For more info about the episode (including quotes and transcript), visit techleadjournal.dev/episodes/89.

The Idealcast with Gene Kim by IT Revolution
Behind The State of DevOps Research, Favorite Aha Moments, and Where They Are Now: Interviews with The DevOps Handbook Coauthors (Part 2 of 2: Dr. Nicole Forsgren and Jez Humble)

Jan 27, 2022 · 89:34


In part two of this two-part episode on The DevOps Handbook, Second Edition, Gene Kim speaks with coauthors Dr. Nicole Forsgren and Jez Humble about the past and current state of DevOps. Forsgren and Humble share with Kim their DevOps aha moments and the most interesting things they've learned since the book was released in 2016. Jez discusses the architectural properties of the programming language PHP and what it has in common with ASP.NET. He also talks about the anguish he felt when Mike Nygard's book, Release It!, was published while he was working on his own book, Continuous Delivery. Forsgren talks about how it feels to see the findings from the State of DevOps research so widely used and cited within the technology community. She explains the importance of finding the link between technology performance and organizational performance, as well as what she's learned about the importance of culture and how it can make or break an organization. Humble, Forsgren, and Kim each share their favorite case studies in The DevOps Handbook.

ABOUT THE GUEST(S)
Dr. Nicole Forsgren and Jez Humble are two of the five coauthors of The DevOps Handbook, along with Gene Kim, Patrick Debois and John Willis. Forsgren, PhD, is a Partner at Microsoft Research. She is coauthor of the Shingo Publication Award-winning book Accelerate: The Science of Lean Software and The DevOps Handbook, 2nd Ed., and is best known as lead investigator on the largest DevOps studies to date. She has been a successful entrepreneur (with an exit to Google), professor, performance engineer, and sysadmin. Her work has been published in several peer-reviewed journals. Humble is co-author of Lean Enterprise, the Jolt Award-winning Continuous Delivery, and The DevOps Handbook. He has spent his career tinkering with code, infrastructure, and product development in companies of varying sizes across three continents, most recently working for the US Federal Government at 18F. As well as serving as DORA's CTO, Jez teaches at UC Berkeley.

YOU'LL LEARN ABOUT
  • Projects Jez and Gene worked on together before The DevOps Handbook came out.
  • What life is like for Jez as a site reliability engineer at Google and what he's learned.
  • The story behind his DevOps aha moment in 2004, working on a large software project involving 70 developers.
  • The architectural properties of his favorite programming language PHP, what it has in common with ASP.NET, and the importance of being able to get fast feedback while building something.
  • The anguish that Jez felt when Mike Nygard's book, Release It!, came out, wondering if there was still a need for the book he was working on, which was Continuous Delivery.
  • "Testing on the Toilet" and other structures for creating distributed learning across an organization, and why this is important to create a genuine learning dynamic.
  • What Dr. Forsgren is working on now as a Partner at Microsoft Research.
  • Some of Dr. Forsgren's goals as we work together on the State of DevOps research, and how it feels to have those findings so widely used and cited within the technology community.
  • The importance of finding the link between technology performance and organizational performance, and why it probably was so elusive for at least 40 years in the research community.
  • What Dr. Forsgren has learned about the importance of culture, how it can make or break an organization, and the importance of great leadership.
RESOURCES
  • Personal DevOps Aha Moments, the Rise of Infrastructure, and the DevOps Enterprise Scenius: Interviews with The DevOps Handbook Coauthors (Part 1 of 2: Patrick Debois and John Willis)
  • The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations, Second Edition, by Gene Kim, Patrick Debois, John Willis, Jez Humble, and Dr. Nicole Forsgren
  • Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard H. Thaler and Cass R. Sunstein
  • Nudge vs Shove: A Conversation With Richard Thaler
  • The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps by Kevin Behr, Gene Kim and George Spafford
  • FlowCon
  • Elisabeth Hendrickson on the Idealcast: Part 1, Part 2
  • Cloud Run
  • Beyond Goldilocks Reliability by Narayan Desai, Google
  • Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble and David Farley
  • Release It!: Design and Deploy Production-Ready Software (Pragmatic Programmers) by Michael T. Nygard
  • DevOps Days
  • On the Care and Feeding of Feedback Cycles by Elisabeth Hendrickson at FlowCon San Francisco 2013
  • Bret Victor
  • Inventing on Principle by Bret Victor
  • Media for Thinking the Unthinkable
  • Douglas Engelbart and The Mother of All Demos
  • 18F
  • Pain Is Over, If You Want It at DevOps Enterprise Summit - San Francisco 2015
  • Goto Fail, Heartbleed, and Unit Testing Culture by Mike Bland
  • Do Developers Discover New Tools On The Toilet? by Emerson Murphy-Hill, Edward Smith, Caitlin Sadowski, Ciera Jaspan, Collin Winter, Matthew Jorde, Andrea Knight, Andrew Trenk and Steve Gross (PDF)
  • Study: DevOps Can Create Competitive Advantage
  • DevOps Means Business by Nicole Forsgren Velasquez, Jez Humble, Nigel Kersten and Gene Kim
  • Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations by Nicole Forsgren, PhD, Jez Humble, and Gene Kim
  • DevOps Research and Assessment (DORA) on Google Cloud
  • GitLab Inc. takes The DevOps Platform public
  • Paul Strassmann
  • The Idealcast with Dr. Ron Westrum: Part 1, Part 2
  • Building the Circle of Faith: How Corporate Culture Builds Trust at Trajectory Conference 2021
  • The Truth About Burnout: How Organizations Cause Personal Stress and What to Do About It by Christina Maslach and Michael P. Leiter
  • Maslach Burnout Inventory
  • Understanding Job Burnout at DevOps Enterprise Summit - Las Vegas 2018
  • Understanding Job Burnout at DevOps Enterprise Summit - London 2019
  • Workplace Engagement Panel at DevOps Enterprise Summit - Las Vegas 2019
  • Expert Panel - Workplace Engagement & Countering Employee Burnout at DevOps Enterprise Summit - London 2019
  • The Idealcast with Trent Green
  • Kelly Shortridge's tweets about Gitlab S-1

TIMESTAMPS
[05:22] Intro [05:34] Meet Jez Humble [10:19] What Jez is working on these days [15:56] What informed his book, "Continuous Delivery" [24:02] Assembling the team for the project [26:30] At what point was PHP an important property [31:56] The most surprising thing since The DevOps Handbook came out [35:07] His favorite pattern that went into The DevOps Handbook [43:40] What DevOps worked on in 2021 [44:46] Meet Dr. Nicole Forsgren [50:32] What Dr. Forsgren is working on these days [52:18] What it's like working at Microsoft Research [55:58] The response to the State of DevOps findings [59:18] The most surprising finding since their release [1:05:59] Her favorite pattern that influences performance [1:08:49] How Dr. Forsgren met Dr. Ron Westrum [1:11:06] The most important thing she's learned in this journey [1:14:46] Her favorite case study in The DevOps Handbook [1:19:12] Dr. Christina Maslach and work burnout [1:20:46] More context about the case studies [1:25:32] The Navy case study [1:29:04] Outro

Avanscoperta - Interviews with experts
Gojko Adzic: Facilitating Impact Mapping sessions (Avanscoperta Meetup)

May 27, 2021 · 54:08


Serverless Chats
Episode #97: How Serverless Fits in to the Cyclical Nature of the Industry with Gojko Adzic

Apr 19, 2021 · 63:19


About Gojko Adzic
Gojko Adzic is a partner at Neuri Consulting LLP. He is one of the 2019 AWS Serverless Heroes, the winner of the 2016 European Software Testing Outstanding Achievement Award, and the 2011 Most Influential Agile Testing Professional Award. Gojko's book Specification by Example won the Jolt Award for the best book of 2012, and his blog won the UK Agile Award for the best online publication in 2010. Gojko is a frequent speaker at software development conferences and one of the authors of MindMup and Narakeet. As a consultant, Gojko has helped companies around the world improve their software delivery, from some of the largest financial institutions to small innovative startups. Gojko specializes in agile and lean quality improvement, in particular impact mapping, agile testing, specification by example, and behavior driven development.

Twitter: @gojkoadzic
Narakeet: https://www.narakeet.com
Personal website: https://gojko.net
Watch this video on YouTube: https://youtu.be/kCDDli7uzn8
This episode is sponsored by CBT Nuggets: https://www.cbtnuggets.com/

Transcript

Jeremy: Hi everyone, I'm Jeremy Daly and this is Serverless Chats. Today my guest is Gojko Adzic. Hey Gojko, thanks for joining me.

Gojko: Hey, thanks for inviting me.

Jeremy: You are a partner at Neuri Consulting, you're an AWS Serverless Hero, you've written I think, what? I think 6,842 books or something like that about technology and serverless and all that kind of stuff. I'd love it if you could tell listeners a little bit about your background and what you've been working on lately.

Gojko: I'm a developer. I started developing software when I was six and a half. My dad bought a Commodore 64 and I think my mom would have kicked him out of the house if he told her that he bought it for himself, so it was officially for me.

Jeremy: Nice.

Gojko: And I was the only kid in the neighborhood that had a computer, but didn't have any ways of loading games on it because he didn't buy it for games. I stayed up and copied and pasted PEEKs and POKEs in a book I couldn't even understand until I made the computer make weird sounds and print rubbish on the screen. And that's my background. Basically, ever since, I only wanted to build software really. I didn't have any other hobbies or anything like that. Currently, I'm building a product for helping tech people who are not video editing professionals create videos very easily. Previously, I've done a lot of work around consulting. I've built a product that is used by millions of school children worldwide to collaborate and brainstorm through mind-mapping. And since 2016, most of my development work has been on Lambda and on team stuff.

Jeremy: That's awesome. I joke a little bit about the number of books that you wrote, but the ones that you have, one of them's called Running Serverless. I think that was maybe two years ago. That is an excellent book for people getting started with serverless. And then, one of my probably favorite books is Humans Vs Computers. I just love that collection of tales of all these things where humans just build really bad interfaces into software and just things go terribly.

Gojko: Thank you very much. I enjoyed writing that book a lot. One of my passions is finding edge cases. I think people with a slight OCD like to find edge cases and in order to be a good developer, I think somebody really needs to have that kind of intent, and really look for edge cases everywhere.
And I think collecting these things was my idea to help people first of all think about building better software, and to realize that stuff we might glance over like, nobody's ever going to do this, actually might cause hundreds of millions of dollars of damage ten years later. And thanks very much for liking the book.Jeremy: If people haven't read that book, I don't know, when did that come out? Maybe 2016? 2015?Gojko: Yeah, five or six years ago, I think.Jeremy: Yeah. It's still completely relevant now though and there's just so many great examples in there, and I don't want to spent the whole time talking about that book, but if you haven't read it, go check it out because it's these crazy things like police officers entering in no plates whenever they're giving parking tickets. And then, when somebody actually gets that, ends up with thousands of parking tickets, and it's just crazy stuff like that. Or, not using the middle initial or something like that for the name, or the birthdate or whatever it was, and people constantly getting just ... It's a fascinating book. Definitely check that out.But speaking of edge cases and just all this experience that you have just dealing with this idea of, I guess finding the problems with software. Or maybe even better, I guess a good way to put it is finding the limitations that we build into software mostly unknowingly. We do this unknowingly. And you and I were having a conversation the other day and we were talking about way, way back in the 1970s. I was born in the late '70s. I'm old but hopefully not that old. But way back then, time-sharing was a thing where we would basically have just a few large computers and we would have to borrow time against them. And there's a parallel there to what we were doing back then and I think what we're doing now with cloud computing. What are your thoughts on that?Gojko: Yeah, I think absolutely. We are I think going in a slightly cyclic way here. Maybe not cyclic, maybe spirals. We came to the same horizontal position but vertically, we're slightly better than we were. Again, I didn't start working then. I'm like you, I was born in late '70s. I wasn't there when people were doing punch cards and massive mainframes and time-sharing. My first experience came from home PC computers and later PCs. The whole serverless thing, people were disparaging about that when the marketing buzzword came around. I don't remember exactly when serverless became serverless because we were talking about microservices and Lambda was a way to run microservices and execute code on demand. And all of a sudden, I think the JAWS people realized that JAWS is a horrible marketing name, and decided to rename it to serverless. I think it most important, and it was probably 2017 or something like that. 2000 ...Jeremy: Something like that, yeah.Gojko: Something like that. And then, because it is a horrible marketing name, but it's catchy, it caught on and then people were complaining how it's not serverless, it's just somebody else's servers. And I think there's some truth to that, but actually, it's not even somebody else's servers. It really is somebody else's mainframe in a sense. You know in the '70s and early '80s, before the PC revolution, if you wanted to be a small software house or a small product operator, you probably were not running your own data center. What you would do is you would rent it based on paying for time to one of these massive, massive, massive operators. 
And in fact, we ended up with AWS being a massive data center. As far as you and I are concerned, it's just a blob. It's not a collection of computers, it's a data center we learn something from and Google is another one and then Microsoft is another one.And I remember reading a book about Andy Grove who was the CEO of Intel where they were thinking about the market for PC computers in the late '70s when somebody came to them with the idea that they could repurpose what became a 8080 processor. They were doing this I think for some Japanese calculator and then somebody said, "We can attach a screen to this and make this a universal computer and sell it." And they realized maybe there's a market for four or five computers in the world like that. And I think that that's ... You know, we ended up with four or five computers, it's just the definition of a computer changed.Jeremy: Right. I think that's a good point because you think about after the PC revolution, once the web started becoming really big, people started building data centers and collocation facilities like crazy. This is way before the cloud, and everybody was buying racks and Dell was getting really popular because people buying servers from Dell, and installing these in their data centers and doing this. And it just became this massive, whole industry built around doing that. And then you have these few companies that say, "Well, what if we just handled all that stuff for you? Rather than just racking stuff for you," but started just managing the software, and started managing the networking, and the backups, and all this stuff for you? And that's where the cloud was born.But I think you make a really good point where the cloud, whatever it is, Amazon or Google or whatever, you might as well just assume that that's just one big piece of processing that you're renting and you're renting some piece of that. And maybe we have. Maybe we've moved back to this idea where ... Even though everybody's got a massive computer in their pocket now, tons of compute power, in terms of the real business work that's being done, and the real global value, and the things that are powering global commerce and everything else like that, those are starting to move back to run in four, five, massive computers.Gojko: Again, there's a cyclic nature to all of this. I remember reading about the advent of power networks. Because before people had electric power, there were physical machines and movement through physical power, and there were water-powered plants and things like that. And these whole systems of shafts and belts and things like that powering factories. And you had this one kind of power load in a factory that was somewhere in the middle, and then from there, you actually have physical belts, rotating cogs in other buildings, and that was rotating some shafts that were rotating other cogs, and things like that.First of all, when people were able to package up electricity into something that's distributable, and they were running their own small electricity generators next to these big massive machines that were affecting early factories. And one of the first effects of that was they could reuse 30% of their factories better because it was up to 30% of the workspace in the factory that was taken up by all the belts and shafts. And all that movement was producing a lot of air movement and a lot of dust and people were getting sick. 
But now, you just plug a cable and you no longer have all this bad air and you don't have employees going sick and things like that. Things started changing quite a lot and then all of a sudden, you had this completely new revolution where you no longer had to operate your own electric generator. You could just plug in and get power from the network.And I think part of that is again, cyclic, what's happening in our industry now, where, as you said, we were getting machines. I used to make money as a Linux admin a long time ago and I could set up my own servers and things like that. I had a company in 2007 where we were operating our own gaming system, and we actually had physical servers in a physical server room with all the LEDs and lights, and bleeps, and things like that. Around that time, AWS really made it easy to get virtual machines on EC2 and I realized how stupid the whole, let's manage everything ourself is. But, we are getting to the point where people had to run their own generators, and now you can actually just plug into the electricity network. And of course, there is some standardization. Maybe U.S. still has 110 volts and Europe has 220, and we never really get global standardization there.But I assume before that, every factory could run their own voltage they wanted. It was difficult to manufacture for these things but now you have standardization, it's easier for everybody to plug into the ecosystem and then the whole ecosystem emerged. And I think that's partially what's happening now where things like S3 is an API or Lambda is an API. It's basically the electric socket in your wall.Jeremy: Right, and that's that whole Wardley maps idea, they become utilities. And that's the thing where if you look at that from an enterprise standpoint or from a small business standpoint if you're a startup right now and you are ordering servers to put into a data center somewhere unless you're doing something that's specifically for servers, that's just crazy. Use the cloud.Gojko: This product I mentioned that we built for mind mapping, there's only two of us in the whole company. We do everything from presales, to development testing supports, to everything. And we're competing with companies that have several orders of magnitude more employees, and we can actually compete and win because we can benefit from this ecosystem. And I think this is totally wonderful and amazing and for anybody thinking about starting a product, it's easier to start a product now than ever. And, another thing that's totally I think crazy about this whole serverless thing is how in effect we got a bookstore to offer that first.You mentioned the world utility. I remember I was the editor of a magazine in 2001 in Serbia, and we had licensing with IDG to translate some of their content. And I remember working on this kind of piece from I think PC World in the U.S. where they were interviewing Hewlett Packard people about utility computing. And people from Hewlett Packard back then were predicting that in a few years' time, companies would not operate their own stuff, they would use utility and things like that. And it's totally amazing that in order to reach us over there, that had to be something that was already evaluated and tested, and there was probably a prototype and things like that. And you had all these giants. Hewlett Packard in 2001 was an IT giant. Amazon was just up-and-coming then and they were a bookstore then. They were not even anything more than a bookstore. And you had, what? 
A decade later, the tables completely turned where HP's ... I don't know ...Jeremy: I think they bought Compaq at some point too.Gojko: You had all these giants, IBM completely missed it. IBM totally missed ...Jeremy: It really did.Gojko: ... the whole mobile and web and everything revolution. Oracle completely missed it. They're trying to catch up now but fat chance. Really, we are down to just a couple of massive clouds, or whatever that means, that we interact with as we're interacting with electricity sockets now.Jeremy: And going back to that utility comparison, or, not really a comparison. It is a utility now. Compute is offered as a utility. Yes, you can buy and generate compute yourself and you can still do that. And I know a lot of enterprises still will. I think cloud is like 4% of the total IT market or something. It's a fraction of it right now. But just from that utility aspect of it, from your experience, you mentioned you had two people and you built, is it MindMup.com?Gojko: MindMup, yeah.Jeremy: You built that with just two people and you've got tons of people using it. But just from your experience, especially coming from the world of being a Linux administrator, which again, I didn't administer ... Well, I guess I was. I did a lot of work in data centers in my younger days. But, coming from that idea and seeing how companies were building in the past and how companies are still building now, because not every company is still using the cloud, far from it. But not taking advantage of that utility, what are those major disadvantages? How badly do you think that's going to slow companies down that are trying to innovate?Gojko: I can give you a story about MindMup. You mentioned MindMup. When was it? 2018, there was the Intel processor vulnerabilities that were discovered.Jeremy: Right, yes.Gojko: I'm not entirely sure what the year was. A few years ago anyway. We got a email from a concerned university admin when the second one was discovered. The first one made all the news and a month later a second one was discovered. Now everybody knew that, they were in panic and things like that. After the second one was discovered, we got a email from a university admin. And universities are big users, they need to protect the data and things like that. And he was insisting that we tell him what our plan was for mitigating this thing because he knows we're on the cloud.I'm working on European time. The customer was in the U.S., probably somewhere U.S. Pacific because it arrived in the middle of the night. I woke up, I'm still trying to get my head around and drinking coffee and there's this whole sausage CV number that he sent me. I have no idea what it's about. I took that, pasted it into Google to figure out what's going on. The first result I got from Google was that AWS Lambda was already patched. Copy, paste, my day's done. And I assume lots and lots of other people were having a totally different conversation with their IT department that day. 
And that's why I said I think for products like the one I'm building with video and for the MindMup, being able to rent operations as a utility, but really totally rent ops as a utility, not have to worry about anything below my unique business level is really, really important.And yes, we can hire people to work on that it could even end up being slightly cheaper technically but in terms of my time and where my focus goes and my interruptions, I think deploying on a utility platform, whatever that utility platform is, as long as it's reliable, lets me focus on adding value where I can actually add value. That makes my product unique rather than the generic stuff.Jeremy: You mentioned the video product that you're working on too, and something that is really interesting I think too about taking advantage of the cloud is the scalability aspect of it. I remember, it was maybe 2002, maybe 2003, I was running my own little consulting company at the time, and my local high school always has a rivalry football game every Thanksgiving. And I thought it'd be really interesting if I was to stream the audio from the local AM radio station. I set up a server in my office with ReelCast Streaming or something running or whatever it was. And I remember thinking as long as we don't go over 140 subscribers, we'll be okay. Anything over that, it'll probably crash or the bandwidth won't be enough or whatever.Gojko: And that's just one of those things now, if you're doing any type of massive processing or you need bandwidth, bandwidth alone ... I remember T1 lines being great and then all of a sudden it was like, well, now you need a T3 line or something crazy in order to get the bandwidth that you need. Just from that aspect of it, the ability to scale quickly, that just seems like such a huge blocker for companies that need to order provision servers, maybe get a utility company to come in and install more bandwidth for them, and things like that. That's just stuff that's so far out of scope for building a business to me. At least building a software business or building any business. It's crazy.When I was doing consulting, I did a bit of work for what used to be one of the largest telecom companies in the world.Jeremy: Used to be.Gojko: I don't want to name names on a public chat. Somewhere around 2006, '07 let's say, we did a software project where they just needed to deploy it internally. And it took them seven months to provision a bunch of virtual machines to deploy it internally. Seven months.Jeremy: Wow.Gojko: Because of all the red tape and all the bureaucracy and all the wait for capacity and things like that. That's around the time where Amazon when EC2 became commercially available. I remember working with another client and they were waiting for some servers to arrive so they can install more capacity. And I remember just turning on the Amazon console. I didn't have anything useful to running it then but just being able to start up a virtual machine in about, I think it was less than half an hour, but that was totally fascinating back then. Here's a new Linux machine and in less than half an hour, you can use it. And it was totally crazy. Now we're getting to the point where Lambda will start up in less than 10 milliseconds or something like that. Waiting for that kind of capacity is just insane.With the video thing I'm building, because of Corona and all of this remote teaching stuff, for some reason, we ended up getting lots of teachers using the product. 
It was one of these half-baked experiments because I didn't have time to build the full user interface for everything, and I realized that lots of people are using PowerPoint to prepare that kind of video. I thought well, how about if I shorten that loop, so just take your PowerPoint and convert it into video. Just type up what you want in the speaker notes, and we'll use these neurometrics to generate audio and things like that. Teachers like it for one reason or the other.We had this influential blogger from Russia explain it on his video blog and then it got picked up, my best guess from what I could see from Google Translate, some virtual meeting of teachers of Russia where they recommended people to try it out. I woke up the next day, the metrics went totally crazy because a significant portion of teachers in Russia tried my tool overnight in a short space of time. Something like that, I couldn't predict it. It's lovely but as you said, as long as we don't go over a hundred subscribers, we're fine. If I was in a situation like that, the thing would completely crash because it's unexpected. We'd have a thing that's amazingly good for marketing that would be amazingly bad for business because it would crash all our capacity we had. Or we had to prepare for a lot more capacity than we needed, but because this is all running on Lambda, Fargate, and other auto-scaling things, it's just fine. No sweat at all. It was a lovely thing to see actually.Jeremy: You actually have two problems there. If you're not running in the cloud or not running on-demand compute, is the fact that one, you would've potentially failed, things would've fallen over and you would've lost all those potential customers, and you wouldn't have been able to grow.Gojko: Plus you've lost paying customers who are using your systems, who've paid you.Jeremy: Right, that's the other thing too. But, on the other side of that problem would be you can't necessarily anticipate some of those things. What do you do? Over-provision and just hope that maybe someday you'll get whatever? That's the crazy thing where the elasticity piece of the cloud to me, is such a no-brainer. Because I know people always talk about, well, if you have predictable workloads. Well yeah, I know we have predictable workloads for some things, but if you're a startup or you're a business that has like ... Maybe you'd pick up some press. I worked for a company that we picked up some press. We had 10,000 signups in a matter of like 30 seconds and it completely killed our backend, my SQL database. Those are hard to prepare for if you're hosting your own equipment.Gojko: Absolutely, not even if you're hosting your own.Jeremy: Also true, right.Gojko: Before moving to Lambda, the app was deployed to Heroku. That was basically, you need to predict how many virtual machines you need. Yes, it's in the cloud, but if you're running on EC2 and you have your 10, 50, 100 virtual machines, whatever running there, and all of a sudden you get a lot more traffic, will it scale or will it not scale? Have you designed it to scale like that? 
And one of the best things that I think Lambda brought as a constraint was forcing people to design this stuff in a way that scales.Jeremy: Yes.Gojko: I can deploy stuff in the cloud and make it all distributed monolith, so it doesn't really scale well, but with Lambda because it was so constrained when it launched, and this is one thing you mentioned, partially we're losing those constraints now, but it was so constrained when it launched, it was really forcing people to design things that were easy to scale. We had total isolation, there was no way of sharing things, there was no session stickiness and things like that. And then you have to come up with actually good ways of resolving that.I think one of the most challenging things about serverless is that even a Hello World is a distributed transaction processing system, and people don't get that. They think about, well, I had this DigitalOcean five-dollar-a-month server and it was running my, you know, Rails up correctly. I'm just going to use the same ideas to redesign it in Lambda. Yes, you can, but then you're not going to really get the benefits of all of this other stuff. And if you design it as a massively distributed transaction processing system from the start, then yes, it scales like crazy. And it scales up and down and it's lovely, but as Lambda's maturing, I have this slide deck that I've been using since 2016 to talk about Lambda at conferences. And every time I need to do another talk, I pull it out and adjust it a bit. And I have this whole Git history of it because I do markdown to slides and I keep the markdown in Git so I can go back. There's this slide about limitations where originally it's only ... I don't remember what was the time limitation, but something very short.Jeremy: Five minutes originally.Gojko: Yeah, something like that and then it was no PCI compliance and the retries are difficult, and all of this stuff basically became sold. And one of the last things that was there, there was don't even try to put it in a VPC, definitely, you can but it's going to take 10 minutes to start. Now that's reasonably okay as well. One thing that I remember as a really important design constraint was effectively it was a share nothing platform because you could not share data between two Lambdas running at the same time very easily in the same VM. Now that we can connect Lambdas to EFS, you effectively can do that as well. You can have two Lambdas, one writing into an EFS, the other reading the same EFS at the same time. No problem at all. You can pump it into a file and the other thing can just stay in a file and get the data out.As the platform is maturing, I think we're losing some of these design constraints, and sometimes constraints breed creativity. And yes, you still of course can design the system to be good, but it's going to be interesting to see. And this 15-minute limit that we have in Lamdba now is just an artificial number that somebody thought.Jeremy: Yeah, it's arbitrary.Gojko: And at some point when somebody who is important enough asks AWS to give them half-hour Lamdbas, they will get that. Or 24-hour Lambdas. It's going to be interesting to see if Lambda ends up as just another way of running EC2 and starting EC2 that's simpler because you don't have to manage the operating system. 
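To make the EFS point above concrete, here is a minimal sketch of the sharing pattern Gojko describes: two Lambda handlers reading and writing the same mounted file system, possibly at the same time. The mount path and event fields are illustrative assumptions, not details taken from MindMup or the video product.

```python
import json
import os

# Assumed EFS access point mounted on both functions at this path.
MOUNT_PATH = "/mnt/shared"

def writer_handler(event, context):
    """First Lambda: write the payload for a given ID onto the shared volume."""
    item_id = event["itemId"]  # illustrative event field
    path = os.path.join(MOUNT_PATH, f"{item_id}.json")
    with open(path, "w") as f:
        json.dump(event["payload"], f)
    return {"written": path}

def reader_handler(event, context):
    """Second Lambda: read the same file back, even while the writer is running."""
    item_id = event["itemId"]
    path = os.path.join(MOUNT_PATH, f"{item_id}.json")
    with open(path) as f:
        return json.load(f)
```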
And I think the big difference we'll get between EC2 and Lambda is what percentage of ops your developers are responsible for, and what percentage of ops Amazon's developers are responsible for.Because if you look at all these different offerings that Amazon has like Lightsail and EC2 and Fargate and AWS Batch and CodeDeploy, and I don't know how many other things you can run code on in Lambda. The big difference with Lambda is really, at least until very recently was that apart from your application, Amazon is responsible for everything. But now, we're losing design constraints, you can put a Docker container in, you can be responsible for the OS image as well, which is a bit again, interesting to look at.Jeremy: Well, I also wonder too, if you took all those event sources that you can point at Lambda and you add those to Fargate, what's the difference? It seems like they're just merging into two very similar products.Gojko: For the video build platform, the last step runs in Fargate because people are uploading things that are massive, massive, massive for video processing, and just they don't finish in 15 minutes. I have to run to Fargate, and the big difference is the container I packaged up for Fargate takes about 40 seconds to actually deploy. A new event at the moment with the stuff I've packaged in Fargate takes about 40 seconds to deploy. I can optimize that, but I can't optimize it too much. Fargate is still order of magnitude of tens of seconds to process an event. I think as Fargate gets faster and as Lambda gets more of these capabilities, it's going to be very difficult to tell them apart I think.With Fargate, you're intended to manage the container image yourself. You're responsible for patching software, you're responsible for patching OS vulnerabilities and things like that. With Lambda, Amazon, unless you use a container image, Amazon is responsible for that. They come close. When looking at this video building for the first time, I was actually comparing code. I was considering using CodeBuild for that because CodeBuild is also a way to run things on demand and containers, and you actually can get quite decent machines with CodeBuild. And it's also event-driven, and Fargate is event-driven, AWS Batch is event-driven, and all of these things are converging to each other. And really, AWS is famous for having 10 products that do the same thing effectively and you can't tell them apart, and maybe that's where we'll end.Jeremy: And I'm wondering too, the thing that was great about Lambda, at least for me like you said, the shared nothing architecture where it was like, you almost didn't have to rely on anything other than the event that came in, and the processing of that Lambda function. And if you designed your systems well, you may have some bottleneck up front, but especially if you used distributed transactions and you used async invocations of downstream functions, where you could basically take some data that you needed to pass into it, and then you wouldn't necessarily need that to communicate with anything other than itself to process that data. The scale there was massive. You could just keep scaling and scaling and scaling. As you add things like EFS and that adds constraints in terms of the number of transactions and connections that, that can make and all those sort of things. Do these things, do they become less reliable? 
By allowing it to do more, are we building systems that are less reliable because we're not using some of those tried-and-true constraints that were there?Gojko: Possibly, but every time you add a new moving part, you create one more potential point of failure there. And I think for me, one of the big lessons when I was working on ... I spent a few years working on very high throughput transaction processing systems. That's why this whole thing rings a bell a lot. A lot of it really was how do you figure out what type of messages you send and where you send them. The craze of these messages and distributed transaction processing systems in early 2000s, created this whole craze of enterprise service buses later that came. We now have this... What is it called? It's not called enterprise service bus, it's called EventBridge, or something like that.Jeremy: EventBridge, yes.Gojko: That's effectively an enterprise service bus, it's just the enterprise is the Amazon cloud. The big challenge in designing things like that is decoupling. And it's realizing that when you have a complicated system like that, stuff is going to fail. And especially when we were operating around hardware, stuff is going to fail badly or occasionally, and you need to not bring the whole house down where some storage starts working a bit slower. You create circuit breakers, you create layers and layers of stuff that disconnect things. I remember when we were looking originally at Lambdas and trying to get the head around that and experimenting, should one Lambda call another? Or should one Lambda not call another? And things like that.I realized, let's say for now, until we realize we want to do something else, a Lambda should only ever talk to SNS and nothing else. Or SQS or something like that. When one Lambda completes, it's going to track a message somewhere and we need to design these messages to be good so that we can decouple different parts of the process. And so far, that helps too as a constraint. I think very, very few times we have one Lambda calling another. Mostly when we actually need a synchronized response back, and for security reasons, we wanted to isolate something to a single Lambda, but that's effectively just a black box security isolation. Since creating these isolation layers through messages, through queues, through topics, becomes a fundamental part of designing these systems.I remember speaking at the conference to somebody. I forgot the name of the person who was talking about airline. And he was presenting after me and he said, "Look, I can relate to a lot of what you said." And in the airline community basically they often talk about, apparently, I'm not an airline programmer, he told me that in the airline community, talk about designing the protocol being the biggest challenge. Once you design the protocol between your components, the message is who sends what where, you can recover from almost any other design flaw because it's decoupled so if you've made a mess in one Lambda, you can redesign that Lambda, throw it away, rewrite it, decouple things a different way. If the global protocol is good, you get all the flexibility. If you mess up the protocol for communication, then nothing's going to save you at the end.Now we have EFS and Lambda can talk to an EFS. Should this Lambda talk directly to an EFS or should this Lambda just send some messages to a topic, and then some other Lambdas that are maybe reserved, maybe more constrained talk to EFS? 
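A rough sketch of the "a Lambda should only ever talk to SNS or SQS" rule described above: the handler finishes its own step and publishes a message to a topic, and whatever runs next subscribes to that topic instead of being invoked directly. The environment variable, message attribute, and do_work placeholder are illustrative assumptions.

```python
import json
import os

import boto3

sns = boto3.client("sns")
# Hypothetical topic; downstream Lambdas subscribe to it instead of being
# called directly, so producer and consumers stay decoupled.
TOPIC_ARN = os.environ["RESULT_TOPIC_ARN"]

def handler(event, context):
    result = do_work(event)
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps(result),
        MessageAttributes={
            "eventType": {"DataType": "String", "StringValue": "work.completed"}
        },
    )
    return {"status": "published"}

def do_work(event):
    # Placeholder for the actual processing step.
    return {"input": event, "ok": True}
```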
And again, the platform's evolved quite a lot over the last few years. One thing that is particularly useful in that regard is the SQS FIFO queues that came out last year I think. With Corona ...Jeremy: Yeah, whenever it was.Gojko: Yeah, I don't remember if it was last year or two years ago. But one of the things it allows us to do is really run lots and lots of Lambdas in parallel where you can guarantee that no two Lambdas access the same kind of business entity that you have in the same type. For example, for this mind mapping thing, we have lots and lots of people modifying lots and lots of files in parallel, but we need to aggregate a single map. If we have 50 people over here working with a single map and 60 people on a map working a different map, aggregation can run in parallel but I never ever, ever want two people modifying the same map their aggregation to run in parallel.And for Lambda, that was a massive challenge. You had to put Kinesis between Lambda and other Lambdas and things like that. Kinesis' provision capacity, it costs a lot, it doesn't auto-scale. But now with SQS FIFO queues, you can just send a message and you can say the kind of FIFO ID is this map ID that we have. Which means that SQS can run thousands of Lambdas in parallel but they'll never run more than one Lambda for the same map idea at the same time. Designing your protocols like that becomes how you decouple one end of your app that's massively scalable and massively parallel, and another end of your app that we have some reserved capacity or limits.Like for this kind of video thing, the original idea of that was letting me build marketing videos easier and I can't get rid of this accent. Unfortunately, everything I do sounds like I'm threatening someone to blackmail them. I'm like a cheap Bond villain, and that's not good, but I can't do anything else. I can pay other people to do it for me and we used to do that, but then that becomes a big problem when you want to modify tiny things. We paid this lady to professionally record audio for a marketing video that we needed and then six months later, we wanted to change one screen and now the narration is incorrect. And we paid the same woman again. Same equipment, same person, but the sound is totally different because two different equipment.Jeremy: Totally different, right.Gojko: You can't just stitch it up. Then you end up like, okay, do we go and pay for the whole thing again? And I realized the neurometric text-to-speech has learned so much that it can do English better than I can. You're a native English speaker so you can probably defeat those machines, but I can't.Jeremy: I don't know if I could. They're pretty good now. It's kind of scary.Gojko: I started looking at one like why don't they just put stuff in a Markdown and use Markdown to generate videos and things like that? All of these things, you get quota limits still. I thought we were limited on Google. Google gave us something like five requests per second in parallel, and it took me a really long time to even raise these quotas and things like that. I don't want to have lots of people requesting stuff and then in parallel trashing this other thing over there. 
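The FIFO behaviour described above hinges on the message group ID. A minimal sketch of the producer side, assuming a hypothetical queue URL in an environment variable and an illustrative revision field for deduplication, might look like this; SQS will then drive many consumer Lambdas in parallel across different maps but never run two at once for the same map.

```python
import json
import os

import boto3

sqs = boto3.client("sqs")
# Hypothetical FIFO queue (the URL must end in ".fifo").
QUEUE_URL = os.environ["AGGREGATION_QUEUE_URL"]

def enqueue_change(map_id, change):
    """Queue one change for aggregation.

    Using the map ID as the MessageGroupId means changes to different maps
    are processed in parallel, while changes to the same map are processed
    strictly one at a time, in order.
    """
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"mapId": map_id, "change": change}),
        MessageGroupId=str(map_id),
        # Illustrative deduplication ID; content-based deduplication on the
        # queue would work as well.
        MessageDeduplicationId=f"{map_id}-{change['revision']}",
    )
```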
We need to create these layers of running things in a decent limit, and I think that's where I think designing the protocol for this distributed system becomes an importance.Jeremy: I want to go back because I think you bring up a really good point just about a different type of architecture, or the architectural design of decoupling systems and these event-driven things. You mentioned a Lambda function processes something and sends it to SQS or sends it to SNS to it can do a fan-out pattern or in the case of the FIFO queue, doing an ordered pattern for sequential processing, which those were all great patterns. And even things that AWS has done, such as add things like Lambda destination. Now if you run an asynchronous Lambda function, you still have to write some code or you used to have to write some code that said, "When this is finished processing, now call some other component." And there's just another opportunity for failure there. They basically said, "Well, if it succeeds, then you can actually just forward it off to one of these other services automatically and we'll handle all of the retries and all the failures and that kind of stuff."And those things have been added in to basically give you that warm and fuzzy feeling that if an event doesn't reach where it's supposed to go, that some sort of cloud trickery will kick in and make sure that gets processed. But what that is introduced I think is a cognitive overload for a lot of developers that are designing these systems because you're no longer just writing a script that does X, Y, and Z and makes a few database calls. Now you're saying, okay, I've got to write a script that can massively scale and take the transactions that I need to maybe parallelize or that I maybe need to queue or delay or throttle or whatever, and pass those down to another subsystem. And then that subsystem has to pick those up and maybe that has to parallelize those or maybe there are failure modes in there and I've got all these other things that I have to think about.Just that effect on your average developer, I think you and I think about these things. I would consider myself to be a cloud architect, if that's a thing. But essentially, do you see this being I guess a wall for a lot of developers and something that really requires quite a bit of education to ramp them up to be able to start designing these systems?Gojko: One of the topics we touched upon is the cyclic nature of things, and I think we're going back to where moving from apps working on a single machine to client server architectures was a massive brain melt for a lot of people, and three-tier architectures, which is later, we're not just client server, but three-tier architectures ended up with their own host of problems and then design problems and things like that. That's where a lot of these architectural patterns and design patterns emerged like circuit breakers and things like that. I think there's a whole body of knowledge there for people to research. It's not something that's entirely new and I think you can get started with Lambda quite easily and not necessarily make a mess, but make something that won't necessarily scale well and then start improving it later.That's why I was mentioning that earlier in the discussion where, as long as the protocol makes sense, you can salvage almost anything late. Designing that protocol is important, but then we're going to good software design. 
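As a small illustration of the Lambda destinations feature Jeremy mentions, the routing is attached to the function's asynchronous invocation configuration rather than written inside the handler. The function name and topic ARNs below are made up for the example.

```python
import boto3

lambda_client = boto3.client("lambda")

# After this, successful async invocations are forwarded to one SNS topic and
# failures (after the built-in retries) to another, with retries and delivery
# handled by the platform rather than by hand-written forwarding code.
lambda_client.put_function_event_invoke_config(
    FunctionName="process-upload",  # hypothetical function
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnSuccess": {"Destination": "arn:aws:sns:us-east-1:123456789012:upload-processed"},
        "OnFailure": {"Destination": "arn:aws:sns:us-east-1:123456789012:upload-failed"},
    },
)
```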
I think teaching people how to do that is something that every 10 years we have to recycle and reinvent and figure out, because people don't like to read books from more than 10 years ago. All of this stuff, like designing fault-tolerant systems and fail-safe systems and things like that. There's a ton of books about that from 20 years ago, from 10 years ago. Amazon, for people listening to you and me, they probably use Amazon more for compute than they use for getting books. But Amazon has all these books. Use it for what Amazon was originally intended for, get some books there, and read through this stuff. And I think looking at the design of distributed systems and stuff like that becomes really, really critical for Lambdas.
Jeremy: Yeah, definitely. All right, we've got a few minutes left and I'd love to go back to something we were talking about a little bit earlier, and that was everything moving onto a few of these major cloud providers. And one of the things, you've got scale. Scale is a problem when we talked about, oh, we can spin up as many VMs as we want to, and now with serverless, we have unlimited capacity, really. I know we didn't say that, but I think that's the general idea. The cloud just provides this unlimited capacity.
Gojko: Until something else decides it's not unlimited.
Jeremy: And that's my point here, where every major cloud provider that I've been involved with, and I've heard the stories of, where you start to move the needle at all, there's always an SA (solutions architect) that reaches out to you and really wants to understand what your usage is going to be, and what your patterns are going to be. And that's because they need to make sure that where you're running your applications, they provision enough capacity, because there is not enough capacity, or there's not unlimited capacity, in the cloud.
Gojko: It's physically limited. There are only so many buildings where you can have data centers on the surface of Earth.
Jeremy: And I guess that's where my question comes in, because you always hear these things about lock-in. Like, well, serverless, if you use Lambda, you're going to be locked in. And again, if you're using Oracle, you're locked in. Or, you're using MySQL, you're locked in. Or, you're using any of the other things, you're locked in.
Gojko: You're actually not locked in physically. There's a key and a lock.
Jeremy: Right, but this idea of being locked in, not to a specific cloud provider, but just locked into a cloud in general and relying on the cloud to do that scaling for you, where do you think the limitations there are?
Gojko: I think again, going back to cyclic, cyclic, cyclic. The PC revolution started when a lot more edge compute was needed than mainframes could provide, and people wanted to get stuff done on their own devices. And I think probably, if we do ever see the limitations of this and it goes into the next cycle, my best guess is it's going to be driven by lots of tiny devices connected to a cloud. Not necessarily computers as we know computers today. I pulled out some research preparing for this from IDC. They are predicting basically going from 18.3 zettabytes of data needed for IoT in 2019 to 73.1 zettabytes by 2025. That's like three times more in the space of six years. If you went to Amazon now and told them, "You need to have three times more data space in three years," I'm not sure how they would react to that.
This stuff, everything is taking more and more data, and everything is more and more connected to the cloud.
The impact of something like that going down now is becoming totally crazy. There was a case in 2017 where S3 started getting a bit more latency than usual in US East 1, in February of 2017 I think, or something like that. There were cases where people couldn't turn the lights on in their houses, because the management software was running on S3 and depending on S3, expecting S3 to be indestructible. Last year, in November, Kinesis pretty much went offline, as far as everybody outside AWS was concerned, for about 15 hours I think. There were people on Twitter saying they couldn't get back into their house because their smart lock was no longer that smart.
And I think we are getting to places where there will be more need for compute on the edge. First of all, there's going to be a lot more demand for data centers and cloud power, and I think that's going to keep going on for the next five, ten years. But then people will realize they've hit some limitation of that, and they're going to start moving towards the edge. And we're going from mainframe back into client-server computing, I think. We're getting these products now. I assume most of your listeners have seen one, like all these fancy Ubiquiti Wi-Fi thingies that cost hundreds of dollars and look like pieces of furniture just sitting discreetly on the wall. And there was a massive security breach published yesterday. Somebody took their AWS keys and took all the customer data and everything.
The big advantage over all the ugly routers was that it's just a thin piece of glass that sits on your wall, and it's amazing and it looks good, but the reason why they could do a very thin piece of glass is that a minimal amount of software is running on that piece of glass; the rest is running in the cloud. It's not just lock-in in terms of whether it's on Amazon or Google, it's that it's so tightly coupled with something totally outside of your home, where your network router needs Amazon to be alive, now, in a very specific region of Amazon where everybody's been deploying for the last 15 years, and it's running out of capacity very often. Not very often, but often enough.
There are some really interesting questions that I guess we'll answer in the next five, ten years. We're on the verge of IoT exploding, I think, because people are trying to come up with these new products that you wouldn't even have thought of before, like smart shoes and smart whatnot. Smart glasses and things like that. And when that gets into consumer technology, we're no longer going to have five or ten computing devices per person, we'll have dozens and dozens of computing devices. I guess think about it this way: fifteen years ago, how many computing devices were you carrying with you? Probably a mobile phone and a laptop. Probably not more. Now, in the headphones you have there, that's Bose ...
Jeremy: Watch.
Gojko: ... you have a microprocessor in the headphones, you have your watch, you have a ton of other stuff you carry with you that's low-powered, all doing a bit of processing there. A lot of that processing is probably happening on the cloud somewhere.
Jeremy: Or, it's just sending data. It's just sending, hey, here's the information. And you're right. For me, I've got my Apple Watch, my thermostat is connected to Wi-Fi and to the cloud, my wife just bought a humidifier for our living room that is connected to Wi-Fi, and I'm assuming it's sending data to the cloud. I'm not 100% sure, but the question is, I don't know why we need to keep track of the humidity in my living room.
But that's the kind of thing too where, you mentioned from a security standpoint, I have a bunch of AWS access keys on my computer that I send over the network, and I'm assuming they're secure. But if I've got another device that can access my network, and somebody hacked something on the cloud side and then they can get in, it gets really dangerous.
But you're right, the amount of data that we are now generating and the compute that we're using in the cloud for probably some really dumb things, like the humidity in my living room, is that going to get to a point where ... You said there's going to be a limitation in like five years, ten years, whatever it is. What does the cloud do then? What does the cloud do when it can no longer keep up with the pace of these IoT devices?
Gojko: Well, if history is repeating, and we'll see if history is repeating, people will start getting throttled, and all of a sudden your unlimited supply of Lambdas will no longer be an unlimited supply of Lambdas. It will be something that you have to reserve upfront and pay upfront, and who knows, we'll see when we get there. Or we get things that we have with power networks, like you had a Texas power cut there that was completely severe, and you get an IT cut. I don't know. We'll see. The more we go into utility, the more we'll start seeing parallels between compute and power networks. And maybe power networks are something that we can look at and learn from. That's why I think the next cycle is probably going to be some equivalent of client-server computing reemerging.
Jeremy: Yeah. All right, well, I've got one more question for you, and this is just something where it may be a little bit of a tongue-in-cheek question. Because we talked about it a little bit ... we talked about the merging of Lambda, and of Fargate, and some of these other things. But just from your perspective, serverless five years from now, where do you see that going? Do you see that just becoming the main ... This idea of utility computing, on-demand computing without setting up servers and managing ops and some of these other things, do you see that as the future of serverless, and it just becoming the way we build applications? Or do you think that it's got a different path?
Gojko: There was a tweet by Simon Wardley. You mentioned Simon Wardley earlier in the talk. There was a tweet a few days ago where he mentioned some data. I'm not sure where he pulled it from. This might be unverified, but generally Simon knows what he's talking about. Amazon itself is deploying roughly 50% of all new apps they're building on serverless. I think five years from now, that way of running stuff, I'm not sure if it's Lambda or some new service that Amazon starts and gives some even more confusing name that runs in parallel to everything. But that kind of stuff, where the operator takes care of all the ops, which they really should be doing, is going to be the default way of getting utility compute out.
I think a lot of these other things will probably remain useful for specialist use cases, where you can't really deploy it in that way, or you need more stability, or it's not transient, and things like that. My best guess is that, first of all, we'll get Lambdas that run for longer, and I assume that after we get Lambdas that run for longer, we'll probably get some ways of controlling routing to Lambdas, because you can already set up pre-provisioned Lambdas and hot Lambdas and reserved capacity and things like that.
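The pre-provisioned and reserved capacity Gojko mentions already exist as Lambda settings today. A minimal sketch with the AWS SDK for JavaScript v3, using a hypothetical function name and numbers chosen purely for illustration:

```typescript
import {
  LambdaClient,
  PutFunctionConcurrencyCommand,
  PutProvisionedConcurrencyConfigCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

async function configureCapacity(): Promise<void> {
  // Reserved concurrency: cap this function at 50 concurrent executions,
  // which also sets aside that share of the account's concurrency pool for it.
  await lambda.send(
    new PutFunctionConcurrencyCommand({
      FunctionName: "map-aggregator",
      ReservedConcurrentExecutions: 50,
    })
  );

  // Provisioned concurrency: keep 10 execution environments initialized
  // ("hot") for a published alias, so those invocations skip cold starts.
  await lambda.send(
    new PutProvisionedConcurrencyConfigCommand({
      FunctionName: "map-aggregator",
      Qualifier: "live", // must target a version or alias, not $LATEST
      ProvisionedConcurrentExecutions: 10,
    })
  );
}

configureCapacity().catch(console.error);
```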
When you have reserved capacity and you have longer-running Lambdas, the next logical thing there is to have session stickiness, and routing, and things like that. And I think a lot of the stuff that was really complicated to do earlier, where you had to run EC2 instances or complicated networks of services, you'll be able to do in Lambda.
And with Lambda, I wouldn't be surprised if they launch a totally new service, some AWS cloud socket, whatever. Something that is an implementation of the same principle, just in a different way, that becomes a default way of running compute for lots of people. And I think GPUs are still a bit limited. I don't think you can run GPUs as a utility anywhere now, and that's limiting for a whole host of use cases. And I think again, it's not like they don't have the technology to do it, it's just that they probably didn't get around to doing it yet. But I assume in five years' time, you'll be able to do GPUs on demand, and GPU processing, and things like that. I think the buzzword itself will lose really any special meaning, and that's going to just be a way of running stuff.
Jeremy: Yeah, absolutely. Totally agree. Well, listen Gojko, thank you so much for spending the time chatting with me. Always great to talk with you.
Gojko: You, too.
Jeremy: If people want to get in touch with you, find out more about what you're doing, how do they do that?
Gojko: Well, I'm very easy to find online because there are not a lot of people called Gojko. Type Gojko into Google, you'll find me. And gojko.net works, gojko.com works, gojko.org works, and all these other things. I was lucky enough to get all those domains.
Jeremy: That's G-O-J-K-O ...
Gojko: Yes, G-O-J-K-O.
Jeremy: ... for people who need the spelling.
Gojko: Excellent. Well, thanks very much for having me, this was a blast.
Jeremy: All right, yeah. And make sure you check out ... You mentioned Narakeet. It's a speech thing?
Gojko: Yeah, for developers that want to build videos without hassle, and want to put videos in continuous integration, and things like that. Narakeet, that's like parakeet with an N, for narration. Check that out, and thanks for plugging it.
Jeremy: Awesome. And then, check out MindMup as well. Awesome stuff. I've got all the stuff in the show notes. Thanks again, Gojko.
Gojko: Thank you. Bye-bye.

Screaming in the Cloud
Teaching the Cloud Forever with Jez Humble

Screaming in the Cloud

Play Episode Listen Later Oct 29, 2020 30:29


Jez Humble is a developer advocate at Google and a lecturer at UC Berkeley, where he teaches classes on agile software development and product management. Jez brings more than 20 years of experience to these positions, including stints as vice president at Chef and deputy director of delivery architecture and infrastructure services for the federal government’s General Services Administration. Most recently, he founded DevOps Research and Assessment LLC (DORA), which was acquired by Google. Jez is also the author of several books, including the Jolt Award-winning Continuous Delivery. Join Corey and Jez as they talk about the differences between working for large organizations and nimble startups, the wonderful world of NIST, why Jez believes that Google acquired DORA, the five characteristics that, according to NIST, mean you have a cloud, the difference between knowing what you should do vs. actually getting there, how to think about books written about technology, why Silicon Valley is one of the worst places in the world when it comes to the Dunning–Kruger effect, and more.

Google Cloud Platform Podcast
DevOps with Nathen Harvey and Jez Humble

Google Cloud Platform Podcast

Play Episode Listen Later Nov 26, 2019 34:34


Happy Thanksgiving! This week, Aja and Brian are talking DevOps with Nathen Harvey and Jez Humble. Our guests thoroughly explain what DevOps is and why it’s important. DevOps purposely has no official definition but can be thought of as a community of practice that aims to make large-scale systems reliable and secure. It’s also a way to get developers and operations to work together to focus on the needs of the customer. Nathen later tells us all about DevOpsDays, a series of locally organized conferences occurring in cities around the world. The main goal is to bring a cross-functional group of people together to talk about how they can improve IT, DevOps, and business strategy, and consider cultural changes the organization might benefit from. DevOpsDays supports this by only planning content for half the conference, then turning over the other half to attendees via Open Spaces. At this time, conference-goers are welcome to propose a topic and start a conversation. Jez then describes the Accelerate State of DevOps Report, how it came to be, and why it’s so useful. It includes items like building security into the software, testing continuously, ideal management practices, product development practices, and more. With the help of the DevOps Quick Check, you can discover the places your company could use some help and then refer back to the report for suggestions of improvements in those areas.
Nathen Harvey: Nathen Harvey helps the community understand and apply DevOps and SRE practices in the cloud. He is part of the global organizing committee for the DevOpsDays conferences and was a technical reviewer for the 2019 Accelerate State of DevOps Report.
Jez Humble: Jez Humble is co-author of several books on software including Shingo Publication Award winner “Accelerate” and Jolt Award winner “Continuous Delivery”. He has spent his career tinkering with code, infrastructure, and product development in companies of varying sizes across three continents. He works for Google Cloud as a technology advocate and teaches at UC Berkeley.
Cool things of the week: It’s a wrap: Key announcements from Next ‘19 UK (blog), Explainable AI (site), Hand-drawn Graphviz diagrams (blog), Add one line to plot in XKCD comic sketchy style (site)
Interview: DevOps insights from Google (site), DevOps Quick Check (site), DevOpsDays (site), Agile Alliance (site), Velocity Conference (site), DevOps Enterprise Summit (site)
Question of the week: Why do you need the Cloud SQL Proxy?
Where can you find us next? DevOpsDays has events coming up across the globe, including Galway, Warsaw, Berlin, and Tel Aviv. Nathen and Jez will be at Delivery Conf. Aja will be home drinking tea! Brian will also be home drinking tea!

IT Career Energizer
Find Joy in Your Work with Gojko Adzic

IT Career Energizer

Play Episode Listen Later Sep 2, 2018 23:48


Guest Bio: Gojko is a partner at Neuri Consulting. He is the winner of several awards, including the 2016 European Software Testing Outstanding Achievement Award and the Jolt Award for his book, Specification by Example. Gojko is also a frequent speaker at software development conferences.
Episode Description: In this episode, Phil chats with Gojko Adzic about experiencing the joy of coding and programming, but also recognizing the importance of seeing the big picture when it comes to projects. Gojko highlights this by advising people to work "close to the money" to gain a better understanding of how customers use the products they make, and by recounting how his first startup went bankrupt when he got too wrapped up in tracking technical effort instead of product value. Still, he says he can't imagine doing anything other than working in IT.
Key Takeaways:
(1.00) Phil kicks off the interview by asking Gojko more about himself. Gojko talks about writing books and how he got his start developing software, but he always wanted to do more than just "sitting in a development box," as he puts it. He prefers working on projects end-to-end.
(3.21) Phil follows up by asking Gojko for a unique career tip people might not know. Gojko answers with the advice: "stay as close to the money as possible." He goes on to say that he feels like today, IT is often extremely divorced from the customers and users that it is actually making things for. It becomes hard to tell if your work is having an actual impact. Staying close to the money means making sure that the things you do serve a purpose.
(7.45) Phil moves on, asking Gojko about the worst experience of his IT career, and Gojko jokes that it's difficult to pick the "worst." He says the one that made him feel the worst, but was the most important learning experience, came when he was the CTO of a startup: they were good at the technical side of things but had no idea how to measure value and properly run a business, and so they went bankrupt. He was way too focused on measuring effort and not value.
(13.06) Phil switches things up and asks Gojko about his greatest career success so far, and he says that he hopes he hasn't had his greatest success yet. But he's very proud of a product called MindMup, and says that it has helped him rediscover the joy of coding.
(15.00) Phil then asks Gojko what excites him the most about the future of the IT industry, and he says software specifically is the closest thing to magic there is, and that it's incredible that people can make something that has such a huge impact on the real world, essentially out of thin air.
(15.50) Phil starts the Reveal Round by asking Gojko what motivated him to pursue a career in IT, to which he answers that he never wanted to do anything else, quite literally from the age of six onwards.
(17.01) Next, Phil asks Gojko for the best career advice he's ever received. Gojko says it's probably something he read in one of The Pragmatic Programmer books, which was: "don't say no, offer options." Instead of saying that something isn't possible, try to come up with options for things you CAN do instead.
(18.10) Phil then questions Gojko as to what he would do if he were to begin his IT career all over again now, and Gojko answers that he never really wanted to do anything different, but that he would try to switch jobs faster to learn as much as possible about as many different things as he could.
(19.01) On the subject of current career objectives, Gojko talks about writing a new book that's actually about a technique that can be used to solve the problem behind his worst career moment.
(20.03) Phil asks what non-tech skill Gojko has found the most useful during his career, and he responds that he doesn't really differentiate between what's technical and non-technical, but that the idea of selling value and not time was a non-tech thing he learned that has made a major impact on his career.
(21.31) Phil wraps things up by asking Gojko for any parting words of advice for the listeners, and he advises people not to waste time working on things that they don't really care about or find important, and to work on creating things that bring them joy.
Best Moments:
(1.15) Gojko: "I tend to write books to download the stuff in my brain so I can leave more spare capacity for new things."
(5.48) Gojko: "My career advice for people starting here would be to figure out where the money is and stay as close as possible to that because that just cuts through the whole bullshit that most people shouldn't really care about."
(7.24) Phil: "I think it's all about what the end purpose is, rather than the actual solution that gets you there."
(12.50) Gojko: "IT's really nice as an industry because you can make stupid mistakes and learn from them and then kind of pull yourself up."
(14.48) Gojko: "If people feel that their work is dull they should build their own product."
(21.56) Gojko: "You can spend a lot of time building stupid systems nobody cares about and you shouldn't be wasting your life on that. Programming should be a joyful activity."
Contact Gojko Adzic:
Blog: www.gojko.net
LinkedIn: https://www.linkedin.com/in/gojko/
Twitter: https://twitter.com/gojkoadzic (@gojkoadzic)
Latest Book: https://www.amazon.com/Humans-Vs-Computers-Gojko-Adzic/dp/0993088147

Enterprise Marketer Podcast - Conference
James Whittaker on A Brief Introduction to the Future

Enterprise Marketer Podcast - Conference

Play Episode Listen Later Apr 28, 2017 28:05


Apps have eaten the web. The Internet of Things will mean more Internet traffic generated and consumed by machines than by humans. Machines are learning our habits faster than we can form them. What does all this mean for the future? How will we interact with our devices when they are as smart as or smarter than we are? How will anyone manage to make money in this coming world? Join Microsoft Distinguished Engineer James Whittaker for an entertaining and thought-provoking jaunt into the future – a future coming a lot faster than most people imagine.
Biography
James Whittaker’s career spans academia, start-ups and top tech companies, and started in 1986 when he became the first computer science graduate hired by the FBI. James then worked as a freelance developer, most notably for IBM, Ericsson, SAP, Cisco and Microsoft, specializing in test automation. He joined the faculty at the Florida Institute of Technology, where he continued his prolific publication record in software testing and security. In 2002 his security work was spun off by the university into a startup which was later acquired by Raytheon.
James’ first stint at Microsoft was in Trustworthy Computing and Visual Studio. He then joined Google as an engineering director and led teams working on Chrome, Maps and Google+. In 2012 James rejoined Microsoft.
James is known for being a creative and passionate leader and a sought-after speaker and author. Of his five books, two have been Jolt Award finalists and one a best-seller. Follow him on Twitter @docjamesw and at his website docjamesw.com.
Links
Twitter: https://twitter.com/docjamesw
Facebook
LinkedIn: https://www.linkedin.com/in/james-whittaker-22987813/
Instagram
Web: http://docjamesw.com
Corporate: https://umsldigitalconference.com/speakers/james-whittaker/
Book
Long URL: https://enterprisemarketer.com/podcasts/enterprise-marketer-podcast-conference/mdmc-show-47-james-whittaker/
Short URL: http://emktr.co/EMPC47

Devnology Podcast
Devnology Podcast 034 - Gojko Adzic

Devnology Podcast

Play Episode Listen Later Nov 14, 2012 49:11


This month we bring you an interview with Gojko Adzic. Gojko is a frequent speaker at leading software development and testing conferences and runs the UK agile testing user group. Over the last eleven years, he has worked as a developer, architect, technical director and consultant on projects delivering equity and energy trading, mobile positioning, e-commerce, online gaming and complex configuration management. He is the author of several books and articles. In 2012 his book Specification by Example received the Jolt Award. To celebrate the Jolt Award, the publisher of Specification by Example is offering our podcast listeners a 37% discount on the book. Note that Gojko has recently published a new book, Impact Mapping: Making a big impact with software products and projects, which we did not get around to discussing in this interview. Follow Gojko on Twitter via @gojkoadzic or on his site at http://gojko.net/ This interview was recorded on the 8th of October 2012 at the Holiday Inn Express in Amsterdam. Interview by @freekl and @felienne. Audio post-production by @mendelt.
Links for this podcast:
Book: Specification by Example, Gojko Adzic, 2011
Book: Bridging the Communication Gap ('The Blue book'), Gojko Adzic, 2009
Article: Redefining software quality, Gojko Adzic, 2012
Book: Made to Stick, Chip and Dan Heath, 2007

Software Process and Measurement Cast
SPaMCAST 51 - Tim Lister, Adrenaline Junkies and Template Zombies, Change Readiness Assessment Part 1

Software Process and Measurement Cast

Play Episode Listen Later Jan 25, 2009 45:06


Show fifty-one is an interview with Tim Lister discussing his new book, “Adrenaline Junkies and Template Zombies”. The interview discussed the impact of specific patterns and habits on how IT organizations work.
*** NEWS *** Adrenaline Junkies is one of 5 finalists for general computing book of the year.
Tim Lister is a software consultant at the Atlantic Systems Guild, Inc., based in the New York office. He divides his time between consulting, teaching, and writing. Tim is a co-author with his Guild partners of Adrenaline Junkies and Template Zombies: Understanding Patterns of Project Behavior (Dorset House, 2008, http://www.dorsethouse.com/books/ajtz.html). He is also co-author with Tom DeMarco of Waltzing With Bears: Managing Risk on Software Projects (Dorset House, 2003), which won Software Development magazine’s Jolt Award as General Computing Book of the Year for 2003-2004. Tim and Tom are also co-authors of Peopleware: Productive Projects and Teams (Dorset House, 1999), now available in 14 languages.
Tim is currently a member of the Cutter IT Trends Council. He is a member of the I.E.E.E. and the A.C.M. He is in his 23rd year as a panelist for the American Arbitration Association, arbitrating disputes involving software and software services.
Contact information:
Web Site: http://www.systemsguild.com/
Email: lister@acm.org
Check out SPaMCAST’s Facebook page and get involved! http://tinyurl.com/62z5el
The essay is titled “A Really Simple Checklist for Change Readiness Assessment”, Part 1. The essay reminds us of the big things that sometimes get forgotten when dealing with the minutiae of getting a change project off the ground. Check out the text of the current essay at www.tcagley.wordpress.com. I should be back with an essay next show.
There are a number of ways to share your thoughts with SPaMCAST:
•    Email SPaMCAST at spamcastinfo@gmail.com
•    Voice messages can be left at 1-206-888-6111
•    Twitter – www.twitter.com/tcagley
•    BLOG – www.tcagley.wordpress.com
•    FACEBOOK!!!! Software Process and Measurement: http://tinyurl.com/62z5el
Next Software Process and Measurement Cast: The next Software Process and Measurement Cast will feature an interview with Lisa Crispin discussing agile testing. Lisa’s most recent book is “Agile Testing: A Practical Guide for Testers and Agile Teams,” coauthored with Janet Gregory. Testing and agile are highly inter-related, although sometimes understanding how all the parts fit together isn’t obvious. Lisa makes agile testing very clear in her interview. Do not miss it.
The interview on the Software Process and Measurement Cast 51 is with Tim Lister. We discussed Tim's new book “Adrenaline Junkies and Template Zombies”.