Podcasts about AWS Serverless Hero

  • 23 podcasts
  • 42 episodes
  • 40m average episode duration
  • 1 new episode per month
  • Latest episode: Jan 25, 2025

POPULARITY

(popularity chart covering 2017–2024, not reproduced)



Latest podcast episodes about AWS Serverless Hero

Scrum Master Toolbox Podcast
BONUS: Gojko Adzic on Optimizing Products for Long-Tail Users (Agile Online Summit 2024 Replay)

Jan 25, 2025 · 40:11


In this BONUS episode, we revisit Gojko Adzic's insightful interview at the Agile Online Summit 2024. Gojko, an award-winning author and software expert, unpacks the principles behind his latest book, Lizard Optimization, offering a fresh perspective on improving product usability by addressing the needs of long-tail users. From learning from unexpected user behaviors to refining products with a systematic approach, this episode is filled with practical tips for product teams and Agile practitioners.

What is Lizard Optimization?
Drawing from his experiences as a product developer, Gojko introduces the idea of Lizard Optimization. He discusses how observing unexpected user behaviors led him to refine his SaaS tools like Narakeet and MindMup. By focusing on usability challenges and unusual patterns, he has turned serendipity into actionable insights. "Users aren't stupid—they're just finding creative ways to get value from your product. Listen to them." Gojko explains the inspiration behind the metaphor of the "Lizardman constant," a concept from a Scott Alexander blog post. He describes how this principle applies to product optimization: understanding and addressing the 4% of surprising, unexplainable behaviors can uncover opportunities for innovation. "The job isn't to judge users—it's to explore why they're doing what they're doing and how we can help them succeed."

The High-Level Process of Lizard Optimization
Gojko outlines the systematic process described in his book to leverage unexpected user behavior:
• Observe Misuse: Identify how users deviate from expected patterns.
• Extract Insights: Focus on one unexpected behavior as a signal.
• Remove Obstacles: Help users achieve their goals more easily.
• Monitor Impacts: Detect and adjust for unintended consequences.
"Start monitoring for the predictable but unexpected—those hidden gems can unlock your next big feature."

Practical Advice for Product Teams
For teams ready to apply these concepts, Gojko emphasizes the importance of expanding observability tools to include product metrics and not just technical ones. He shares how tracking unpredictable user actions can inspire impactful changes. "About a third of what we do delivers value—focus on finding where unexpected value lies."

Recommended Resources
To dive deeper into these ideas, Gojko recommends:
• Trustworthy Online Controlled Experiments by Ron Kohavi
• Evidence Guided by Tim Herbig
• LizardOptimization.org
"Experimentation and evidence-based decision-making are the keys to building better products."

Closing Thoughts: "Look for the Unexpected"
Gojko's parting advice for Agile practitioners is simple yet powerful: Look for the unexpected. By embracing surprises in user behavior, teams can transform minor inconveniences into major opportunities for growth. "The unexpected is where innovation begins."

About Gojko Adzic
Gojko Adzic is an award-winning author, speaker, and product creator. His books, including Lizard Optimization, Impact Mapping, and Specification by Example, have become essential reads for Agile practitioners and product teams worldwide. Gojko is a 2019 AWS Serverless Hero, the winner of the 2016 European Software Testing Outstanding Achievement Award, and the 2011 Most Influential Agile Testing Professional Award. He has also co-founded several successful SaaS tools, including Narakeet, MindMup, and Votito. You can link with Gojko Adzic on LinkedIn.

Screaming in the Cloud
Replay - Serverless Hero, Got Servers in His Eyes with Ant Stanley

Dec 3, 2024 · 33:37


On this Screaming in the Cloud Replay, we're revisiting our conversation with Co-Founder of Senzo, Ant Stanley. Ant sits down with Corey and offers up the history that has led him from "Serverless Hero" to landing on the line that "serverless sucks." Lend us your ears to see how that transition happened! Ant goes into detail on JeffConf (not of the Bezos nomen), working with servers, and what to put where and why. Ant and Corey talk over the plague of AWS services, where Ant offers his perspective on how to trim the fat and keep things simple to make long-term objectives more attainable. They discuss the importance of training, the role of certifications for better and worse, and more. Tune in for his take!

Show Highlights
(0:00) Intro
(0:51) Duckbill Group sponsor read
(1:24) What does it mean to be an AWS Serverless Hero?
(3:13) Why Ant and Corey are critical of the state of serverless
(7:53) Woes with Lambda and CloudFront
(10:12) The never-ending stream of new AWS services
(13:36) Hurdles ahead of going serverless
(17:33) Struggles of getting customers to understand a newly built service
(21:31) Duckbill Group sponsor read
(22:14) Pros and cons of certifications
(32:17) Where you can find more from Ant

About Ant Stanley
Ant Stanley is a community focused technologist with a passion for enabling better outcomes for society through technology. He is an AWS Serverless Hero, runs the Serverless London User Group, co-runs ServerlessDays London and is part of the ServerlessDays Global team.

Links
A Cloud Guru: https://acloudguru.com
homeschool.dev: https://homeschool.dev
aws.training: https://aws.training
learn.microsoft.com: https://learn.microsoft.com
Twitter: https://twitter.com/iamstan

Original Episode
https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/serverless-hero-got-servers-in-his-eyes-with-ant-stanley/

Sponsor
The Duckbill Group: duckbillgroup.com

Screaming in the Cloud
Replay - Creatively Giving Back to the Cloud Community with Forrest Brazeal

Oct 24, 2024 · 32:56


On this Screaming in the Cloud Replay, we revisit our chat with Forrest Brazeal. When this episode first aired, Forrest was the Head of Content at Google Cloud, but today, he helps run Freeman & Forrest, an influencer marketing service focused on enterprise tech. In this trip down memory lane, Forrest goes into detail on how he is working to give back to the cloud community. Forrest discusses his time at A Cloud Guru, his time as an AWS Serverless Hero, and the technical excellence he brings to his vast-ranging and prolific content. Forrest is also a successful author of a newsletter and multiple books, including a children's book about the cloud! Needless to say, Forrest is an incredibly varied personality in the cloud community. Tune in for a chance to get to know him better!

Show Highlights
(00:00) Intro
(1:10) Backblaze sponsor read
(1:36) Starting a new job as the Head of Content for Google Cloud
(2:32) Forrest's background as a cloud consultant
(3:57) Writing endeavors and The Cloud Resume Challenge
(6:30) Being authentic and helpful in the cloud
(11:43) Forrest's experiences with Google Cloud
(13:18) Being a thought leader in the cloud community
(16:44) The interview process for Google Cloud
(20:24) Creating online cloud content
(25:51) Having creative freedom at Google
(29:07) The viability of Google Cloud
(31:52) Where you can find more from Forrest

About Forrest Brazeal
Forrest is a cloud educator, cartoonist, author, and Pwnie Award-winning songwriter. He's also led some of the world's most innovative developer content and community teams at companies like Google and A Cloud Guru.

Links
The Cloud Bard Speaks: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/the-cloud-bard-speaks-with-forrest-brazeal/
The Read Aloud Cloud: https://www.amazon.com/Read-Aloud-Cloud-Innocents-Inside/dp/1119677629
The Cloud Resume Challenge Book: https://forrestbrazeal.gumroad.com/l/cloud-resume-challenge-book/launch-deal
The Cloud Resume Challenge: https://cloudresumechallenge.dev
Twitter: https://twitter.com/forrestbrazeal

Original Episode
https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/creatively-giving-back-to-the-cloud-community-with-forrest-brazeal/

Sponsor
Backblaze: https://www.backblaze.com/

Ready, Set, Cloud Podcast!
Becoming a full-time serverless content creator with Yan Cui

Jun 27, 2024 · 28:27


Have you ever wished you could quit your job and go create content all day long while working for yourself? You can! In this episode, Yan Cui joins Allen Helton to talk about full-time content creation and consultancy within the serverless world. The two discuss Yan's journey to greatness, weigh in on the use of generative AI when it comes to content, and cover tips and tricks to get better engagement. About Yan Yan is an AWS Serverless Hero offering training and consulting on AWS and serverless applications. With experience running production workloads at scale in AWS since 2010, Yan has a proven track record of leveraging technology to enhance business outcomes. He has served as an architect and principal engineer across various industries, including banking, e-commerce, social networks, live streaming, and mobile gaming. Yan has worked on large-scale systems handling millions of concurrent users and processing billions of events daily. He runs the site "The Burning Monk", where he shares serverless tips, insights, and best practices, and hosts the podcast Real World Serverless, featuring discussions with practitioners from around the globe. Links Twitter - https://x.com/theburningmonk LinkedIn - https://www.linkedin.com/in/theburningmonk/ The Burning Monk - https://theburningmonk.com Production Ready Serverless Course - http://productionreadyserverless.com/ LLRT Deep Dive - https://www.youtube.com/watch?v=_K4ABY60_oo --- Send in a voice message: https://podcasters.spotify.com/pod/show/readysetcloud/message Support this podcast: https://podcasters.spotify.com/pod/show/readysetcloud/support

Smart Cherrys Thoughts
Chatting with Founder of THEBURNINGMONK Limited, Developer Advocate at Lumigo, Host at Real-World Serverless podcast, Writer at Master Serverless Newsletter, AWS Serverless Hero - Yan Cui from Netherlands

May 1, 2024 · 52:33


Chatting with the Founder of THEBURNINGMONK Limited, Developer Advocate at Lumigo, Host of the Real-World Serverless podcast, Writer of the Master Serverless Newsletter, AWS Serverless Hero, and Public Speaker: Yan Cui from Amsterdam, North Holland, Netherlands. Yan Cui talked about his work and answered some of my questions. More info at https://smartcherrysthoughts.com

Real World Serverless with theburningmonk
#101: Faster serverless APIs with Brian LeRoux

Apr 23, 2024 · 60:19


In this episode, I spoke with Brian LeRoux, co-founder of begin.com and creator of the Architect framework. Brian is also an AWS Serverless Hero and is currently working on enhance.dev, an HTML-first full-stack web framework.

In a wide-ranging conversation, we discussed:
• the Architect framework
• Lambdalith vs. single-purpose functions
• building a faster AWS SDK (aws-lite)
• web components
• functionless
• WASM
• infra-from-code frameworks such as Ampt

Links from the episode:
• AWS Lite SDK
• Architect framework
• Begin
• Enhance framework
• The LocalStack episode
• The LLRT episode
• Ampt by Jeremy Daly
• My serverless testing course

Opening theme song: Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0

Screaming in the Cloud
The Current State of Serverless with Kristi Perreault

Mar 27, 2024 · 33:30


On this week's episode of Screaming in the Cloud, Corey is joined by Kristi Perreault. Given Kristi's title of AWS Serverless Hero, Corey and Kristi discuss the origins and current state of the serverless world, the similarities between AI and serverless as the tech world moves into this next era, and why she emphasizes that serverless is not always the right solution for every issue. Kristi also opens up about her role as Principal Software Engineer at Liberty Mutual, and what she enjoys most about jet setting around the globe giving speeches.

Highlights:
(00:00) - Introducing Kristi Perreault
(00:39) - The Unconventional Path to Becoming an AWS Serverless Hero
(05:05) - Exploring the Boundaries of Cloud Education
(10:53) - The Challenges of Keeping Up with Rapid Tech Changes
(11:51) - Redefining Serverless: Beyond the Hype
(13:12) - The Evolution of Serverless and Its Impact
(21:55) - Staying Grounded Amidst Technological Zealotry
(27:18) - Python Development in the Cloud
(29:31) - Upcoming Talks and Where to Connect with Kristi

About Kristi
Kristi Perreault is an AWS Serverless Hero and a Principal Software Engineer at Liberty Mutual Insurance, where her focus is serverless-first cloud enablement. She has over 5 years of industry experience, holds an M.S. in Electrical & Computer Engineering, and is very passionate about promoting women in technology. She is an established speaker, appearing in over 35 conferences, podcasts, panels, and more. Kristi founded the Serverless Denver meetup, and currently co-organizes the Portsmouth, NH AWS User Group and CDK Day. Outside of work and the serverless tech space, Kristi can be found reading a good book in her tiny home, enjoying a good poke bowl, or jet setting all over the world.

Links:
LinkedIn: https://www.linkedin.com/in/kristi-perreault/
Twitter: @kperreault95
AWS Portsmouth User Group: https://www.meetup.com/aws-portsmouth-user-group/
AWS Usergroup Belfast: https://www.meetup.com/aws-usergroup-belfast/

Ready, Set, Cloud Podcast!
Learning by fire: taking your side project to production with Luc van Donkersgoed

Mar 1, 2024 · 25:23


They say the best way to learn is by doing, but is that always true? Does the way people effectively learn change the further they get into their career? Join Luc and Allen as they discuss continuing education as a developer and what they've experienced over the last decade. The two cover the benefits of side projects and the impact taking them to production has, both from a learning perspective and the effectiveness of how it sharpens your skills.

About Luc
Luc is an AWS Serverless Hero and Principal Engineer at PostNL, where he designs and builds enterprise-scale serverless architectures. He is well known for his articles, presentations, videos, and podcasts about AWS. Luc strives to help team members, colleagues, and local and global AWS communities embrace and grow their skill sets so they too can experience the joy, fun, and sheer scale serverless brings to application development.

Links
Twitter - https://twitter.com/donkersgood
LinkedIn - https://www.linkedin.com/in/donkersgoed
Blog - https://lucvandonkersgoed.com
AWS News - https://aws-news.com
Empowered by Marty Cagan - https://rdyset.click/GDix65
The Software Engineer's Guidebook by Gergely Orosz - https://rdyset.click/inkhV2
Team Topologies by Matthew Skelton - https://rdyset.click/qyGDyv
Learning Domain Driven Design by Vlad Khononov - https://rdyset.click/iMfBjo
Architecture Patterns with Python by Bob Gregory and Harry Percival - https://rdyset.click/ambmw2

---

Send in a voice message: https://podcasters.spotify.com/pod/show/readysetcloud/message
Support this podcast: https://podcasters.spotify.com/pod/show/readysetcloud/support

PodRocket - A web development podcast from LogRocket
Stop building synchronous apps with Allen Helton

Feb 15, 2024 · 37:22


We talk to Allen Helton, AWS Serverless Hero and Ecosystem Engineer at Momento, about why we should stop building synchronous apps.

Links
https://www.readysetcloud.io/blog
https://www.linkedin.com/in/allenheltondev
https://github.com/allenheltondev
https://allenheltondev.medium.com

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket combines frontend monitoring, product analytics, and session replay to help software teams deliver the ideal product experience. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)

Special Guest: Allen Helton.

Ready, Set, Cloud Podcast!
Moving into a post serverless era with Jeremy Daly

Feb 2, 2024 · 37:13


The serverless community loves talking about serverless. Despite the amazing innovations happening all around us, we still talk about cold starts and scalability like they are new. But what if we didn't? What would it take to get to a post-serverless era? What does post-serverless even mean? Join Jeremy Daly and Allen Helton as they discuss what the future holds for serverless and how the game needs to change in order for us to build software faster and better than ever before. About Jeremy Jeremy is a senior technology leader with more than 25 years of experience managing the development of complex web and mobile applications for domestic and international businesses. Currently, he is the CEO at Ampt and an AWS Serverless Hero. He writes extensively about serverless on his blog and publishes a weekly newsletter about all things serverless called Off-by-none. Jeremy has a soft spot for helping people solve problems using serverless and frequently works with companies and individuals transitioning away from the traditional “server-full” approach. You can find him chatting about serverless on X, in several forums and Slack groups, and at conferences around the world. Links X - https://twitter.com/jeremy_daly LinkedIn - https://www.linkedin.com/in/jeremydaly Jeremy's blog - https://jeremydaly.com Off-by-none - https://offbynone.io Ampt - https://getampt.com --- Send in a voice message: https://podcasters.spotify.com/pod/show/readysetcloud/message Support this podcast: https://podcasters.spotify.com/pod/show/readysetcloud/support

The Engineering Room with Dave Farley
How Agile Failed at the BBC and the FBI | Gojko Adzic In The Engineering Room Ep. 3

Jan 31, 2024 · 75:56


In this episode, Dave Farley chats with Gojko Adzic. Gojko is a prolific author, international speaker on software, expert practitioner in DDD and BDD, and an AWS Serverless Hero. Dave and Gojko chat about a wide-ranging series of topics on product development, steering development organisations to success, Palchinsky principles and how agile development failed for the FBI and the BBC. It's a fun episode! (➡️ https://gojko.net)

Gojko's new text-to-speech video maker ➡️ https://www.narakeet.com
MindMup - MindMapping tools ➡️ https://www.mindmup.com

Datacenter Technical Deep Dives
AWS Serverless Hero Jeremy Daly at re:Invent 2023!

Dec 21, 2023 · 5:58


This year at AWS re:Invent we are going to interview conference attendees, AWS Heroes, and AWS employees. We're asking them what they are excited about at re:Invent and what they are working on! Join us to hear the answer to these questions from some of the top minds in the industry!!! Resources: https://getampt.com/ https://www.linkedin.com/in/jeremydaly/ https://twitter.com/jeremy_daly Intro music attribution: Artist - MaxKoMusic

Datacenter Technical Deep Dives
AWS Serverless Hero Luciano Mammino at re:Invent

Dec 17, 2023 · 2:55


This year at AWS re:Invent we are going to interview conference attendees, AWS Heroes, and AWS employees. We're asking them what they are excited about at re:Invent and what they are working on! Join us to hear the answer to these questions from some of the top minds in the industry!!! Resources: https://loige.co/ https://twitter.com/loige https://aws.amazon.com/developer/community/heroes/luciano-mammino/ Intro music attribution: Artist - MaxKoMusic

Datacenter Technical Deep Dives
The Why and How of IaC with Ben Kehoe

Oct 28, 2023 · 54:59


Ben Kehoe is an AWS Serverless Hero & retired vacuum cleaner salesman! In this episode we talk about Infrastructure as Code: where we came from, how we got here, and Ben's vision for its future!

Resources:
https://www.linkedin.com/in/ben11kehoe/
https://twitter.com/ben11kehoe
https://mastodon.cloud/@ben11kehoe

Intro music attribution: MaxKoMusic (Future Technology), Denys Kyshchuk (Halloween Background)

Real World Serverless with theburningmonk
#86: Enterprise CDK with Ran Isenberg

Oct 24, 2023 · 43:20


In this episode, I spoke with Ran Isenberg, who is an AWS Serverless Hero and Principal Software Architect at CyberArk. Amongst other things, we talked about platform engineering at CyberArk, how they adopted CDK, and how they approach testing and tenant isolation.

Links from the episode:
• Ran's blog
• Open positions at CyberArk
• cdk-nag
• Ran's AWS Lambda cookbook
• See Ran speak at re:Invent, session OPN305
• My approach towards serverless testing
• My course on serverless testing
• Episode 85 with Matt Bonig about CDK dos & don'ts

You can find Ran on X as @IsenbergRan

-----

For more stories about real-world use of serverless technologies, please subscribe to the channel and follow me on X as @theburningmonk. And if you're hungry for more insights, best practices, and invaluable tips on building serverless apps, make sure to subscribe to our free newsletter and elevate your serverless game! https://theburningmonk.com/subscribe

Opening theme song: Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0

DevOps and Docker Talk
AWS Lambda Containers

Oct 20, 2023 · 49:39


Ready, Set, Cloud Podcast!
Stop Getting Cloud Certifications With Ro Ndimofor

Sep 22, 2023 · 23:52


Are you cloud certified? In this episode of the Ready, Set, Cloud podcast Ro Ndimofor stands his ground on a hard-hitting stance against certifications. Enjoy the classic debate of "practice vs theory" as Ro and Allen discuss alternatives to certifications and if there really is a place for them in your career.

About Ro
Ro is a software developer and an AWS Serverless Hero who loves creating technical content. Be it on his blog or on his current project, EduCloud Academy, Ro is determined to help educate others and push for a stronger serverless community. EduCloud Academy is a serverless learning platform with a strong focus on "Learning by Doing".

Links
LinkedIn - https://www.linkedin.com/in/rosius
Twitter - https://twitter.com/atehrosius
Blog - https://phatrabbitapps.com
EduCloud Academy - https://www.educloud.academy

---

Send in a voice message: https://podcasters.spotify.com/pod/show/readysetcloud/message
Support this podcast: https://podcasters.spotify.com/pod/show/readysetcloud/support

Ready, Set, Cloud Podcast!
BONUS - Are Cold Starts Gone Forever with AJ Stuyvenberg

Jul 24, 2023 · 14:28


Anyone who's heard of serverless has heard of cold starts. But have you heard about proactive initialization? This is a newly published service enhancement from the AWS Lambda team that pre-warms execution environments of on-demand functions. The enhancement reduces cold start times and costs consumers no extra money to take advantage of. In this short bonus episode, Allen and AJ talk about what you need to know about this exciting optimization. The two cover how it was discovered, why the AWS Lambda team made the change, and what you need to do to get started with it.

About AJ
AJ Stuyvenberg is a Staff Engineer for Serverless APM at Datadog, and has been a member of the serverless community for 6+ years. He's an AWS Serverless Hero, serverless meetup organizer, open-source author, and frequently blogs about serverless topics. Before Datadog, he was a Principal Engineer at Serverless Inc, the company behind the Serverless Framework. In his spare time, AJ is an avid BASE jumper and enjoys flying his wingsuit in the Alps.

Links
Twitter - https://twitter.com/astuyve
LinkedIn - https://www.linkedin.com/in/aaron-stuyvenberg
AJ's blog on proactive initialization - https://aaronstuyvenberg.com/posts/understanding-proactive-initialization
AWS Lambda docs on proactive initialization - https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtime-environment.html#runtimes-lifecycle-ib

---

Send in a voice message: https://podcasters.spotify.com/pod/show/readysetcloud/message
Support this podcast: https://podcasters.spotify.com/pod/show/readysetcloud/support
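The behavior discussed in the episode can be checked from inside a function: a proactively initialized environment is created well before its first invocation, so the gap between module load and the first request gives it away. Below is a minimal Python sketch of that detection heuristic; the 10-second threshold, log format, and handler shape are illustrative assumptions rather than anything from AWS or AJ's code.

```python
import time

# Record when the execution environment initialized this module.
# In a proactively initialized environment, this happens well before
# the first real invocation arrives.
_INIT_TIME = time.time()
_first_invocation_seen = False

# Heuristic threshold (assumed): if the gap between init and the first
# invocation is larger than this, the environment was most likely
# pre-warmed rather than created on demand for this request.
PROACTIVE_GAP_SECONDS = 10


def handler(event, context):
    global _first_invocation_seen
    if not _first_invocation_seen:
        _first_invocation_seen = True
        gap = time.time() - _INIT_TIME
        kind = "proactive" if gap > PROACTIVE_GAP_SECONDS else "on-demand"
        # Emit a structured log line so the proactive/on-demand split can be graphed later.
        print({"init_type": kind, "init_to_invoke_gap_s": round(gap, 3)})
    return {"statusCode": 200, "body": "ok"}
```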

The GeekNarrator
Serverless Architecture with Yan Cui

Jul 3, 2023 · 59:09


In this episode I talk to Yan Cui, who is an AWS Serverless Hero, all about Serverless technologies. Chapters: 00:00 Serverless Architecture with Yan Cui 01:58 What do we mean by Serverless Architecture? 05:42 What is the core problem Serverless solves? 11:06 Do we need to think differently to be able to use Serverless? 15:27 What is the difference between serverless and managed services? 19:17 Is Vendor Lock-in really a problem? 27:42 Multicloud - Is it really worth it? 33:46 Is ColdStart a real problem? What kind of apps get impacted? 43:25 Monitoring serverless applications 48:22 Usecases when serverless may not be the best solution 54:27 Future of serverless 57:31 How should a developer learn about serverless? I hope you enjoy the discussion and learn from it. Please hit the like button, share it with your network and also subscribe to the channel. References: Yan Cui - https://theburningmonk.com Courses - https://productionreadyserverless.com/ Corey Quinn on MultiCloud -    • Corey Quinn: The ...   Linkedin Yan - https://www.linkedin.com/in/theburnin... Twitter Yan - https://twitter.com/theburningmonk Other playlists to watch: Distributed Systems and Databases -    • Distributed Syste...   Software Engineering -    • Software Engineering   Distributed systems practices -    • Distributed Systems   Cheers, The GeekNarrator

Real World Serverless with theburningmonk
#79: The meaninglessness of serverless with Ben Kehoe

May 30, 2023 · 58:16


In this episode, I caught up with Ben Kehoe, who is an AWS Serverless Hero and one of the earliest adopters of serverless technologies. In a wide-ranging conversation, we discussed many topics around serverless and AI, including:

• The natural evolution of marketing terms and the need to focus on specific functional characteristics rather than defending the term. For example, instead of arguing about what "serverless" means, we should talk about "pay-per-use".
• AWS should focus DX around the core service (e.g. CloudFormation) rather than trying to find client-side solutions by adding workarounds in SAM, CDK, etc.
• These client-side answers have a higher Total Cost of Ownership (TCO). Developers often don't see the increased TCO they are taking on, but when things break, it's a problem.
• Developers put too much emphasis on author-time benefits and not enough on runtime and operational-time costs. They should be more thoughtful about the operational-time cost.
• The "infrastructure from code" movement is taking burdens off the developer but leaving them with the developer's business, which is a bad thing.
• Developers often have a hard time separating delivering business value vs. coding.
• As an industry, a flawed narrative has emerged that developers are somehow special within an organisation and that it's OK for them to ignore their responsibilities to security if there is friction in the process.
• AI has the potential to impede human growth, as the current AI systems are not designed to generate new ideas and challenge the status quo. "An AI generator that is trained on modernist art would never invent post-modernism."

Links from the episode:
• The meaning(lessness) of serverless
• Serverless is a state of mind
• The serverless spectrum
• Ep16 - Serverless at iRobot with Ben Kehoe

For more stories about real-world use of serverless technologies, please follow me on Twitter as @theburningmonk and subscribe to this podcast. Want to step up your AWS game and learn how to build production-ready serverless applications? Check out my upcoming workshops and I will teach you everything I know.

Opening theme song: Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0

Smart Cherrys Thoughts
Chatting with Co-founder and Head of Product at Ampt, AWS Serverless Hero - Emrah Samdan from Ankara, Turkey

Apr 19, 2023 · 33:36


Emrah Samdan talked about his work and answered some of my questions. More info: https://www.SmartCherrysThoughts.com

Remote Ruby
Utilizing AWS Lambda and Rails to Build Applications with Ken Collins

Feb 24, 2023 · 60:08


On this episode of Remote Ruby, we have an awesome guest joining us. Today, we have Ken Collins, who's a Principal Engineer and Cloud Architect at Custom Ink, an active member in the Ruby community for over fifteen years, a Microsoft open source contributor, PC gamer, and an AWS Serverless Hero. We have so much to discuss today, as Ken fills us in on Lamby, Custom Ink, how Lambda evolved, a gem called Lambdakiq, and if you're looking for cost optimization, why Lambda is the best compute service out there. We'll also learn how CloudFormation can help developers, how CloudWatch Events is used, and we'll hear about the different database options Amazon has, such as Aurora Serverless, DynamoDB, and RDS. If you've never used Lambda, it's a good time to try it out. Andrew realized he's in the perfect place to try it since he recently built a proxy one. Download this episode to learn much more!

[00:01:52] Ken tells us about himself and his background.
[00:04:47] Custom Ink makes some great products, and we'll learn how Lamby came to be, the stuff they build, the cool tech behind it, and the services, such as AWS Lambda.
[00:08:16] How did Lambda evolve?
[00:09:17] Ken details what the OCI format is, and how Lambda works compared to deploying to a traditional server. We hear about Lambda releasing Function URLs, a free API gateway, and what it does.
[00:12:16] We hear the whole process from end-to-end, starting from a web request, what happens, how it gets to Rails, Dynos are running, the database gets affected, and how those containers can be used for other things like event driven architectures.
[00:16:03] Chris asks Ken how Kubernetes and Lambda compare. Also, we hear how background jobs and cron jobs fit in, and a gem that Ken wrote called Lambdakiq.
[00:20:30] How does Ken manage connections being made and the events being sent to the right place? Also, Chris wonders if CloudFormation is something you should learn as one of the starting points or learn later for it to be more useful, and Ken tells us about the AWS Cloud Development Kit and what it does.
[00:24:10] Amazon has many different database options and Ken explains that you can use any database you want, wherever you want.
[00:25:39] Ken explains the differences between Aurora Serverless, DynamoDB, and RDS.
[00:30:23] We're going back to talking about Lambda now and Ken tells us about their website, a documentation website where they cover things, and a Quick Start Guide on how you can deploy a new Rails app on Rails 3.2 to Lambda in 5 minutes.
[00:33:02] Chris mentions how Taylor Otwell modified Laravel to run on Lambda, and Vapor is their tool for deploying to Lambda.
[00:36:25] Are there any gotchas? Chris heard people talking about Rails being slow to boot and issues with connecting your Lambda to a VPC being slow. Ken tells us the VPC issue has been solved very well.
[00:39:31] Ken and Chris chat about how the hardest things are learning and change management: setting up CI for the first time can be challenging, Heroku is amazing but has its limits, and using CloudWatch Logs is a change for people. Also, Ken shares a hotspot with Lambda, and he tells us about Lambda Punch and New Relic.
[00:42:47] Ken tells us to use CloudWatch Events for setting up cron jobs that run on a schedule.
[00:44:51] Chris wonders if there are concerns or ways you have to change things for assets, and Ken explains what they do with turning on the magic environment variable, but if you need something else, it goes into the CI/CD pipeline creation.
[00:48:30] Andrew is going to try Lambda now, and we hear Ken's thoughts on how different development is from production when you use Lambda. Find out why he loves Microsoft's Development Containers Specification, and Chris mentions DHH's MRSK project and what it's going to do.
[00:56:06] Find out where to follow Ken; if you're interested in Custom Ink, check them out, and please try out Lambda because he could use some contributors to help write the guides.

Panelists: Jason Charnes, Chris Oliver, Andrew Mason
Guest: Ken Collins
Sponsor: Honeybadger

Links:
Jason Charnes Twitter
Chris Oliver Twitter
Andrew Mason Twitter
Ken Collins Twitter
Ken Collins GitHub
Ken Collins (Dev.to)
Lamby - GitHub
Custom Ink
Custom Ink Products
Lambdakiq
Amazon Aurora Serverless
Amazon DynamoDB
Amazon RDS
Lamby
Full Stack Radio Podcast - Episode 120 - Taylor Otwell - Serverless Laravel with Vapor
Lambda Punch
New Relic - GitHub
Amazon CloudWatch Events
Development Containers
Remote Ruby Podcast - Episode 165: GitHub Codespaces & Docker with Benjamin Wood
MRSK: Deploy Web apps anywhere
Ruby Radar Twitter
Ruby for All Podcast
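Ken's tip about CloudWatch Events cron jobs boils down to a scheduled rule that invokes a Lambda function. As a rough illustration (not the Lamby/Rails setup discussed in the episode), here is a Python/boto3 sketch; the rule name, cron expression, and function ARN are placeholders.

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Placeholder values -- substitute your own function ARN and schedule.
RULE_NAME = "nightly-report"
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:nightly-report"

# 1. Create a scheduled rule; cron() or rate() expressions both work.
rule_arn = events.put_rule(
    Name=RULE_NAME,
    ScheduleExpression="cron(0 2 * * ? *)",  # every day at 02:00 UTC
    State="ENABLED",
)["RuleArn"]

# 2. Allow CloudWatch Events / EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId=f"{RULE_NAME}-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)

# 3. Point the rule at the function.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "nightly-report-target", "Arn": FUNCTION_ARN}],
)
```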

Dev Interrupted
Educating the Next Generation of Cloud Engineers w/ Google Cloud's Head of Developer Media, Forrest Brazeal

Jan 3, 2023 · 29:47


Happy New Year and welcome to Season 3 of Dev Interrupted! We couldn't think of a better way to kick off Season 3 of the podcast than with the immensely talented Forrest Brazeal. Not only is Forrest the Head of Developer Media at Google Cloud, but he lists being a writer, speaker, cartoonist, cloud architect and AWS Serverless Hero, among his many accolades. To top it all off, Forrest is an all around great guy with a passion for education and advocacy. That's why he's working to help educate, train and develop the next generation of cloud engineers. But he needs your help. Listen as Forrest explains why so many great engineers get overlooked by companies - and how to stop it.

Show Notes
Cloud Resume Challenge
Google Cloud Next '22 Developer Keynote: Top 10 Cloud Technology Predictions
Learn about the power of Continuous Merge with gitStream
Join the Dev Interrupted Discord
Want to try LinearB? Book a LinearB Demo and use the "Dev Interrupted Podcast" discount code.

Datacenter Technical Deep Dives
The Lambda Sidecar Pattern for Kubernetes Event Driven Architecture with AWS Hero Ken Collins

Jan 3, 2023 · 53:06


Ken Collins is an AWS Serverless Hero and a Staff Engineer at Custom Ink, and in this episode he takes us through software design patterns, Kubernetes, and AWS Lambda!

Resources: https://www.linkedin.com/in/metaskills/

עוד פודקאסט לסטארטאפים
From a Pivot to a $500 Million Exit - Ran Ribenzaft

Dec 4, 2022 · 44:47


This week I had the honor of hosting Ran Ribenzaft, co-founder and CTO of Epsagon, which was acquired by Cisco for about $500 million. Ran is a graduate of Unit 81, with over 15 years of experience in software development and in leading engineering and product organizations. Ran is also an AWS Serverless Hero who loves talking about entrepreneurship, cloud, and software development.

Epsagon was founded to give developers a modern tool for monitoring, troubleshooting, and optimizing their cloud environments. In a modern cloud environment, organizations build increasingly complex applications made up of dozens or hundreds of different services. A single failure in such an environment can cause losses of hundreds of thousands of dollars. Standard technologies on the market, such as logs and metrics, did not allow teams to deeply understand what was happening inside their applications, so developers were limited in their ability to monitor and investigate.

Epsagon was among the first companies in Israel to grow with a product-led growth model, selling directly to developers (Business2Developers). The company was founded in 2018 by Nitzan Shapira and Ran Ribenzaft. Epsagon was acquired by Cisco about a year ago, and today the team is working on a similar solution as an open-source product for the community.

Relevant links from the episode:
11:43 - Fusion website - http://bit.ly/3Fg7qKc
13:54 - Fiverr website - https://bit.ly/3SXh0q2
23:02 - The book Product-Led Growth, which goes deeper into the topics discussed in the episode - https://amzn.to/3EQ2GtF

The smart links for this episode are courtesy of YOPE - www.yopepods.com

(*) My LinkedIn: https://www.linkedin.com/in/guykatsovich/
(*) My Instagram: https://www.instagram.com/guykatsovich/
(*) Follow "עוד פודקאסט לסטארטאפים" and get a new episode every week:
Spotify: https://open.spotify.com/show/0dTqS27ynvNmMnA5x4ObKQ
Apple Podcasts: https://podcasts.apple.com/podcast/id1252035397
Google Podcasts: https://bit.ly/3rTldwq
Our website: https://omny.fm/shows/odpodcast
Our RSS feed: https://www.omnycontent.com/.../f059ccb3-e0c5.../podcast.rss

See omnystudio.com/listener for privacy information.

AWS Developers Podcast
Episode 058 - Serverless and Well Architected Frameworks with Kristi Perreault

Nov 4, 2022 · 32:34


In this episode, Emily and Dave chat with Kristi Perreault, a Principal Software Engineer at Liberty Mutual and an AWS Serverless Hero. Kristi shares her journey to the cloud, thoughts on serverless, building a sustainable application, the importance of well-architected frameworks, and how she aids over 4,000 Liberty Mutual engineers to be more successful in their jobs. The trio also discusses the importance of creating an inclusive work environment for all. Both Emily and Kristi share their personal journeys as women in tech, actionable advice, and steps allies can take in support.

Kristi's Twitter: https://twitter.com/kperreault95
Kristi on LinkedIn: https://www.linkedin.com/in/kristi-perreault/
Kristi on Medium: https://kristiperreault.medium.com
Kristi on Dev.to: https://dev.to/kristiperreault
Kristi's AWS Hero Page: https://aws.amazon.com/developer/community/heroes/kristi-perreault/
Serverless Days: https://serverlessdays.io/
Serverless Days Denver: https://www.meetup.com/serverlessdays-denver/
Women Who Code: https://www.womenwhocode.com/
How To Support Women in Tech: https://index.medium.com/how-to-support-women-in-tech-ea5b9de61fb4
Know My Name: A Memoir by Chanel Miller: https://www.amazon.com/Know-My-Name-Chanel-Miller-ebook/dp/B07SJPPTDL
Serverless Applications Lens - AWS Well-Architected Framework: https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/welcome.html
All Trails – Mobile App - iPhone: https://apps.apple.com/us/app/alltrails-hike-bike-run/id405075943
All Trails - Mobile App - Android: https://play.google.com/store/apps/details?id=com.alltrails.alltrails

---------------------

Subscribe:
Amazon Music: https://music.amazon.com/podcasts/f8bf7630-2521-4b40-be90-c46a9222c159/aws-developers-podcast
Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-developers-podcast/id1574162669
Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zb3VuZGNsb3VkLmNvbS91c2Vycy9zb3VuZGNsb3VkOnVzZXJzOjk5NDM2MzU0OS9zb3VuZHMucnNz
Spotify: https://open.spotify.com/show/7rQjgnBvuyr18K03tnEHBI
TuneIn: https://tunein.com/podcasts/Technology-Podcasts/AWS-Developers-Podcast-p1461814/
RSS Feed: https://feeds.soundcloud

You Can Be Anything
Episode 042: A Chat With Ateh Rosius – Serverless Developer & AWS Serverless Hero

Oct 27, 2022 · 48:03


Rosius Ndimofor is an avid thinker, extreme problem solver, and full-stack mobile and web developer. He enjoys building applications for the cloud. He was introduced to Java programming back in 2008 and became Java certified in 2009. Since then, he has been contributing to the creation of both commercial and personal software. Today, he comes on the "You Can Be Anything Podcast" to shed more light on his beginnings, life experiences, starting in tech, learning tech skills, and how anyone can transition into tech. We hope his story inspires you to become anything. Thanks.

Thanks for your support. You can connect with us on Facebook, Instagram, and YouTube, or send us an email at hello@youcanbeanythingpodcast.com. Check out our website www.youcanbeanythingpodcast.com for more resources and to learn more. Also, you can connect with Solange Che on Facebook (@Solange Che) and Instagram (@solangeche1). Thank you! Remember to Be Good To Each Other!

Serverless Craic from The Serverless Edge
Serverless Craic Ep31 Event Driven Architecture Examples at EDA Day

Sep 16, 2022 · 18:27 (transcription available)


We are talking about Event Driven Architecture examples today. There was an event in London a few weeks ago, called EDA Day. It was organised by GOTO with a lot of AWS contributors. It was neat because it was one day focused on event driven architectures. It showed the coming together of a 15 to 20 year old pattern of EDA, plus serverless. And all the bigger services on top of that, like EventBridge and Step Functions.

Gregor Hohpe's Keynote
Gregor Hohpe did the keynote talk: 'I made everything loosely coupled. Does my app fall apart?' Gregor is an AWS enterprise strategist. And he talked about the event landscape and the complexities behind event driven architecture. He had a diagram called: 'A calls B'. It looked pretty simple until you get to the million things you need to think about when A calls B! He said that there were three languages in a cloud native serverless domain. The business domain and how you talk about the business domain as a business person. The eventing architecture and how you talk about it as an architect. And the cloud native area, and how you talk about it as a cloud engineer. So DDD, event framework and CDK for automation. It's about having those three separate languages and how you talk. And bringing them together at the end.

Serverless Espresso
And one neat thing to mention is a developer advocate called Julian Wood. He's worth looking up on Twitter. He, Ben Smith, and a few others from Serverless Land, put together a demo called Serverless Espresso. You scan a barcode and through an event driven Step Functions and EventBridge sequence, you can order a coffee from your phone. It looks and sounds really simple. But you watch the whole thing happen. That's a great lab. So look up AWS labs to see Serverless Espresso. It's well put together to show how you build an event driven architecture from the ground up.

Ben Ellerby - Minimal Viable Migrations
Another good speaker was Ben Ellerby. He worked in Theodo and is an AWS Serverless Hero. He has a thing about Minimal Viable Migrations. A lot of people think event driven is a greenfield or brand new thing. But he had a great talk about existing architecture and going event driven. He talked about doing a small part of your architecture and going bit by bit. By using an incremental model.

David Boyne - Awesome EventBridge
David Boyne joined AWS and does 'Awesome EventBridge'. He has open source projects. And he does a great talk on 'Thinking Event First'. How to approach events and get your schemas right. And really think about your domain model and lock it in from day one. So he's got a bunch of tools as well. So it's worth looking up his resources on 'awesome event bridge'.

Marcia Villalba - FooBar Serverless
Another great speaker was Marcia Villalba. She's one of the developer advocates at AWS. She's got great content on good practices and getting started. She has a really nice way of explaining these concepts. There is one thing I get nervous about around event driven and domain driven. People who are good at it tend to get very complicated very quickly and lose everyone. But Marcia's super at bringing these concepts across and helping normal teams, which is every team. Check out her FooBar Serverless YouTube channel. There is tons of developer friendly content from beginner to more advanced. It's one of my YouTube subscriptions that I watch quite regularly.

Lego Talk - Sarah Hamilton and Sheen Brisals
The last one to talk about is Lego. They sponsored the event. And they had two talks. Sarah Hamilton is one of the software engineers and she gave a really good talk about the advanced techniques they're using in their event driven architecture. My friend, Sheen Brisals was speaking as well. They have a fantastic story, which is well worth listening to. It's about how they moved to an event driven serverless architecture. There's a socio-technical element to this. How you organise your teams and the attitude is what I would call a core engineering competency and mindset. As opposed to an architectural pattern. Lego tells their story brilliantly.

Product Leader panel
The event ended with a panel of product leaders from EventBridge, Step Functions and MongoDB. It was a really relaxed panel. Emily Shea, who we know well, was there. She works in go to market for serverless. It was a relaxed chat. No one was pushing any tools. They were shooting the breeze on good practice and what's coming down the track. The evolution of event driven architecture and the tie in with serverless. There's something in it! I don't want to say Serverless is becoming EDA or EDA is becoming serverless. But serverless enables EDA for sure.

https://gotoldn.com/
https://theserverlessedge.com/
https://twitter.com/ServerlessEdge
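For readers who want to try the "event first" ideas David Boyne talks about, publishing a domain event to EventBridge is a single API call. The sketch below is a generic Python/boto3 example, not code from any of the talks; the bus name, source, detail-type, and payload are invented for illustration.

```python
import json
import boto3

events = boto3.client("events")

# Placeholder event bus and event shape -- in an event-first design the
# source, detail-type, and detail schema are agreed on up front.
response = events.put_events(
    Entries=[
        {
            "EventBusName": "orders-bus",
            "Source": "coffee-shop.orders",
            "DetailType": "OrderPlaced",
            "Detail": json.dumps({"orderId": "1234", "drink": "espresso"}),
        }
    ]
)

# put_events is batch-oriented and does not raise on per-entry failures,
# so check FailedEntryCount before assuming the event was published.
if response["FailedEntryCount"]:
    raise RuntimeError(f"Failed entries: {response['Entries']}")
```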

Serverless Chats
Episode #134: Serverless Community Building with Farrah Campbell

Apr 25, 2022 · 47:20


About Farrah Campbell
After 10 years of working in healthcare management, a serendipitous 20-minute car ride with Kara Swisher inspired Farrah to make the jump into technology. She has worked at multiple startups in many different capacities, eventually working her way to being the Sr. Product Marketing Manager, Containers & Serverless. Farrah previously worked as Ecosystems Director at Stackery, where she managed the relationship with AWS, including Stackery's status as an Advanced Technology Partner, achieving the AWS DevOps Competency, and becoming a launch partner for Lambda Layers. She is an AWS Serverless Hero. Farrah has cultivated the serverless community as an organizer of Portland Serverless Days and the Portland Serverless Meetup, along with numerous serverless workshops and Portland tech community events, from Techfest to bringing multiple luminaries to Portland.

Twitter: @FarrahC32
LinkedIn: https://www.linkedin.com/in/farrahcampbell/
AWS Community Builders: https://aws.amazon.com/developer/community/community-builders/

Talking Serverless
#47 - Thorsten Höeger CEO of Taimos

Feb 11, 2022 · 37:33


In this episode, host Ryan Jones is joined by the charming Thorsten Höeger. Thorsten is the CEO of Taimos, where he is advising customers on how to use AWS with a focus on serverless computing. He aims to improve development processes through automation and building efficient deployment pipelines, for customers of all sizes. Before Taimos, Thorsten worked as a developer and CTO of Germany's first private bank running on AWS— where he helped migrate the core banking system to the AWS platform (all the way back in 2013). Since then he has become an AWS Serverless Hero, has contributed to several open-source projects, and is a frequent speaker at conferences, meetups, and community events. Follow him on Twitter @hoegertn --- Send in a voice message: https://anchor.fm/talking-serverless/message

Break Things On Purpose
Gunnar Grosch: From User to Hero to Advocate

Feb 8, 2022 · 30:17


In this episode, we cover: 00:00:00 - Intro 00:01:45 - AWS Severless Hero and Gunnar's history using AWS 00:04:42 - Severless as reliability 00:08:10 - How they are testing the connectivity in serverless 00:12:47 - Gunnar shares a suprising result of Chaos Engineering 00:16:00 - Strategy for improving and advice on tracing  00:20:10 - What Gunnar is excited about at AWS 00:28:50 - What Gunnar has going on/Outro Links: Twitter: https://twitter.com/GunnarGrosch LinkedIn: https://www.linkedin.com/in/gunnargrosch/ TranscriptGunnar: When I started out, I perhaps didn't expect to find that many unexpected things that actually showed more resilience or more reliability than we actually thought.Jason: Welcome to the Break Things on Purpose podcast, a show about Chaos Engineering and building more reliable systems. In this episode, we chat with Gunnar Grosch, a Senior Developer Advocate at AWS about Chaos Engineering with serverless, and the new reliability-related projects at AWS that he's most excited about.Jason: Gunnar, why don't you say hello and introduce yourself.Gunnar: Hi, everyone. Thanks, Jason, for having me. As you mentioned that I'm Gunnar Grosch. I am a Developer Advocate at AWS, and I'm based in Sweden, in the Nordics. And I'm what's called a Regional Developer Advocate, which means that I mainly cover the Nordics and try to engage with the developer community there to, I guess, inspire them on how to build with cloud and with AWS in different ways. And well, as you know, and some of the viewers might know, I've been involved in the Chaos Engineering and resilience community for quite some years as well. So, topics of real interest to me.Jason: Yeah, I think that's where we actually met was around Chaos Engineering, but at the time, I think I knew you as just an AWS Serverless Hero, that's something that you'd gotten into. I'm curious if you could tell us more about that. How did you begin that journey?Gunnar: Well, I guess I started out as an AWS user, built things on AWS. As a builder, developer, I've been through a bunch of different roles throughout my 20-plus something year career by now. But started out as an AWS user. I worked for a company, we were a consulting firm helping others build on AWS, and other platforms as well. And I started getting involved in the AWS community in different ways, by arranging and speaking at different meetups across the Nordics and Europe, also speaking at different conferences, and so on.And through that, I was able to combine that with my interest for resiliency or reliability, as someone who's built systems for myself and for our customers. That has always been a big interest for me. Serverless, it came as I think a part of that because I saw the benefits of using serverless to perhaps remove that undifferentiated heavy lifting that we often talk about with running your own servers, with operating things in your own data centers, and so on. Serverless is really the opposite to that. But then I wanted to combine it with resilience engineering and Chaos Engineering, especially.So, started working with techniques, how to use Chaos Engineering with serverless. That gained some traction, it wasn't a very common topic to talk about back then. Adrian Hornsby, as some people might know, also from AWS, he was previously a Developer Advocate at AWS, now in a different role within the organization. He also talked a bit about Chaos Engineering for serverless. 
So, teamed up a bit with him, and continue those techniques, started creating different tools and some open-source libraries for how to actually do that. And I guess that's how, maybe, the AWS serverless team got their eyes opened for me as well. So somehow, I managed to become what's known as an AWS Hero in the serverless space.Jason: I'm interested in that experience of thinking about serverless and reliability. I feel like when serverless was first announced, it was that idea of you're not running any infrastructure, you're just deploying code, and that code gets called, and it gets run. Talk to me about how does that change the perception or the approach to reliability within that, right? Because I think a lot of us when we first heard of serverless it's like, “Great, there's Nothing. So theoretically, if all you're doing is calling my code and my code runs, as long as I'm being reliable on my end and, you know, doing testing on my code, then it should be fine, right?” But I think there's some other bits in there or some other angles to reliability that you might want to tune us into.Gunnar: Yeah, for sure. And AWS Lambda really started it all as the compute service for serverless. And, as you said, it's about having your piece of code running that on-demand; you don't have to worry about any underlying infrastructure, it scales as you need it, and so on; the value proposition of serverless, truly. The serverless landscape has really evolved since then. So, now there is a bunch of different services in basically all different categories that are serverless.So, the thing that I started doing was to think about how—I wasn't that concerned about not having my Lambda functions running; they did their job constantly. But then when you start building a system, it becomes a lot more complex. You need to have many different parts. And we know that the distributed systems we build today, they are very complex because they contain so many different moving parts. And that's still the case for serverless.So, even though you perhaps don't have to think about the underlying infrastructure, what servers you're using, how that's running, you still have all of these moving pieces that you've interconnected in different ways. So, that's where the use case for Chaos Engineering came into play, even for serverless. So, testing how these different parts work together to then make sure that it actually works as you intended to. So, it's a bit harder to create those experiments since you don't have control of that underlying infrastructure. So instead, you have to do it in a few different ways, since you can't install any agents to run on the platform, for instance, you can't control the servers—shut down servers, the perhaps most basic of Chaos Engineering experiment.So instead, we're doing it using different libraries, we're doing it by changing configuration of services, and so on. So, it's still apply the same principles, the principles of Chaos Engineering, we just have to be—well, we have to think about it in different way in how we actually create those experiments. So, for me, it's a lot about testing how the different services work together. Since the serverless architectures that you build, they usually contain a bunch of different services that you stitch together to actually create the output that you're looking for.Jason: Yeah. So, I'm curious, what does that actually look like then in testing, how these are stitched together, as you say? 
Because I know with traditional Chaos Engineering, you would run a blackhole attack or some sort of network attack to disrupt that connectivity between services. Obviously, with Lambdas, they work a little bit differently in the way that they're called and they're more event-driven. So, what does that look like to test the connectivity in serverless?Gunnar: So, what we started out with, both me and Adrian Hornsby was create these libraries that we could run inside the AWS Lambda functions. So, I created one that was for Node.js, something that you can easily install in your Node.js code. Adrian has created one for Python Lambda functions.So, then they in turn contain a few different experiments. So, for instance, you could add latency to your AWS Lambda functions to then control what happens if you add 50 milliseconds per invocation on your Lambda function. So, for each call to a downstream service, say you're using DynamoDB as a data store, so you add latency to each call to DynamoDB to see how this data affect your application. Another example could be to have a blackhole or a denial list, so you're denying calls to specific services. Or it could be downstream services, other AWS services, or it could be third-party, for instance; you're using a third-party for authentication. What if you're not able to reach that specific API or whatever it is?We've created different experiments for—a typical use case for AWS Lambda functions has been to create APIs where you're using an API Gateway service, an AWS Lambda function is called, and then returning something back to that API. And usually, it should return a 200 response, but you could then alter that response to test how does your application behave? How does the front-end application, for instance, behave when it's not getting that 200 response that it's expecting, instead of getting a 502, a 404, or whatever error code you want to test with. So, that was the way, I think, we started out doing these types of experiments. And just by those simple building blocks, you can create a bunch of different experiments that you can then use to test how the application behaves under those adverse conditions.Then if you want to move to create experiments for other services, well, then serverless, as we talked about earlier, since you don't have control over the underlying infrastructure, it is a bit harder. Instead, you have to think about different ways to do with by, for instance, changing configuration, things like that. You could, for instance, restrict concurrent operations on certain services, or you could do experiments to block access, for instance, using different access control lists, and so on. So, different ways, all depending on how that specific service works.Jason: It definitely sounds like you're taking some of those same concepts, and although serverless is fundamentally different in a lot of ways, really just taking that, translating it, and applying those to the serverless.Gunnar: Yeah, exactly. I think that's very important here to think about, that it is still using Chaos Engineering in the exact same way. We're using the traditional principles, we're walking through the same steps. And many times as I know everyone doing Chaos Engineering talks about this, we're learning so much just by doing those initial steps. 
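As a concrete illustration of the latency experiment Gunnar describes, the sketch below hand-rolls a wrapper around a Python Lambda handler. It is not the API of his Node.js library or Adrian Hornsby's Python one, just the general idea; the delay and injection rate are made-up configuration values.

```python
import functools
import random
import time


def inject_latency(delay_ms=500, rate=0.5):
    """Wrap a Lambda handler and add artificial latency to a fraction of
    invocations. Both defaults are made-up values for the experiment."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(event, context):
            if random.random() < rate:
                # Simulate a slow downstream dependency (e.g. DynamoDB).
                time.sleep(delay_ms / 1000.0)
            return handler(event, context)
        return wrapper
    return decorator


@inject_latency(delay_ms=500, rate=0.5)
def handler(event, context):
    # Normal business logic goes here; under the experiment, roughly half
    # of the invocations will take an extra 500 ms.
    return {"statusCode": 200, "body": "ok"}
```

The same wrapper shape extends to the other experiments mentioned here, such as returning a 502 instead of a 200 or refusing calls to a denylisted downstream host.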
When we're looking at the steady-state of the application, when we're starting to design the experiments, we learn so much about the application. I think just getting through those initial steps is very important for people building with serverless, as well. So, think about, how does my application behave if something goes wrong? Because many times with serverless—and for good reasons—you don't expect anything to fail. Because it scales as it should, services are reliant, and they are responding. But it is that old, “What if?” What if something goes wrong? So, just starting out doing it in the same way as you normally would do with Chaos Engineering, there is no difference, really.

Jason: And you know, when we do these experiments, there's a lot that we end up learning, and a lot that can be very surprising, right? When we assume that our systems are one way, and we run the test, and we follow that regular Chaos Engineering process of creating that hypothesis, testing it, and then getting that unexpected result—

Gunnar: Right.

Jason: —and having to learn from that. So, I'm interested, if you could share maybe one of the surprising results that you've learned as you've done Chaos Engineering, as you've continued to hone this practice and use it. What's a result that was unexpected for you, that you've learned something about?

Gunnar: I think those are very common. And I think we see them all the time in different ways. And when I started out, I perhaps didn't expect to find that many unexpected things that actually showed more resilience or more reliability than we actually thought. And I think that's quite common, that we run an experiment, and we often find that the system is more resilient to failure than we actually thought initially, for instance, that specific services are able to withstand more turbulent conditions than we initially thought. So, we create our hypothesis, we expect the system to behave in a certain way. But it doesn't, instead—it doesn't break, but instead, it's more robust. Certain services can handle more stress than we actually thought, initially. And I think those cases, they, well, they are super common. I see that quite a lot. Not only talking about serverless Chaos Engineering experiments; all the Chaos Engineering experiments we run. I think we see that quite a lot.

Jason: That's an excellent point. I really love that because it's, as you mentioned, something that we do see a lot of. In my own experience working with some of our customers, oftentimes, especially around networking, networking can be one of the more complex parts of our systems. And I've dealt with customers who have come back to me and said, “I ran a blackhole attack, or latency attack, or some sort of network disruption and it didn't work.” And so you dig into it, well, why didn't it work? And it's actually well, it did; there was a disruption, but your system was designed well enough that you just never noticed it. And so it didn't show up in your metrics dashboards or anything because system just worked around it just fine.

Gunnar: Yeah, and I think that speaks to the complexity of the systems we're often dealing with today. I think it's Casey Rosenthal who talked about this quite early on with Chaos Engineering, that it's hard for any person to create that mental model of how a system works today. And I think that's really true. And those are good examples of exactly that.
So, we create this model of how we think the system should behave, but [unintelligible 00:15:46], sometimes it behaves very unexpected… but in the positive way.

Jason: So, you mentioned about mental models and how things work. And so since we've been talking about serverless, that brought to mind one of those things for me with serverless is, as people make functions and things because they're so easy to make and because they're so small, you end up having so many of them that work together. What's your strategy for starting to improve or build that mental model, or document what's going on because you have so many more pieces now with things like serverless?

Gunnar: There are different approaches to this, and I think this ties in with observability and the way we observe systems today because as these systems—often they aren't static, they continue to evolve all the time, so we add new functionality, and especially using serverless and building it with AWS Lambda functions, for instance, as soon as we start creating new features to our systems, we add more and more AWS Lambda functions or different serverless ways of doing new functionality into our system. So, having that proper observability, I think that's one of the keys of creating that model of how the system actually works, to be able to actually see tracing, see how the system or how a request flows through the system. Besides that, having proper documentation is something that I think most organizations struggle with; that's been the case throughout all of my career, being able to keep up with the pace of innovation that's inside that organization. So, keeping up with the pace of innovation in the system, continuing to evolve your documentation for the system, that's important. But I think it's hard to do it in the way that we build systems today. So, it's not about only keeping that mental model, but keeping documentation and how the system actually looks, the architecture of the system, it's hard today. I think that's just a fact. And ways to deal with that, I think it comes down to how the engineering organization is structured, as well. We have Amazon and AWS, we—well, I guess we're quite famous for our two-pizza teams, the smaller teams that they build and run their systems, their services. And it's very much up to each team to have that exact overview how their part on the bigger picture works. And that's our solution for doing that, but as we know, it differs from organization to organization.

Jason: Absolutely. I think that idea of systems being so dynamic that they're constantly changing, documentation does fall out of step. But when you mentioned tracing, that's always been one of those really key parts, for me at least coming from a background of doing monitoring and observability. But the idea of having tracing that just automatically going to expose things because it's following that request path. As you dive into this, any advice for listeners about how to approach that, how to approach tracing whether that's AWS X-Ray or any other tools?

Gunnar: For me, it's always been important to actually do it. And I think what I sometimes see is that's something that's added on later on in the process when people are building. I tend to say that you should start doing it early on because I often think it helps a lot in the development phase as well. So, it shouldn't be an add-on later on, after the fact.
So, starting to use tracing no matter if it's as you said, X-Ray or any third-party's service, using it early on, that helps, and it helps a lot while building the system. And we know that there are a bunch of different solutions out there that are really helpful, and many AWS partners that are willing to help with that as well.

Jason: So, we've talked a bunch about serverless, but I think your role at AWS encompasses a whole lot of things beyond just serverless. What's exciting you now about things in the AWS ecosystem, like, what are you talking about that just gets you jazzed up?

Gunnar: One thing that I am talking a lot about right now that is very exciting is fortunately, we're in line with what we've just talked about, with resilience and with reliability. And many of you might have seen the release from AWS recently called AWS Resilience Hub. So, with AWS Resilience Hub, you're able to make use of all of these best practices that we've gathered throughout the years in our AWS Well-Architected Framework that then guides you on the route to building resilient and reliable systems. But we've created a service that will then, in an, let's say, more opinionated but also easier way, will then help you on how to improve your system with resilience in mind. So, that's one super exciting thing. It's early days for Resilience Hub, but we're seeing customers already starting to use it, and already making use of the service to improve on their architecture, use those best practices to then build more resilient and reliable systems.

Jason: So, AWS Resilience Hub is new to me. I haven't actually haven't really gotten into it much. As far as I understand it, it really takes the Well-Architected Framework and combines the products or the services from Amazon into that, and as a guide. Is this something for people that have developed a service for them to add on, or is this for people that are about to create a new service, and really helping them start with a framework?

Gunnar: I would say that it's a great fit if you've already built something on AWS because you are then able to describe your application using AWS Resilience Hub. So, if you build it using Infrastructure as Code, or if you have tagging in place, and so on, you can then define your application using that, or describe your application using that. So, you point towards your CloudFormation templates, for instance, and then you're able to see, these are the parts of my application. Then you'll set up policies for your application. And the policies, they include the RTO and the RPO targets for your application, for your infrastructure, and so on. And then you do the assessment of your application. And this then uses the AWS Well-Architected Framework to assess your application based on the policies you created. And it will then see if your application RTO and RPO targets are in line with what you set up in your policies. You will also then get an output with recommendations what you can do to improve the resilience of your application based, once again, on the Well-Architected Framework and all of the best practices that we've created throughout the years. So, that means that you, for instance, will get it, you'll build an application that right now is in one single availability zone, well, then Resilience Hub will give you recommendations on how you can improve resilience by spreading your application across multiple availability zones.
That could be one example. It could also be an example of recommending you to choose another data store to have a better RTO or RPO, based on how your application works. Then you'll implement these changes, hopefully. And at the end, you'll be able to validate that these new changes then help you reach your targets that you've defined. It also integrates with AWS Fault Injection Simulator, so you're able to actually then run experiments to validate that through the help of this.

Jason: That's amazing. So, does it also run those as part of the evaluation, do failure injection to automatically validate and then provide those recommendations? Or, those provided sort of after it does the evaluation, for you to continue to ensure that you're maintaining your objectives?

Gunnar: It's the latter. So, you will then get a few experiments recommended based on your application, and you can then easily run those experiments at your convenience. So, it doesn't run them automatically. As of now, at least.

Jason: That is really cool because I know a lot of people when they're starting out, it is that idea of you get a tool—no matter what tool that is—for Chaos Engineering, and it's always that question of, “What do I do?” Right? Like, “What's the experiment that I should run?” And so this idea of, let's evaluate your system, determine what your goals are and the things that you can do to meet those, and then also providing that feedback of here's what you can do to test to ensure it, I think that's amazing.

Gunnar: Yeah, I think this is super cool one. And as a builder, myself who's used the Well-Architected Framework as a base when building application, I know how hard it can be to actually use that. It's a lot of pages of information to read, to learn how to build using best practices, and having a tool that then helps you to actually validate that, and I think it's great. And then as you mentioned, having recommendations on what experiments to run, it makes it easier to start that Chaos Engineering journey. And that's something that I have found so interesting through these last, I don't know, two, three years, seeing how tools like Gremlin, like, now AWS FIS, and with the different open-source tools out there, as well, all of them have helped push that getting-started limit closer to the users. It is so much easier to start with Chaos Engineering these days, which I think it's super helpful for everyone wanting to get started today.

Jason: Absolutely. I had someone recently asked me after running a workshop of, “Well, should I use a Chaos Engineering tool or just do my own thing? Like do it manually?” And, you know, the response was like, “Yeah, you could do it manually. That's an easy, fast way to get started, but given how much effort has been put into all of these tools, there's just so much available that makes it so much easier.” And you don't have to think as much about the safety and the edge cases of what if I manually do this thing? What are all the ways that can go wrong? Since there are these tools now that just makes it so much easier?

Gunnar: Exactly. And you mentioned safety, and I think that's a very important part of it.
Having that, we've always talked about that automated stop button when doing Chaos Engineering experiments and having the control over that in the system where you're running your experiments, I think that's one of the key features of all of these Chaos Engineering tools today, to have a way to actually abort the experiments if things start to go wrong.

Jason: So, we're getting close to the end of our time here. Gunnar, I wanted to ask if you've got anything that you wanted to plug or promote before we wrap up.

Gunnar: What I'd like to promote is the different workshops that we have available that you can use to start getting used to AWS Fault Injection Simulator. I would really like people to get that hands-on experience with AWS Fault Injection Simulators, so get your hands dirty, and actually, run some Chaos Engineering experiments. Even though you are far away from actually doing it in your organization, getting that experience, I think that's super helpful as the first step. Then you can start thinking about how could I implement this in my organization? So, have a look at the different workshops that we at AWS have available for running Chaos Engineering.

Jason: Yeah, that's a great thing to promote because it is that thing of when people ask, “Where do I start?” I think we often assume not just that, “Let me try this,” but, “How am I going to roll this out in my organization? How am I going to make the business case for this? Who needs to be involved in it?” And then suddenly it becomes a much larger problem that maybe we don't want to tackle. Awesome.

Gunnar: Yeah, that's right.

Jason: So, if people want to find you around the internet, where can they follow you and find out more about what you're up to?

Gunnar: I am available everywhere, I think. I'm on Twitter at @GunnarGrosch. Hard to spell, but you can probably find it in the description. I'm available on LinkedIn, so do connect there. I have a TikTok account, so maybe I'll start posting there as well sometimes.

Jason: Fantastic. Well, thanks again for being on the show.

Gunnar: Thank you for having me.

Jason: For links to all the information mentioned, visit our website at gremlin.com/podcast. If you liked this episode, subscribe to the Break Things on Purpose podcast on Spotify, Apple Podcasts, or your favorite podcast platform. Our theme song is called, “Battle of Pogs” by Komiku, and it's available on loyaltyfreakmusic.com.
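For readers who want to try the Resilience Hub and Fault Injection Simulator workflow Gunnar describes, here is a hedged sketch of kicking off a recommended FIS experiment from a Node.js script with the AWS SDK for JavaScript v3. The region and experiment template ID are placeholders, and field names should be verified against the current SDK documentation; the template itself would be created in the FIS console or surfaced as a Resilience Hub recommendation beforehand.

import { FISClient, StartExperimentCommand } from "@aws-sdk/client-fis";
import { randomUUID } from "node:crypto";

// Region and experiment template ID below are placeholders for illustration.
const fis = new FISClient({ region: "us-east-1" });

async function startRecommendedExperiment(templateId: string): Promise<void> {
  const response = await fis.send(
    new StartExperimentCommand({
      experimentTemplateId: templateId,
      clientToken: randomUUID(), // idempotency token for the StartExperiment call
    })
  );
  console.log("Started FIS experiment:", response.experiment?.id);
}

startRecommendedExperiment("EXT-PLACEHOLDER").catch(console.error);

FIS experiment templates also carry stop conditions tied to CloudWatch alarms, which lines up with the “automated stop button” mentioned above: if an alarm fires mid-experiment, the injected faults are halted.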

Serverless Chats
Episode #121: Educating Serverless Developers with Ivonne Roberts

Serverless Chats

Play Episode Listen Later Nov 29, 2021 55:09


Ivonne Roberts is a recently named AWS Serverless Hero and currently a Software Architect at Bill.com. Prior to joining Bill.com, she was a Senior Software Architect, Principal Engineer at Edelman Financial Engines, where she and her team were critical in the company's adoption of a serverless-first software development philosophy. She has experience in modernizing applications as part of cloud migration initiatives based on serverless architecture, and her expertise includes researching new technologies and design patterns, building prototypes, establishing reference architectures, and gaining buy-in from members across the organization. On her blog ivonneroberts.com and her YouTube channel Serverless DevWidgets, Ivonne focuses on demystifying and removing the hurdles of adopting serverless architecture and on simplifying the software development lifecycle. Twitter: https://twitter.com/ivlo11 Website/personal blog: https://ivonneroberts.com Serverless DevWidgets: https://www.youtube.com/c/ServerlessDevWidgets

Cloud Security Podcast
Challenges with Building Serverless Applications at Scale

Cloud Security Podcast

Play Episode Listen Later Nov 14, 2021 38:28


In this episode of the Virtual Coffee with Ashish edition, we spoke with Ran Ribenzaft (@ranrib), an AWS Serverless Hero, Forbes 30 Under 30 honoree, and co-founder of Epsagon (@Epsagon). Episode ShowNotes, Links and Transcript on Cloud Security Podcast: www.cloudsecuritypodcast.tv Host Twitter: Ashish Rajan (@hashishrajan) Guest Twitter: Ran Ribenzaft (@ranrib) Podcast Twitter - Cloud Security Podcast (@CloudSecPod) If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our YouTube Channel: - Cloud Security News - Cloud Security Academy

Screaming in the Cloud
Creatively Giving Back to the Cloud Community with Forrest Brazeal

Screaming in the Cloud

Play Episode Listen Later Sep 1, 2021 36:36


About Forrest Forrest is a cloud educator, cartoonist, author, and Pwnie Award-winning songwriter. He currently leads the content marketing team at Google Cloud. You can buy his book, The Read Aloud Cloud, from Wiley Publishing or attend his talks at public and private events around the world.Links: The Cloud Bard Speaks: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/the-cloud-bard-speaks-with-forrest-brazeal/ The Read Aloud Cloud: https://www.amazon.com/Read-Aloud-Cloud-Innocents-Inside/dp/1119677629 The Cloud Resume Challenge Book: https://forrestbrazeal.gumroad.com/l/cloud-resume-challenge-book/launch-deal The Cloud Resume Challenge: https://cloudresumechallenge.dev Twitter: https://twitter.com/forrestbrazeal TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part my Cribl Logstream. Cirbl Logstream is an observability pipeline that lets you collect, reduce, transform, and route machine data from anywhere, to anywhere. Simple right? As a nice bonus it not only helps you improve visibility into what the hell is going on, but also helps you save money almost by accident. Kind of like not putting a whole bunch of vowels and other letters that would be easier to spell in a company name. To learn more visit: cribl.ioCorey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks.Corey: Welcome to Screaming in the Cloud. I am Cloud Economist Corey Quinn, and as an industry, we stand on the precipice of change. There's an awful lot of movement lately. It feels like the real triggering event for this was when Andy Jassy ascended from being the CEO of AWS—the cloud computing division of Amazon—to being the CEO of all of Amazon, including things like not just AWS, but also the underpants store. 
Suddenly, we have people migrating between different cloud providers constantly.Today's guest is a change I would not have expected and didn't see coming. So, last year, on episode 127, called The Cloud Bard Speaks I had Forrest Brazeal from A Cloud Guru joining me. Forrest, welcome back.Forrest: Hey, thanks, Corey. Big fan of the show; always great to be here.Corey: At the time that we're recording this, you are unemployed, which is great because it's Screaming in the Cloud. Screaming at people on your day off is always fun. But by the time it airs, you'll have started your new job as the Head of Content for Google Cloud.Forrest: Yes. And of course, that's definitely a career change for me coming directly from A Cloud Guru, which was a wonderful place to be and it was exciting to be with them right up through their acquisition earlier this summer, but when it came time to make the next move, I ended up going to Google Cloud. I'll be starting there on Monday after this recording has been completed, and just really looking forward to helping tell the story of the cloud at a much bigger scale, something that I've been doing throughout my career with increasing levels of scale. It's exciting to do it at the level of an entire cloud provider.Corey: We'll get to the future in a minute, but I want to start by looking at the past. From my perspective, you were a consultant for a while at Trek10; we've talked about that before. You have an engineering background of building things with computers, at least presumably computers—you've been a big serverless advocate and I'm told that runs on computers somewhere, but I don't want to get into that particular debate—to the point where you were—I assume were, not are anymore—an AWS Serverless Hero?Forrest: Yes, that's right, and even going back prior to Trek10, my background is in enterprise software. I helped to migrate some of the world's largest enterprise applications from data centers to cloud when I was at Infor and continued to work on that kind of thing as a consultant later on. And in that time, I was working a lot with AWS, which was the only game in town for a lot of those years, right? You go back to 2014, 2015, I'm putting an enterprise app in the cloud, what am I going to put it on? Probably AWS if I'm serious about what I'm doing.But it's been amazing to see how the industry has grown and changed and the other options that have come along. And one of the cool things about my work in A Cloud Guru is that I really got a chance to branch out and expand, not just to AWS, but also to get a much better feel for the other cloud providers, for Azure and GCP, and even beyond to Oracle and some of the other vendors that are out there. And just to get a better understanding of how these different cloud providers thrive in different niches. So yes, it is absolutely a change for me; I obviously won't be an AWS Hero anymore, I'm having to close that chapter, sadly; I love those people and that program, but it is going to be a new and interesting change. 
I'm going to have to be back in learning mode, back in catch-up mode as I get busy on GCP.Corey: So, one thing that I think gets occluded with you because it definitely does with me is that you and I are both distinguishable personalities in the cloud community—historically AWS, let's be clear here—and you do your own custom songs; you write a newsletter that instead of snarky is insightful—of which I'm jealous—but it still has a personality that shines through; you wrote a children's book, The Read Aloud Cloud; you wound up having a new book that just came out last week for folks listening to this the day of release, called The Cloud Resume Challenge Book, if I'm getting the terms all in the right order?Forrest: Yeah, exactly.Corey: It's like naming cloud services only naming books instead? It's still challenging to keep all the words in the right order?Forrest: You know, I think it actually transcends industries; naming things is hard whether you're in computer science or not.Corey: Whereas making fun of things' names is a lot easier. It's something you did not do—to my understanding—as an employee of A Cloud Guru, The Cloud Resume Challenge, but it's something you did as a side project because it interested you. It's effectively, you want to get into tech, into cloud.Great. Here's a list of things I want you to do. And it ranges the gamut. And we talked about it before, but to my understanding it's, build a statically hosted website that winds up building your resume, and a blog post, and how to do all these things, CI/CD, frontend, backend, the works. It's a lot of work, but by the time you're done, you know a heck of a lot more about the cloud provider you're working with than you did when you started.Forrest: Yeah, not only do you know more than you did when you started, but quite frankly, you're going to know more than a lot of people who've even been doing this kind of thing for a couple of years. That's why we have people that take The Cloud Resume Challenge, who are not only aspiring cloud engineers but who have been doing this for a while, maybe even are hiring people, and they see this project and say, “Wow. That would look good on my resume. I've never actually sat down and plugged a frontend and a backend together on AWS,” and, “Maybe I've never had to actually sit down and think carefully about how I would build a CI/CD pipeline,” or, “I really want to get my hands dirty with Terraform,” or something like that. So, we see a whole range of people.I did a survey on this actually, and I found that about 40% of all the people who take The Cloud Resume Challenge have three years or more of professional IT experience. So, that should tell you how impressive it is, if you can figure this out as a brand new person to cloud. That's why we've seen so many of these folks change careers and go from things like plumbing, and working in a bank, and working in HR, and whatever else to starting roles, now, as cloud engineers and DevOps engineers. It's not entirely due to the challenge; not even mostly due to the challenge. These are folks who are self-motivated, quick learners, and are going to succeed no matter what, but The Cloud Resume Challenge was the thing that came on at the right time for them to build those skills and show what they had.Corey: And the fact that you put this together is incredibly uplifting for folks new to the field. And that's amazing, and it's great, and it's more content, the kind that I think that we need in this industry. 
You also launched a newsletter last week: the cloud jobs newsletter, which is fantastic. It's a pay-to-subscribe newsletter—which I've always debated experimenting with but never did—and lists curated jobs in the industry, sorted by level of experience required and things that you find personally interesting. You might have sponsored job listings in the future that you've already said would be clearly delineated from the others, which is the ethically right thing to do. You are seemingly everywhere in the cloud space.Forrest: Well, I mean look, I'm trying to give back. I've benefited from folks like yourself and others who have made time to help lift my career over the years, and I really want to be here to help others as well. The newsletter that you mentioned the Best Jobs in Cloud, it does have a small fee associated with it, but that's really just to help gate my [laugh] referrals so that they don't end up getting overwhelmed. You actually can get free access to the newsletter with the purchase of The Cloud Resume Challenge Book we talked about before. It's really intended to be a package deal where you prepare your resume by doing these projects, and there's a lot of other advice in that book about how to get yourself positioned for a great career in the cloud.And then you have this newsletter coming into your inbox every couple of weeks that lays out a list of jobs and they're broken down by, you know, these are jobs that are best for juniors, these are jobs where you're going to need some senior-level experience. Because what I found—and honestly, I've been kind of acting as a talent agent for a lot of engineers over the past several years as my network has grown, and I've tried to give back to others and help to connect folks who are eagerly trying to find great engineers for cool projects that are working on with folks who are eagerly looking for those opportunities. And what I've realized is whether you're a junior or whether you've been doing this for a long time, let's face it, most of us are not spending all of our time being those distinguishable personalities that you mentioned a minute ago. I like how you said distinguishable and not distinguished by the way; those are two very different words. But most of us are not spending our time doing that.You know, we're working engineers; we're working, right? We're not blogging and tweeting all the time and building these gigantic personal networks. So, it helps if you can have a trusted friend standing alongside you so that when you are thinking about maybe making a switch, or maybe you're not thinking about making a switch but you should be because of where the market is, that friend is coming alongside you and saying, “Hey, this is an awesome opportunity that I think you should consider checking out; why not just do the interview. Even if you're not really looking to move, it's always important to keep your skills fresh.” That's what this newsletter is designed to do. I hope that it'll be helpful for you, no matter where you are in your cloud career, as long as you're staying in the cloud space.Corey: And the fact that's how you view this is the answer to a question a lot of folks have asked me over drinks with theoretical conversations for years of, “Well, Corey, if you went to go work at one of these big cloud providers, it destroy everything you've built because how in the world could you be authentic while working for one of these companies?” And the answer is exactly what you're doing. 
It's, “Yeah, the people who pay you don't own you.” I cannot imagine that even Google could afford to buy your authenticity from you because once that's gone, you don't get it back, and you're one of those people in this space, that—I'm not entirely sure that you understand where you are in this space, so let me help enlighten you with that for a minute.Forrest: Oh, great. [laugh].Corey: Oh, yeah, like, the first thing I was starting to talk about that we have in common is that we do a lot of content, both of us and that sometimes occludes the very real fact that we have a distinct level of technical expertise, historically. You and I can both feel relatively deep technical questions about cloud services, but because our job doesn't have the word engineer in the title, it doesn't lead to the same type of recognition of that fact. But I want to be very clear: you are technically excellent at what you'll do. You also have a distinguished personality and brand in the space, and your authenticity is also unparalleled. When you say something is good, it is believed that it is because you say it, and the inverse is also true.You're also someone that is very clearly aligned with fighting for the user if you want to quote Tron. It's the, you're not here to shill for things that don't get people ahead in their careers; you're not here to prop things up just because that's where the money is blowing. Your position on this is unimpeachable. And I'm going to be clear here: I am more interested in Google Cloud now than I was before you made this announcement. That is the value of having someone like you aboard, and frankly, I'm astonished they managed to grab you. It shows a forward-looking ability that historically I have not associated with cloud marketing groups.Forrest: Yeah, well I mean, the space changes fast. And I think you've said this yourself as well, even with the services; you look away for six months and you look back and it's not the same industry you remember. And that actually is a challenge when you talk about that technical credibility because that can go away very, very quickly. So, it does require some constant effort to stay fresh on that, especially if you're not building every single day. But to your point about the forward-looking-ness of Google Cloud, I really am excited about that and that's honestly the biggest thing that attracted me to what they're doing.They clearly understand, I think, their position in the space. We know they're three out of three and trying to catch up, and because of that, they're able to [laugh] be really creative. They're able to make bold choices and try things that you might not try if you were trying to maintain a market-leading position. So, that's exciting to me. I'm a creative person, I like to do things that are outside the box and I think you can look forward to seeing some more outside-the-box things coming at Google Cloud here over the next couple of years.Corey: I'd be astounded if it were otherwise. The question I have for you is that ‘Head of Cloud' is not a junior role. That's not something entry-level that you're just going to pick some rando off of LinkedIn to fill. They're going to pick a different rando: you specifically as one of those randos. And to my understanding, you've never really touched Google Cloud in anger from a technical level before. Is that right? 
Am I dramatically misunderstanding, “Oh yeah, you don't remember the whole musical, and three-act stage play that you put on, and the music video, and the rock opera all about Google Cloud?” It's, “No, I must have been sick that week,” because that's the level of prolific you tend to be?Forrest: [laugh].Corey: What is your experience with it?Forrest: That's yet to come. So, check back on the Google Cloud rock opera; we'll see if that takes place. So no, I'm going to be learning about Google Cloud. This will be a chance for me to kind of start over a little bit from first principles. In another sense, I've been interacting with Google services for years.Keep in mind that Google Cloud is not just Google Cloud Platform, but it's G Suite as well, and there's a lot going on there. So, I definitely am going to be going back to being a beginner a little bit here. They do say if you can teach something to a beginner, you have to really understand it at an expert level. And I know that whether I'm doing this officially on behalf of Google or otherwise, I'm going to be continuing to try to help and educate folks wherever I can. So, it's going to be incumbent on me, if I want to keep doing that, to go deep quickly and continue to learn.I'm excited about that challenge. I've been doing a lot with AWS for a long time, I don't know everything. In fact, I know less every day with the amount that they're continuing to roll out, but this is a chance for me to expand, become a more well-rounded person to see how the other cloud lives. I'm taking that very seriously; I'm not going to be an expert overnight, but stick around, follow me. I'm going to be learning, I'm going to share what I learned, and maybe we'll all get a little better Google Cloud together.Corey: The thing I can't quite get past is that when you told me that you had resigned from A Cloud Guru, I want to be selfish here and say that there were two things that went through my mind. The first was, “Okay, it's probably AWS. I hope it's AWS,” because the alternative is you're going somewhere potentially independent, and I know you keep arguing with me on this point but you are one of the few people I could point out that could start something on the basis of cloud content with a personal brand that I would view as potentially being an audience split for what I do. And it's, “Oh, you're going to go work for a big cloud company. That's awesome. Is it AW—no, it's not.” And that one threw me for a different loop where it's, that is very odd because you have identified, clearly, publicly as the leading voice in AWS in many contexts. It just really surprised me. Did you consider looking at AWS as an alternative?Forrest: I mean first, I don't know that it's fair to say that I was a leading voice for AWS. There's many wonderful people that [crosstalk 00:14:13]—Corey: To be clear, Forrest, that was not a question. You are a leading voice in the community for AWS and understanding how it works. That is one of those things that no one knows their own reputation. This is one of those areas. Take it from me—a thought leader—that it's true. Please continue.Forrest: You have led my thoughts in that direction, so thanks for that, Corey. But to your question, Corey, regarding how did I decide what career move to make, and definitely was a challenge. And it was a struggle for me to say, well, I'm going to leave behind this warm, friendly AWS community that I know, and try something brand new. But it's not the first time I've done something like that in my career. 
You mentioned already that I spent a number of years as a very, very technical person and I identified strongly as an engineer.I had multiple degrees in computer science and I had worked as a frontend/backend software engineer, I'd worked as a database administrator, I'd worked as a cloud engineer, and a manager of cloud engineers, and I'd consulted for companies from startups all the way up to the Fortune 50, always on cloud and always very hands-on and writing code. I've never had a job where I didn't have an IDE open and wasn't writing code every day. And it was a tremendous shock to my system when I started moving away from that, moving a little bit more into the business side of cloud, learning more about marketing, learning how to impact the bottom line of a company in other ways. That was a real challenge, and I went through months where I kind of felt like I was having an identity crisis because if I'm not writing code if I didn't create YAML today, who am I? Can I call myself an engineer? What worth do I have? And I know a lot of folks have struggled with this, and a lot of times, I think that's what sometimes holds people back in their career, saying, “Well, I can only do what I've already done because I've identified myself so strongly with it.” So, I'm encouraging anyone who's listening, if you're at that point where you feel like, “I don't know if I can leave behind what I know because will I still be able to succeed?” I would encourage you to go ahead and take that step and commit to it if you really believe that you have an opportunity because growth is ultimately going to be a good thing for you. Getting outside your comfort zone and feeling those unpleasant cracks as you start to grow and change into a different person, that ultimately is a strength-building thing.If you're not growing, you're not struggling, you're not going to be the person that you want to be. So, tying all that back, I went through one round of that already, Corey, when I moved a little bit away from technical delivery. I'm about to go through a second round of that when I move away a little bit farther from the AWS community. I believe that's going to be a growth opportunity. But yeah, it's going to be hard.Corey: It really is. The idea of walking away from the thing that you've immersed yourself in is really an interesting thing to think about. Forgive me in advance for the next question; I have to ask it. As a part of your interview process at Google, do they make you write code in a Google Doc?Forrest: Not as a part of this interview process. I interviewed at Google years ago for a developer advocate position, actually, and made it all the way through their interview process, writing many lines of code in many Google Docs, but not this time.Corey: Yeah, I confess, I did the same with an SRE job many years ago at Google, and again, you are better at writing code than I am; I did not progress past this stage. But it was moot, honestly, because the way that the interview was conducted, the person I was talking to was so adversarial at the time and so, I got to be honest, condescending that I swore I would never put myself through that process again. But I was also under the impression that the ritualistic algorithmic hazing via whiteboarding code was sort of a requirement for every role at Google. So, things change, times change, people change. I'm gratified to know that was not a part of your interview process.Forrest: Well, I mean, I think it was more just about the role. 
My favorite whiteboard interview—Corey: Nonsense. Every accountant must be able to solve code on a whiteboard.Forrest: No, I don't think that's true. But my favorite whiteboard interview story and I'm sure you have a few, I remember being in an interview with someone—I won't say who it was or what company it was, but it wasn't not Google—it was some sort of problem where I was having to lay out, I don't know, a path for a robot to take through an environment or something like that. And I wrote the code, and it was fine. It was, like, iterative. It was what you would do if you had ten minutes to write something.And then the interviewer looked at the code, and he said, “Great, now write it again, but don't use any variables.” And I remember sitting there for a minute thinking, “In what professional context [laugh] would someone encourage you to do that in a pair programming situation?”Corey: Right. The response there is, “What the hell does your codebase in production look like?”Forrest: [laugh]. And of course, the answer is you're supposed to be using, like, the stack, and it's kind of like this thought exercise with the local stack. But even if you were to do that, the performance hit would be tremendous. It would not be a wise or logical way to actually write the code. So, it was a pure trivial, kind of like a just academic exercise that they were recommending. And I remember being really turned off by that. So, I guess if you're considering putting problems like that in your interview process, don't. They're not helpful.Corey: Yeah, I remember hearing at one point one of the Microsoft brain teasers which they've since done away with—credit where due—where someone was asked, “How would you go about finding out the weight of a Boeing 747?” And the person responded with the exact weight of a Boeing 747 because their previous job had been at Boeing for seven years. And that was apparently not what they were expecting to hear. But yeah, it's sort of an allegory as well for, first, this has no bearing on your ability to do the job, and two, expertise is important. There's a lot of ways I could try and Hacker News first principles my way through something like that, but the easier answer is for me to call someone at Boeing and ask them, or Google it, depending on exactly how precise I need to be and whether lives hang in the balance of the [laugh] answer to the question. That's a skill that seems lost somewhere, too.Forrest: Yeah, and this takes us all the way back to the conversation about The Cloud Resume Challenge, Corey. And why it works is it takes the burden of proof off of you in the interview, or the burden of proof off the interviewer to have to come up with some kind of trivial problem that you've done under time pressure, and instead, it lets the conversation flow naturally back to, “Well, what have you done? Tell me about a story about a problem that you have solved, a challenge you ran into, and how you got past it.” That's all work that has taken place prior to the interview that you've reflected on, that's built you as a person and as an engineer, even if you don't necessarily have professional experience. That's how I try to conduct interviews and I think it's a much healthier and more sustainable way to find people that you'll like to work with.Corey: Is this going to be your first outing at a giant multinational tech company?Forrest: No, although it will be my first time with a public company. 
When I worked at Infor, Infor was the largest privately owned software company in the world. I don't know if that's still technically true or not, but it'll be my first time with a publicly-traded company.Corey: Fantastic. The nice thing from my perspective is it gives me a little bit more context into what companies can and can't do, and how things are structured. It feels like your content—I mean, the music videos and things and whatnot that you do—I mean, you have something that I don't, which is commonly known as musical talent. And that's great. I can write funny lyrics, but you are not just able to write lyrics, you're able to perform, you're able to sing, the unanswered question for the entire interview right now is whether you can also dance. So, we're going to find that out at some point.Forrest: You would think that I could, Corey. I definitely seem like someone who should be able to tap dance. I regret to tell you that I can't, but I want to learn.Corey: For a lot of this, it's clearly you're doing this in front of your own piano with a microphone in front of you, doing it live, and having a—I don't know if it is a built-in webcam to a laptop that's sitting in front of you or something else, but—Forrest: I'm playing with that.Corey: Yeah, well don't take this the wrong way; it's not a high definition 4k camera, et cetera. It's the Lightning's—eh, it's your home office. You're comfortable there. It's not a studio. What I'm most excited about—from my perspective, I know what you're excited about—but you're now going to be producing content for Google and I checked the numbers in preparation for this interview.It's okay, can Google wind up affording a production house of some sort to work on your videos to upscale the production value of some of what you're doing? And I have checked; it is not the likeliest scenario—and I have no inside knowledge for those who are trying to trade on this—but yes, it turns out that Google could, in fact, shore up your content by buying you Disney.Forrest: I think that's technically true, and I do expect that to happen in the next three to six months, so that is completely inside information.Corey: Oh, exactly. Have reasonable expectations, but you could let it go as long as a year because that's when the first annual review cycle comes in and you want to give people time to let that clear through M&A and make sure that they are living up to their commitments to you, of course.Forrest: That's right, yeah. We're just about to go into the quiet period there. No, but kind of to that point, though, and you bring up the amateurish quality of a lot of these videos that I put together in terms of the lighting and the staging, and everything else. And I am doing a little bit to help with that. Like, it would be great if you could see—Corey: To be clear, that is not a criticism. I'm in the same boat as you are on this. It's—[laugh]—Forrest: So, far from a criticism, it's actually pretty deliberate. The fact of the matter is, there's something very raw, very authentic about just seeing someone sitting in their house, at their piano, playing and singing. There's no tricks, there's no edits, there's no glitz, there's no makeup team behind the scenes, there's no one who's involved with this other than just me caring a lot about something and sitting down and singing about it. And I think some of that is what helps come across to people and it helps these things travel. 
So yeah, I'm looking forward a lot to being able to collaborate with other fantastic people at Google, and I can't exactly promise what will come out of that, but I'm quite sure there will be more fun content to come.But I hope never to lose that, kind of, DIY sensibility. Because, again, my background is as an engineer, and the things I create, whether it's music, whether it's cartoons, whether it's books, or other things I write, I never want to lose that sense of just excitement about the technologies I'm working with and the fact that I get to use the tools that are available at my disposal to share them with you as directly and honestly and humanly as possible.Corey: Up next we've got the latest hits from Veem. Its climbing charts everywhere and soon its going to climb right into your heart. Here it is!Corey: No matter how hard you try, you're not able to hide the sheer joy you take from even talking about this sort of stuff, and I think that's a powerful lesson. For folks listening to this who want to expand into their own content story and approach things that they find interesting in a way that they enjoy, don't try and do what I do; don't try to do what Forrest does; do the thing that makes you happy. I would love to be able to sing, but I can't. I can write funny lyrics, but those don't do well in pure text form. I'm fortunate that I was able to construct a structure on my end where I can pay people who do know how to sing—like Adeem the Artist and many more—to participate in a lot of the things that I get to work on.But find the way that you want to express things and do you. You're only ever going to be second best at being Forrest or being Corey, but you're always going to be number one at being whoever you happen to be. I think that's a lesson that gets overlooked an awful lot.Forrest: Yeah, I've been playing with this thought for a while that the only real [moat 00:24:24] out there is originality, is your personality. Everything else can be cloned, but you are an individual. And I mean that to us specifically, Corey, and also the general ‘you' to anybody listening to this. So, find what makes you tick. It sounds like the most cliche device in the world, but another way, it's also the only useful advice that's out there.Corey: I want to be clear, you don't work there yet and I'm not here to effectively give undue praise to large companies, but I just want to say again how the sheer vision of hiring you is just astounding to me. That it makes perfect sense, don't get me wrong, but because I know that every large company, somewhere, at some point, internally has had a conversation of, “We really should hire Corey, except…” well, I've got to level with you, Corey without the except parts looks an awful lot like you.Forrest: Yeah, you know, you brought up earlier this idea that well, hopefully, Forrest doesn't lose his authenticity at Google. And one of the things that I appreciate about the team that I've talked to there so far, is that they really do understand the power of individuals and voices. And so that's not going to happen. You know, my authenticity is not for sale. And frankly, I'm useless without it, so it wouldn't be in anyone's best interest to buy it anyway. And that would be true for you as well, Corey. 
Whatever you end up doing, whether you someday ascend to the head of AWS Marketing, as is apparently your divine destiny, I know that—Corey: Well, I'm starting to worry that there's not too many people left in that org, so I'm worried people took me seriously and they think I've got this in hand or something.Forrest: You may be the last man standing for all we know. You may be able to go in and just, kind of, do this non-hostile takeover where there's just no one there to defend against you, anymore.Corey: Well, speaking about takeovers and whatnot, we talk about Google acquiring Disney so you now have a production studio on this. But let's talk about actual hard problems you're going to be solving there. Do you think you can bring back Google Reader?Forrest: That would be my dream. I have no inside knowledge of what would even be required to bring that off, but I think it's obvious that it's not just about that particular product that people like—because yes, you or I could go make a startup and create something that did what Google Reader did—but it's about what it represents. It's about the commitment that it would mean to Google's customers and to their products. So yeah, something like bring Google Reader back would be a wonderful thing for everyone that subscribes to Google but it would also be a fantastic storytelling element for Google as well. So yes, I'd be entirely in favor of something like that. I hope we can make it happen someday.Corey: Oh, as would I. YOu're in Brian Hall's org, correct?Forrest: Yes.Corey: Brian is a man who was the VP of Product Marketing over at AWS, went to Google for the same role, was sued by AWS under the auspices of a non-compete, which is just the most ridiculous thing in the world, and I want to be very clear here, you can say an awful lot about Brian Hall. I say an awful lot about Brian Hall. AWS says a lot about Brian Hall in very poorly conceived depositions and lawsuits that should never have been allowed to continue, and at least have an editor go over them, but that's a separate problem. But one thing you cannot say about Brian is that he is not incredibly intelligent. And the way that I find that manifesting is, I do not accept that he is someone with such a limited vision that he would be prepared to even entertain the idea of hiring you without giving you what amounts to effectively full creative control of the things you're going to be working on.You are not someone it would make any sense to hire and then try and shove into a box. That is my assessment of everything I've read on every conversation I've had with Googlers in the marketing org; it all speaks to something like this. Was that your impression during the interview? Specifically that you have carte blanche, not that Brian is smart. You're about to be in his org; you're obligated to say it. That's okay. We'll meet at the bar until the real Brian stories later but I'm talking about their remit here.Forrest: No, my authenticity is not for sale, but at the same time. I am a big fan of Brian's and have been since his AWS days, which was honestly one of the big reasons why I ended up joining his org. But yeah, to your question about what is that role going to look like, day to day, of course obviously, that remains to be seen, but it is my understanding that it will have a consultative element and that I will have some opportunity to help to drive some influence across some different teams. 
Something that I've learned as I've grown in my career a little bit and I've moved into more of management type of roles is that the people that report to you are such a small fraction of the overall influence that you should be having to be really successful in a role like that, any kind of leadership role, so much more of your leadership is going to happen indirectly and by influence, and it's going to happen slowly over time, as you build support for what you're doing and you start to show value and encourage other people to come around to your side. That's just the reality of making change in large organizations.And of course, this is by far the largest organization I've ever worked in, so I know it's going to take time. But my understanding is I do have a little bit of leeway to bring some of my ideas in, and I'm excited about that, and you can sort of judge for yourself, how successful I am, over time.Corey: My last question for you is that sort that has the potential to get you in trouble, except I think I'm going to agree with your answer to this. Do you believe that they're going to Google Reader Google Cloud?Forrest: If I believed that I wouldn't be joining? So obviously, no, I don't believe that.Corey: I have to confess that for the longest time, I was convinced that this was yet another Google misadventure, where they were going to dabble with it, sort of half-ass it, and then shut it down. Because that seems to be the fate of so many Google products out there. The first AWS service that entered beta was Simple Queuing Service. What is a queue but a messaging system, and we know how Google treats messaging products. Same problem; same story.I have to say over the last year or so, my perspective has evolved considerably. They are signing ten-year deals with very large banks; they are investing heavily in hiring, in R&D, in marketing clearly, in a bunch of different areas that are doing the right thing for the long-term. The financial analysts like to beat Google Cloud up because I think two quarters ago, they showed a $5 billion loss, either for the year or for the quarter, and, “It's not making money.” It's, “No. Given Google's position in the market, I'd be horrified if it were. The only way it shouldn't be turning a profit is if there's nowhere left to invest in the platform.”They're making the investments, they're doing the right things. And I have to say I've gone from, “I don't know if I would trust that without an exodus plan,” to, “Yeah, you should have a theoretical exodus plan the same way you should with any provider, but it's not the sort of thing that I feel the need to yank away on 30-days' notice.” I have crossed that bridge myself. In all sincerity, cheap, easy jokes aside, it's clear to me from what I've seen that Google Cloud is going to be around for the long term. Now, we are talking long-term in terms of tech companies, not 150-year-old companies based in Europe, but we can aspire to it. I expect it to outlive me, and not just because I have a big mouth and piss off large companies.Forrest: Yeah. Some of my closest friends and longest-tenured colleagues, people I've worked with for years are GCP engineers, people who are not working for GCP, but they're building on GCP services at various companies. And they always come to me and I've noticed a steady increase in this over the past, I would say 12 to 18 months where they say, “I love working on GCP. I love these services. I love the way the IAM is designed. I love the way the projects are put together. 
It just feels right. It feels natural to me. It scratches some sort of an itch in my engineering brain.”And then they pause and they say, “Why don't more people get this? Why don't more people understand this story?” That's a problem that I can help to solve. So, I'm really excited about helping to tell the story of Google Cloud. And yeah, that chapter is just about to be written.Corey: I can't wait to see what happens next. If people want to learn more about what you're up to, and how you're approaching these things, and sign up for your various newsletters, where's the entry point? Where can they find you?Forrest: I would say go to my Twitter. I'm on Twitter @forrestbrazeal and there'll be a link in my bio that has links to all the things we've mentioned: The Cloud Resume Challenge Book, my other extremely bizarre book about cloud which is called The Read Aloud Cloud. And there you can sign up for that Best Jobs in Cloud newsletter and all the other things we talked about. So, I'll see you there.Corey: I look forward to including those links in the [show notes 00:32:24]. That's how I wind up expressing my support for all of my guests' nonsense, but particularly yours. Forrest, thank you so much for taking the time to speak with me.Forrest: Much appreciated, Corey. Always a pleasure.Corey: Forrest Brazeal, currently unemployed, but by the time you listen to this, the Head of Content at Google Cloud. I am Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with a long, obnoxious, insulting comment, and then rewrite the entire insulting comment without using vowels.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

Screaming in the Cloud
Serverless Hero, Got Servers in His Eyes with Ant Stanley

Screaming in the Cloud

Play Episode Listen Later Aug 31, 2021 37:02


About Ant: Ant co-founded A Cloud Guru, ServerlessConf, JeffConf, ServerlessDays and now running Senzo/Homeschool, in between other things. He needs to work on his decision making.Links: A Cloud Guru: https://acloudguru.com homeschool.dev: https://homeschool.dev aws.training: https://aws.training learn.microsoft.com: https://learn.microsoft.com Twitter: https://twitter.com/iamstan Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of: canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks.Corey: This episode is sponsored in part by Cribl Logstream. Cribl Logstream is an observability pipeline that lets you collect, reduce, transform, and route machine data from anywhere, to anywhere. Simple, right? As a nice bonus it not only helps you improve visibility into what the hell is going on, but also helps you save money almost by accident. Kind of like not putting a whole bunch of vowels and other letters that would be easier to spell in a company name. To learn more visit: cribl.ioCorey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while I talk to someone about, “Oh, yeah, remember that time that you appeared on Screaming in the Cloud?” And it turns out that they didn't; it was something of a fever dream. Today is one of those guests that I'm, frankly, astonished I haven't had on before: Ant Stanley. Ant, thank you so much for indulging me and somehow forgiving me for not having you on previously.Ant: Hey, Corey, thanks for that. Yeah, I'm not too sure why I haven't been on previously. You can explain that to me over a beer one day.Corey: Absolutely, and I'm sure I'll be the one that buys it because that is just inexcusable. So, who are you? What do you do?
I know that you're a Serverless Hero at AWS, which is probably the most self-aggrandizing thing you can call someone because who in the world in their right mind is going to introduce themselves that way? That's what you have me for. I'll introduce you that way. So, you're an AWS Serverless Hero. What does that mean?Ant: So, the Serverless Hero, effectively I've been recognized for my contribution to the serverless community; what that contribution is, is potentially dubious. But yeah, I was one of the original co-founders of A Cloud Guru. We were a serverless-first company, way back when. So, from 2015 to 2016, I was with A Cloud Guru with Ryan and Sam, the two other co-founders.I left in 2016 after we'd run ServerlessConf. So, I led and ran the first ServerlessConf. And then for various reasons, I decided, hey, the pressure was too much; I needed a break, and a few other reasons I decided to leave A Cloud Guru. A very amicable split with my former co-founders. And then yeah, I kind of took a break, took some time off, de-stressed, got the serverless user group in London up and running; ran a small conference in London called JeffConf, which was a take on a blog that Paul Johnson, who was one of the folks who ran JeffConf with me, wrote a while ago saying we could have called it serverless—and we might as well have called it Jeff. Could have called it anything; might as well have called it Jeff. So, we had this joke about JeffConf. Not a reference to Mr. Bezos.Corey: No, no. Though they do have an awful lot of Jeffs working over there. But that's neither here nor there. ‘The Land of the Infinite Jeffs' as it were.Ant: Yeah, exactly. There are more Jeffs than women in the exec team if I remember correctly.Corey: I think now it's a Dave problem instead.Ant: Yeah, it's a Dave problem. Yeah. [laugh]. It's not a problem either way. Yeah. So, JeffConf morphed into ServerlessDays, which is a group of community events around the world. So, I think AWS said, “Hey, this guy likes running serverless events for some silly reason. Let's make him a Serverless Hero.”Corey: And here we are. Which is interesting because a few directions you can take this in. One of them, most recently, we were having a conversation, and you were opining on your thoughts of the current state of serverless, which can succinctly be distilled down to ‘serverless sucks,' which is not something you'd expect to hear from a Serverless Hero—and I hope you can hear the initial caps when I say ‘Serverless Hero'—or the founder of a serverless conference. So, what's the deal with that? Why does it suck?Ant: So, the whole serverless movement started to gather momentum in 2015. The early adopters were all extremely experienced technologists, folks like Ben Kehoe, the chief robotics scientist at iRobot—he's incredibly smart—and folks of that caliber. And those were the kinds of people who spoke at the first serverless conference, spoke at all the first serverless events. And, you know, you'd kind of expect that with a new technology where there's not a lot of body of knowledge, you'd expect these high-level, really advanced folks being the ones putting themselves out there, being the early adopters. The problem is we're in 2021 and that's still the profile of the people who are adopting serverless, you know? It's still not this mass adoption.And part of the reason for me is because of the complexity around it. The user experience for most serverless tools is not great. It's not easy to adopt.
The patterns aren't standardized and well known—even though there are a million websites out there saying that there are serverless patterns—and the concepts aren't well explained. I think there's still a fair amount of education that needs to happen.I think folks have focused far too much on the technical aspects of serverless, and what is serverless and not serverless, or how you deploy something, or how you monitor something, observability, instead of going back to basics and first principles of what is this thing? Why should you do it? How do you do it? And how do we make that easy? There's no real focus on user experience and adoption for inexperienced folks.The adoption curve, the learning curve for serverless, no matter what platform you do, if you want to do anything that's beyond a side project it's really difficult because there's no easy path. And I know there's going to be folks that are going to complain about it, but the Serverless Stack just got a million dollars to solve this problem.Corey: I love the Serverless Stack. They had a great way of building things out.Ant: Yeah.Corey: I cribbed a fair bit from what they built when I was building out my own serverless project of the newsletter production pipeline system. And that's awesome. And I built that, and I run it mostly as a technology testbed. But my website, lastweekinaws.com?I pay WP Engine to host it on WordPress and the reason behind that is not that I can't figure out the serverless pieces of it, it's because when I want to hire someone to do something that's a bit off the beaten path on WordPress, I don't have to spend $400 an hour for a consultant to do it because there's more than 20 people in the world who understand how all this stuff fits together and integrates well. There's something to be said for going in the direction the rest of the market is when there's not a lot of reason to differentiate yourselves. Yeah, could I save thousands of dollars a year in infrastructure costs if I'd gone with serverless? Of course, but people's time is worth more than that. It's expensive to have people work on these things.And even on the serverless stuff that I've built, if it's been more than six months since I've touched a component, someone else may have written it; I have to rediscover what the hell I was thinking and what the constraints are, what the constraints I thought existed there in the platform. And every time I deal with Lambda or API Gateway, I come away with a spiraling sense of complexity tied to all of it. And the vision of serverless I believe in, truly, but the execution has lagged from all providers.Ant: Yeah. I agree with that completely. The execution is just not there. I look at the situation—so Datadog had their report, “The State of Serverless Report” that came out about a month or two ago; I think it's the second year they've done it, now, might be the third. And in the report, one of the sections, they talked about tooling.And they said, “What's the most adopted tools?” And they had the Serverless Framework in there, they had SAM in there, they had CloudFormation, I think they had Terraform in there. But basically, Serverless Framework had 70% of the respondents. 70% of folks using Datadog and using serverless tools were using Serverless Framework. But SAM, AWS's preferred solution, was like 12%.It was really tiny and this is the thing that every single AWS demo example uses, that the serverless developer advocates push heavily. 
And it's the official solution, but the Serverless Application Model is just not being adopted and there are reasons for that, and it's because it's the way they approach the market because it's highly opinionated, and they don't really listen to end-users that much. And then there's the CDK out there. So, that's the other AWS organizational complexity as well, you've got another team within AWS, another product team who've developed this different way—CDK—doing things.Corey: This is all AWS's fault, by the way. For the longest time, I've been complaining about Lambda edge functions because they are not at all transparent; you have to wait for a CloudFront deployment for it to update every time, only to figure out that in my case, I forgot a comma because I've never heard of a linter. And it becomes this awful thing. Only recently did I find out they only run at regional edge caches, not just in all of the CloudFront POPs, so I said, “The hell with it,” ripped it out of everything I was using it with, and wound up implementing it in bog-standard Lambda because it was easier. But then rather than fixing that, they've created their—what was it—their CloudFront Workers. Or is it—is it CloudFront Workers, or is it CloudFront Functions?Ant: No, CloudFront Functions.Corey: I don't even remember it because rather than fixing the thing, you just released a different thing that addresses these problems in very different ways that aren't directly compatible. And it's oh, great, awesome. Terrific. As a customer, I want absolutely not this. It's one of these where, honestly, I've left in many cases with the resigned position of, if you're not going to take this seriously, why am I?Ant: Yeah, exactly. And it's bizarre. So, the CloudFront Functions thing, it's based on Nginx's [little 00:08:39] JavaScript engine. So, it's the Nginx team supporting it—the engine—which has a really small number of users; it's tiny, there's no foundation behind it. So, you've got these massive companies reliant on some tiny organization to support the runtime of one of their businesses, one of their services.And they expect people to adopt it. And on top of that, that engine's primary supported language is JavaScript's ES5 or ES2015, which is the 2015 edition of JavaScript, so it's a six-year-old version of JavaScript. You cannot use modern JavaScript with it, which also means you can't use any other tools in the JavaScript ecosystem for it. So basically, anything you write for that is going to be vanilla, you're going to write yourself, there's no tooling, no community to really leverage to use that thing. Again, like, why have you even done that? Why have you now gone off and taken an engine no one uses—they will say someone uses it, but basically no one uses—Corey: No one willingly uses or knowingly uses it.Ant: Yeah. No one really uses. And then decided to run that. Why not look at WebAssembly—it's crazy—which has a foundation behind it and they're doing great things, and other providers are using WebAssembly on the edge. I just don't understand the thought process—well, I say I don't understand, but I do understand the thought processes behind Amazon. Every single GM in Amazon is effectively incentivized to release stuff, and build stuff, and to get stuff out the door. That's how they make money. You hear the stories—Corey: Oh, it's been clear for years. They only recently stopped—in their keynotes every year—talking about the number of feature releases that they've had over the past 12 months.
And I think they finally had it clued into them by someone snarky on Twitter—ahem—that the only people that feel good about that are people internal to AWS because customers see that and get horrified by, “I haven't kept up with most of those things. How many of those are important? How many of them are nonsense?”And I'm sure somewhere you have released a serverless that will solve my business problem perfectly so I don't have to build it together myself out of Lambda functions, and string, and popsicle sticks, but I'll never hear about it because you're too busy talking about nonsense. And that problem still exists and it's writ large. There's a philosophy around not breaking existing workloads—which I get; that's a hard problem to solve for—but their solution is, rather than fixing existing services, they'll launch a new one that doesn't have those constraints and takes a different approach to it. And it's horrible.Ant: Yeah, exactly. If you compare Amazon to Apple, Apple releases a net-new product once a year, once every two years.Corey: You're talking about new generations of products, that comes out on an annualized basis, but when you're talking about actual new product, not that frequently. The last one—Ant: Yeah.Corey: —I can really think of is probably going to be AirPods, at least of any significance.Ant: AirTags is the new one.Corey: Oh, AirTags. AirTags is recent, which is a neat—but it's an accessory to the rest of those things. It is—Ant: And then there's AirPods. But yeah, it's once—because they—everything works. If you're in that Apple ecosystem, everything works. And everything's back-ported and supported. My four-year-old phone still works and I had a five-year-old MacBook before this current one, still worked, you know, not a problem.And those two philosophies—and the Amazon folk are heavily incentivized to release products and to grow the usage of those products. And they're all incentivized within their bubbles. So, that's why you get competing products. That's why Proton exists when CodeBuild and CodePipeline, and all of those things exist, and you have all these competing products. I'm waiting for the container team to fully recreate AWS on top of containers. They're not far away.Corey: They're already in the process of recreating AWS on top of Lightsail. It's more or less the, “Oh, we're making this the simpler version.” Which is great. You know who likes simplicity? Freaking everyone.So, it's the vision of a cloud, we could have had but didn't. “Oh, you want a virtual machine. Spin up a Lightsail instance; you're going to get a fixed amount of compute, disk, RAM, and CPU that you can adjust, and it's going to cost you a flat fee per month until you exceed some fairly high limits.” Why can't everything be like that, on some level? Because in many cases, I don't care about wanting to know exactly, to the penny, where to shave things off.I want to spin up a fleet of 20 virtual machines, and if they cost me 20 bucks a pop each a month, I can forecast that, I can budget for that, I can do a lot and I don't actually care in any business context about the money there, but dialing it in and having the variable charges and the rest, and, “Oh, you went through a managed NAT gateway. That's going to double your bandwidth price and it's going to be expensive. Surprise, you should have looked more closely at it,” is sort of the lesson of the original AWS services.
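[Editor's aside: to put rough numbers on the managed NAT gateway surprise Corey describes, here is a small back-of-the-envelope sketch in Python. It assumes the commonly quoted us-east-1 list prices of roughly $0.045 per GB of data processed plus about $0.045 per gateway-hour; regional pricing differs and changes over time, so treat every figure as illustrative rather than a quote.]

```python
# Back-of-the-envelope NAT gateway cost sketch (illustrative prices, not a quote).
HOURS_PER_MONTH = 730
NAT_HOURLY_RATE = 0.045        # assumed us-east-1 list price, $/hour per gateway
NAT_DATA_PROCESSING = 0.045    # assumed us-east-1 list price, $/GB processed

def nat_gateway_monthly_cost(gb_processed: float, gateways: int = 1) -> float:
    """Rough monthly cost of pushing traffic through managed NAT gateways."""
    hourly = gateways * NAT_HOURLY_RATE * HOURS_PER_MONTH
    processing = gb_processed * NAT_DATA_PROCESSING
    return hourly + processing

# 1 TB/month through one gateway: ~$32.85 in hourly charges plus ~$46.08 in
# data processing, and that is before the regular data transfer charges that
# still apply on top, which is where the "double your bandwidth price" sting comes from.
print(f"${nat_gateway_monthly_cost(1024):.2f}")
```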
At some level, they've deviated away from anything resembling simplicity and increasingly we're seeing a world where in order to do something effectively with cloud, you have to spend 12 weeks going to cloud school first.Ant: Oh, yeah. Completely. See, that's one of the major barriers with serverless. You can't use serverless for any of the major cloud providers until you understand that cloud provider. So yeah, do your 12 weeks of cloud school. And there's more than enough providers.Corey: Whoa, whoa, whoa. Before you spin up a function that runs code, you have to understand the identity and security model, and how the network works, and a bunch of other ancillary nonsense that isn't directly tied to business value.Ant: And all these fun things. How are you going to test this, and how are you going to do all that?Corey: How do you write the entry point? Where is it going to enter? What is it expecting? What objects are getting passed in, if any? What format is it going to take?I've spent days, previously, trying to figure out the exact invocation for working with a JSON object in Python, what that's going to show up as, and how specifically to refer to it. And once you've done that a couple of times, great, fine, it's easy. Copy and paste it from the last time you did it. But figuring it out from first principles, particularly in a time when there aren't a lot of good public demonstrations of this—especially early days—it's hard to do.Ant: Yeah. And they just love complexity. Have you looked at the second edition—so the third version of the AWS SDK for JavaScript?Corey: I don't touch JavaScript with my hands most days, just because I'm bad at it and I don't understand the asynchronous model and computers are really not my thing most days.Ant: So, unfortunately for my sins, I do use JavaScript a lot. So, version two of the SDK is effectively the single most popular Cloud SDK of any language, anything out there; 20 million downloads a week. It's crazy. It's huge—version two. And JavaScript's a very fast-evolving language, though.Basically, it's a bit like the English language in that it adopts things from other languages through osmosis, and co-opts various other features of other languages. So, JavaScript has—if there's a feature you love in your language, it's going to end up in JavaScript at some point. So, it becomes a very broad Swiss Army knife that can do almost anything. And there's always better ways to do things. So, the problem is, the version two was written in old JavaScript from 2015, an ES5, ES6 kind of level.So, from 2015, 2016, I—you know, 2020, 2021, JavaScript has changed. So, they said, “Oh, we're going to rewrite this.” Which, good; you should do. But they absolutely broke all compatibility with version two. So, there is no path from version two to version three without rewriting what you've got.So, if you want to take anything you've written—not even serverless—anything in JavaScript you've written and you want to upgrade it to get some of the new features of JavaScript in the SDK, you have to rewrite your code to do that. And in some instances, if you're using hexagonal architecture and you're doing all the right things, that's a really small thing to do. But most people aren't doing that.Corey: But let's face it, a lot of things grow organically.Ant: Yeah.Corey: And again, I can sit here and tell you how to build things appropriately and then I look at my own environment and… yeah, pay no attention to that burning dumpster fire behind the camera.
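[Editor's aside: to make the entry-point question Corey raises above concrete, a minimal Python Lambda handler for an API Gateway proxy event looks roughly like the sketch below. The field names follow the common proxy-integration event shape; other triggers such as S3, SQS, or EventBridge deliver differently shaped dictionaries, so this is illustrative rather than exhaustive.]

```python
import json

def handler(event, context):
    # With an API Gateway proxy integration, the JSON payload arrives as a
    # string under event["body"] and still has to be parsed before use.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    # The proxy integration expects a response dict in this shape.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```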
And it's awful. You want to make sure that you're doing things the right way but it's hard to do and taking on additional toil because the provider decides the time to focus on this is a problem.Ant: But it's completely not a user-centric way of thinking. You know, they've got all their 14—is it 16 principles now? Did they add two principles, didn't they?Corey: They added two to get up to 16; one less than the numbers of ways to run containers in AWS.Ant: Yeah. They could barely contain themselves. [laugh]. It's just not customer-centric. They've moved themselves away from that customer-centric view of the world because the reality is, they are centered on the goals of the team, the goals of the GM, and the goals of that particular product.That famous drawing of all the different organizational charts, they got the Facebook chart, and the Google Chart, and the Amazon chart has all these little circles, everyone pointing guns at each other. And the more Amazon grows, the more you feel like that's reality. And it's hurting users, it's massively hurting users. And we feel the pain every day, absolutely every day, which is not great. And it's going to hurt Amazon in the long run, but short-term, they're not going to see that pain quarterly, they're not going to see that pain, probably within 12 months.But they will see the pain long run. And if they want to fix it, they probably should have started fixing it two years ago. But it's going to take years to fix because that's a massive cultural shift to say, “Okay, how do we get back to being more customer-focused? How do we stop that organizational targets and goals from getting in the way of delivering value to the customer?”Corey: It's a good question. The hard part is getting customers to understand enough of what you put out there to be able to disambiguate what you've built, and what parts to trust, what parts not the trust, what parts are going to be hard, et cetera, et cetera, et cetera, et cetera. The concern that I've got across the board here is, how do you learn? How do you get started with this? And the way that I came into this was I started off, in the early days of AWS, there were a dozen services, and okay, I could sort of stumble my way through it.And the UI was rough, but it got better with time. So, the answer for a lot of folks these days is training, which makes sense. In the beginning, we learned through things like podcasts. Like there was a company called Jupiter Broadcasting which did a bunch of Linux-oriented podcasts and learned how this stuff works. And then they were acquired by Linux Academy which really focused on training.And then A Cloud Guru acquired Linux Academy. And then Pluralsight acquired A Cloud Guru and is now in the process of itself being acquired by Vista Equity Partners. There's always a bigger fish eating something somewhere. It feels like a tremendous, tremendous consolidation in the training market. Given that you were one of the founders of A Cloud Guru, where do you stand on that?Ant: So, in terms of that actual transaction, I don't know the details because I'm a long time out of A Cloud Guru, but I've stayed within the whole training sphere, and so effectively, the bigger fish scenario, it's making the market smaller in terms of providers are there. You really don't have many providers doing cloud-specific training anymore. On one level you don't, but then another level, you've got lots of independent folks doing tons of stuff. So, you've got this explosion at the bottom end. 
If you go to Udemy—which is where A Cloud Guru started, on Udemy—you will see tons of folks offering courses at ten bucks a pop.And then there's what I'm doing now on homeschool.dev; there's serverless-focused training on there. But that's really focused on a really small niche. So, there's this explosion at the bottom end of lots of small people doing lots of things, and then you've got this consolidation at the top end, all the big providers buying each other, which leaves a massive gap in the middle.And on top of that, you've got AWS themselves, and all the other cloud providers, offering a lot of their own free training, whether it's on their own platforms—there's aws.training now, and Microsoft have similar as well—I think it's learn.microsoft.com is theirs. And you've got all these different providers doing their own training, so there's lots out there.There's actually probably more training for lower costs than ever before. The problem is, it's like the complexity of too many services, it's the 17 container problem. Which training do you use because the actual cost of the training is your time? It's not the cost of the course. Your time is always going to be more expensive.Corey: Yeah, the course is never going to be anywhere comparable to the time you spend on it. And I've never understood, frankly, why these large companies charge money for training on their own platform and also charge money for certifications because I don't care what you're going to pay for those things, once you know a platform well enough to hit a certification, you're going to use the thing you know, in most cases; it's a great bottom-up adoption story.Ant: Yeah, completely. That was actually one of Amazon's first early problems with their trainings, why A Cloud Guru even exists, and Linux Academy, and Cloud Academy all actually came into being is because Amazon hired a bunch of folks from VMware to set up their training program. And VMware's training, back in the day, was a profit center. So, you'd have a one-and-a-half thousand, two thousand dollar training course you'd go on for three to five days, and then you'd have a couple hundred dollars to do the certification. It was a profit center because VMware didn't really have that much competition. Zen and Microsoft's Hyper V were so late to the market, they basically own the market at the time. So—Corey: Oh, yeah. They still do in some corners.Ant: Yeah. They're still massively doing in this place as they still exist. And so they Amazon hired a bunch of ex-VMware folk, and they said, “We're just going to do what we did at VMware and do it at Amazon,” not realizing Amazon didn't own the market at the time, was still growing, and they tried to make it a profit center, which basically left a huge gap for folks who just did something at a reasonable price, which was basically everyone else. [laugh].This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking databases, observability, management, and security.And - let me be clear here - it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. 
This means you can provision a virtual machine instance or spin up an autonomous database that manages itself all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build.With Always Free you can do things like run small scale applications, or do proof of concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free. No asterisk. Start now. Visit https://snark.cloud/oci-free, that's https://snark.cloud/oci-free.Corey: The challenge I found with a few of these courses as well is that they teach you the certification, and the certifications are, in some ways, crap when it comes to things you actually need to know to intelligently use a platform. So, many of them distill down not to the things you need to know, but to the things that are easy to test in a multiple-choice format. So, it devolves inherently into trivia such as, “Which is the right syntax for this thing?” Or, “Which one of these CloudFormation stanzas or functions isn't real?” Things like that where it's, no one in the real world needs to know any of those things.I don't know anyone these days—sensible—who can write CloudFormation from scratch without pulling up some reference somewhere because most people don't have that stuff in their head. And if you do, I'd suggest forgetting it so you can use that space to remember something that's more valuable. It doesn't make sense for how people interact with these things. But I do see the value as well in large companies trying to upskill thousands and thousands of people. You have 5000 people that are trying to come up to speed because you're migrating into cloud. How do you judge people's progress? Well, certifications are an easy answer.Ant: Yeah, massively. Probably the most successful blog post ever written—I don't think it's up anymore, but it was when I was at A Cloud Guru—like, what's the value of a certification? And ultimately, it came down to, it's a way for companies that are hiring to filter people easily. That's it. That's really it. It's if you've got to hire ten people and you get 1000 CVs or resumes for those ten roles, first thing you do is you filter by who's certified for that role. And then you go through anything else. Does the certification mean you can actually do the job? Not really. There are hundreds of people who are not cer—thousands, millions of people who are not certified to do jobs that they do. But when you're getting hired and there's lots of people applying for the same role, it's literally the first thing they will filter on. And it's—so you want to get certified, it's hard to get through that filter. That's what the certification does, it's how you get through that first filter of whatever the talent tracking system they're using is. That's it. And how to get into the dev lounge at re:Invent.Corey: Oh yeah, that's my reason for getting a certification, originally. And again, for folks who learn effectively that way, I have no problem with people getting certifications. If you're trying to advance in your career, especially early stage, and you need a piece of paper that says you know what you're talking about, a certification is a decent approach. In time, with seniority, that gets replaced by a piece of paper, it's called your resume or your CV, but that is a longer-term more senior-focused approach.
I don't begrudge people getting certifications and I don't think that they're foolish for doing it.But in time, it feels like the market for training is simultaneously contracting into only a few players left, and also, I'm curious as to whether or not the large companies out there are increasing their spend with the training providers or not. On the community side, the direct-to-consumer approach, that is exploding, but at the same time, you're then also dealing—forgive me, listeners—with the general public and there is nothing worse than a customer, from a customer service perspective, who was only paying a little money to you. I used to work in a web hosting company that $3,000 a month customers were great to work with. The $2999 a month customers were hell on earth who expected that they were entitled to 80 hours a month of systems engineering time. And you see something similar in the training space. It's always the small individual customers who are spending personal money instead of corporate money that are more difficult to serve. You've been in the space for a while. What do you see around that?Ant: Yeah, I definitely see that. So, the smaller customers, there's a correlation between the amount of money you spend and the amount of hand-holding that someone needs. The more money someone spends, the less hand-holding they need, generally. But the other side of it, what training businesses—particularly for subscription-based business—it's the same model as most gyms. You pay for it and you never use it.And it's not just subscription; like, Udemy is a perfect example of that, you know, people who have hundreds of Udemy courses they've never done, but they spend ten bucks on each. So, there's a lot of that at the lower end, which is why people offer courses at that level. So, there's people who actually do the course who are going to give you a lot of a headache, but then you're going to have a bunch of folk who never do the course and you're just taking their money. Which is also not great, either, but those folks don't feel bad because I only spent 10, 20 bucks on it. It's like, oh, it's their fault for not doing it, and you've made the money.So, that's kind of how a lot of the training works. So, the other problem with training as well is you get the quality is so variable at the bottom end. It's so, so variable. You really struggle to find—there's a lot of people just copying, like, you see instances where folks upload videos to Udemy that are literally they've downloaded someone's, video resized it, cut out a logo or something like that, and re-uploaded it and it's taken a few weeks for them to get caught. But they made money in the meantime.That's how blatant it does get to some level, but there are levels where people will copy someone else's content and just basically make it their own slides, own words, that kind of thing; that happens a lot. At the low end, it's a bit all over the place, but you still have quality, as well, at the low end, where you have these cheapest smaller courses. And how do you find that quality, as well? That's the other side of it. And also people will just trade in their name.That's the other problem you see. Someone has a name for doing X whatever, and they'll go out and bring a course on whatever that is. 
Doesn't mean they're a good teacher; it means they're good at building a brand.Corey: Oh, teaching is very much its own skill set.Ant: Oh, yeah.Corey: I learned to speak publicly by being a corporate trainer for Puppet and it teaches you an awful lot. But I had the benefit, in that case, of a team of people who spent their entire careers building curricula, so it wasn't just me throwing together some slides; I would teach a well-structured curriculum that was built by someone who knew exactly what they're doing. And yeah, I needed to understand failure modes, and how to get things to work when they weren't working properly, and how to explain it in different ways for folks who learn in different ways—and that is the skill of teaching right there—but curriculum development is usually not the same thing. And when you're bootstrapping, learning—I'm going to build my own training course, you have to do all of those things, and more. And it lends itself to, in many cases, what can come across as relatively low-quality offerings.Ant: Yeah, completely. And it's hard. But one thing you will often see is sometimes you'll see a course that's really high production quality, but actually, the content isn't great because folks have focused on making it look good. That's another common, common problem I see. If you're going to do training out there, just get referrals, get references, find people who've done it.Don't believe the references you see on a website; there's a good chance they might be fake or exaggerated. Put something out on Twitter, put out something on Reddit, whatever communities—and Slack or Discord, whatever groups you're in, ask questions. And folks will recommend. In the world of Google where you could search for anything, [laugh], the only way to really find out if something is any good is to find out if someone else has done it first and get their opinion on it.Corey: That's really the right answer. And frankly, I think that is sort of the network effect that makes a lot of software work for folks. Because you don't want to wind up being the first person on your provider trying to do a certain thing. The right answer is making sure that you are basically 8,000th person to try and do this thing so you can just Google it and there's a bunch of results and you can borrow code on GitHub—which is how we call ‘thought leadership' because plagiarism just doesn't work the same way—and effectively realizing this has been solved before. If you find a brand new cloud that has no customers, you are trailblazing every time you do anything with the platform. And that's personally never where I wanted to spend my innovation points.Ant: We did that at Cloud Guru. I think when we were—in 2015 and we had problems with Lambda and you go to Stack Overflow, and there was no Lambda tag on Stack Overflow, no serverless tag on Stack Overflow, but you asked a question and Tim Wagner would probably be the one answering. And he was the former head of product on Lambda. But it was painful, and in general you don't want to do it. Like [sigh] whenever AWS comes out with a new product, I've done it a few times, I'll go, “I think I might want to use this thing.”AWS Proton is a really good example. It's like, “Hey, this looks awesome. It looks better than CodeBuild and CodePipeline,” the headlines or what I thought it would be. I basically went while the keynote was on, I logged in to our console, had a look at it, and realized it was awful. 
And then I started tweeting about it as well and then got a lot of feedback [laugh] on my tweets on that.And in general, my attitude from whatever the new shiny thing is if I'm going to try it, it needs to work perfectly and it needs to live up to its billing on day one. Otherwise, I'm not going to touch it. And in general with AWS products now, you announce something, I'm not going to look at it for a year.Corey: And it's to their benefit that you don't look at it for a year because the answer is going to be, ah, if you're going to see that it's terrible, that's going to form your opinion and you won't go back later when it's actually decent and reevaluate your opinion because no one ever does. We're all busy.Ant: Yeah, exactly.Corey: And there's nothing wrong with doing that, but it is obnoxious they're not doing themselves favors here.Ant: Yeah, completely. And I think that's actually a failure of marketing and communication more than anything else. I don't blame the product teams too much there. Don't bill something as a finished glossy product when it's not. Pitch it at where it is.Say, “Hey, we are building”—like, I don't think at the re:Invent stage they should announce anything that's not GA and anything that it does not live up to the billing, the hype they're going to give it to. And they're getting more and more guilty of that the last few re:Invents, of announcing products that do not live up to the hype that they promote it at and that are not GA. Literally, they should just have a straight-up rule, they can announce products, but don't put it on the keynote stage if it's not GA. That's it.Corey: The whole re:Invent release is a whole separate series of arguments.Ant: [laugh]. Yeah, yeah.Corey: There are very few substantial releases throughout the year and then they drop a whole bunch of them at re:Invent, and it doesn't matter what you're talking about, whose problem it solves, how great it is, it gets drowned out in the flood. The only thing more foolish that I see than that is companies that are not AWS releasing things during re:Invent that are not on the re:Invent keynote stage, which in turn means that no one pays attention. The only thing you should be releasing is news about your data breach.Ant: [laugh]. Yeah. That's exactly it.Corey: What do I want to bury? Whenever Adam Selipsky gets on stage and starts talking, great, then it's time to push the button on the, “We regret to inform you,k” dance.Ant: Yeah, exactly. Microsoft will announce yet another print spooler bug malware.Corey: Ugh, don't get me started on that. Thank you so much for taking the time to speak with me today. If people want to hear more about your thoughts and how you view these nonsenses, and of course to send angry emails because they are serverless fans, where can they find you?Ant: Twitter is probably the easiest place to find me, @iamstan—Corey: It is a place for outrage. Yes. Your Twitter user account is?Ant: [laugh], my Twitter user account's all over the place. It's probably about 20% serverless. So, yeah @iamstan. Tweet me; I will probably respond to you… unless you're rude, then I probably won't. If you're rude about something else, I probably will. But if you're rude about me, I won't.And I expect a few DMs from Amazon after this. I'm waiting for you, [unintelligible 00:32:02], as I always do. So yeah, that's probably the easiest place to get hold of me. I check my email once a month. 
And I'm actually not joking about that; I really do check my email once a month.Corey: Yeah, people really need me then they'll find me. Thank you so much for taking the time to speak with me. I appreciate it.Ant: Yes, Corey. Thank you.Corey: Ant Stanley, AWS Serverless Hero, and oh so much more. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment defending serverless's good name just as soon as you string together the 85 components necessary to submit that comment.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

DevDiscuss
S6:E2 - Lambda, Fargate, EC2, Oh My! An AWS Service Deep Dive

DevDiscuss

Play Episode Listen Later Aug 17, 2021 56:49


In this episode, we talk about solving problems via Amazon Web Services with Ken Collins, AWS Serverless Hero and staff engineer at Custom Ink, and Vlad Ionescu, AWS Container Hero and DevOps consultant. Show Notes Scout APM (DevDiscuss) (sponsor) Cockroach Labs (DevDiscuss) (sponsor) Amazon Web Services Custom Ink AWS Lambda Amazon Elastic Container Service AWS Fargate DigitalOcean Terraform Kubernetes SaaS AWS SaaS Factory Heroku Sidekiq

Serverless Chats
Episode #107: Serverless Infrastructure as Code with Ben Kehoe

Serverless Chats

Play Episode Listen Later Jun 28, 2021 79:04


About Ben Kehoe: Ben Kehoe is a Cloud Robotics Research Scientist at iRobot and an AWS Serverless Hero. As a serverless practitioner, Ben focuses on enabling rapid, secure-by-design development of business value by using managed services and ephemeral compute (like FaaS). Ben also seeks to amplify voices from dev, ops, and security to help the community shape the evolution of serverless and event-driven designs. Twitter: @ben11kehoe Medium: ben11kehoe GitHub: benkehoe LinkedIn: ben11kehoe iRobot: www.irobot.com Watch this episode on YouTube: https://youtu.be/B0QChfAGvB0 This episode is sponsored by CBT Nuggets and Lumigo. Transcript: Jeremy: Hi, everyone. I'm Jeremy Daly.Rebecca: And I'm Rebecca Marshburn.Jeremy: And this is Serverless Chats. And this is a momentous occasion on Serverless Chats because we are welcoming in Rebecca Marshburn as an official co-host of Serverless Chats.Rebecca: I'm pretty excited to be here. Thanks so much, Jeremy.Jeremy: So for those of you that have been listening for hopefully a long time, and we've done over 100 episodes. And I don't know, Rebecca, do I look tired? I feel tired.Rebecca: I've never seen you look tired.Jeremy: Okay. Well, I feel tired because we've done a lot of these episodes and we've published a new episode every single week for the last 107 weeks, I think at this point. And so what we're going to do is with you coming on as a new co-host, we're going to take a break over the summer. We're going to revamp. We're going to do some work. We're going to put together some great content. And then we're going to come back on, I think it's August 30th with a new episode and a whole new show. Again, it's going to be about serverless, but what we're thinking is ... And, Rebecca, I would love to hear your thoughts on this as I come at things from a very technical angle, because I'm an overly technical person, but there's so much more to serverless. There's so many other sides to it that I think that bringing in more perspectives and really being able to interview these guests and have a different perspective I think is going to be really helpful. I don't know what your thoughts are on that.Rebecca: Yeah. I love the tech side of things. I am not as deep in the technicalities of tech and I come at it I think from a way of loving the stories behind how people got there and perhaps who they worked with to get there, the ideas of collaboration and community because nothing happens in a vacuum and there's so much stuff happening and sharing knowledge and education and uplifting each other. And so I'm super excited to be here and super excited that one of the first episodes I get to work on with you is with Ben Kehoe because he's all about both the technicalities of tech, and also, it's actually on his Twitter, his new compassionate tech values around humility, and inclusion, and cooperation, and learning, and being a mentor. So couldn't have a better guest to join you in the Serverless Chats community and being here for this.Jeremy: I totally agree. And I am looking forward to this. I'm excited. I do want the listeners to know we are testing in production, right? So we haven't run any unit tests, no integration tests. I mean, this is straight test in production.Rebecca: That's the best practice, right? Total best practice to test in production.Jeremy: Best practice. Right. Exactly.Rebecca: Straight to production, always test in production.Jeremy: Push code to the cloud. Here we go.Rebecca: Right away.Jeremy: Right.
So if it's a little bit choppy, we'd love your feedback though. The listeners can be our observability tool and give us some feedback and we can ... And hopefully continue to make the show better. So speaking of Ben Kehoe, for those of you who don't know Ben Kehoe, I'm going to let him introduce himself, but I have always been a big fan of his. He was very, very early in the serverless space. I read all his blogs very early on. He was an early AWS Serverless Hero. So joining us today is Ben Kehoe. He is a cloud robotics research scientist at iRobot, as I said, an AWS Serverless Hero. Ben, welcome to the show.Ben: Thanks for having me. And I'm excited to be a guinea pig for this new exciting format.Rebecca: So many observability tools watching you be a guinea pig too. There's lots of layers to this.Jeremy: Amazing. All right. So Ben, why don't you tell the listeners for those that don't know you a little bit about yourself and what you do with serverless?Ben: Yeah. So I mean, as with all software, software is people, right? It's like Soylent Green. And so I'm really excited for this format being about the greater things that technology really involves in how we create it and set it up. And serverless is about removing the things that don't matter so that you can focus on the things that do matter.Jeremy: Right.Ben: So I've been interested in that since I learned about it. And at the time saw that I could build things without running servers, without needing to deal with the scaling of stuff. I've been working on that at iRobot for over five years now. As you said early on in serverless at the first ServerlessConf organized by A Cloud Guru, now Pluralsight.Jeremy: Right.Ben: And yeah. And it's been really exciting to see it grow into the large-scale community that it is today and all of the ways in which community is built like this podcast.Jeremy: Right. Yeah. I love everything that you've done. I love the analogies you've used. I mean, you've always gone down this road of how do you explain serverless in a way to show really the adoption of it and how people can take that on. Serverless is a ladder. Some of these other things that you would ... I guess the analogies you use were always great and always helped me. And of course, I don't think we've ever really come to a good definition of serverless, but we're not talking about that today. But ...Ben: There isn't one.Jeremy: There isn't one, which is also a really good point. So yeah. So welcome to the show. And again, like I said, testing in production here. So, Rebecca, jump in when you have questions and we'll beat up Ben from both sides on this, but, really ...Rebecca: We're going to have Ben from both sides.Jeremy: There you go. We'll embrace him from both sides. There you go.Rebecca: Yeah. Yeah.Jeremy: So one of the things though that, Ben, you have also been very outspoken on which I absolutely love, because I'm very much closely aligned on this topic here. But it is about infrastructure as code. And so let's start just quickly. I mean, I think a lot of people know or I think people working in the cloud know what infrastructure as code is, but I also think there's a lot of people who don't. So let's just take a quick second, explain what infrastructure as code is and what we mean by that.Ben: Sure. To my mind, infrastructure as code is about having a definition of the state of your infrastructure that you want to see in the cloud.
So rather than using operations directly to modify that state, you have a unified definition of some kind. I actually think infrastructure is now the wrong word with serverless. It used to be with servers, you could manage your fleet of servers separate from the software that you were deploying onto the servers. And so infrastructure being the structure below made sense. But now as your code is intimately entwined in the rest of your resources, I tend to think of resource graph definitions rather than infrastructure as code. It's a less convenient term, but I think it's worth understanding the distinction or the difference in perspective.Jeremy: Yeah. No, and I totally get that. I mean, I remember even early days of cloud when we were using the Chefs and the Puppets and things like that, that we were just deploying the actual infrastructure itself. And sometimes you deploy software as part of that, but it was supporting software. It was the stuff that ran in the runtime and some of those and some configurations, but yeah, but the application code that was a whole separate process, and now with serverless, it seems like you're deploying all those things at the same time.Ben: Yeah. There's no way to pick it apart.Jeremy: Right. Right.Rebecca: Ben, there's something that I've always really admired about you and that is how strongly you hold your opinions. You're fervent about them, but it's also because they're based on this thorough nature of investigation and debate and challenging different people and yourself to think about things in different ways. And I know that the rest of this episode is going to be full with a lot of opinions. And so before we even get there, I'm curious if you can share a little bit about how you end up arriving at these, right? And holding them so steady.Ben: It's a good question. Well, I hope that I'm not inflexible in these strong opinions that I hold. I mean, it's one of those strong opinions loosely held kind of things that new information can change how you think about things. But I do try and do as much thinking as possible so that there's less new information that I have to encounter to change an opinion.Rebecca: Yeah. Yeah.Ben: Yeah. I think I tend to try and think about how people ... But again, because it's always people. How people interact with the technology, how people behave, how organizations behave, and then how technology fits into that. Because sometimes we talk about technology in a vacuum and it's really not. Technology that works for one context doesn't work for another. I mean, a lot of my strong opinions are that there is no one right answer kind of a thing, or here's a framework for understanding how to think about this stuff. And then how that fits into a given person is just finding where they are in that more general space. Does that make sense? So it's less about finding out here's the one way to do things and more about finding what are the different options, how do you think about the different options that are out there.Rebecca: Yeah, totally makes sense. And I do want to compliment you. I do feel like you are very good at inviting new information in if people have it and then you're like, "Aha, I've already thought of that."Ben: I hope so. Yeah. I was going to say, there's always a balance between trying to think ahead so that when you discover something you're like, "Oh, that fits into what I thought." And the danger of that being that you're twisting the information to fit into your preexisting structures. 
I hope that I find a good balance there, but I don't have a principled way of determining that balance or knowing where you are in that it's good versus it's dangerous kind of spectrum.Jeremy: Right. So one of the opinions that you hold that I tend to agree with, I have some thoughts about some of the benefits, but I also really agree with the other piece of it. And this really has to do with the CDK and this idea of using CloudFormation or any sort of DSL, maybe Terraform, things like that, something that is more domain-specific, right? Or I guess declarative, right? As opposed to something that is imperative like the CDK. So just to get everybody on the same page here, what are the top reasons why you believe, or you think that DSL approach is better than that imperative approach or interpretive approach, I guess?Ben: Yeah. So I think we get caught up in the imperative versus declarative part of it. I do think that declarative has benefits that can be there, but the way that I think about it is with the CDK and infrastructure as code in general, I'm like mildly against imperative definitions of resources. And we can get into that part, but that's not my smallest objection to the CDK. I'm moderately against not being able to enforce deterministic builds. And the CDK program can do anything. Can use a random number generator and go out to the internet to go ask a question, right? It can do anything in that program and that means that you have no guarantees that what's coming out of it you're going to be able to repeat.So even if you check the source code in, you may not be able to go back to the same infrastructure that you had before. And you can if you're disciplined about it, but I like tools that help give you guardrails so that you don't have to be as disciplined. So that's my moderately against. My strongly against piece is I'm strongly against developer intent remaining client side. And this is not an inherent flaw in the CDK; it is a choice that the CDK team has made to turn organizational dysfunction in AWS into ownership for their customers. And I don't think that's a good approach to take, but that's also fixable.So I think if we want to start with the imperative versus declarative thing, right? When I think about the developers expressing an intent, and I want that intent to flow entirely into the cloud so that developers can understand what's deployed in the cloud in terms of the things that they've written. The CDK takes this approach of flattening it down, flattening the richness of the program the developer has written into ... They think of it as assembly language. I think that is a misinterpretation of what's happening. The assembly language in the process is the imperative plan generated inside the CloudFormation engine that says, "Here's how I'm going to take this definition and turn it into an actual change in the cloud."Jeremy: Right.Ben: They're just translating between two definition formats in the CDK's case. But it's a flattening process, it's a lossy process. So then when the developer goes to the Console or the API has to go say, "What's deployed here? What's going wrong? What do I need to fix?" None of it is framed in terms of the things that they wrote in their original language.Jeremy: Right.Ben: And I think that's the biggest problem, right? So drift detection is an important thing, right? What happened when someone went in through the Console? Went and tweaked some stuff to fix something, and now it's different from the definition that's in your source repository.
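[Editor's aside: the drift detection Ben describes here is exposed through the CloudFormation API. A minimal sketch using boto3's CloudFormation client is below; the stack name is illustrative, error handling is omitted, and the calls are used as this editor understands the documented drift-detection operations.]

```python
import time
import boto3

cfn = boto3.client("cloudformation")

# Kick off drift detection for a stack (stack name is illustrative).
detection = cfn.detect_stack_drift(StackName="my-service-stack")
detection_id = detection["StackDriftDetectionId"]

# Poll until CloudFormation has finished comparing live resources to the template.
while True:
    status = cfn.describe_stack_drift_detection_status(
        StackDriftDetectionId=detection_id
    )
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)

# List the resources whose live configuration no longer matches the template.
drifts = cfn.describe_stack_resource_drifts(
    StackName="my-service-stack",
    StackResourceDriftStatusFilters=["MODIFIED", "DELETED"],
)
for drift in drifts["StackResourceDrifts"]:
    print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])
```

Note that, as Ben goes on to say, the results come back in terms of CloudFormation logical resources, not in terms of whatever higher-level program generated them.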
And in CloudFormation, it can tell you that. But what I would want if I was running CDK is that it should produce another CDK program that represents the current state of the cloud with a meaningful file-level diff with my original program.Jeremy: Right. I'm just thinking this through, if I deploy something to CDK and I've got all these loops and they're generating functions and they're using some naming and all this kind of stuff, whatever, now it produces this output. And again, my naming of my functions might be some function that gets called to generate the names of the function. And so now I've got all of these functions named and I have to go in. There's no one-to-one map like you said, and I can imagine somebody who's not familiar with CloudFormation which is ultimately what CDK synthesizes and produces, if you're not familiar with what that output is and how that maps back to the constructs that you created, I can see that as being really difficult, especially for younger developers or developers who are just getting started in that.Ben: And the CDK really takes the attitude that it's going to hide those things from those developers rather than help them learn it. And so when they do have to dive into that, the CDK refers to it as an escape hatch.Jeremy: Yeah.Ben: And I think of escape hatches on submarines, where you go from being warm and dry and having air to breathe to being hundreds of feet below the sea, right? It's not the sort of thing you want to go through. Whereas some tools like Amplify talk about graduation. In Amplify they aim to help you understand the things that Amplify is doing for you, such that when you grow beyond what Amplify can provide you, you have the tools to do that, to take the thing that you built and then say, "Okay, I know enough now that I understand this and can add onto it in ways that Amplify can't help with."Jeremy: Right.Ben: Now, how successful they are in doing that is a separate question I think, but the attitude is there to say, "We're looking to help developers understand these things." Now the CDK could also if the CDK was a managed service, right? Would not need developers to understand those things. If you could take your program directly to the cloud and say, "Here's my program, go make this real." And when it made it real, you could interact with the cloud in an understanding where you could list your deployed constructs, right? That you can understand the program that you wrote when you're looking at the resources that are deployed all together in the cloud everywhere. That would be a thing where you don't need to learn CloudFormation.Jeremy: Right.Ben: Right? That's where you then end up in the imperative versus declarative part where, okay, there's some reasons that I think declarative is better. But the major thing is that disconnect that's currently built into the way that CDK works. And the reason that they're doing that is because CloudFormation is not moving fast enough, which is not always on the CloudFormation team. It's often on the service teams that aren't building the resources fast enough. And that's AWS's problem, AWS as an entire company, as an organization. And this one team is saying, "Well, we can fix that by doing all this client side."What that means is that the customers are then responsible for all the things that are happening on the client side. The reason that they can go fast is because the CDK team doesn't have ownership of it, which just means the ownership is being pushed on customers, right? 
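Jeremy's loop scenario looks roughly like the following CDK v2 Python sketch (names are illustrative). The point is that cdk synth flattens it into CloudFormation logical IDs with generated suffixes, so nothing in the deployed stack reads like the loop that produced it:

```python
from aws_cdk import Stack, aws_lambda as lambda_
from constructs import Construct


class WorkersStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Names are computed at synth time, so the mapping from "what I wrote"
        # to "what got deployed" only exists inside this program.
        for task in ["ingest", "transform", "publish"]:
            lambda_.Function(
                self,
                f"{task.title()}Worker",
                runtime=lambda_.Runtime.PYTHON_3_11,
                handler="index.handler",
                code=lambda_.Code.from_asset(f"src/{task}"),
            )
```

In the synthesized template these functions appear under logical IDs along the lines of IngestWorker followed by a hash, which is the flattening and loss of developer intent being described.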
The CDK deploys Lambda functions into your account that they don't tell you about that you're now responsible for. Right? Both the security and operations of. If there are security updates that the CDK team has to push out, you have to take action to update those things, right? That's ownership that's being pushed onto the customer to fix a lack of ACM certificate management, right?Jeremy: Right. Right.Ben: That is ACM not building the thing that's needed. And so AWS says, "Okay, great. We'll just make that the customer's problem."Jeremy: Right.Ben: And I don't agree with that approach.Rebecca: So I'm sure as an AWS Hero you certainly have pretty good, strong, open communication channels with a lot of different team members across teams. And I certainly know that they're listening to you and are at least hearing you, I should say, and watching you and they know how you feel about this. And so I'm curious how some of those conversations have gone. And some teams as compared to others at AWS are really, really good about opening their roadmap or at least saying, "Hey, we hear this, and here's our path to a solution or a success." And I'm curious if there's any light you can shed on whether or not those conversations have been fruitful in terms of actually being able to get somewhere in terms of customer and AWS terms, right? Customer obsession first.Ben: Yeah. Well, customer obsession can mean two things, right? Customer obsession can mean giving the customer what they want or it can mean giving the customer what they need and different AWS teams' approach fall differently on that scale. The reason that many of those things are not available in CloudFormation is that those teams are ... It could be under-resourced. They could have a larger majority of customer that want new features rather than infrastructure as code support. Because as much as we all like infrastructure as code, there are many, many organizations out there that are not there yet. And with the CDK in particular, I'm a relatively lone voice out there saying, "I don't think this ownership that's being pushed onto the customer is a good thing." And there are lots of developers who are eating up CDK saying, "I don't care."That's not something that's in their worry. And because the CDK has been enormously successful, right? It's fixing these problems that exists. And I don't begrudge them trying to fix those problems. I think it's a question of do those developers who are grabbing onto those things and taking them understand the full total cost of ownership that the CDK is bringing with it. And if they don't understand it, I think AWS has a responsibility to understand it and work with it to help those customers either understand it and deal with it, right? Which is where the CDK takes this approach, "Well, if you do get Ops, it's all fine." And that's somewhat true, but also many developers who can use the CDK do not control their CI/CD process. So there's all sorts of ways in which ... Yeah, so I think every team is trying to do the best that they can, right?They're all working hard and they all have ... Are pulled in many different directions by customers. And most of them are making, I think, the right choices given their incentives, right? Given what their customers are asking for. I think not all of them balance where customers ... meeting customers where they are versus leading them where they should, like where they need to go as well as I would like. But I think ... I had a conclusion to that. 
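To ground that ownership point with one concrete case (a CDK v2 Python sketch, names illustrative): the two innocuous-looking conveniences below each cause the CDK to deploy its own Lambda-backed custom resource into your account, which you then own the security and operations of, even though it never appears in your source.

```python
from aws_cdk import Stack, aws_lambda as lambda_, aws_logs as logs, aws_s3 as s3
from aws_cdk import aws_s3_notifications as s3n
from constructs import Construct


class UploadsStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        handler = lambda_.Function(
            self,
            "OnUpload",
            runtime=lambda_.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=lambda_.Code.from_asset("src/on_upload"),
            # This one keyword argument adds a "LogRetention" custom-resource
            # Lambda to the stack to manage the log group's retention.
            log_retention=logs.RetentionDays.ONE_MONTH,
        )

        bucket = s3.Bucket(self, "Uploads")
        # This call synthesizes a "BucketNotificationsHandler" singleton Lambda
        # that applies the notification configuration at deploy time; patching
        # and operating it is now on you, not on AWS.
        bucket.add_event_notification(
            s3.EventType.OBJECT_CREATED, s3n.LambdaDestination(handler)
        )
```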
Oh, but I think that's always a debate as to where that balance is. And then the other thing when I talk about the CDK, that my ideal audience there is less AWS itself and more AWS customers ...Rebecca: Sure.Ben: ... to understand what they're getting into and therefore to demand better of AWS. Which is in general, I think, the approach that I take with AWS, is complaining about AWS in public, because I do have the ability to go to teams and say, "Hey, I want this thing," right? There are plenty of teams where I could just email them and say, "Hey, this feature could be nice", but I put it on Twitter because other people can see that and say, "Oh, that's something that I want or I don't think that's helpful," right? "I don't care about that," or, "I think it's the wrong thing to ask for," right? All of those things are better when it's not just me saying I think this is a good thing for AWS, but it being a conversation among the community differently.Rebecca: Yeah. I think in the spirit too of trying to publicize types of what might be best next for customers, you said total cost of ownership. Even though it might seem silly to ask this, I think oftentimes we say the words total cost of ownership, but there's actually many dimensions to total cost of ownership or TCO, right? And so I think it would be great if you could enumerate what you think of as total cost of ownership, because there might be dimensions along that matrices, matrix, that people haven't considered when they're actually thinking about total cost of ownership. They're like, "Yeah, yeah, I got it. Some Ops and some security stuff I have to do and some patches," but they might only be thinking of five dimensions when you're like, "Actually the framework is probably 10 to 12 to 14." And so if you could outline that a bit, what you mean when you think of a holistic total cost of ownership, I think that could be super helpful.Ben: I'm bad at enumeration. So I would miss out on dimensions that are obvious if I was attempting to do that. But I think a way that I can, I think effectively answer that question is to talk about some of the ways in which we misunderstand TCO. So I think it's important when working in an organization to think about the organization as a whole, not just your perspective and that your team's perspective in it. And so when you're working for the lowest TCO it's not what's the lowest cost of ownership for my team if that's pushing a larger burden onto another team. Now if it's reducing the burden on your team and only increasing the burden on another team a little bit, that can be a lower total cost of ownership overall. But it's also something that then feeds into things like political capital, right?Is that increased ownership that you're handing to that team something that they're going to be happy with, something that's not going to cause other problems down the line, right? Those are the sorts of things that fit into that calculus because it's not just about what ... Moving away from that topic for a second. I think about when we talk about how does this increase our velocity, right? There's the piece of, "Okay, well, if I can deploy to production faster, right? My feedback loop is faster and I can move faster." Right? But the other part of that equation is how many different threads can you be operating on and how long are those threads in time? 
So when you're trying to ship a feature, if you can ship it and then never look at it again, that means you have increased bandwidth in the future to take on other features to develop other new features.And so even if you think about, "It's going to take me longer to finish this particular feature," but then there's no maintenance for that feature, that can be a lower cost of ownership in time than, "I can ship it 50% faster, but then I'm going to periodically have to revisit it and that's going to disrupt my ability to ship other things," right? So this is where I had conversations recently about increasing use of Step Functions, right? And being able to replace Lambda functions with Step Functions express workflows because you never have to go back to those Lambdas and update dependencies in them because dependent bot has told you that you need to or a version of Python is getting deprecated, right? All of those things, just if you have your Amazon States Language however it's been defined, right?Once it's in there, you never have to touch it again if nothing else changes and that means, okay, great, that piece is now out of your work stream forever unless it needs to change. And that means that you have more bandwidth for future things, which serverless is about in general, right? Of say, "Okay, I don't have to deal with this scaling problems here. So those scaling things. Once I have an auto-scaling group, I don't have to go back and tweak it later." And so the same thing happens at the feature level if you build it in ways that allow you to do that. And so I think that's one of the places where when we focus on, okay, how fast is this getting me into production, it's okay, but how often do you have to revisit it ...Jeremy: Right. And so ... So you mentioned a couple of things in there, and not only in that question, but in the previous questions as you were talking about the CDK in general, and I am 100% behind you on this idea of deterministic builds because I want to know exactly what's being deployed. I want to be able to audit that and map that back. And you can audit, I mean, you could run CDK synth and then audit the CloudFormation and test against certain things. But if you are changing stuff, right? Then you have to understand not only the CDK but also the CloudFormation that it actually generates. But in terms of solving problems, some of the things that the CDK does really, really well, and this is something where I've always had this issue with just trying to use raw CloudFormation or Serverless Framework or SAM or any of these things is the fact that there's a lot of boilerplate that you often have to do.There's ways that companies want to do something specifically. I basically probably always need 1,400 lines of CloudFormation. And for every project I do, it's probably close to the same, and then add a little bit more to actually make it adaptive for my product. And so one thing that I love about the CDK is constructs. And I love this idea of being able to package these best practices for your company or these compliance requirements, excuse me, compliance requirements for your company, whatever it is, be able to package these and just hand them to developers. And so I'm just curious on your thoughts on that because that seems like a really good move in the right direction, but without the deterministic builds, without some of these other problems that you talked about, is there another solution to that that would be more declarative?Ben: Yeah. 
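The express-workflow trade described above can be sketched with boto3 and a hand-written Amazon States Language definition (the names, table, and role ARN are hypothetical); the DynamoDB write is a direct service integration, so there is no function runtime or dependency tree to patch later.

```python
import json
import boto3

# ASL definition as a plain Python dict. The DynamoDB call is a direct
# service integration, so no Lambda function is involved at all.
definition = {
    "StartAt": "SaveOrder",
    "States": {
        "SaveOrder": {
            "Type": "Task",
            "Resource": "arn:aws:states:::dynamodb:putItem",
            "Parameters": {
                "TableName": "orders",
                "Item": {
                    "pk": {"S.$": "$.orderId"},
                    "status": {"S": "RECEIVED"},
                },
            },
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="save-order",
    definition=json.dumps(definition),
    # Hypothetical role; it must allow dynamodb:PutItem on the table.
    roleArn="arn:aws:iam::123456789012:role/save-order-sfn-role",
    type="EXPRESS",
)
```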
In theory, if the CDK was able to produce an artifact that represented all of the non-deterministic dependencies that it had, right? That allowed you to then store that artifact as you'd come back and put that into the program and say, "I'm going to get out the same thing," but because the CDK doesn't control upstream of it, the code that the developers are writing, there isn't a way to do that. Right? So on the abstraction front, the constructs are super useful, right? CloudFormation now has modules which allow you to say, "Here's a template and I'm going to represent this as a CloudFormation type itself," right? So instead of saying that I need X different things, I'm going to say, "I packaged that all up here. It is as a type."Now, currently, modules can only be plain CloudFormation templates and there's a lot of constraints in what you can express inside a CloudFormation template. And I think the answer for me is ... What I want to see is more richness in the CloudFormation language, right? One of the things that people do in the CDK that's really helpful is say, "I need a copy of this in every AZ."Jeremy: Right.Ben: Right? There's so much boilerplate in server-based things. And CloudFormation can't do that, right? But if you imagine that it had a map function that allowed you to say, "For every AZ, stamp me out a copy of this little bit." And then that the CDK constructs allowed to translate. Instead of it doing all this generation only down to the L1 piece, instead being able to say, "I'm going to translate this into more rich CloudFormation templates so that the CloudFormation template was as advanced as possible."Right? Then it could do things like say, "Oh, I know we need to do this in every AZ, I'm going to use this map function in the CloudFormation template rather than just stamping it out." Right? And so I think that's possible. Now, modules should also be able to be defined as CDK programs. Right? You should be able to register a construct as a CloudFormation type.Jeremy: It would be pretty cool.Ben: There's no reason you shouldn't be able to. Yeah. Because I think the declarative versus imperative thing is, again, not the most important piece, it's how do we move ... It's shifting right in this case, right? That how do you shift what's happening with the developer further into the process of deployment so that more of their context is present? And so one of the things that the CDK does that's hard to replicate is have non-local effects. And this is both convenient and, I think, often a code smell.So you can pass a bucket resource from another stack into a piece of code in your CDK program that's creating a different stack and you say, "Oh great, I've got this Lambda function, it needs permissions to that bucket. So add permissions." And it's possible for the CDK programs to either be adding the permissions onto the IAM role of that function, or non-locally adding to that bucket's resource policy, which is weird, right? That you can be creating a stack and the thing that you do to that stack or resource or whatever is not happening there, it's happening elsewhere. I don't think that's a great approach, but it's certainly convenient to be able to do it in a lot of situations.Now, that's not representable within a module. A module is a contained piece of functionality that can't touch anything else. So things like SAM where you can add events onto a function that can go and create ...
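The non-local effect mentioned above is easy to produce without noticing. A CDK v2 Python sketch (names illustrative): a grant issued while building one stack can materialize as policy statements wherever the CDK decides they belong, on the function's role or back on the bucket's resource policy, rather than where the call appears in the source.

```python
from aws_cdk import Stack, aws_lambda as lambda_, aws_s3 as s3
from constructs import Construct


class StorageStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        self.bucket = s3.Bucket(self, "Reports")


class ProcessingStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, *,
                 bucket: s3.Bucket, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        fn = lambda_.Function(
            self,
            "Reporter",
            runtime=lambda_.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=lambda_.Code.from_asset("src/reporter"),
        )
        # One line of intent. The CDK decides whether this becomes a statement
        # on the function's IAM role in this stack, or a statement on the
        # bucket policy back in StorageStack (for example when the bucket is
        # imported or encrypted); that is the non-local effect being discussed.
        bucket.grant_read(fn)
```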
You create the API events on different functions and then SAM aggregates them and creates an API gateway for you. Right? If AWS serverless function was a module, it couldn't do that because you'd have these in different places and you couldn't aggregate something between all of them and put them in the top-level thing, right?This is what CloudFormation macros enable, but they don't have a... There's no proper interface to them, right? They don't define, "This is what I'm doing. This is the kind of resources I can create." There's none of that that would help you understand them. So they're infinitely flexible, but then also maybe less principled for that reason. So I think there are ways to evolve, but it's investment in the CloudFormation language that allows us to shift that burden from being a flattening inside client-side code from the developer and shifting it to be able to be represented in the cloud.Jeremy: Right. Yeah. And I think from that standpoint too if we go back to the solving people's problems standpoint, that everything you explained there, they're loaded with nuances, it's loaded with gotchas, right? Like, "Oh, you can't do this, you can't do that." So that's just why I think the CDK is so popular because it's like you can do so much with it so quickly and it's very, very fast. And I think that trade-off, people are just willing to make it.Ben: Yes. And that's where they're willing to make it, do they fully understand the consequences of it? Then does AWS communicate those consequences well? Before I get into that question of, okay, you're a developer that's brand new to AWS and you've been tasked with standing up some Kubernetes cluster and you're like, "Great. I can use a CDK to do this." Something is malfunctioning. You're also tasked with the operations and something is malfunctioning. You go in through the Console and maybe figure out all the things that are out there are new to you because they're hidden inside L3 constructs, right?You're two levels down from where you were defining what you want, and then you find out what's wrong and you have no idea how to turn that into a change in your CDK program. So instead of going back and doing the thing that infrastructure as code is for, which is tweaking your program to go fix the problem, you go and you tweak it in the Console ...Jeremy: Right. Which you should never do.Ben: ... and you fix it that way. Right. Well, and that's the thing that I struggle with, with the CDK is how does the CDK help the developer who's in that situation? And I don't think they have a good story around that. Now, I don't know. I haven't talked with enough junior developers who are using the CDK about how often they get into that situation. Right? But I always say client-side code is not a replacement for a managed service because when it's client-side code, you still own the result.Jeremy: Right.Ben: If a particular CDK construct was a managed service in AWS, then all of the resources that would be created underneath AWS's problem to make work. And the interface that the developer has is the only level of ownership that they have. Fargate is this. Because you could do all the things that Fargate does with a CDK construct, right? Set up EC2, do all the things, and represent it as something that looks like Fargate in your CDK program. But every time your EC2 fleet is unhealthy that's your problem. With Fargate, that's AWS's problem. 
If we didn't have Fargate, that's essentially what CDK would be trying to do for ECS.And I think we all recognize that Fargate is very necessary and helpful in that case, right? And I just want that for all the things, right? Whenever I have an abstraction, if it's an abstraction that I understand, then I should have a way of zooming into it while not having to switch languages, right? So that's where you shouldn't dump me out the CloudFormation to understand what you're doing. You should help me understand the low-level things in the same language. And if it's not something that I need to understand, it should be a managed service. It shouldn't be a bunch of stuff that I still own that I haven't looked at.Jeremy: Makes sense. Got a question, Rebecca? Because I was waiting for you to jump in.Rebecca: No, but I was going to make a joke, but then the joke passed, and then I was like, "But should I still make it?" I was going to be like, "Yeah, but does the CDK let you test in production?" But that was a 32nd ago joke and then I was really wrestling with whether or not I should tell it, but I told it anyway, hopefully, someone gets a laugh.Ben: Yeah. I mean, there's the thing that Charity Majors says, right? Which is that everybody tests in production. Some people are lucky enough to have a development environment in production. No, sorry. I said that the wrong way. It's everybody has a test environment. Some people are lucky enough that it's not in production.Rebecca: Yeah. Swap that. Reverse it. Yeah.Ben: Yeah.Jeremy: All right. So speaking of talking to developers and getting feedback from them, so I actually put a question out on Twitter a couple of weeks ago and got a lot of really interesting reactions. And essentially I asked, "What do you love or hate about infrastructure as code?" And there were a lot of really interesting things here. I don't know, maybe it might be fun to go through a couple of these and get your thoughts on them. So this is probably not a great one to start with, but I thought it was interesting because this I think represents the frustration that a lot of us feel. And it was basically that they love that automation minimizes future work, right? But they hate that it makes life harder over time. And that pretty much every approach to infrastructure in, sorry, yeah, infrastructure in code at the present is flawed, right? So really there are no good solutions right now.Ben: Yeah. CloudFormation is still a pain to learn and deal with. If you're operating in certain IDEs, you can get tab completion.Jeremy: Right.Ben: If you go to CDK you get tab completion, which is, I think probably most of the value that developers want out of it and then the abstraction, and then all the other fancy things it does like pipelines, which again, should be a managed service. I do think that person is absolutely right to complain about how difficult it is. That there are many ways that it could be better. One of the things that I think about when I'm using tools is it's not inherently bad for a tool to have some friction to use it.Jeremy: Right.Ben: And this goes to another infrastructure as code tool that goes even further than the CDK and says, "You can define your Lambda code in line with your infrastructure definition." So this is fine with me. And there's some other ... I think Punchcard also lets you do some of this. 
Basically extracts out the bits of your code that you say, "This is a custom thing that glues together two things I'm defining in here and I'll make that a Lambda function for you." And for me, that is too little friction to defining a Lambda function.Because when I define a Lambda function, just going back to that bringing in ownership, every time I add a Lambda function, that's something that I own, that's something that I have to maintain, that I'm responsible for, that can go wrong. So if I'm thinking about, "Well, I could have API Gateway direct into DynamoDB, but it'd be nice if I could change some of these fields. And so I'm just going to drop in a little sprinkle of code, three lines of code in between here to do some transformation that I want." That is all of sudden an entire Lambda function you've brought into your infrastructure.Jeremy: Right. That's a good point.Ben: And so I want a little bit of friction to do that, to make me think about it, to make me say, "Oh, yeah, downstream of this decision that I am making, there are consequences that I would not otherwise think about if I'm just trying to accomplish the problem," right? Because I think developers, humans, in general, tend to be a bit shortsighted when you have a goal especially, and you're being pressured to complete that goal and you're like, "Okay, well I can complete it." The consequences for later are always a secondary concern.And so you can change your incentives in that moment to say, "Okay, well, this is going to guide me to say, "Ah, I don't really need this Lambda function in here. Then I'm better off in the long term while accomplishing that goal in the short term." So I do think that there is a place for tools making things difficult. That's not to say that the amount of difficult that infrastructure as code is today is at all reasonable, but I do think it's worth thinking about, right?I'd rather take on the pain of creating an ASL definition by hand for express workflow than the easier thing of writing Lambda code. Because I know the long-term consequences of that. Now, if that could be flipped where it was harder to write something that took more ownership, it'd be just easy to do, right? You'd always do the right thing. But I think it's always worth saying, "Can I do the harder thing now to pay off to pay off later?"Jeremy: And I always call those shortcuts "tomorrow-Jeremy's" problem. That's how I like to look at those.Ben: Yeah. Yes.Jeremy: And the funny thing about that too is I remember right when EventBridge came out and there was no CloudFormation support for a long time, which was super frustrating. But Serverless Framework, for example, implemented a custom resource in order to do that. And I remember looking at a clean stack and being like, "Why are there two Lambda functions there that I have no idea?" I'm like, "I didn't publish ..." I honestly thought my account was compromised that somebody had published a Lambda function in there because I'm like, "I didn't do that." And then it took me a while to realize, I'm like, "Oh, this is what this is." But if it is that easy to just create little transform functions here and there, I can imagine there being thousands of those in your account without anybody knowing that they even exist.Ben: Now, don't get me wrong. I would love to have the ability to drop in little transforms that did not involve Lambda functions. So in other words, I mean, the thing that VTL does for API Gateway, REST APIs but without it being VTL and being ... 
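For scale, the "little sprinkle of code" being cautioned against is usually no bigger than this illustrative sketch; the logic is trivial, but the function wrapped around it is a runtime, a dependency set, an IAM role, and a deployment artifact that someone now owns and patches forever.

```python
# The entire "transformation" dropped between API Gateway and DynamoDB.
import json


def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    # Reshape a couple of fields before they get written downstream.
    return {
        "order_id": body.get("orderId"),
        "total_cents": int(float(body.get("total", 0)) * 100),
        "source": "public-api",
    }
```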
Because that's hard and then also restricted in what you can do, right? It's not, "Oh, I can drop in arbitrary code in here." But enough to say, "Oh, I want to flip ... These fields should go from a key-value mapping to a list of key-value, right? In the way that it addresses inconsistent with how tags are defined across services, those kinds of things. Right? And you could drop that in any service, but once you've defined it, there's no maintenance for you, right?You're writing JavaScript. It's not actually a JavaScript engine underneath or something. It's just getting translated into some big multi-tenant fancy thing. And I have a hypothesis that that should be possible. You should be able to do it where you could even do it in the parsing of JSON, being able to do transforms without ever having to have the whole object in memory. And if we could get that then, "Oh, sure. Now I have sprinkled all over the place all of these little transforms." Now there's a little bit of overhead if the transform is defined correctly or not, right? But once it is, then it just works. And having all those little transforms everywhere is then fine, right? And that incentive to make it harder it doesn't need to be there because it's not bringing ownership with it.Rebecca: Yeah. It's almost like taking the idea of tomorrow-Jeremy's problem and actually switching it to say tomorrow-Jeremy's celebration where tomorrow-Jeremy gets to look back at past-Jeremy and be like, "Nice. Thank you for making that decision past-Jeremy." Because I think we often do look at it in terms of tomorrow-Jeremy will think of this, we'll solve this problem rather than how do we approach it by saying, how do I make tomorrow-Jeremy thankful for it today-Jeremy? And that's a simple language, linguistic switch, but a hard switch to actually make decisions based on.Ben: Yeah. I don't think tomorrow-Ben is ever thankful for today-Ben. I think it's tomorrow-Ben is thankful for yesterday-Ben setting up the incentives correctly so that today-Ben will do the right thing for tomorrow-Ben. Right? When I think about people, I think it's easier to convince people to accept a change in their incentives than to convince them to fight against their incentives sustainably.Jeremy: Right. And I think developers and I'm guilty of this too, I mean, we make decisions based off of expediency. We want to get things done fast. And when you get stuck on that problem you're like, "You know what? I'm not going to figure it out. I'm just going to write a loop or I'm going to do whatever I can do just to make it work." Another if statement here, "Isn't going to hurt anybody." All right. So let's move to ... Sorry, go ahead.Ben: We shouldn't feel bad about that.Jeremy: You're right.Ben: I was going to say, we shouldn't feel bad about that. That's where I don't want tomorrow-Ben to have to be thankful for today-Ben, because that's the implication there is that today-Ben is fighting against his incentives to do good things for tomorrow-Ben. And if I don't need to have to get to that point where just the right path is the easiest path, right? Which means putting friction in the right places than today-Ben ... It's never a question of whether today-Ben is doing something that's worth being thankful for. It's just doing the job, right?Jeremy: Right. No, that makes sense. All right. I got another question here, I think falls under the category of service discovery, which I know is another topic that you love. 
So this person said, "I love IaC, but hate the fuzzy boundaries where certain software awkwardly falls. So like Istio and Prometheus and cert-manager. That they can be considered part of the infrastructure, but then it's awkward to deploy them with something like Terraform due to circular dependencies relating to K8s and things like that."So, I mean, I know that we don't have to get into the actual details of that, but I think that is an important aspect of infrastructure as code where best practices sometimes are to deploy a stack that has your permanent resources and then deploy a stack that maybe has your more ephemeral resources or the ones that are going to be changing, the more mutable ones, maybe your Lambda functions and some of those sort of things. If you're using Terraform or you're using some of these other services as well, you do have that really awkward mix where you're trying to use outputs from one stack into another stack and trying to do all that. And really, I mean, there are some good tools that help with it, but I mean just overall thoughts on that.Ben: Well, we certainly need to demand better of AWS services when they design new things that they need to be designed so that infrastructure as code will work. So this is the S3 bucket notification problem. A very long time ago, S3 decided that they were going to put bucket notifications as part of the S3 bucket. Well, CloudFormation at that point decided that they were going to put bucket notifications as part of the bucket resource. And S3 decided that they were going to check permissions when the notification configuration is defined so that you have to have the permissions before you create the configuration.This creates a circular dependency when you're hooking it up to anything in CloudFormation because the dependency depends on the resource policy on an SNS topic, an SQS queue, or a Lambda function, and that policy depends on the bucket name if you're letting CloudFormation name the bucket, which is the best practice. Then bucket name has to exist, which means the resource has to have been created. But the notification depends on the thing that's notifying, which doesn't have the names and the resource policy doesn't exist so it all fails. And this is solved in a couple of different ways. One of which is name your bucket explicitly, again, not a good practice. Another is what SAM does, which says, "The Lambda function will say I will allow all S3 buckets to invoke me."So it has a star permission in its resource policy. So then the notification will work. None of which is good or there's custom resources that get created, right? Now, if those resources have been designed with infrastructure as code as part of the process, then it would have been obvious, "Oh, you end up with a circular dependency. We need to split out bucket notifications as a separate resource." And not enough teams are doing this. Often they're constrained by the API that they develop first ...Jeremy: That's a good point.Ben: ... they come up with the API, which often makes sense for a Console experience that they desire. So this is where API Gateway has this whole thing where you create all the routes and the resources and the methods and everything, right? And then you say, "Great, deploy."
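The ordering problem in the S3 notification story is visible if you do the same wiring by hand with boto3 (bucket and function names are hypothetical): S3 validates the invoke permission at the moment the notification configuration is written, which is exactly the step a single declarative template cannot sequence without a circular reference.

```python
import boto3

lambda_client = boto3.client("lambda")
s3_client = boto3.client("s3")

bucket_name = "uploads-bucket-example"  # hypothetical
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:on-upload"  # hypothetical

# Step 1: the function's resource policy must already allow S3 to invoke it...
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="AllowUploadsBucket",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{bucket_name}",
)

# Step 2: ...because S3 checks that permission while this call is being made.
s3_client.put_bucket_notification_configuration(
    Bucket=bucket_name,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": function_arn,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```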
And in the Console you only need one mutable working copy of that at a time, but it means that you can't create two deployments or update two stages in parallel through infrastructure as code and API Gateway because they both talk to this mutable working copy state and would overwrite each other.And if infrastructure as code had been on their list would have been, "Oh, if you have a definition of your API, you should be able to go straight to the deployment," right? And so trying to push that upstream, which to me is more important than infrastructure as code support at launch, but people are often like, "Oh, I want CloudFormation support at launch." But that often means that they get no feedback from customers on the design and therefore make it bad. KMS asymmetric keys should have been a different resource type so that you can easily tell which key types are in your template.Jeremy: Good point. Yeah.Ben: Right? So that you can use things like CloudFormation Guard more easily on those. Sure, you can control the properties or whatever, but you should be able to think in terms of, "I have a symmetric key or an asymmetric key in here." And they're treated completely separately because you use them completely differently, right? They don't get used to the same place.Jeremy: Yeah. And it's funny that you mentioned the lacking support at launch because that was another complaint. That was quite prevalent in this thread here, was people complaining that they don't get that CloudFormation support right away. But I think you made a very good point where they do build the APIs first. And that's another thing. I don't know which question asked me or which one of these mentioned it, but there was a lot of anger over the fact that you go to the API docs or you go to the docs for AWS and it focuses on the Console and it focuses on the CLI and then it gives you the API stuff and very little mention of CloudFormation at all. And usually, you have to go to a whole separate set of docs to find the CloudFormation. And it really doesn't tie all the concepts together, right? So you get just a block of JSON or of YAML and you're like, "Am I supposed to know what everything does here?"Ben: Yeah. I assume that's data-driven. Right? And we exist in this bubble where everybody loves infrastructure as code.Jeremy: True.Ben: And that AWS has many more customers who set things up using Console, people who learn by doing it first through the Console. I assume that's true, if it's not, then the AWS has somehow gotten on the extremely wrong track. But I imagine that's how they find that they get the right engagement. Now maybe the CDK will change some of this, right? Maybe the amount of interest that is generating, we'll get it to the point where blogs get written with CDK programs being written there. I think that presents different problems about what that CDK program might hide from when you're learning about a service. But yeah, it's definitely not ... I wrote a blog for AWS and my first draft had it as CloudFormation and then we changed it to the Console. Right? And ...Jeremy: That must have hurt. Did you die a little inside when that happened?Ben: I mean, no, because they're definitely our users, right? That's the way in which they interact with data, with us and they should be able to learn from that, their company, right? Because again, developers are often not fully in control of this process.Jeremy: Right. 
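The mutable working copy Ben describes is visible directly in the REST API's control plane. A boto3 sketch with a hypothetical API ID: every call below edits the one shared working copy, and only the final deployment call freezes it into something a stage can serve, which is why two parallel updates can overwrite each other.

```python
import boto3

apigw = boto3.client("apigateway")
api_id = "a1b2c3d4e5"  # hypothetical REST API id

# Every call below edits the single shared "working copy" of the API.
resources = apigw.get_resources(restApiId=api_id)["items"]
root_id = next(r["id"] for r in resources if r["path"] == "/")

orders = apigw.create_resource(restApiId=api_id, parentId=root_id, pathPart="orders")
apigw.put_method(
    restApiId=api_id,
    resourceId=orders["id"],
    httpMethod="GET",
    authorizationType="NONE",
)
apigw.put_integration(
    restApiId=api_id,
    resourceId=orders["id"],
    httpMethod="GET",
    type="MOCK",
    requestTemplates={"application/json": '{"statusCode": 200}'},
)

# Only this call snapshots the working copy so a stage can serve it; until
# then nothing above is live, and parallel edits race against each other.
apigw.create_deployment(restApiId=api_id, stageName="dev")
```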
That's a good point.Ben: And so they may not be able to say, "I want to update this through CloudFormation," right? Either because their organization says it or just because their team doesn't work that way. And I think AWS gets requests to prevent people from using the Console, but also to force people to use the Console. I know that at least one of them is possible in IAM. I don't remember which, because I've never encountered it, but I think it's possible to make people use the Console. I'm not sure, but I know that there are companies who want both, right? There are companies who say, "We don't want to let people use the API. We want to force them to use the Console." There are companies who say, "We don't want people using the Console at all. We want to force them to use the APIs."Jeremy: Interesting.Ben: Yeah. There's a lot of AWS customers, right? And there's every possible variety of organization and AWS should be serving all of them, right? They're all customers. And certainly, I want AWS to be leading the ones that are earlier in their cloud journey and on the serverless ladder to getting further but you can't leave them behind, I think it's important.Jeremy: So that people argument and those different levels and coming in at a different, I guess, level or comfortability with APIs versus infrastructure as code and so forth. There was another question or another comment on this that said, "I love the idea of committing everything that makes my solution to text and resurrect an entire solution out of nothing other than an account key. Loved the ability to compare versions and unit tests, every bit of my solution, and not having to remember that one weird setting if you're using the Console. But hate that it makes some people believe that any coder is now an infrastructure wizard."And I think this is a good point, right? And I don't 100% agree with it, but I think it's a good point that it basically ... Back to your point about creating these little transformations in Pulumi, you could do a lot of damage, I mean, good or bad, right? When you are using these tools. What are your thoughts on that? I mean, is this something where ... And again, the CDK makes it so easy for people to write these constructs pretty quickly and spin up tons of infrastructure without a lot of guard rails to protect them.Ben: So I think if we tweak the statement slightly, I think there's truth there, which isn't about the self-perception but about what they need to be. Right? That I think this is more about serverless than about infrastructure as code. Infrastructure as code is just saying that you can define it. Right? I think it's more about the resources that are in a particular definition that require that. My former colleague, Aaron Camera says, "Serverless means every developer is an architect" because you're not in that situation where the code you write goes onto something, you write the whole thing. Right?And so you do need to have those ... You do need to be an infrastructure wizard whether you're given the tools to do that and the education to do that, right? Not always, like if you're lucky. And the self-perception is again an even different thing, right? Especially if coders think that there's nothing to be learned ... If programmers, software developers, think that there's nothing to be learned from the folks who traditionally define the infrastructure, which is Ops, right? They think, "Those people have nothing to teach me because now I can do all the things that they did." 
Well, you can create the things that they created and it does not mean that you're as good at it ...Jeremy: Or responsible for monitoring it too. Right.Ben: ... and have the ... Right. The monitoring, the experience of saying these are the things that will come back to bite you that are obvious, right? This is how much ownership you're getting into. There's very much a long-standing problem there of devaluing Ops as a function and as a career. And for my money when I look at serverless, I think serverless is also making the software development easier because there's so much less software you need to write. You need to write less software that deals with the hard parts of these architectures, the scaling, the distributed computing problems.You still have this, your big computing problems, but you're considering them functionally rather than coding things that address them, right? And so I see a lot of operations folks who come into serverless learn or learn a new programming language or just upscale, right? They're writing Python scripts to control stuff and then they learn more about Python to be able to do software development in it. And then they bring all of that Ops experience and expertise into it and look at something and say, "Oh, I'd much rather have step functions here than something where I'm running code for it because I know how much my script break and those kinds of things when an API changes or ... I have to update it or whatever it is."And I think that's something that Tom McLaughlin talks about having come from an outside ground into serverless. And so I think there's definitely a challenge there in both directions, right? That Ops needs to learn more about software development to be more engaged in that process. Software development does need to learn much more about infrastructure and is also at this risk of approaching it from, "I know the syntax, but not the semantics, sort of thing." Right? We can create ...Jeremy: Just because I can doesn't mean I should.Ben: ... an infrastructure. Yeah.Rebecca: So Ben, as we're looping around this conversation and coming back to this idea that software is people and that really software should enable you to focus on the things that do matter. I'm wondering if you can perhaps think of, as pristine as possible, an example of when you saw this working, maybe it was while you've been at iRobot or a project that you worked on your own outside of that, but this moment where you saw software really working as it should, and that how it enabled you or your team to focus on the things that matter. If there's a concrete example that you can give when you see it working really well and what that looks like.Ben: Yeah. I mean, iRobot is a great example of this having been the company without need for software that scaled to consumer electronics volumes, right? Roomba volumes. And needing to build a IOT cloud application to run connected Roombas and being able to do that without having to gain that expertise. So without having to build a team that could deal with auto-scaling fleets of servers, all of those things was able to build up completely serverlessly. And so skip an entire level of organizational expertise, because that's just not necessary to accomplish those tasks anymore.Rebecca: It sounds quite nice.Ben: It's really great.Jeremy: Well, I have one more question here that I think could probably end up ... We could talk about for another hour. 
So I will only throw it out there and maybe you can give me a quick answer on this, but I actually had another Twitter thread on this not too long ago that addressed this very, very problem. And this is the idea of the feedback cycle on these infrastructure as code tools where oftentimes to deploy infrastructure changes, I mean, it just takes time. In many cases things can run in parallel, but as you said, there's race conditions and things like that, that sometimes things have to be ... They just have to be synchronous. So is this something where there are ways where you see in the future these mutations to your infrastructure or things like that potentially happening faster to get a better feedback cycle, or do you think that's just something that we're going to have to deal with for a while?Ben: Yeah, I think it's definitely a very extensive topic. I think there's a few things. One is that the deployment cycle needs to get shortened. And part of that I think is splitting dev deployments from prod deployments. In prod it's okay for it to take 30 seconds, right? Or a minute or however long because that's at the end of a CI/CD pipeline, right? There's other things that are happening as part of that. Now, you don't want that to be hours or whatever it is. Right? But it's okay for that to be proper and to fully manage exactly what's going on in a principled manner.When you're doing it for development, it would be okay to, for example, change the Lambda code without going through CloudFormation to change the Lambda code, right? And this is what Architect does, is there's a notion of a dirty deploy which just packages up. Now, if your resource graph has changed, you do need to deploy again. Right? But if the only thing that's changing is your code, sure, you can go and say, "Update function code," on that Lambda directly and that's faster.But calling it a dirty deploy is I think important because that is not something that you want to do in prod, right? You don't want there to be drift between what the infrastructure as code service understands, but then you go further than that and imagine there's no reason that you actually have to do this whole zip file process. You could be rsyncing the code directly, or you could be operating over SSH on the code remotely, right? There's many different ways in which the loop from I have a change in my Lambda code to that Lambda having that change could be even shorter than that, right?And for me, that's what it's really about. I don't think that local mocking is the answer. You and Brian Rue were talking about this recently. I mean, I agree with both of you. So I think about it as I want unit tests of my business logic, but my business logic doesn't deal with AWS services. So I want to unit test something that says, "Okay, I'm performing this change in something and that's entirely within my custom code." Right? It's not touching other services. It doesn't mean that I actually need adapters, right? I could be dealing with the native formats that I'm getting back from a given service, but I'm not actually making calls out of the code. I'm mocking out, "Well, here's what the response would look like."And so I think that's definitely necessary in the unit testing sense of saying, "Is my business logic correct? I can do that locally. But then is the wiring all correct?" Is something that should only happen in the cloud. There's no reason to mock API Gateway into Lambda locally in my mind.
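The local-testing split described here looks roughly like the following illustrative sketch: the business logic is a plain function that accepts data in the shape the SDK would return, so the unit test hands it a canned response and never calls AWS, while the wiring itself gets verified against real services after a deploy.

```python
def total_outstanding(query_response: dict) -> int:
    """Pure business logic: sum unpaid invoice amounts from a
    DynamoDB-shaped query response. No SDK calls, no network."""
    items = query_response.get("Items", [])
    return sum(
        int(item["amount"]["N"])
        for item in items
        if item["status"]["S"] == "UNPAID"
    )


def test_total_outstanding_ignores_paid_invoices():
    # "Here's what the response would look like": a canned, service-shaped
    # payload stands in for DynamoDB, so this runs locally in milliseconds.
    canned_response = {
        "Items": [
            {"amount": {"N": "1200"}, "status": {"S": "UNPAID"}},
            {"amount": {"N": "800"}, "status": {"S": "PAID"}},
        ]
    }
    assert total_outstanding(canned_response) == 1200
```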
You should just be dealing with the Lambda side of it in your local unit tests rather than trying to set up this multiple thing. Another part of the story is, okay, so these deploys have to happen faster, right? And then how do we help set up those end-to-end test and give you observability into it? Right? X-Ray helps, but until X-Ray can sort through all the services that you might use in the serverless architecture, can deal with how does it work in my Lambda function when it's batching from Kinesis or SQS into my function?So multiple traces are now being handled by one invocation, right? These are problems that aren't solved yet. Until we get that kind of inspection, it's going to be hard for us to feel as good about cloud development. And again, this is where I feel sometimes there's more friction there, but there's bigger payoff. Is one of those things where again, fighting against your incentives which is not the place that you want to be.Jeremy: I'm going to stop you before you disagree with me anymore. No, just kidding! So, Rebecca, you have any final thoughts or questions for Ben?Rebecca: No. I just want to say to both of you and to everyone listening that I hope your today self is celebrating your yesterday-self right now.Jeremy: Perfect. Well, Ben, thank you so much for joining us and being a guinea pig as we said on this new format that we are trying. Excellent guinea pig. Excellent.Rebecca: An excellent human too but also great guinea pig.Jeremy: Right. Right. Pretty much so. So if people want to find out more about you, read some of the stuff you're doing and working on, how do they do that?Ben: I'm on Twitter. That's the primary place. I'm on LinkedIn, I don't post much there. And then I write articles that show up on Medium.Rebecca: And just so everyone knows your Twitter handle I'll say it out loud too. It's @ben11kehoe, K-E-H-O-E, ben11kehoe.Jeremy: Right. Perfect. All right. Well, we will put all that in the show notes and hopefully people will like this new format. And again, we'd love your feedback on this, things that you'd like us to do in the future, any ideas you have. And of course, make sure you reach out to Ben. He's an amazing resource for serverless. So again, thank you for everything you do, and thank you for being on the show.Ben: Yeah. Thanks so much for having me. This was great.Rebecca: Good to see you. Thank you.

AWS - Il podcast in italiano
I microservizi nel cloud: come gestire la complessità delle applicazioni moderne (ospite: Luca Bianchi)

AWS - Il podcast in italiano

Play Episode Listen Later Jun 14, 2021 35:08


Why use a microservices approach? What are the technological and business advantages? How can you reach a good level of decoupling? And which distributed-systems trade-offs need to be weighed against the CAP theorem? What changes in data management, and why do people talk about "consistency"? In this episode I host Luca Bianchi, CTO of Neosperience and AWS Serverless Hero, to talk about microservices and the enabling role of the cloud for this approach, thanks to managed (micro)services and new paradigms such as serverless. Link: Microservices on AWS.

AWS Developers Podcast
Episode 001 - AWS Lambda as a DevOps Tool with Ken Collins

AWS Developers Podcast

Play Episode Listen Later Jun 7, 2021 26:19


In this episode Dave and Emily talk to Ken Collins, a Principal Engineer at Custom Ink, focused on DevOps and Ecommerce. Ken has done some interesting things with AWS Lambda and Ruby on Rails when it comes to DevOps. Ken is also an AWS Serverless Hero and has created a blog series around real-world usage for Rails and Lambda. Ken's Info: Ken on Twitter: https://twitter.com/metaskills Custom Ink Technology https://twitter.com/custominktech Rails & Lambda with Lamby https://lamby.custominktech.com Ken's AWS Hero Page: https://aws.amazon.com/developer/community/heroes/ken-collins/ AWS Services Discussed: AWS Lambda: https://aws.amazon.com/lambda/ AWS CDK: https://aws.amazon.com/cdk/ Connect with Us on Twitter: Emily on Twitter: https://twitter.com/editingemily Dave on Twitter: https://twitter.com/thedavedev

Serverless Chats
Episode #103: Differing Serverless Perspectives Between Cloud Providers with Mahdi Azarboon

Serverless Chats

Play Episode Listen Later May 31, 2021 51:07


About Mahdi AzarboonMahdi Aazarboon started working as a serverless specialist and evangelizing it through blog posts, conference talks and open source projects. He climbed up the corporate ladder, and currently works as Senior Manager - Cloud Presales at Cognizant. He helps big and traditional corporations to move into the cloud and improve their existing cloud environment. Having a hands-on background and currently working at the corporate level of cloud journeys, he has matured his overall understanding of serverless.Linkedin: linkedin.com/in/azarboon/Twitter: @m_azarboonWatch this episode on YouTube: https://youtu.be/QG-N3hf1zqIThis episode sponsored by CBT Nuggets and Lumigo.Transcript:Jeremy: Hi, everyone. I'm Jeremy Daly, and this is Serverless Chats. Today, I'm joined by Mahdi Azarboon. Hey, Mahdi. Thanks for joining me.Mahdi: Hi. Thanks for having me.Jeremy: So, you are a senior manager for cloud pre-sales in the Nordic region for Cognizant. So, I'd love it if you could tell the listeners a little bit about yourself, your background, and what it is that you do at Cognizant?Mahdi: Yeah. Just a little bit of background, I started as a full stack developer, then I joined Accenture as a serverless specialist, and over there I started to play with AWS Lambda specifically. Started to do some geeky stuff, writing blog posts, and speaking at conferences and so on. Then, I was developing several solutions for multiple corporations in Finland, then I joined another consultancy company, Eficode, which are known for DevOps. It is very good, they have a good reputation for that in Nordic region. I was as a practice lead, AWS practice lead driving their business. Then, I joined my current company, Cognizant, and here I work as a pre-sales capacity. I'm not hands-on anymore, but basically I do whatever is needed to make our customers happy and make them to go to the cloud. So that means high-level solutioning, talking with the customer and as a senior architect, I comment about stuff, I make diagrams, And I translate business and technical stuff requirements, basically as an interface between the delivery and the customer side. Yeah, that's all.Jeremy: Right. Awesome. All right. Well, so you mentioned in some of the blog posts that you were writing and some of that was a little while ago. And it's actually, I think there is some interesting perspective there. So I want to get into that in a little while, but I want to start by this idea or this post that you wrote about sort of what you need to know about Azure functions versus AWS Lambda and vice versa and it was sort of this lead-in to this concept of multi-cloud and not cloud-agnostic like being able to run the same workloads, but being able to understand the differences or maybe some of the nuances in Azure versus AWS and of course, that got extended to GCP and IBM cloud and some of these other things. But I'm curious why understanding different serverless services or different cloud services across clouds in this multi-cloud world we are living in now, why is that so important?Mahdi: Yeah. That's a good question. First of all, I would like to clarify that whatever I'm telling in this podcast is just my personal opinion and doesn't reflect my employer. This is just to save myself.Jeremy: Absolutely. Like a standard Twitter handle route.Mahdi: Yeah.Jeremy: Views are my own, right? Yeah.Mahdi: I don't want to answer to my boss after this podcast. Answering to your question, the thing is that multi-cloud is inevitable and even AWS which was ... 
In the best practices, I remember like a few years ago, they were saying that, no, try to avoid that. They started to even admitting through their offerings that they are trying to embracing that multi-cloud with their Kubernetes offerings. The thing is that, well, whether AWS fans like it or not, Azure is gaining a lot of market share and it depends on the country. For example, in Finland at least AWS is really popular. But now I'm dealing, for example, in other countries like Norway or UK, Azure is very popular. I mean, you can just exclude yourself to be only with one cloud, but in my opinion, you are missing a lot of opportunities, both to learn and just as a company to embrace the capacities, because whether ...Well, Azure provides some stuff which are better than AWS. I mean, I heard from a corporation that they really like AI capabilities of Azure much better than AWS and they do a lot of analytics. So it's inevitable whether many people like to admit it or not.Jeremy: Right. Right. But so even the fact that it's inevitable and we talk about, multi-cloud is one of those terms ... I just talked to Rob Sutter about multi-cloud a couple of episodes ago and it's so expansive. I mean, everything from SaaS providers to, obviously the public cloud providers, to maybe even on-prem cloud, I know that sounds weird, but like your hybrid cloud and things like that. So the problem is that there are a lot of providers, there are a lot of SaaS products, things like that. I mean, are you advocating that people will try to become experts in multiple clouds or how do you sort of ... What level of knowledge do you think you need to have in order to work across multiple clouds?Mahdi: I haven't met a single person who can claim to be expert in more than one cloud provider and I have talked with many experts because I have been running serverless in Finland and so I have been talking with many experts. None of them dared to claim that they knew it. I mean, even keeping up with one single cloud provider is a lot of work and I don't consider myself expert in any of them either, because I'm not hands-on anymore. The thing is that ... No, you don't have to be experts to work with different stuff. Of course, at some level you need some ... For example, you might need an Azure expert to work with Azure, AWS expert to work with AWS. But in my opinion, if you really want to keep up with the technology and so you need to be good in one provider, really good with that and then, know the fundamentals of the cloud, the best practices which are, I would say, it's irrespective of which cloud provider you are using there and be willing to learn.For example, it happened to me. At that time, I mean, when I wrote that blog post, I was only working with AWS. Then they said to me that, okay, you have this project on Azure, go for it and I never touched Azure before. It was a lot of pain, but I learned a lot. So I mean, as I said, the fundamentals are same and now be expert in one and be willing to learn. In my opinion, that should be good enough.Jeremy: Right. I'm curious, I think that's good advice to sort of be well-rounded. I mean, that's always good advice I think for technologists, going a mile wide and an inch deep is usually good enough. But like you said, being able to be an expert in a specific field or a specific technology or something like that can really help. So you think that's certainly a good career choice to sort of start to broaden your perspective a little bit?Mahdi: Definitely. 
Actually, I was one of those AWS fans that really was following this Hero, Serverless Heroes, and so on, basically was parroting whatever AWS was telling and I was saying that I just want to come to work with AWS. Actually, it happened to be like that, but when I joined my current company, my manager said that most of the opportunities that you are filling, I mean, in my department, so is mostly Azure. So basically they said that it is as it is, and cope with it. And I felt very happy actually. When I, for example, see ... Well, I'm sure that anyone who is in the cloud gets many job offers from recruiters. I was thinking about it, at some point when I was AWS guy, at least in my experience, half of those job ads were irrelevant and ...Jeremy: Right. Right.Mahdi: ... depending on the country. For example in Finland, if you are Azure ... AWS is very popular at least and if you are Azure expert, you are going to miss a lot of opportunities. But at least in my experience, if you say that you are with that, you have worked with the other one, you know something, a lot of career opportunities opens up. This is my observation.Jeremy: Right. Right. Yeah. And I think actually, you made a really good point and that's certainly, in terms of AWS heroes and so forth. I'm an AWS Serverless Hero and we get inside information but we spend a lot of time thinking about things the AWS way. AWS is very good at what they are doing with serverless and they have an interesting perspective in terms of what they believe serverless is supposed to be and what that roadmap looks like. But even just hosting this show and talking to so many different people in different clouds and different ways that they do it, getting that different perspective of how other people or other clouds think about serverless and how they are building it out. I think that's actually really good context to have.Mahdi: Yeah, I agree. Actually, you are one of my heroes also, I was following you. But I should say that it has its own advantages and disadvantage was that I was in a kind of AWS bubble. But when I started to see that, okay, even AWS itself opens up having this multi-cloud offering and some serverless heroes start to write about that, I was like, okay, that's time for opening of your thing. But I mean, by that time actually, I already started to use Azure. So again, I mean, I would say that what you have been doing, actually heroes are doing a great job, really doing a great job.Jeremy: Absolutely, totally agree.Mahdi: Azure also have similar. If I remember correctly, they tried MVP, something like that.Jeremy: I guess, that's MVP, yeah.Mahdi: The thing is that, at least based on my observation, they have more or less same level of dynamics or a narrative between themselves. They also consider Azure more and AWS more and so. But I was lucky, maybe by the choice and so that somehow I had to join or use or attach to both communities. Yeah, it has been a very valuable experience.Jeremy: Yeah. Yeah. So you went through that process, you were sort of an AWS convert or I guess, an Azure convert from AWS, and you stayed connected. 
But I know, that idea of transferring your skills and transferring the concepts and you mentioned sort of the pillars are the same as they are in AWS and you sort of have some of the general concepts, but as someone who went through that, what were the challenges that, what were some of the, I guess, challenges and the barriers that you faced going from AWS and that way of thinking into the Microsoft world?Mahdi: That's a very good question. The thing is that in the department, at that time I was working at Accenture and actually all of us were big AWS fans because at least Accenture owned Avanade, so Azure was very separate, we were in an AWS bubble. Yeah. I'm sure that definitely AWS is much more mature in many aspects than Azure, no doubt. At least it was like that and I'm sure it's still like that. Their gap has been narrower, but that still might be the case. I remember at that time, many of my colleagues were really bashing down Azure, really bashing down and they were right. I mean, some of their services were really immature. But then again, I had the chance to ... Actually, it wasn't quite choice, they said to me, okay, this is an Azure project. Basically, it was a team, I would say quite junior, developed something on Azure, something that you never probably want to hear.They developed everything in browser, infrastructure as a code nothing at all, they were junior, so they made quite many mistakes also, but they just made the app up and running. It didn't matter how or what, it was just running and that's all. So they told me that, okay, we need some little improvement, this was little improvement and that little improvement basically forced me to reverse engineer whatever they had done, and that required me to upgrade the whole application, because as I said, there was no infrastructure as a code, if I want to use it I had to use ... If I wanted to do local development, I had to use Windows, I had only Mac, so I had to change the complete platform. It was a very tedious process by itself. On top of that, I had to start to see how Azure functions work and that was another pain for that.The thing is that I had AWS mindset and I was thinking that, okay, AWS is the best, they came out first with the cloud and Lambda, so Azure should be something like that. As I elaborated in the blog post, no, actually they are different and there are some small patches or nuances that makes some even days to find it out, but you need to find it out, otherwise, your app doesn't work. After a while when I reflected the things, I realized that, okay, of course, I was angry and pissed ... I was really bashing down Azure, it was fight of the dynamics over there, but after a while when I reflected through my whole process and actually I wrote in the blog post, I realized that part of the blame was on me because I was expecting Azure to work in the AWS way. No, that's not how it works.I mean, when you look at, for example, authentication or the mindset, it's different. That requires a learning curve, I mean, you need to find out Stack Overflow and actually, the Azure community is really supportive. I really like it. They have their own community which is really supportive. So the pain basically was that ... Yeah, I had to find out how things work in Azure and what's different. But now that I'm working basically pre-sales in both of the cloud I can say that, again, fundamentals are same.Jeremy: Right.Mahdi: And these AWS architecture framework, there are five pillars. 
You can see that Azure has copied from AWS, it's obvious. Even they haven't changed the name. The naming is similar and you can find that it's just a bad copy. At least like few months ago that they had to implement for that. But at the end, I mean, Azure is catching up fast.Jeremy: Right.Mahdi: It's undeniable. And fundamentals are more or less same. I mean, if you want to make your app ... For example, you want to innovate, you should have shorter time to market. Basically, you need to use infrastructure as a code If you want to make your app really high-level appeal, you need to follow best practices, do maybe SRA. At the high level it's same, but when it comes to the detail level, it can be very different. Even the documentation was really confusing and it wasn't just me telling that.Jeremy: Out of curiosity, was the documentation for AWS more confusing or was the documentation for Azure more confusing?Mahdi: This is a million-dollar question. Actually, I thought that maybe it's me. I found the Azure doc very confusing, but I thought it's me, so I asked I think nine of my friends who are AWS experts that, "What's your opinion? Have you worked with Azure? Do you find documentation readable?" I think all of them said that it's confusing.Jeremy: Yeah.Mahdi: So I was like that, okay, then it's confusing. Then I talked with a few Azure experts who, they breathe in Azure, they are Windows guys and they never touched AWS and they said that, "No, documentation is good. Everything is fine." Actually, if I remember correctly one of them said that, "Actually, I find the AWS documentation confusing." It seemed like two different worlds, you know?Jeremy: Right. I find them both confusing, actually.Mahdi: Maybe now it has changed.Jeremy: Right. Yeah. So, that's interesting. I mean, I think the documentation is a good ... Well, first of all having good documentation is important and I think they both have good documentation, but I do think it's organized differently, right?Mahdi: Yes.Jeremy: And again, it's organized more towards I think maybe that different mindset. But let's just talk a little bit about the maturity of those, because to be fair to Azure, I mean, Azure or Azure Functions, it has come a very, very long way. I remember way back in 2018, way back, I mean it seems like a long time ago at this point, seeing very early demos of Durable Functions and I remember thinking like, oh, that's just a mess, like that is not the way that you want to do that. Now fast forward three years, Durable Functions are pretty cool and they do a lot of really interesting things. It does take time to catch up. So certainly I would think your criticism of Azure Functions back then in terms of what it is now, that's probably there is a huge gap there.Mahdi: Yeah. I'm sure that most of the criticism, the detailed one that I mentioned the blog post, I'm sure that many of them have either been fully addressed or they have been improved a lot. So that's why I don't want to focus that much on detail and I would focus more on the high-level things. Yeah.Jeremy: Right. So speaking of the high-level things, let's go there for a second. So you mentioned like a well-architected framework, sort of this idea of their being something very, very similar, maybe even a carbon copy in Microsoft. 
But what about getting down, you said that your individual skills are kind of when you get into the weeds there, that is certainly different, so I mean for the most part though, event-driven, stateless computes, things like that, do those skills transfer over?Mahdi: Yeah, they do. It's just a matter of implementation. For example, I can tell you, yes, those ones ... Well, there is some caveat. For example, I remember in Azure community, I was at that time, this probably has been changed, but I think it shows some kind of mindset. I was struggling to find out the observability tools of Azure, if I remember correctly it's what's called Application Insight, one of the tools, and they had some event driven insight, something like that which was, they call it near real-time. I remember that basically when I want to get the logs from the functions, it took three minutes to come up, three minutes. At the same time CloudWatch, for example, it was coming in 20 seconds, something like that, 10, 20 seconds and I mentioned it in their community.If I remember correctly, it was a notable dude, either one of the product team, or he was a very notable dude and he said that three minutes time is, in my opinion, is near real-time. He said that and I remember we made a lot of joke out of that sentence with my colleagues about that.Jeremy: I can imagine.Mahdi: But that shows some kind of mindset. I mean, three minutes, I don't think is near real-time. Most probably this time has been reduced, but I just wanted to tell you their mindset about that. But, yeah, event-to-event stateless stuff, they are transferable. But when it comes to implementation, it's different. For example, as I mentioned that blog post, there was some stuff that you can do with an authentication with some, certain some, environmental variables in AWS, but that same thing in Azure, if I remember correctly, is done through something like service principles, it's different. So if you try to play with environmental variables, it turns out no, it doesn't work that way. It gets to very detailed stuff, that gets different. Yeah.Jeremy: Yeah. Right. Right. Yeah. I'm curious to hear about like another sort of interconnectivity of what you would connect. I'm now trying to remember what they call bindings or triggers and bindings in Azure functions as opposed to events or actually event sources, I think we call them in the Lambda world. So would you look at the way that you connect to other services? Is that another thing that is similar between the two?Mahdi: Okay. I should say that I don't remember that much of these details anymore, but as far as I remember, again, the high levels were more or less the same. Okay, they call it three gears, but I don't remember now what does AWS Lambda calls it. But it was more or less the same.Jeremy: I can't even remember what it's called, it's like event sources or something like that.Mahdi: Yeah. It was more or less same. Yeah, yeah, yeah.Jeremy: Yeah.Mahdi: And they had something like a bus, events bus in order to have a centralized event driven thing. It's same I would say.Jeremy: Yeah.Mahdi: Again, when it comes to poor person who has to implement it if he hasn't done it before. But the person who is doing the high-level architecture and so, I can easily see that, I mean, I don't see that much difference. 
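To make this "same concept, different plumbing" point concrete, here is a minimal sketch, not code from the episode, of an HTTP-triggered function on each platform. The Lambda side assumes an API Gateway proxy integration as the event source; the Azure side assumes the decorator-based Python programming model, where the trigger is declared as a binding on the function itself. The function names and route are illustrative.

```python
# --- handler.py (AWS Lambda, assuming an API Gateway proxy integration) ---
import json

def lambda_handler(event, context):
    # API Gateway delivers query string parameters inside the event payload.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"Hello, {name}"})}
```

```python
# --- function_app.py (Azure Functions, assuming the decorator-based Python model) ---
import azure.functions as func

app = func.FunctionApp()

# The HTTP trigger is declared as a binding attached to the function itself.
@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}")
```

Both handlers do the same work; what differs is where the trigger configuration lives and how the request data is surfaced, which is exactly the kind of detail that only shows up at implementation time.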
But I know that if someone has to implement it and hasn't done it before, he will go through the most pain, because he has to find this small configuration things that, unfortunately, you need to make them. Otherwise, it doesn't work out. But high-level, it's same. It's event ...Jeremy: Yeah.Mahdi: Yeah.Jeremy: I think the nuances are always those tough things. So thinking of the overall mindset here and sort of maybe the approach to serverless. So I know you went from AWS to Azure, but I'm curious, do you think it would be easier to go from Azure to AWS or easier to go from AWS to Azure?Mahdi: Well, I came from this part of the river to the other one, so I can just speculate about the other part. But I would say it's more or less same, because again when I talk with a few Azure people who really have been breathing always in Azure and never touched or barely touched AWS, I felt that they are feeling same thing about AWS. So I would say it's more or less same. They need to go through the same pain, they will find AWS stuff very confusing, especially that they will not have that great community support of AWS, but they need to either do the Stack Overflow thing or have a enterprise support of AWS. I would say it's more or less same for them.Jeremy: Yeah. I mean, I think that's interesting too just, that it is different enough that there is pain there, right? I mean, it would be nice if there was some standards and I know there's like the opening, the Cloud Computing Foundation is like open events and some of those things whatever, not that that's all working out for ... I think Kubernetes and Knative and those and some of those teams are implementing it or those projects, but I'm not sure the same things fall into AWS. But anyways, go ahead. You have any thoughts on that?Mahdi: Actually, that Cloud Foundation, I was working at Eficode and they are really working that stuff. They are so good in Kubernetes. I find that also another world completely.Jeremy: Yeah.Mahdi: This Cloud Foundation stuff. I never had to implement any of that for any of our customers in any of the companies that I worked, that they were AWS or Azure. Yeah, some of them they used Kubernetes also, but that CNC or whatever it was ...Jeremy: Yeah, CNCF.Mahdi: Yeah, yeah. I found it, that's a different world for me also, I should say. Sometimes out of curiosity, I played with it, but I never ... Nobody ever asked me that, do you want to use that?Jeremy: Right. Right. Yeah. No, that makes sense. All right. So we talked a lot about, we've been talking about the difficulties in switching between different cloud providers, but also the value of knowing those different cloud providers. And more so, so that you can build serverless applications. So let's talk about serverless in general. I know you are a little bit outside of the ... You're not in the developer role anymore. But this actually, could be really interesting to get your perspective on the management approach to this and how other companies are thinking about the value of serverless at a management level as opposed to ... I guess, even as a sort of planning level. So let me ask you this question then. Are you seeing companies looking at serverless and adopting serverless and that serverless mindset and then maybe a follow-up question would be, if they are not, why do you think they should be embracing serverless?Mahdi: Okay. Firstly I'll answer the second part. Basically, the thing is that nowadays the world is fast changing. 
Many companies, many corporations basically, are benefiting from their existing market share or regulatories or the monopoly that they have. For now, it works. If they don't want to change basically if they have the mindset that things are working, what's the point for change. Most probably within a decade or so they are going to die, their business is going to die. Because the world is fast-changing and they need to have them to adapt to the market.So ideally, they need to go through the pain and disrupt themselves. Disruption always brings pain. You cannot disrupt yourself and feel that everything runs smoothly. Ideally, they need to disrupt themselves, go through the pain and so become really agile in order to understand the customer feedback and deliver the value to the customer, what really the customer wants. They can either have this phase or they can ignore it and say that, okay, things are working, we are making money through our monopoly, regulatory, existing market share, whatever and then, their business is going to go away. These two choices, that's all. Yeah. Painful process to become more competitive and be ahead of customers or assume that everything is okay, and then at that time that's going to be very late.Jeremy: Right. So let me go back to that first question then. So you are seeing people not doing that?Mahdi: Okay. The thing is that what I'm telling is going to be biased because I'm working in a cloud team and whatever opportunity that they are going to bring to me, of course, you have the departments and the companies that they are interested in the cloud. So my mindset is a bit biased, but what I'm seeing is that it varies a lot and I mostly focus on corporations, because ... Yeah, of course, for startups it's much easier to go for that.Jeremy: Right. Of course.Mahdi: At least in Finland, my observation was that there are two ways. Either they are very ... it depends on the executive leadership. For example, a major bank in Finland, they say that, we want to go to the cloud and be, we want to go for that. And once, one of these big ones goes through that, there is going to be a domino effect on others. But there are some other ones say that, no, it's cloud, who is going to take care of the data? We are not going to do that and they don't touch it.There are some other companies and their departments, I would say there are departments who are interested in trying things out and then, they have to fight internally with the more conservative departments. So I'm sure that there are three levels of that. But mostly, I work with the ones who are inclined toward using the cloud.Jeremy: Right. Right. So then, the ones that are starting to dabble in the cloud, is that something where you see ... I mean, clearly there's lift and shift, right? Which I think we probably all understand at this point, it is not the best implementation or the best use of the cloud, right? That it is better to maybe use more native cloud services or cloud native services, I guess, to do that. 
So in terms of people just rehosting or maybe re-platforming, are you seeing this sort of rearchitecture, or I guess, this refactoring or is that something where companies are staying away from that?Mahdi: First of all, I respectfully maybe have to disagree with you.Jeremy: Okay.Mahdi: Actually, I think rehosting is actually a good approach and that's what even AWS promotes for conservative companies who want to start working with the cloud and they want to get the fastest result in the shortest period of time, with the least amount of pain, it's better to do migration through the easiest one which is lift and shift. Easiest, everything is relatively.Jeremy: Right.Mahdi: And then, have a data-driven approach to see what really needs to be improved and then refactor or rearchitect or re-platform based on data. So in AWS terms, I'm sure you're right there with me, have that evolutionary architecture in a data-driven approach. So lift and shift, I don't consider bad at all. Actually, I consider it a very good cornerstone, stepping stone at the beginning, for the beginning.Jeremy: Interesting. Okay.Mahdi: Yeah. What was the other question?Jeremy: No. I was just going to say, so you've got companies that are lift and shift, and, yeah ...Mahdi: Oh, okay. Sorry. Sorry. Yeah. Sorry, I just remembered.Jeremy: Yeah.Mahdi: Sorry to interrupt you. Actually, I'm a bit careful about using the word cloud native. I remember, in a previous company that I was working, we had some philosophical fight about that and I'm sure that then everyone was dissatisfied and I had to have an authoritarian appearance that this is the definition of cloud native. I'm sure many of them hated me after that. But the word cloud native, I really struggle to find a consensus of what does it mean and if you spend some time, you realize that you will find a variety of definitions of that. So I'm picky for the word cloud native. There is a lot of fight can happen, what is exactly cloud native. Some consider Kubernetes cloud native. Some consider using AWS or Azure cloud native. So this is the picky ... this is a very controversial term, I would say. Yeah.Jeremy: Well, let me interrupt you for a second. So when I think of cloud native, what I'm thinking of are services and components that are built specifically to run in the cloud, things like your API gateway at AWS or Azure functions or things that are like very much so built to run in the cloud environment where they do things. It's that serverless aspect. I think of it more serverlessly. I mean, I know containers and so forth fit in there as well. But that's how I think of cloud native. I think of cloud native as going beyond just your traditional VM and running everything on the VM and moving to the higher-level services that are more managed for you.Mahdi: May I challenge you?Jeremy: Absolutely.Mahdi: So you just said that basically things that use cloud, like API gateway and so. And now I should ask more of a technical question. What is cloud?Jeremy: Right. Well, that's another good question. Right.Mahdi: Okay. I can tell you, based on these several definitions that I read and I reflected on them, I have this definition of cloud native, most probably many people I'm sure will disagree. So that's fine because it's very controversial. In my opinion, cloud native is very simple. If your application is architectured in a way that it can leverage the advantages of the cloud environment, then it's cloud native. 
Doesn't matter if it is on Kubernetes, if it's on AWS, if it's on Azure or so. If it can scale to zero and theoretically to infinity and you pay for only what you use, then it's cloud native. That's my definition of that and I read so many definitions, so I came up with this. But feel free to disagree with that, because many people disagreed with me. I'm fine with that.Jeremy: That's all right. You are not the only one I'm sure, has differing opinions of what cloud native are. So let me ask this though because I think that's interesting, the way you explained the strategy of lift and shift of basically being able to say it's the, probably the lowest risk way to take an application that's on-prem and move that into the cloud and then to use data and so forth to kind of figure out what parts of the application might you want to migrate to, maybe again I don't want to overload the term, but more cloud native things. I think that's actually really interesting. I have found and I have seen many companies that seem to do this where it's more that they move things, they just rehost without really thinking through what that strategy is going to be and then they basically just end up having their on-prem in the cloud and not benefiting from some of those managed services and some of the benefits of the cloud that you get, they don't transfer on to them. That's what I have seen.Mahdi: Well, you know it better than me. Your cloud environment is never perfect and it's always an ongoing operation. So I mean, going to the cloud ... Again, if you put your own frame in, put them I don't know, use EC2 or which VM or the AWS or Azure, that's a very good first step ...Jeremy: Right. That's probably true.Mahdi: ... but you need to be able to start leveraging that. At least get the data, which one is being used and hopefully, hopefully when you are going to the cloud, you have done some analysis and you have realized that some of the services even are not working with the cloud. Some of them need to retire, some of them cannot be rehosted. They must be rearchitected, because they are so legacy for that. But even again, assuming that you have done your homework and you have done rehosting, okay, you need to leverage that and go and see that all things that AWS or Azure provide, how much over-used or over-utilized or under-utilized are your CPUs and this kind of thing and according to that, do right sizing for that.Jeremy: Right.Mahdi: That's a good step for that. Then if they want, requires refactoring, try to I don't know, do refactoring and use more managed services for that. So again, rehosting is a good first step, but cloud is a long journey. I don't know who came up that cloud is cheap, I really don't know.Jeremy: Right. No, I totally agree. You are right about the first step and I actually loved your point about which services might you be able to retire and not move at all because I think in a lot of these big companies, there are a lot of services that you probably don't need anymore or they are redundant or whatever and you could get rid of those moving to cloud. Good point. All right. I got a couple of more minutes and I want to go back to an article that you wrote. Now, this I think is like three years old and in terms of reading the article now, it's not relevant, because so many things have changed. But what's relevant is, what has changed and this was an article that was about the worrying and promising signals from the serverless community. 
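As a concrete illustration of the data-driven right-sizing Mahdi describes after a lift and shift, here is a minimal sketch that pulls two weeks of average CPU utilization for a rehosted EC2 instance from CloudWatch and flags it if it looks over-provisioned. It assumes boto3 credentials are already configured; the instance ID and the 20% threshold are made-up values for the example.

```python
# Minimal right-sizing check: pull two weeks of average CPU utilization for a
# rehosted EC2 instance and flag it if it looks over-provisioned.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

def average_cpu(instance_id: str, days: int = 14) -> float:
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(days=days),
        EndTime=end,
        Period=3600,  # one datapoint per hour
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

if __name__ == "__main__":
    cpu = average_cpu("i-0123456789abcdef0")  # hypothetical instance ID
    if cpu < 20:  # illustrative threshold
        print(f"Average CPU {cpu:.1f}%: candidate for a smaller instance size")
```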
I think this was an event you went to in Germany, they did this, and you have a couple of different points that you called out.One of the points was that users have ignored security and that was a worrying sign for you. Where do you think sort of cloud security or more specifically serverless security is now? Do you think people are still thinking about it or have brought it front and center like it probably should be or do you think it's still a worrying factor?Mahdi: Since I have implemented cloud solutions for I'll say mostly enterprises and a few startups, I haven't seen a single one of them using, having a cloud security specialist. Most of the corporations when they, at least in my experience, when they want to go to the cloud, they must address the security of it and typically because of the customer requirements, so they bring a security guy who has worked with this, let's put it this way, all their security stuff and he has to come in on the cloud part and it's funny that actually, sometimes I have to teach them basically. I remember they had a head of security for a customer. I really had to teach him and actually, I had the Lambda functioning in front of him and he was like, wow, is it really like that? I had to teach him what are the attack methods and it was funny. He had to sign off my solution that it is secure, but basically, I had to tell him what are the priorities.Jeremy: You had to tell him what it was.Mahdi: They address it from a traditional way. Yeah, they do some kind of a test, automated test and this kind of thing which is, yeah, definitely ... Again, I'm not a security expert, but as far as I understand, again they have some fundamentals which are safe, that's true, but when it comes to the cloud especially serverless and functional service, you will see that there is a lot of more attack vectors and unfortunately, these security experts, I have not seen any of them who have any expertise in that. I learned about it because I was curious about it and I started to work with basically professionals, some startups which provide professional security solutions for serverless. So that's how I got that, but again I had to go through the pain. It took few months to read so many stuff. But I haven't seen any security specialists who have been working on cloud projects who have done this.Jeremy: Yeah.Mahdi: So I would say customers, they consider it, but no, there is still a lot more way to mature.Jeremy: They are not addressing it. Yeah. It's funny because I remember that in 2018, 2019, there were a couple of companies that were in the serverless security space and they were all acquired. So now they are part of larger platforms which is ...Mahdi: Exactly.Jeremy: ... great for them, don't get me wrong. All right. So then another thing you said and I think this is important, because the biggest complaint that I always hear about serverless is, just the workflows are not easy. So you had mentioned that DevOps was finding its way and that was sort of a promising signal, you think that we've ... I mean, we have got a lot of tools for serverless now. Speaking of Azure, the way to deploy an Azure function right through VS Code now with the plugins is really, really slick and Serverless Frameworks, SAM, CDK, all these are there, Terraform and so forth. 
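Since the CDK comes up alongside SAM and Terraform here, a minimal sketch of what that infrastructure-as-code approach looks like with the AWS CDK in Python may help. This is an illustration only: it assumes CDK v2 and a local src/ directory containing an index.py that exposes a handler function, and the stack and function names are hypothetical.

```python
# Minimal AWS CDK (v2) stack defining a single Lambda function as code.
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class HelloStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Runtime, memory, and timeout live in version control instead of
        # being configured by hand in the console.
        _lambda.Function(
            self,
            "HelloFunction",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_asset("src"),
            memory_size=256,
            timeout=Duration.seconds(30),
        )

app = App()
HelloStack(app, "HelloStack")
app.synth()
```

The same definition could be expressed with SAM, Terraform, or the Serverless Framework; the point is that the function and its settings are versioned and repeatable rather than clicked together in the browser.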
I mean, have we gotten to some stability around serverless and sort of mixing in DevOps there?Mahdi: Based on my experience, at least the ones that I have worked with, I can say that, yes, DevOps is now a part of a solution that's provided to the customer and maybe it's correct because personally, I went through the pain whenever I proposed any solution for the customer, so they are always using infrastructure as a code and always try to have a DevOps-centric viewpoint about your solution. So I try to push for that and, yes, I find customers receptive about that. It seems to me that, now DevOps is not one of those buzzwords for cool kids who just want to do this stuff, even the corporate guys are more receptive with that. Again, there is more way to really do the DevOps stuff, because you know that many companies claim that they are doing DevOps, but in reality, they are not. You know this better than me.Jeremy: Right. Of course.Mahdi: But, yeah, it's good. I'm happy for that. I mean, a few years ago DevOps was one of those buzzwords, but now I don't think it's buzzword anymore.Jeremy: Yeah. Yeah. And I think that serverless has actually opened up a lot of making it easier for teams to do automation and things like that, there's a lot that you can do because you have that little bit of compute power that you can do something with. So I think that's definitely promising. So speaking of sort of compute power and other things that you can do with it, one of the things you mentioned was that you saw as a promising sort of signal was, that serverless-based prototypes were on the rise, meaning different services, so whether it was cues or whatever or I guess Lex and things like that, all kinds of services that allow you to do different things that are specializing in different capabilities. So how do you feel about that now? Because there are a lot of those APIs out there.Mahdi: Yeah. Actually, I also find that even from these legacy corporations that I have been working with, I like the idea that now, they definitely when they want to do migration especially or this kind of thing or do anything cloud, first they do POC. Yes, I find it good. Honestly, I was sometimes impressed that, oh, from some people that I would never expect them to use this one, first let's do POC, then see what's come out. Oh, really? Yeah, it's good in my opinion. It's finding its way.Jeremy: Yeah. Yeah. No, I like that too and I think you are right about proof of concept, because it's just one of those things where even if it's expensive to use the Google Vision API or something like that, it's a really good way to prove out how that fits into whatever the business use case you have for that and then like you said, you can certainly take a step further and create more sophisticated or I won't say sophisticated but maybe more integrated tools or something like that, that would work around that. So I think that's interesting, allow people to fail fast, learn quickly, and just build out their applications.Mahdi: Yeah. When we say POC, I should say that I wouldn't exclude it only to this cool new serverless or what the AI stuff that AWS and Azure provide. Even for migration actually, POC is highly recommended. Again, I was working for some period of time for, I would say, one of the most conservative banks in Finland, small and conservative, for consultancy, but even then as we are trying to push the cloud and even then they said that, "Yeah, first let's do a POC of migration and see what's going to happen." 
Again, there really I was surprised. I would never expect it from them. But the idea of fail fast and learn fast, I think at least that it requires some level of maturity to reach that.Jeremy: True.Mahdi: That really needs more room for improvement, fail fast, learn fast. Yeah. Just something, I don't know, I would like to address about this cloud stuff if I can.Jeremy: Yeah, absolutely.Mahdi: Yeah. Basically, when companies or customers decide to go to the cloud, I'll recommend that don't look at only the technical aspect of it, because I see that there is, at least there is lot of debate for example ... At least it was like that. AWS, Azure, or this kind of thing, at the end I'll say that most of the things it doesn't matter that much. I mean, it depends on their, sometimes company policy, how much discount you can get, how much funding you can get from the cloud provider. So it's not really the technical people who decide, sometime it's the executive who decides.Jeremy: Right.Mahdi: But even then, when you go to the cloud, in my opinion as much as the technology and maturity of the cloud provider matters, the amount that your company is ready to change its operations is also important. This is my favorite example, that I developed and I would say at that time at least a state-of-the-art serverless solution, DevOps, or CI/DC stuff for a major bank in Finland and I was the first one who managed to do that among so many consultants that they have. It was really good. I'm proud of what I did and actually, I open-sourced that. It was really basically we could deploy multiple times per day and we went to their release manager and I said that, "Okay. It's like that. Everything is perfect. DevOps, CI/CD, we can release multiple times per day." And she said that, "No. It doesn't work like that. We need to release once per month," and we have to go through a very painstaking process, fill out so many useless documents.It didn't matter how much I tried to convince her that, "Well, the idea is different. I mean, you need to do small deployment. This way actually you have less risk. You deploy once per month. Still every time something goes wrong, but when you do a more frequent deployment, your risk is lowered." She said, "No. We are a bank. It is as it is. Sorry." Most of that effort that I made at least at that time went to waste basically, because the process was legacy, even though the technology was good. I'm sure that by now, they have changed because I was among those innovators basically or the early adopters who made through that. But in my opinion, technology matters but operation and processes and release stuff also matters and everything needs to change. So basically it needs to be holistic approach of going to the cloud, not just implementing from technical viewpoint.Jeremy: Mahdi, thank you so much for having this conversation with me. This was a lot of fun and then I love people who have sort of experienced, from moving from one cloud to another. It's a huge shift, but I think your advice here is great, just to sort of know those basics on those other platforms and do that. So if people want to reach out to you or find out more about, follow you on Twitter, things like that, how do they do that?Mahdi: Well, I have a Twitter account, but nowadays I mostly put non-service stuff, but LinkedIn is a good option for me.Jeremy: Okay. Great. And it's m_azarboon on Twitter and then, I will put LinkedIn and Twitter and that in the show notes as well. 

Thanks again, Mahdi.Mahdi: Thank you very much for having me. Bye-bye. Thank you.

Serverless Chats
Episode #97: How Serverless Fits in to the Cyclical Nature of the Industry with Gojko Adzic

Serverless Chats

Play Episode Listen Later Apr 19, 2021 63:19


About Gojko Adzic
Gojko Adzic is a partner at Neuri Consulting LLP. He is one of the 2019 AWS Serverless Heroes, the winner of the 2016 European Software Testing Outstanding Achievement Award, and the 2011 Most Influential Agile Testing Professional Award. Gojko's book Specification by Example won the Jolt Award for the best book of 2012, and his blog won the UK Agile Award for the best online publication in 2010. Gojko is a frequent speaker at software development conferences and one of the authors of MindMup and Narakeet. As a consultant, Gojko has helped companies around the world improve their software delivery, from some of the largest financial institutions to small innovative startups. Gojko specializes in agile and lean quality improvement, in particular impact mapping, agile testing, specification by example, and behavior driven development.
Twitter: @gojkoadzic
Narakeet: https://www.narakeet.com
Personal website: https://gojko.net
Watch this video on YouTube: https://youtu.be/kCDDli7uzn8
This episode is sponsored by CBT Nuggets: https://www.cbtnuggets.com/
Transcript:
Jeremy: Hi everyone, I'm Jeremy Daly and this is Serverless Chats. Today my guest is Gojko Adzic. Hey Gojko, thanks for joining me. Gojko: Hey, thanks for inviting me. Jeremy: You are a partner at Neuri Consulting, you're an AWS Serverless Hero, and you've written, I think, what, 6,842 books or something like that about technology and serverless and all that kind of stuff. I'd love it if you could tell listeners a little bit about your background and what you've been working on lately. Gojko: I'm a developer. I started developing software when I was six and a half. My dad bought a Commodore 64, and I think my mom would have kicked him out of the house if he had told her that he bought it for himself, so it was officially for me. Jeremy: Nice. Gojko: And I was the only kid in the neighborhood that had a computer, but I didn't have any way of loading games on it because he didn't buy it for games. I stayed up and copied PEEKs and POKEs from a book I couldn't even understand until I made the computer make weird sounds and print rubbish on the screen. And that's my background. Basically, ever since, I only wanted to build software, really. I didn't have any other hobbies or anything like that. Currently, I'm building a product for helping tech people who are not video editing professionals create videos very easily. Previously, I've done a lot of work around consulting. I've built a product that is used by millions of school children worldwide to collaborate and brainstorm through mind-mapping. And since 2016, most of my development work has been on Lambda and on team stuff. Jeremy: That's awesome. I joke a little bit about the number of books that you wrote, but of the ones that you have, one of them is called Running Serverless. I think that was maybe two years ago. That is an excellent book for people getting started with serverless. And then one of my probably favorite books is Humans vs Computers. I just love that collection of tales of all these things where humans just build really bad interfaces into software and things go terribly. Gojko: Thank you very much. I enjoyed writing that book a lot. One of my passions is finding edge cases. I think people with a slight OCD like to find edge cases, and in order to be a good developer, I think somebody really needs to have that kind of intent and really look for edge cases everywhere.
And I think collecting these things was my idea to help people first of all think about building better software, and to realize that stuff we might glance over like, nobody's ever going to do this, actually might cause hundreds of millions of dollars of damage ten years later. And thanks very much for liking the book.Jeremy: If people haven't read that book, I don't know, when did that come out? Maybe 2016? 2015?Gojko: Yeah, five or six years ago, I think.Jeremy: Yeah. It's still completely relevant now though and there's just so many great examples in there, and I don't want to spent the whole time talking about that book, but if you haven't read it, go check it out because it's these crazy things like police officers entering in no plates whenever they're giving parking tickets. And then, when somebody actually gets that, ends up with thousands of parking tickets, and it's just crazy stuff like that. Or, not using the middle initial or something like that for the name, or the birthdate or whatever it was, and people constantly getting just ... It's a fascinating book. Definitely check that out.But speaking of edge cases and just all this experience that you have just dealing with this idea of, I guess finding the problems with software. Or maybe even better, I guess a good way to put it is finding the limitations that we build into software mostly unknowingly. We do this unknowingly. And you and I were having a conversation the other day and we were talking about way, way back in the 1970s. I was born in the late '70s. I'm old but hopefully not that old. But way back then, time-sharing was a thing where we would basically have just a few large computers and we would have to borrow time against them. And there's a parallel there to what we were doing back then and I think what we're doing now with cloud computing. What are your thoughts on that?Gojko: Yeah, I think absolutely. We are I think going in a slightly cyclic way here. Maybe not cyclic, maybe spirals. We came to the same horizontal position but vertically, we're slightly better than we were. Again, I didn't start working then. I'm like you, I was born in late '70s. I wasn't there when people were doing punch cards and massive mainframes and time-sharing. My first experience came from home PC computers and later PCs. The whole serverless thing, people were disparaging about that when the marketing buzzword came around. I don't remember exactly when serverless became serverless because we were talking about microservices and Lambda was a way to run microservices and execute code on demand. And all of a sudden, I think the JAWS people realized that JAWS is a horrible marketing name, and decided to rename it to serverless. I think it most important, and it was probably 2017 or something like that. 2000 ...Jeremy: Something like that, yeah.Gojko: Something like that. And then, because it is a horrible marketing name, but it's catchy, it caught on and then people were complaining how it's not serverless, it's just somebody else's servers. And I think there's some truth to that, but actually, it's not even somebody else's servers. It really is somebody else's mainframe in a sense. You know in the '70s and early '80s, before the PC revolution, if you wanted to be a small software house or a small product operator, you probably were not running your own data center. What you would do is you would rent it based on paying for time to one of these massive, massive, massive operators. 
And in fact, we ended up with AWS being a massive data center. As far as you and I are concerned, it's just a blob. It's not a collection of computers, it's a data center we learn something from and Google is another one and then Microsoft is another one.And I remember reading a book about Andy Grove who was the CEO of Intel where they were thinking about the market for PC computers in the late '70s when somebody came to them with the idea that they could repurpose what became a 8080 processor. They were doing this I think for some Japanese calculator and then somebody said, "We can attach a screen to this and make this a universal computer and sell it." And they realized maybe there's a market for four or five computers in the world like that. And I think that that's ... You know, we ended up with four or five computers, it's just the definition of a computer changed.Jeremy: Right. I think that's a good point because you think about after the PC revolution, once the web started becoming really big, people started building data centers and collocation facilities like crazy. This is way before the cloud, and everybody was buying racks and Dell was getting really popular because people buying servers from Dell, and installing these in their data centers and doing this. And it just became this massive, whole industry built around doing that. And then you have these few companies that say, "Well, what if we just handled all that stuff for you? Rather than just racking stuff for you," but started just managing the software, and started managing the networking, and the backups, and all this stuff for you? And that's where the cloud was born.But I think you make a really good point where the cloud, whatever it is, Amazon or Google or whatever, you might as well just assume that that's just one big piece of processing that you're renting and you're renting some piece of that. And maybe we have. Maybe we've moved back to this idea where ... Even though everybody's got a massive computer in their pocket now, tons of compute power, in terms of the real business work that's being done, and the real global value, and the things that are powering global commerce and everything else like that, those are starting to move back to run in four, five, massive computers.Gojko: Again, there's a cyclic nature to all of this. I remember reading about the advent of power networks. Because before people had electric power, there were physical machines and movement through physical power, and there were water-powered plants and things like that. And these whole systems of shafts and belts and things like that powering factories. And you had this one kind of power load in a factory that was somewhere in the middle, and then from there, you actually have physical belts, rotating cogs in other buildings, and that was rotating some shafts that were rotating other cogs, and things like that.First of all, when people were able to package up electricity into something that's distributable, and they were running their own small electricity generators next to these big massive machines that were affecting early factories. And one of the first effects of that was they could reuse 30% of their factories better because it was up to 30% of the workspace in the factory that was taken up by all the belts and shafts. And all that movement was producing a lot of air movement and a lot of dust and people were getting sick. 
But now, you just plug a cable and you no longer have all this bad air and you don't have employees going sick and things like that. Things started changing quite a lot and then all of a sudden, you had this completely new revolution where you no longer had to operate your own electric generator. You could just plug in and get power from the network.And I think part of that is again, cyclic, what's happening in our industry now, where, as you said, we were getting machines. I used to make money as a Linux admin a long time ago and I could set up my own servers and things like that. I had a company in 2007 where we were operating our own gaming system, and we actually had physical servers in a physical server room with all the LEDs and lights, and bleeps, and things like that. Around that time, AWS really made it easy to get virtual machines on EC2 and I realized how stupid the whole, let's manage everything ourself is. But, we are getting to the point where people had to run their own generators, and now you can actually just plug into the electricity network. And of course, there is some standardization. Maybe U.S. still has 110 volts and Europe has 220, and we never really get global standardization there.But I assume before that, every factory could run their own voltage they wanted. It was difficult to manufacture for these things but now you have standardization, it's easier for everybody to plug into the ecosystem and then the whole ecosystem emerged. And I think that's partially what's happening now where things like S3 is an API or Lambda is an API. It's basically the electric socket in your wall.Jeremy: Right, and that's that whole Wardley maps idea, they become utilities. And that's the thing where if you look at that from an enterprise standpoint or from a small business standpoint if you're a startup right now and you are ordering servers to put into a data center somewhere unless you're doing something that's specifically for servers, that's just crazy. Use the cloud.Gojko: This product I mentioned that we built for mind mapping, there's only two of us in the whole company. We do everything from presales, to development testing supports, to everything. And we're competing with companies that have several orders of magnitude more employees, and we can actually compete and win because we can benefit from this ecosystem. And I think this is totally wonderful and amazing and for anybody thinking about starting a product, it's easier to start a product now than ever. And, another thing that's totally I think crazy about this whole serverless thing is how in effect we got a bookstore to offer that first.You mentioned the world utility. I remember I was the editor of a magazine in 2001 in Serbia, and we had licensing with IDG to translate some of their content. And I remember working on this kind of piece from I think PC World in the U.S. where they were interviewing Hewlett Packard people about utility computing. And people from Hewlett Packard back then were predicting that in a few years' time, companies would not operate their own stuff, they would use utility and things like that. And it's totally amazing that in order to reach us over there, that had to be something that was already evaluated and tested, and there was probably a prototype and things like that. And you had all these giants. Hewlett Packard in 2001 was an IT giant. Amazon was just up-and-coming then and they were a bookstore then. They were not even anything more than a bookstore. And you had, what? 
A decade later, the tables completely turned where HP's ... I don't know ...Jeremy: I think they bought Compaq at some point too.Gojko: You had all these giants, IBM completely missed it. IBM totally missed ...Jeremy: It really did.Gojko: ... the whole mobile and web and everything revolution. Oracle completely missed it. They're trying to catch up now but fat chance. Really, we are down to just a couple of massive clouds, or whatever that means, that we interact with as we're interacting with electricity sockets now.Jeremy: And going back to that utility comparison, or, not really a comparison. It is a utility now. Compute is offered as a utility. Yes, you can buy and generate compute yourself and you can still do that. And I know a lot of enterprises still will. I think cloud is like 4% of the total IT market or something. It's a fraction of it right now. But just from that utility aspect of it, from your experience, you mentioned you had two people and you built, is it MindMup.com?Gojko: MindMup, yeah.Jeremy: You built that with just two people and you've got tons of people using it. But just from your experience, especially coming from the world of being a Linux administrator, which again, I didn't administer ... Well, I guess I was. I did a lot of work in data centers in my younger days. But, coming from that idea and seeing how companies were building in the past and how companies are still building now, because not every company is still using the cloud, far from it. But not taking advantage of that utility, what are those major disadvantages? How badly do you think that's going to slow companies down that are trying to innovate?Gojko: I can give you a story about MindMup. You mentioned MindMup. When was it? 2018, there was the Intel processor vulnerabilities that were discovered.Jeremy: Right, yes.Gojko: I'm not entirely sure what the year was. A few years ago anyway. We got a email from a concerned university admin when the second one was discovered. The first one made all the news and a month later a second one was discovered. Now everybody knew that, they were in panic and things like that. After the second one was discovered, we got a email from a university admin. And universities are big users, they need to protect the data and things like that. And he was insisting that we tell him what our plan was for mitigating this thing because he knows we're on the cloud.I'm working on European time. The customer was in the U.S., probably somewhere U.S. Pacific because it arrived in the middle of the night. I woke up, I'm still trying to get my head around and drinking coffee and there's this whole sausage CV number that he sent me. I have no idea what it's about. I took that, pasted it into Google to figure out what's going on. The first result I got from Google was that AWS Lambda was already patched. Copy, paste, my day's done. And I assume lots and lots of other people were having a totally different conversation with their IT department that day. 
And that's why I said I think for products like the one I'm building with video and for the MindMup, being able to rent operations as a utility, but really totally rent ops as a utility, not have to worry about anything below my unique business level is really, really important.And yes, we can hire people to work on that it could even end up being slightly cheaper technically but in terms of my time and where my focus goes and my interruptions, I think deploying on a utility platform, whatever that utility platform is, as long as it's reliable, lets me focus on adding value where I can actually add value. That makes my product unique rather than the generic stuff.Jeremy: You mentioned the video product that you're working on too, and something that is really interesting I think too about taking advantage of the cloud is the scalability aspect of it. I remember, it was maybe 2002, maybe 2003, I was running my own little consulting company at the time, and my local high school always has a rivalry football game every Thanksgiving. And I thought it'd be really interesting if I was to stream the audio from the local AM radio station. I set up a server in my office with ReelCast Streaming or something running or whatever it was. And I remember thinking as long as we don't go over 140 subscribers, we'll be okay. Anything over that, it'll probably crash or the bandwidth won't be enough or whatever.Gojko: And that's just one of those things now, if you're doing any type of massive processing or you need bandwidth, bandwidth alone ... I remember T1 lines being great and then all of a sudden it was like, well, now you need a T3 line or something crazy in order to get the bandwidth that you need. Just from that aspect of it, the ability to scale quickly, that just seems like such a huge blocker for companies that need to order provision servers, maybe get a utility company to come in and install more bandwidth for them, and things like that. That's just stuff that's so far out of scope for building a business to me. At least building a software business or building any business. It's crazy.When I was doing consulting, I did a bit of work for what used to be one of the largest telecom companies in the world.Jeremy: Used to be.Gojko: I don't want to name names on a public chat. Somewhere around 2006, '07 let's say, we did a software project where they just needed to deploy it internally. And it took them seven months to provision a bunch of virtual machines to deploy it internally. Seven months.Jeremy: Wow.Gojko: Because of all the red tape and all the bureaucracy and all the wait for capacity and things like that. That's around the time where Amazon when EC2 became commercially available. I remember working with another client and they were waiting for some servers to arrive so they can install more capacity. And I remember just turning on the Amazon console. I didn't have anything useful to running it then but just being able to start up a virtual machine in about, I think it was less than half an hour, but that was totally fascinating back then. Here's a new Linux machine and in less than half an hour, you can use it. And it was totally crazy. Now we're getting to the point where Lambda will start up in less than 10 milliseconds or something like that. Waiting for that kind of capacity is just insane.With the video thing I'm building, because of Corona and all of this remote teaching stuff, for some reason, we ended up getting lots of teachers using the product. 
It was one of these half-baked experiments, because I didn't have time to build the full user interface for everything, and I realized that lots of people are using PowerPoint to prepare that kind of video. I thought, well, how about if I shorten that loop, so just take your PowerPoint and convert it into video. Just type up what you want in the speaker notes, and we'll use neural text-to-speech to generate audio and things like that. Teachers like it for one reason or another.

We had this influential blogger from Russia explain it on his video blog and then it got picked up by, my best guess from what I could see through Google Translate, some virtual meeting of teachers in Russia, where they recommended people try it out. I woke up the next day, the metrics went totally crazy, because a significant portion of teachers in Russia tried my tool overnight, in a short space of time. Something like that, I couldn't predict it. It's lovely, but as you said, as long as we don't go over a hundred subscribers, we're fine. If I was in a situation like that, the thing would completely crash because it's unexpected. We'd have a thing that's amazingly good for marketing that would be amazingly bad for business, because it would crash all the capacity we had. Or we'd have had to prepare for a lot more capacity than we needed. But because this is all running on Lambda, Fargate, and other auto-scaling things, it's just fine. No sweat at all. It was a lovely thing to see actually.

Jeremy: You actually have two problems there. If you're not running in the cloud or not running on-demand compute, the fact is that one, you would've potentially failed, things would've fallen over and you would've lost all those potential customers, and you wouldn't have been able to grow.

Gojko: Plus you've lost paying customers who are using your systems, who've paid you.

Jeremy: Right, that's the other thing too. But, on the other side of that problem would be you can't necessarily anticipate some of those things. What do you do? Over-provision and just hope that maybe someday you'll get whatever? That's the crazy thing where the elasticity piece of the cloud, to me, is such a no-brainer. Because I know people always talk about, well, if you have predictable workloads. Well yeah, I know we have predictable workloads for some things, but if you're a startup or you're a business that has like ... Maybe you'd pick up some press. I worked for a company where we picked up some press. We had 10,000 signups in a matter of like 30 seconds and it completely killed our backend MySQL database. Those are hard to prepare for if you're hosting your own equipment.

Gojko: Absolutely, and not only if you're hosting your own.

Jeremy: Also true, right.

Gojko: Before moving to Lambda, the app was deployed to Heroku. That was basically, you need to predict how many virtual machines you need. Yes, it's in the cloud, but if you're running on EC2 and you have your 10, 50, 100 virtual machines, whatever, running there, and all of a sudden you get a lot more traffic, will it scale or will it not scale? Have you designed it to scale like that?
And one of the best things that I think Lambda brought as a constraint was forcing people to design this stuff in a way that scales.

Jeremy: Yes.

Gojko: I can deploy stuff in the cloud and make it a distributed monolith, so it doesn't really scale well, but with Lambda, because it was so constrained when it launched, and this is one thing you mentioned, partially we're losing those constraints now, but it was so constrained when it launched, it was really forcing people to design things that were easy to scale. We had total isolation, there was no way of sharing things, there was no session stickiness and things like that. And then you have to come up with actually good ways of resolving that.

I think one of the most challenging things about serverless is that even a Hello World is a distributed transaction processing system, and people don't get that. They think about, well, I had this DigitalOcean five-dollar-a-month server and it was running my, you know, Rails app correctly. I'm just going to use the same ideas to redesign it in Lambda. Yes, you can, but then you're not going to really get the benefits of all of this other stuff. And if you design it as a massively distributed transaction processing system from the start, then yes, it scales like crazy. And it scales up and down and it's lovely. But as Lambda's maturing ... I have this slide deck that I've been using since 2016 to talk about Lambda at conferences. And every time I need to do another talk, I pull it out and adjust it a bit. And I have this whole Git history of it, because I do markdown to slides and I keep the markdown in Git so I can go back. There's this slide about limitations where originally it's only ... I don't remember what the time limitation was, but something very short.

Jeremy: Five minutes originally.

Gojko: Yeah, something like that, and then it was no PCI compliance, and the retries are difficult, and all of this stuff basically became solved. And one of the last things that was there was: don't even try to put it in a VPC, definitely you can, but it's going to take 10 minutes to start. Now that's reasonably okay as well. One thing that I remember as a really important design constraint was that effectively it was a share-nothing platform, because you could not share data between two Lambdas running at the same time very easily in the same VM. Now that we can connect Lambdas to EFS, you effectively can do that as well. You can have two Lambdas, one writing into an EFS, the other reading the same EFS at the same time. No problem at all. You can pump it into a file and the other thing can just read that file and get the data out.

As the platform is maturing, I think we're losing some of these design constraints, and sometimes constraints breed creativity. And yes, you still of course can design the system to be good, but it's going to be interesting to see. And this 15-minute limit that we have in Lambda now is just an artificial number that somebody thought up.

Jeremy: Yeah, it's arbitrary.

Gojko: And at some point, when somebody who is important enough asks AWS to give them half-hour Lambdas, they will get that. Or 24-hour Lambdas. It's going to be interesting to see if Lambda ends up as just another way of running EC2 and starting EC2 that's simpler because you don't have to manage the operating system.
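To make the Lambda-plus-EFS sharing above concrete, here is a minimal sketch, not Gojko's actual code: it assumes two Lambda functions that are both configured with the same EFS access point mounted at /mnt/shared, and the path, file name, and handler names are purely illustrative.

```python
# Minimal sketch: two Lambda functions sharing data through EFS.
# Assumes both functions mount the same EFS access point at /mnt/shared;
# all names and paths here are illustrative, not a real setup.
import json
import os

SHARED_DIR = "/mnt/shared"

def writer_handler(event, context):
    """First Lambda: pump the incoming payload into a file on EFS."""
    path = os.path.join(SHARED_DIR, f"{event['jobId']}.json")
    with open(path, "w") as f:
        json.dump(event["payload"], f)
    return {"written": path}

def reader_handler(event, context):
    """Second Lambda: read the same file back out of EFS."""
    path = os.path.join(SHARED_DIR, f"{event['jobId']}.json")
    with open(path) as f:
        return json.load(f)
```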
And I think the big difference we'll get between EC2 and Lambda is what percentage of ops your developers are responsible for, and what percentage of ops Amazon's developers are responsible for. Because if you look at all these different offerings that Amazon has, like Lightsail and EC2 and Fargate and AWS Batch and CodeDeploy, and I don't know how many other things you can run code on in AWS, the big difference with Lambda really, at least until very recently, was that apart from your application, Amazon is responsible for everything. But now we're losing design constraints, you can put a Docker container in, you can be responsible for the OS image as well, which is, again, a bit interesting to look at.

Jeremy: Well, I also wonder too, if you took all those event sources that you can point at Lambda and you add those to Fargate, what's the difference? It seems like they're just merging into two very similar products.

Gojko: For the video build platform, the last step runs in Fargate, because people are uploading things that are massive, massive, massive for video processing, and they just don't finish in 15 minutes. I have to run that in Fargate, and the big difference is that the container I packaged up for Fargate takes about 40 seconds to actually deploy for a new event at the moment. I can optimize that, but I can't optimize it too much. Fargate is still on the order of magnitude of tens of seconds to start processing an event. I think as Fargate gets faster and as Lambda gets more of these capabilities, it's going to be very difficult to tell them apart, I think.

With Fargate, you're intended to manage the container image yourself. You're responsible for patching software, you're responsible for patching OS vulnerabilities and things like that. With Lambda, unless you use a container image, Amazon is responsible for that. They come close. When looking at this video building for the first time, I was actually comparing options. I was considering using CodeBuild for that, because CodeBuild is also a way to run things on demand in containers, and you actually can get quite decent machines with CodeBuild. And it's also event-driven, and Fargate is event-driven, AWS Batch is event-driven, and all of these things are converging towards each other. And really, AWS is famous for having 10 products that do the same thing effectively and you can't tell them apart, and maybe that's where we'll end up.
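As a rough illustration of the Lambda-to-Fargate handoff Gojko describes for jobs that won't finish within Lambda's 15-minute limit, here is a minimal sketch of a dispatcher Lambda starting a Fargate task; the cluster, task definition, container name, subnets, and security group are placeholders, not his real configuration.

```python
# Sketch: a dispatcher Lambda hands long-running video jobs to Fargate.
# Cluster, task definition, container name, subnets and security group
# are hypothetical placeholders.
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    response = ecs.run_task(
        cluster="video-processing",               # hypothetical cluster name
        launchType="FARGATE",
        taskDefinition="render-video:1",          # hypothetical task definition
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],      # placeholder
                "securityGroups": ["sg-0123456789abcdef0"],   # placeholder
                "assignPublicIp": "ENABLED",
            }
        },
        overrides={
            "containerOverrides": [{
                "name": "renderer",               # container name in the task definition
                "environment": [
                    {"name": "SOURCE_KEY", "value": event["sourceKey"]},
                ],
            }]
        },
    )
    # The task itself can take minutes or hours; this Lambda only dispatches it.
    return {"taskArn": response["tasks"][0]["taskArn"]}
```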
Jeremy: And I'm wondering too, the thing that was great about Lambda, at least for me, like you said, was the shared-nothing architecture, where you almost didn't have to rely on anything other than the event that came in and the processing of that Lambda function. And if you designed your systems well, you may have some bottleneck up front, but especially if you used distributed transactions and you used async invocations of downstream functions, where you could basically take some data that you needed to pass into it, then you wouldn't necessarily need that to communicate with anything other than itself to process that data. The scale there was massive. You could just keep scaling and scaling and scaling. As you add things like EFS, and that adds constraints in terms of the number of transactions and connections that it can make and all those sorts of things, do these things become less reliable? By allowing it to do more, are we building systems that are less reliable because we're not using some of those tried-and-true constraints that were there?

Gojko: Possibly, but every time you add a new moving part, you create one more potential point of failure there. And I think, for me, one of the big lessons when I was working on ... I spent a few years working on very high throughput transaction processing systems. That's why this whole thing rings a bell a lot. A lot of it really was how do you figure out what type of messages you send and where you send them. The craze of these messaging and distributed transaction processing systems in the early 2000s created the whole craze of enterprise service buses that came later. We now have this ... What is it called? It's not called an enterprise service bus, it's called EventBridge, or something like that.

Jeremy: EventBridge, yes.

Gojko: That's effectively an enterprise service bus, it's just that the enterprise is the Amazon cloud. The big challenge in designing things like that is decoupling. And it's realizing that when you have a complicated system like that, stuff is going to fail. And especially when we were operating around hardware, stuff is going to fail badly or occasionally, and you need to not bring the whole house down when some storage starts working a bit slower. You create circuit breakers, you create layers and layers of stuff that disconnect things. I remember when we were looking originally at Lambdas and trying to get our heads around that and experimenting, should one Lambda call another? Or should one Lambda not call another? And things like that.

I realized, let's say for now, until we realize we want to do something else, a Lambda should only ever talk to SNS and nothing else. Or SQS, or something like that. When one Lambda completes, it's going to drop a message somewhere, and we need to design these messages to be good so that we can decouple different parts of the process. And so far, that helps as a constraint too. I think very, very few times we have one Lambda calling another. Mostly it's when we actually need a synchronous response back, or when, for security reasons, we wanted to isolate something to a single Lambda, but that's effectively just black-box security isolation. So creating these isolation layers through messages, through queues, through topics, becomes a fundamental part of designing these systems.

I remember speaking at a conference to somebody, I forget the name of the person, who was talking about airlines. And he was presenting after me and he said, "Look, I can relate to a lot of what you said." And apparently, and I'm not an airline programmer, he told me that in the airline community they talk about designing the protocol being the biggest challenge. Once you design the protocol between your components, the messages, who sends what where, you can recover from almost any other design flaw, because it's decoupled. So if you've made a mess in one Lambda, you can redesign that Lambda, throw it away, rewrite it, decouple things a different way. If the global protocol is good, you get all the flexibility. If you mess up the protocol for communication, then nothing's going to save you in the end.

Now we have EFS and Lambda can talk to an EFS. Should this Lambda talk directly to an EFS, or should this Lambda just send some messages to a topic, and then some other Lambdas that are maybe reserved, maybe more constrained, talk to EFS?
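A minimal sketch of the constraint Gojko just described, that a Lambda only ever talks to SNS or SQS when it finishes its step; the topic ARN, environment variable, and message shape are assumptions for illustration, not MindMup's actual protocol.

```python
# Sketch of the decoupling constraint: when this Lambda finishes its step,
# it drops a message on a topic instead of invoking the next Lambda directly.
# The topic ARN, environment variable and message fields are hypothetical.
import json
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["NEXT_STEP_TOPIC_ARN"]  # e.g. arn:aws:sns:...:step-completed

def handler(event, context):
    result = do_my_one_job(event)

    # The shape of this message is the "protocol": downstream Lambdas
    # subscribe to the topic and are never called directly.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps(result),
        MessageAttributes={
            "eventType": {"DataType": "String", "StringValue": "step-completed"},
        },
    )
    return {"status": "published"}

def do_my_one_job(event):
    # Placeholder for the single piece of business logic this Lambda owns.
    return {"mapId": event.get("mapId"), "processed": True}
```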
And again, the platform's evolved quite a lot over the last few years. One thing that is particularly useful in that regard is the SQS FIFO queues that came out last year, I think. With Corona ...

Jeremy: Yeah, whenever it was.

Gojko: Yeah, I don't remember if it was last year or two years ago. But one of the things it allows us to do is really run lots and lots of Lambdas in parallel where you can guarantee that no two Lambdas access the same kind of business entity that you have at the same time. For example, for this mind mapping thing, we have lots and lots of people modifying lots and lots of files in parallel, but we need to aggregate a single map. If we have 50 people over here working with a single map and 60 people over there working on a different map, the aggregation can run in parallel, but I never, ever want the aggregation for two people modifying the same map to run in parallel.

And for Lambda, that was a massive challenge. You had to put Kinesis between Lambda and other Lambdas and things like that. Kinesis is provisioned capacity, it costs a lot, it doesn't auto-scale. But now with SQS FIFO queues, you can just send a message and you can say the FIFO message group ID is this map ID that we have. Which means that SQS can run thousands of Lambdas in parallel, but it'll never run more than one Lambda for the same map ID at the same time. Designing your protocols like that becomes how you decouple one end of your app that's massively scalable and massively parallel, and another end of your app where we have some reserved capacity or limits.

Like for this video thing, the original idea of that was letting me build marketing videos more easily, and I can't get rid of this accent. Unfortunately, everything I do sounds like I'm threatening to blackmail someone. I'm like a cheap Bond villain, and that's not good, but I can't do anything else. I can pay other people to do it for me and we used to do that, but then that becomes a big problem when you want to modify tiny things. We paid this lady to professionally record audio for a marketing video that we needed, and then six months later, we wanted to change one screen and now the narration is incorrect. And we paid the same woman again. Same equipment, same person, but the sound is totally different because it was two different recordings.

Jeremy: Totally different, right.

Gojko: You can't just stitch it up. Then you end up like, okay, do we go and pay for the whole thing again? And I realized that neural text-to-speech has learned so much that it can do English better than I can. You're a native English speaker so you can probably defeat those machines, but I can't.

Jeremy: I don't know if I could. They're pretty good now. It's kind of scary.

Gojko: I started looking at it like, why don't we just put stuff in Markdown and use Markdown to generate videos and things like that? With all of these things, you still get quota limits. I think we were limited on Google. Google gave us something like five requests per second in parallel, and it took me a really long time to even raise these quotas and things like that. I don't want to have lots of people requesting stuff and then in parallel trashing this other thing over there.
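Here is a minimal sketch of the SQS FIFO pattern described above, with a hypothetical queue URL: using the map ID as the message group ID lets aggregations for different maps run in parallel while two aggregations for the same map never run at the same time.

```python
# Sketch: serialise work per map with an SQS FIFO queue.
# Messages sharing a MessageGroupId (the map ID) are processed one at a
# time and in order, while different maps are processed in parallel.
# The queue URL is a hypothetical placeholder.
import hashlib
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/map-aggregation.fifo"

def enqueue_aggregation(map_id, change):
    body = json.dumps({"mapId": map_id, "change": change})
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=body,
        MessageGroupId=map_id,  # at most one in-flight batch per map
        # Content-based deduplication could be enabled on the queue instead.
        MessageDeduplicationId=hashlib.sha256(body.encode()).hexdigest(),
    )

def aggregation_handler(event, context):
    """Lambda triggered by the FIFO queue; SQS guarantees per-map ordering."""
    for record in event["Records"]:
        message = json.loads(record["body"])
        apply_change_to_map(message["mapId"], message["change"])

def apply_change_to_map(map_id, change):
    # Placeholder for the actual aggregation logic.
    pass
```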
We need to create these layers of running things within decent limits, and I think that's where designing the protocol for this distributed system becomes important.

Jeremy: I want to go back, because I think you bring up a really good point just about a different type of architecture, or the architectural design of decoupling systems and these event-driven things. You mentioned a Lambda function processes something and sends it to SQS, or sends it to SNS so it can do a fan-out pattern, or in the case of the FIFO queue, doing an ordered pattern for sequential processing, and those were all great patterns. And even things that AWS has done, such as adding things like Lambda destinations. Now if you run an asynchronous Lambda function, you still have to write some code, or you used to have to write some code, that said, "When this is finished processing, now call some other component." And there's just another opportunity for failure there. They basically said, "Well, if it succeeds, then you can actually just forward it off to one of these other services automatically and we'll handle all of the retries and all the failures and that kind of stuff."

And those things have been added in to basically give you that warm and fuzzy feeling that if an event doesn't reach where it's supposed to go, some sort of cloud trickery will kick in and make sure that it gets processed. But what that has introduced, I think, is a cognitive overload for a lot of developers that are designing these systems, because you're no longer just writing a script that does X, Y, and Z and makes a few database calls. Now you're saying, okay, I've got to write a script that can massively scale and take the transactions that I need to maybe parallelize, or that I maybe need to queue or delay or throttle or whatever, and pass those down to another subsystem. And then that subsystem has to pick those up, and maybe that has to parallelize those, or maybe there are failure modes in there, and I've got all these other things that I have to think about.

Just that effect on your average developer ... I think you and I think about these things. I would consider myself to be a cloud architect, if that's a thing. But essentially, do you see this being, I guess, a wall for a lot of developers, and something that really requires quite a bit of education to ramp them up to be able to start designing these systems?

Gojko: One of the topics we touched upon is the cyclic nature of things, and I think we're going back to where moving from apps working on a single machine to client-server architectures was a massive brain melt for a lot of people, and three-tier architectures, which came later, where we're not just client-server, ended up with their own host of problems and design problems and things like that. That's where a lot of these architectural patterns and design patterns emerged, like circuit breakers and things like that. I think there's a whole body of knowledge there for people to research. It's not something that's entirely new, and I think you can get started with Lambda quite easily and not necessarily make a mess, but make something that won't necessarily scale well, and then start improving it later.

That's why I was mentioning that earlier in the discussion where, as long as the protocol makes sense, you can salvage almost anything later. Designing that protocol is important, but then we're getting into good software design.
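For the Lambda destinations Jeremy mentions above, here is a minimal sketch of wiring on-success and on-failure destinations for asynchronous invocations; the function name and destination ARNs are placeholders, and the same configuration can be expressed in CloudFormation or SAM instead.

```python
# Sketch: configure Lambda destinations for asynchronous invocations so
# successes and failures are forwarded automatically, instead of the
# function calling the next component itself. Names and ARNs are placeholders.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_function_event_invoke_config(
    FunctionName="process-upload",  # hypothetical function name
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnSuccess": {"Destination": "arn:aws:sqs:us-east-1:123456789012:next-step"},
        "OnFailure": {"Destination": "arn:aws:sns:us-east-1:123456789012:alerts"},
    },
)
```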
I think teaching people how to do that is something that every 10 years we have to recycle and reinvent and figure out, because people don't like to read books from more than 10 years ago. All of this stuff, like designing fault-tolerant systems and fail-safe systems and things like that. There's a ton of books about that from 20 years ago, from 10 years ago. Amazon, for people listening to you and me, they probably use Amazon more for compute than they use for getting books. But Amazon has all these books. Use it for what Amazon was originally intended for, get some books there, and read through this stuff. And I think looking at the design of distributed systems and stuff like that becomes really, really critical for Lambdas.

Jeremy: Yeah, definitely. All right, we've got a few minutes left and I'd love to go back to something we were talking about a little bit earlier, and that was everything moving onto a few of these major cloud providers. And one of the things is, you've got scale. Scale is a problem when we talked about, oh, we can spin up as many VMs as we want to, and now with serverless, we have unlimited capacity really. I know we didn't say that, but I think that's the general idea. The cloud just provides this unlimited capacity.

Gojko: Until something else decides it's not unlimited.

Jeremy: And that's my point here, where with every major cloud provider that I've been involved with, and I've heard the stories of, where you start to move the needle at all, there's always an SA that reaches out to you and really wants to understand what your usage is going to be, and what your patterns are going to be. And that's because they need to make sure that where you're running your applications, they provision enough capacity, because there is not enough capacity, or there's not unlimited capacity, in the cloud.

Gojko: It's physically limited. There are only so many buildings where you can have data centers on the surface of the Earth.

Jeremy: And I guess that's where my question comes in, because you always hear these things about lock-in. Like, well, serverless, if you use Lambda, you're going to be locked in. And again, if you're using Oracle, you're locked in. Or, if you're using MySQL, you're locked in. Or, if you're using any of the other things, you're locked in.

Gojko: You're actually not locked in physically. There's a key and a lock.

Jeremy: Right, but this idea of being locked in, not to a specific cloud provider, but just locked into a cloud in general, and relying on the cloud to do that scaling for you, where do you think the limitations there are?

Gojko: I think again, going back to cyclic, cyclic, cyclic. The PC revolution started when a lot more edge compute was needed than mainframes could provide, and people wanted to get stuff done on their own devices. And I think probably, if we do ever see the limitations of this and it goes into a next cycle, my best guess is it's going to be driven by lots of tiny devices connected to a cloud. Not necessarily computers as we know computers today. I pulled out some research preparing for this from IDC. They are predicting basically going from 18.3 zettabytes of data needed for IoT in 2019 to 73.1 zettabytes by 2025. That's like times three in the space of six years. If you went to Amazon now and told them, "You need to have three times more data space in three years," I'm not sure how they would react to that.

This stuff, everything is taking more and more data, and everything is more and more connected to the cloud.
The impact of something like that going down now is becoming totally crazy. There was a case in 2017 where S3 started getting a bit more latency than usual in U.S. East 1, in I think February of that year, or something like that. There were cases where people couldn't turn the lights on in their houses, because the management software was running on S3 and depending on S3, expecting S3 to be indestructible. Last year, in November, Kinesis pretty much went offline, as far as everybody outside AWS was concerned, for about 15 hours I think. There were people on Twitter saying they couldn't get back into their house because their smart lock was no longer that smart.

And I think we are getting to places where there will be more need for compute on the edge. First of all, there's going to be a lot more demand for data centers and cloud power, and I think that's going to keep going on for the next five, ten years. But then people will realize they've hit some limitation of that, and they're going to start moving towards the edge. And we're going from mainframe back into client-server computing, I think. We're getting these products now. I assume most of your listeners have seen one, like all these fancy Ubiquiti Wi-Fi thingies that cost hundreds of dollars and look like pieces of furniture, just sitting discreetly on the wall. And there was a massive security breach published yesterday. Somebody took their AWS keys and took all the customer data and everything.

The big advantage over all the ugly routers was that it's just a thin piece of glass that sits on your wall, and it's amazing and it looks good, but the reason they could do a very thin piece of glass is that a minimal amount of software is running on that piece of glass, and the rest is running in the cloud. It's not just lock-in in terms of is it on Amazon or Google, it's that it's so tightly coupled with something totally outside of your home, where your network router needs Amazon to be alive, now, in a very specific region of Amazon where everybody's been deploying for the last 15 years, and it's running out of capacity very often. Not very often, but often enough.

There are some really interesting questions that I guess we'll answer in the next five, ten years. We're on the verge of IoT exploding, I think, because people are trying to come up with these new products that you wouldn't even have thought of before, like smart shoes and smart whatnot. Smart glasses and things like that. And when that gets into consumer technology, we're no longer going to have five or ten computing devices per person, we'll have dozens and dozens of computing devices. I guess think about it this way: fifteen years ago, how many computing devices were you carrying with you? Probably a mobile phone and a laptop. Probably not more. Now, in the headphones you have there, that's Bose ...

Jeremy: Watch.

Gojko: ... you have a microprocessor in the headphones, you have your watch, you have a ton of other stuff you're carrying with you that's low-powered, all doing a bit of processing there. A lot of that processing is probably happening on the cloud somewhere.

Jeremy: Or, it's just sending data. It's just sending, hey, here's the information. And you're right. For me, I've got my Apple Watch, my thermostat is connected to Wi-Fi and to the cloud, my wife just bought a humidifier for our living room that is connected to Wi-Fi, and I'm assuming it's sending data to the cloud. I'm not 100% sure, but the question is, I don't know why we need to keep track of the humidity in my living room.
But that's the kind of thing too where, you mentioned from a security standpoint, I have a bunch of AWS access keys on my computer that I send over the network, and I'm assuming they're secure. But if I've got another device that can access my network, and somebody hacked something on the cloud side and then they can get in, it gets really dangerous.

But you're right, the amount of data that we are now generating, and the compute that we're using in the cloud, for probably some really dumb things like the humidity in my living room, is that going to get to a point where ... You said there's going to be a limitation, like five years, ten years, whatever it is. What does the cloud do then? What does the cloud do when it can no longer keep up with the pace of these IoT devices?

Gojko: Well, if history is repeating, and we'll see if history is repeating, people will start getting throttled, and all of a sudden your unlimited supply of Lambdas will no longer be an unlimited supply of Lambdas. It will be something that you have to reserve upfront and pay upfront, and who knows, we'll see when we get there. Or we get the things that we have with power networks, like that Texas power cut that was completely severe, and you get an IT cut. I don't know. We'll see. The more we go into utility, the more we'll start seeing parallels between compute and power networks. And maybe power networks are something that you can look at and learn from. That's why I think the next cycle is probably going to be some equivalent of client-server computing reemerging.

Jeremy: Yeah. All right, well, I've got one more question for you, and this is just something where it may be a little bit of a tongue-in-cheek question. Because we talked about it a little bit ... we talked about the merging of Lambda and Fargate and some of these other things. But just from your perspective, serverless five years from now, where do you see that going? Do you see that just becoming the main ... This idea of utility computing, on-demand computing without setting up servers and managing ops and some of these other things, do you see that as the future of serverless, and it just becoming the way we build applications? Or do you think that it's got a different path?

Gojko: There was a tweet by Simon Wardley. You mentioned Simon Wardley earlier in the talk. There was a tweet a few days ago where he mentioned some data. I'm not sure where he pulled it from. This might be unverified, but generally Simon knows what he's talking about. Amazon itself is deploying roughly 50% of all new apps they're building on serverless. I think five years from now, that way of running stuff, I'm not sure if it's Lambda or some new service that Amazon starts and gives some even more confusing name, that runs in parallel to everything. But that kind of stuff, where the operator takes care of all the ops, which they really should be doing, is going to be the default way of getting utility compute out.

I think a lot of these other things will probably remain useful for specialist use cases where you can't really deploy it in that way, or you need more stability, or it's not transient, and things like that. My best guess is, first of all, we'll get Lambdas that run for longer, and I assume that after we get Lambdas that run for longer, we'll probably get some ways of controlling routing to Lambdas, because you can already set up pre-provisioned Lambdas and hot Lambdas and reserved capacity and things like that.
When you have reserved capacity and you have longer-running Lambdas, the next logical thing there is to have session stickiness, and routing, and things like that. And I think we'll get a lot of the stuff that was really complicated to do earlier, where you had to run EC2 instances or complicated networks of services, and you'll be able to do it in Lambda.

And with Lambda, I wouldn't be surprised if they launch a totally new service with some AWS Cloud Socket name, whatever. Something that is an implementation of the same principle, just in a different way, that becomes the default way we run compute for lots of people. And I think GPUs are still a bit limited. I don't think you can run GPUs as a utility anywhere now, and that's limiting for a whole host of use cases. And I think again, it's not like they don't have the technology to do it, it's just that they probably didn't get around to doing it yet. But I assume in five years' time, you'll be able to do GPUs on demand, and GPU processing, and things like that. I think that the buzzword itself will lose really any special meaning and that's going to just be a way of running stuff.

Jeremy: Yeah, absolutely. Totally agree. Well, listen Gojko, thank you so much for spending the time chatting with me. Always great to talk with you.

Gojko: You, too.

Jeremy: If people want to get in touch with you, find out more about what you're doing, how do they do that?

Gojko: Well, I'm very easy to find online because there's not a lot of people called Gojko. Type Gojko into Google, you'll find me. And gojko.net works, gojko.com works, gojko.org works, and all these other things. I was lucky enough to get all those domains.

Jeremy: That's G-O-J-K-O ...

Gojko: Yes, G-O-J-K-O.

Jeremy: ... for people who need the spelling.

Gojko: Excellent. Well, thanks very much for having me, this was a blast.

Jeremy: All right, yeah. And make sure you check out ... You mentioned Narakeet. It's a speech thing?

Gojko: Yeah, for developers that want to build videos without hassle, and want to put videos in continuous integration, and things like that. Narakeet, that's like parakeet with an N for narration. Check that out and thanks for plugging it.

Jeremy: Awesome. And then, check out MindMup as well. Awesome stuff. I've got all the stuff in the show notes. Thanks again, Gojko.

Gojko: Thank you. Bye-bye.

Roopu Cloud's Podcast
Yan Cui: 10 questions about cloud

Roopu Cloud's Podcast

Play Episode Listen Later Mar 28, 2021 36:10


In this episode, Yan Cui answers 10 questions about cloud. Yan is an AWS Serverless Hero and the author of Production-Ready Serverless. He helps organizations go faster and deliver more with less. Imagine if your feature velocity goes from months to days, and your systems become more scalable, more secure, more resilient, AND cheaper to run!

MEET YAN CUI
➡️ Twitter: https://twitter.com/theburningmonk
➡️ LinkedIn: https://www.linkedin.com/in/theburningmonk/
➡️ Youtube: https://www.youtube.com/channel/UCd2PaRjI5iAGgeld3lCFPNg
➡️ Podcast: https://realworldserverless.com/
➡️ Github: https://github.com/theburningmonk
➡️ Website: https://theburningmonk.com/

MEET PABLO PUIG
➡️ LinkedIn: https://www.linkedin.com/in/pablo-puig-433295171/

PODCAST "10 QUESTIONS ABOUT CLOUD"
➡️ Web: https://roopu.cloud/podcast
➡️ Spotify: https://open.spotify.com/show/4kH3z7x0Eydh1lBlvncLyQ
➡️ Apple Podcasts: https://podcasts.apple.com/us/podcast/roopu-clouds-podcast/id1539635929

MEET ROOPU CLOUD
➡️ https://roopu.cloud

#cloud #cloudcomputing #podcast #roopucloud