Today, you'll learn about an artificial reef that could save the shore from storms, how simply owning a pair of glasses can make you earn more income, and how air conditioners could help CSI detectives solve crimes.

Artificial Reef
“Artificial reef designed by MIT engineers could protect marine life, reduce storm damage.” by Jennifer Chu. 2024.
“Coastal Protection.” Coral Reef Alliance. 2024.
“Architected materials for artificial reefs to increase storm energy dissipation.” by Edvard Ronglan, et al. 2024.

Glasses & Income
“Having the right glasses could boost earning power by a third, Bangladesh study shows.” by Sarah Johnson. 2024.
“The effect on income of providing near vision correction to workers in Bangladesh: The THRIVE (Tradespeople and Hand-workers Rural Initiative for a Vision-enhanced Economy) randomized controlled trial.” by Farzana Sehrin, et al. 2024.
“Presbyopia.” Mayo Clinic. 2021.
“The Global Burden of Potential Productivity Loss from Uncorrected Presbyopia.” by Kevin D. Frick, et al. 2015.

AC DNA
“Cold case: DNA in airconditioners to place suspects at the scene of a crime.” by Ben Coxworth. 2024.
“Up in the air: Presence and collection of DNA from air and air conditioner units.” by Mariya Goray, et al. 2024.

Follow Curiosity Daily on your favorite podcast app to get smarter with Calli and Nate — for free! Still curious? Get exclusive science shows, nature documentaries, and more real-life entertainment on discovery+! Go to https://discoveryplus.com/curiosity to start your 7-day free trial. discovery+ is currently only available for US subscribers. Hosted on Acast. See acast.com/privacy for more information.
In episode 187 of our SAP on Azure video podcast we talk about the Well-Architected Framework for SAP. The framework guides customers through a set of questions to track the status of their SAP on Azure implementation. Find all the links mentioned here: https://www.saponazurepodcast.de/episode187

Reach out to us for any feedback / questions:
* Robert Boban: https://www.linkedin.com/in/rboban/
* Goran Condric: https://www.linkedin.com/in/gorancondric/
* Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/

#Microsoft #SAP #Azure #SAPonAzure

## Summary created by AI

* Azure Well-Architected Framework for SAP: Jitendra and Hemanth introduced the tool and its benefits for assessing and improving SAP workloads on Azure, based on five pillars: cost, performance, reliability, security, and operational excellence.
* Tool overview: The tool is a self-service design framework that helps customers improve the quality of their SAP workloads on Azure by following the best practices and recommendations from Microsoft and SAP.
* Tool benefits: The tool helps customers identify risks, priorities, and opportunities for improvement in their SAP workloads on Azure, as well as align with the latest innovations and features available on the platform.
* Tool usage: The tool is available on the assessment platform and allows customers to select their SAP workload and the pillars they want to focus on. It remembers previous selections and shows progress and changes over time, provides a bar chart with the scores for each pillar, and lists recommendations for improvement.
* Assessment platform: The assessment platform is a web-based portal that hosts various tools and frameworks for customers to evaluate and optimize their Azure workloads. Customers can log in to the platform and access the Azure Well-Architected Framework for SAP tool.
* Assessment process: The assessment consists of answering a set of questions related to the five pillars of the framework: cost, performance, reliability, security, and operational excellence. The questions are based on the latest best practices and recommendations from Microsoft and SAP. The tool provides a score for each pillar and a list of recommendations for improvement. Customers can also import their subscription data to get additional insights from Azure Advisor, and can create multiple assessments and compare the results over time.
* Best practices and recommendations: The tool provides questions and explanations based on the latest best practices and recommendations from Microsoft and SAP. It also identifies risks and priorities for improvement and gives feedback and comments.
* Engagement with the CSU team: The tool can be used self-service or with the help of the CSU team, who can provide more detailed guidance and follow-up sessions. The tool also supports the CSU team's value-based delivery approach.
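The "score per pillar, compared over time" idea described above can be sketched in a few lines. The pillar names come from the episode; the scores and the data layout are invented for illustration and are not the assessment tool's actual data model:

```python
pillars = ["cost", "performance", "reliability", "security",
           "operational excellence"]

# Hypothetical scores (0-100) from two assessment runs of the same workload.
first_run = {"cost": 55, "performance": 70, "reliability": 60,
             "security": 45, "operational excellence": 50}
second_run = {"cost": 65, "performance": 70, "reliability": 75,
              "security": 60, "operational excellence": 55}

# Change per pillar between runs, plus the weakest area to focus on next.
changes = {p: second_run[p] - first_run[p] for p in pillars}
weakest = min(second_run, key=second_run.get)

for p in pillars:
    print(f"{p}: {first_run[p]} -> {second_run[p]} ({changes[p]:+d})")
print("Focus next on:", weakest)
```

This mirrors what the tool's bar chart and progress view give you: a per-pillar delta and an obvious candidate for the next round of recommendations.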
Topics covered in this episode:
* 6 ways to improve the architecture of your Python project (using import-linter)
* Mountaineer
* Why Python's Integer Division Floors
* Hatchet
* Extras
* Joke

Watch on YouTube

About the show
Sponsored by ScoutAPM: pythonbytes.fm/scout
Connect with the hosts:
* Michael: @mkennedy@fosstodon.org
* Brian: @brianokken@fosstodon.org
* Show: @pythonbytes@fosstodon.org
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.

Brian #1: 6 ways to improve the architecture of your Python project (using import-linter), by Piglei
* Use import-linter to define architectural layers and check that imports don't violate them (no importing from upper layers).
* It can also check other contracts, such as:
  * forbidden - disallow a specific from/to import
  * independence - a list of modules that shouldn't import from each other
* Fixing violations: the article introduces a process of setting exceptions for each violation in a config file, then fixing violations one at a time (a nice approach); use the whole team if you can.
* Common methods for fixing dependency issues:
  * Merging and splitting modules
  * Dependency injection, including using protocols to keep type hints without needing imports just for types
  * Using simpler dependency types
  * Delaying function implementations: module-global methods set by the caller, or adding a simple plugin/callback system
  * Configuration-driven: setting import statements in a config file and using import_string() at runtime
  * Replacing function calls with event-driven approaches

Michael #2: Mountaineer
* Mountaineer is a batteries-included web framework for Python and React.
* Mountaineer focuses on developer productivity above all else, with production speed a close second.
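The import-linter contracts mentioned in Brian's first topic are declared in an INI-style config file. A minimal sketch, assuming a hypothetical `myproject` package with `api`, `services`, and `storage` layers (all module names here are invented for illustration):

```ini
[importlinter]
root_package = myproject

# Layers contract: api may import services and storage, services may
# import storage; imports in the other direction are violations.
[importlinter:contract:layers]
name = Layered architecture
type = layers
layers =
    myproject.api
    myproject.services
    myproject.storage

# Forbidden contract: disallow a specific from/to import.
[importlinter:contract:no-storage-to-api]
name = Storage must not import the API layer
type = forbidden
source_modules = myproject.storage
forbidden_modules = myproject.api

# Independence contract: these modules shouldn't import each other.
[importlinter:contract:independent-plugins]
name = Plugins are independent
type = independence
modules =
    myproject.plugins.alpha
    myproject.plugins.beta
```

Running the `lint-imports` command then fails on any contract violation; the article's "set exceptions for each violation in a config file" step corresponds roughly to per-contract `ignore_imports` entries that the team whittles down one fix at a time.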
The Azure Well-Architected Framework (WAF) recently got a big rewrite. We take Operational Excellence, a pillar from WAF, for a deeper look with Bastian Ulke from Microsoft. What is it? How should you utilize this pillar? Bastian also talks about the recommendations and patterns within Operational Excellence. Also, Tobi asks Bastian an unexpected question.
(00:00) - Intro and catching up.
(02:41) - Show content starts.
Show links
- WAF: OpEx
- Give us feedback!
Sharing Your Knowledge to Benefit the Next Generation of Entrepreneur Architects (Rchitected)

Join us as we delve into the journey of Daphne Romani, a seasoned architect with over 25 years of experience. From her roots in Italy to establishing her own practice, Daphne shares insights into her evolution within the field and her newfound focus on nurturing the next wave of architectural talent.

Discover the challenges young architects encounter as they transition from academia to the professional realm, and the crucial role mentorship plays in bridging this gap. Daphne introduces us to her innovative platform, Rchitected, designed to equip budding architects with the essential skills and knowledge needed to thrive in the industry.

Tune in as Daphne outlines her vision for expanding Rchitected to support staff growth and provide invaluable practice management guidance. With a firm belief in laying a robust foundation for future success, Daphne's passion for empowering emerging architects shines through as she shares her wealth of experience and insights. Join us for an enlightening discussion on the importance of mentorship, skill development, and creating pathways for the next generation of entrepreneurial architects.

This week at EntreArchitect Podcast, Sharing Your Knowledge to Benefit the Next Generation of Entrepreneur Architects (Rchitected) with Daphne Romani.

Connect with Daphne online at Rchitected, or follow her on LinkedIn.

Please visit Our Platform Sponsors

Go to https://betterhelp.com/architect for 10% off your first month of therapy with BetterHelp and get matched with a therapist who will listen and help. Thank you to our sponsor BetterHelp for supporting our community of small firm entrepreneur architects.

ARCAT.com is much more than a product catalog, with CAD, BIM, and specifications created in collaboration with manufacturers. ARCAT.com also offers LEED data, continuing education resources, newsletters, and the Detailed podcast. Visit https://ARCAT.com to learn more.

Visit our Platform Sponsors today and thank them for supporting YOU... The EntreArchitect Community of small firm architects.
The Azure Well-Architected Framework (WAF) was recently updated. To learn more about WAF in general and about what's new in the updated version, we enlisted the help of Robert Folkesson from Active Solutions.
https://aka.ms/waf
https://aka.ms/caf
Hosted on Acast. See acast.com/privacy for more information.
We revisit the Azure Well-Architected Framework with Dom Allen from Microsoft. What has changed, and what's interesting now in the WAF refresh? We take a look at what's new and how to best utilize WAF. Also, Tobi asks Dom an unexpected question.
(00:00) - Intro and catching up.
(03:32) - Show content starts.
Show links
- WAF
- Azure documentation | Microsoft Learn
- YouTube: Unsolved Tetris Mysteries With Creator Alexey Pajitnov & Designer Henk Rogers
- Give us feedback!
In this livestream, I talked to Ryan Saunders - Manager of Security Operations at SpyCloud, about how he used the Cribl Reference Architecture to build a scalable deployment. He explained how this approach enabled SpyCloud to grow alongside its evolving needs without requiring significant rework. The reference architecture also facilitated a repeatable data-onboarding process, reducing administrative time and allowing the team to focus on critical security and data analysis tasks.

SpyCloud is a cloud-native organization that generates enormous amounts of data — from hosted email and EDR, sales solutions, and the rest of their sprawling cloud architecture. Before implementing Cribl Stream, they had too many sources and too little time to figure out how to integrate all of them.

Saving Valuable Engineering Time

Traditional on-prem environments can have many sources, but they generally come from a single area that makes it possible to capture them with a single set of agents. Because of their sprawling cloud architecture, Ryan and his team didn't have that luxury. During our conversation, Ryan pointed out that engineers come to work at SpyCloud to work in security, not to become a data butler. They don't necessarily know how to architect large data pipelines — they just pull the data in and go to work on it. To that end, the first problem they solved with Cribl Stream was streamlining the process of bringing sources into their detection analytics platform. Data now flows in natively from a source like AWS instead of via a TA or other inefficient, incomplete method.

Flexibility in Scaling Security Architecture

SpyCloud can't afford to have data held up in processing — once all their data comes in, it needs to be processed immediately so their security detections fire in real-time. Cribl's Reference Architecture played a very important role in onboarding their sources and getting things to operate seamlessly.
There are times when Ryan and his team get little to no advance notice of a new product or customer, so there may not be much time to add to their logging pipeline. Without Cribl Stream, planning and execution could take weeks or months. But the right tools and a properly designed architecture allow them to scale up in minutes, if not automatically.

Splitting Up Worker Groups

SpyCloud separates worker groups based on data volume and workflow, and as a way to mitigate risk. Instead of having one large worker group, they have a separate one facing the internet with open ports, so they're able to fail small and manage their blast radius. It's good practice to split up your worker groups not only by load, but also by connection type and according to your security needs. When I asked Ryan if he was concerned about the management overhead of having a bunch of worker groups, he compared the experience to his days as a Splunk admin: setting up different indexer clusters was a nightmare because maintenance effort scaled linearly with the number of clusters. With worker groups, there's one interface to manage everything. Ryan can copy settings by cloning a worker group, or add and remove pipelines from different worker groups, all from one interface. He sums it up quite nicely:

“The biggest win for us with Cribl Stream is that we can upgrade everything from one single pane of glass. I don't have to go out and plan a 12-hour overnight weekend upgrade of my indexers. I just click upgrade in that worker group, and it happens.” - Ryan Saunders, Manager of Security Operations at SpyCloud

Taking Advantage of Cribl Edge

Ryan and the team at SpyCloud also have Cribl Edge deployed as a log collection agent on all their servers. They have a dozen Edge fleets collecting data that's sent back to Cribl Stream for processing. Managing fleets in Cribl Edge is just as easy as managing worker groups in Cribl Stream.
They have the flexibility to control separate configurations for Windows, Linux, production tests, and other products within the same interface. SpyCloud also uses Cribl Edge to consolidate logging agents within the organization because it's easier for them to have one agent that multiple teams can control. His team sends the data they need for security to their own tools, and their DevOps teams can extract the operations data they need as well. Everyone can control and manage their data however they see fit, so it's a win for everybody.

Best Practices for a Scalable Cribl Stream Deployment

Ryan has many years of experience using Cribl's tools within different organizations and environments, so he has learned some very valuable lessons along the way. His first deployment involved trying to run Kubernetes in a large environment with one giant worker group — so he quickly learned about the importance of splitting them up. You want to be able to do this easily, especially in highly regulated environments. Multinational organizations may not be able to commingle data or send it across national borders. Companies processing healthcare data have strict requirements for handling PII. Even if you don't fall into either of these categories today, business growth or regulatory requirements might change that, so you'll need to be able to adjust quickly to split certain data out.

Taking advantage of auto-scaling has also proven beneficial for Ryan, and everyone can take advantage of it — just don't forget to create limits. You want to avoid scaling up until an AWS region explodes, so you don't wake up one night and find 1000 Kubernetes nodes running because something went sideways. Explaining that bill won't be much fun the next day.

Watch the full livestream to see more on how SpyCloud uses Cribl Stream and Cribl Edge to streamline the onboarding process and get more visibility and insights from their business data.
You'll also learn how to use the Cribl Reference Architectures as a starting point for a scalable deployment so you can reduce administrative time and free up your team to focus on critical security and data analysis tasks.

More Videos in our Cribl Reference Architecture Series
* Introduction to the Cribl Stream Reference Architecture
* How the All in One Worker Group Fits Into the Cribl Stream Reference Architecture
* Scaling Syslog
* Scaling Effectively for a High Volume of Agents
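The "auto-scaling with limits" advice from the best-practices section maps directly to setting an explicit replica ceiling on the autoscaler. A minimal Kubernetes sketch, assuming a hypothetical worker-group Deployment named `cribl-worker-group-a` (the name and numbers are illustrative, not SpyCloud's actual configuration):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: cribl-worker-group-a   # hypothetical worker-group deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cribl-worker-group-a
  minReplicas: 3
  maxReplicas: 30              # the "limit": caps runaway scaling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

With `maxReplicas` set, a misbehaving source can still saturate the group, but it can't silently spin up a thousand nodes overnight; you trade a processing backlog for a bounded bill.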
Continuing our series on the Well-Architected Framework, in this episode our hosts Thiago Couto and Raissa Nejain invite Gaston Perez, Solutions Architect at AWS, for a chat about the Cost Optimization pillar of the Well-Architected Framework (WAF).
Continuing our series on the Well-Architected Framework, in this episode our hosts Nikolas (Marreta), Maria Ane (Mariane), and Fábio Balancin invite Tiago Simão and Ibrahim Cesar, Solutions Architects at AWS, for a chat about the Sustainability pillar of the Well-Architected Framework (WAF).
Continuing our series on the Well-Architected Framework, in this episode our hosts Bia Mota and Leo Ciccone invite Peterson Larentis, Specialist Solutions Architect for Serverless, for a chat about the Performance Efficiency pillar.
In this episode, Fernando Hönig, AWS Hero and founder of StackZone, and Bart Farrell, ambassador for the CNCF and the SODA Foundation, guide us through the fascinating world of the AWS Well-Architected Framework. This is episode 17 of the fourth season of the AWS Charlas Técnicas podcast.
In this episode we take a closer look at the Operational Excellence pillar, with an overview of its design principles, best practices, and questions. Dive into the topic with our hosts Nikolas and Couto, together with our guest Bastos, and learn a bit more about how to apply these concepts in your project! Enjoyed it? Like and share! And don't forget to comment on Twitter with the hashtag podcastawsbrasil.
The Well-Architected Framework was created by AWS to define the pillars that ensure a good infrastructure and software architecture.
In this episode we are joined by the two distinguished new members of the AWS Podcast Brasil team for a very rich conversation about the Well-Architected Framework, in which we share experiences of using this framework in the day-to-day scenarios of our customers.
In this episode, Hawn sits down with Ilana Morris and Samir Kopal from the Well-Architected team to discuss a brief history of AWS Well-Architected. They dive into how to use the AWS Well-Architected Tool as a mechanism and the latest Well-Architected features including Consolidated Report, the integration with Service Catalog AppRegistry, and Profiles.
AWS Well-Architected Tool website: https://go.aws/3OQZJy3
AWS Well-Architected Tool Documentation: https://go.aws/3s61eRd
AWS Well-Architected Labs: https://bit.ly/3Ovrqw6
AWS Well-Architected Partner Program: https://go.aws/47mLrO1
Akash Mahendra is Director of the Haven1 Foundation where he leads strategy, operations, and risk management efforts in support of Haven1 — an EVM-compatible L1 blockchain purpose-built to provide a secure environment for on-chain finance. He is also a Portfolio Manager at the digital wealth platform Yield App. Mahendra started his career as a Legal Enforcement Officer at The Australian Securities and Investments Commission, before diving into Web3 full-time. Prior to joining Haven1, Akash served as the Chief Investment Officer at the Web3 investment firm DAO Capital, and the Head of Operations and Strategy at Steady State, an automated DeFi insurance company, where he honed his expertise in blockchain tech and financial portfolio management. About Haven1 Haven1 is an EVM-compatible layer 1 blockchain designed to offer a secure, trusted, compliant environment to drive the mass adoption of on-chain finance. Architected by the innovators behind the digital wealth platform Yield App, Haven1 incorporates a provable identity framework and robust security guardrails at the network level, to provide retail, professional, and institutional investors alike with an on-chain finance platform free from the challenges and risks that plague the DeFi ecosystem. To learn more about Haven1, visit https://www.haven1.org/ About Yield App Yield App is a digital wealth platform that offers safe custody of digital assets, or allows customers to exchange and earn on their assets in return for market-leading rates. Its mission is to safely unlock the full potential of digital assets, combine them with the most rewarding opportunities available across financial markets and make these available to the masses. Since its public launch in February 2021, Yield App has grown to 90,000+ customers, including 1,000+ high-net-worth clients, who have entrusted Yield App with more than $550 million of their digital wealth. $550MM+ in managed assets, $250MM+ deployed into DeFi, and 90k+ active users. 
To learn more about Yield App, visit yield.app --- Support this podcast: https://podcasters.spotify.com/pod/show/crypto-hipster-podcast/support
Welcome back to another episode of the Jon Myer Podcast! Today, we're joined by a very special guest, Tobey Amy, who is an expert in co-selling with AWS and the Well-Architected Review. Tobey is a Director of Partner Development, and he works closely with partners to build successful cloud practices and drive joint go-to-market initiatives. In this episode, we'll be diving deep into the world of co-selling with AWS and what it means to become well-architected. We'll explore the benefits of the AWS Well-Architected Review, and why it's becoming increasingly important for businesses to undergo this review in order to optimize their AWS environment. We'll also discuss the incentives for partners and customers to co-sell with AWS, as well as the benefits and programs available to those who do. With AWS being a leader in the cloud computing industry, it's no wonder why so many businesses are looking to partner with them to drive growth and success. So, whether you're a current partner with AWS or looking to become one, this episode is a must-listen. Join us as we learn from Tobey Amy on how to become well-architected at co-selling with AWS, and take your business to the next level.
Today on the Salesforce Admins Podcast, we talk to Melissa Shepard, Salesforce Certified Architect and CEO of Lizztech Consulting. Join us as we chat about how to approach automations in your org and why asynchronous processes are so powerful. You should subscribe for the full episode, but here are a few takeaways from our conversation […] The post Melissa Shepard on Well-Architected Automation appeared first on Salesforce Admins.
Today on the Salesforce Admins Podcast, we talk to Tom Leddy, Principal Evangelist, Architect Relations at Salesforce. Join us as we chat about how admins can build well-architected automation solutions. You should subscribe for the full episode, but here are a few takeaways from our conversation with Tom Leddy. Salesforce Well-Architected Tom is here to […] The post Well-Architected Automation with Tom Leddy appeared first on Salesforce Admins.
In this episode you'll learn about the AWS Well-Architected Framework and how to prepare for an AWS Well-Architected review. The AWS Well-Architected review helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for a variety of applications and workloads. It is built around six pillars: operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability.
AWS Well-Architected Tool: https://aws.amazon.com/well-architected-tool/
AWS Well-Architected Framework: https://docs.aws.amazon.com/wellarchitected/latest/framework/wellarchitected-framework.pdf
Enjoy!
This week's episode discusses the security module of the Azure Well-Architected Framework. What's in there, how to utilize the content, and what areas does it cover? Also, Tobi asks Jussi an unexpected question.
(00:00) - Intro and catching up.
(03:32) - Show content starts.
Show links
- Azure Well-Architected Framework: Security
- Azure Well-Architected Framework Review
- Microsoft Azure Well-Architected Framework - Security - Training | Microsoft Learn
SPONSOR
This episode is sponsored by Sovelto. Stay ahead of the game and advance your career with continuous learning opportunities for Azure Cloud professionals. Sovelto Eduhouse – Learning as a Lifestyle - Start Your Journey now: https://www.eduhouse.fi/cloudpro
If you've been using AWS for a while, you might have heard the term "well-architected". But what does it really mean? Don't worry if you're not quite sure, because we are here to help! In this episode of AWS Bites, we will be diving into the world of well-architected and explaining what it means, both in general and in the specific context of AWS. We will be covering the well-architected framework, the different tools, and facets that come with it, and answering some practical questions like "should you care about building well-architected workloads?" and "how do you know if your workloads are well-architected?". Whether you're a startup or a mature organization, learn why building well-architected systems is crucial for the long-term success of your business. By the end of this episode, you'll have a solid understanding of the world of well-architected and why it's so important. Let's dive in!
InfosecTrain hosts a live event entitled “Cloud Security Masterclass” with certified expert Ayush. Cloud security is one of the trending topics every organization is talking about, and cloud security professionals are in great demand to implement and test security strategies in the cloud.
Agenda for the Webinar
➡️ Day 2: Native Security Tools in AWS
(00:00) - Intro and catching up.
(04:00) - Show content starts.
Show links
- Welcome | Learn Green Software
- Sustainability workloads - Microsoft Azure Well-Architected Framework | Microsoft Learn
SPONSOR
This episode is sponsored by Sovelto. We at Sovelto support your personal growth, keep your Azure skills up to date and increase your market value. Learn or expire: sovelto.fi/pro
In Episode 55 of "The Dustin Gold Nugget," Dustin briefly discusses his analysis of the 1934 “Technocracy Study Guide” and how their energy credit system is designed to work. Join the discussion and get the ad-free video version of this podcast: Paine.TV/gold Follow Dustin on Twitter: Twitter.com/dustingoldshow and Twitter.com/hackableanimal Get involved with the Telegram discussion: https://t.me/dustingoldshow Join in on live audio conversations: https://wisdom.app/dustingoldshow Ask a question and get a 60-second answer from me: https://wisdom.app/dustingoldshow/ask Learn more about your ad choices. Visit megaphone.fm/adchoices
How to apply the well-architected tool

Grady Booch is a fantastic software architect who has done a lot of pioneering software engineering. One of his latest messages says that it's a good thing that AWS, Azure and Google are offering guidance on building well-architected systems. But he also warns that when cloud providers talk to you about well-architected, it comes with a bias towards their products. There is an inherent bias among providers who are investing in these frameworks: they lean towards their own ecosystem and services. But the interesting thing about the AWS framework is that if you squint at it sideways, you can set aside the implementation details and focus on the principles and values behind delivering a well-architected solution. You can extract a huge amount of value from it, even if you don't embrace the services aligned to that cloud provider. It's the collaboration, questions, mindset and thinking that are good, and they are fairly consistent across whatever tech stack or platform you're leveraging. The questions are very similar; the answers may be a little bit different. But even just asking those questions is hugely valuable, and as long as you apply it to your context, you can get a lot of value.

I remember having discussions with people who were asking what architecture is. They weren't quite sure what architecture was and were trying to redefine it. With less technical colleagues, you need to give them a definition of what architecture is.

When you compare all three cloud providers, they roughly have the same stuff. But AWS is driven by principles or tenets, and a lot of the principles in the framework are applicable elsewhere. Azure is almost like a wizard that takes you on a path, which is good 90% of the time; but I worry about what happens if that path is not right for you. And Google sets the bar quite high with hardcore SRE. The culture of each of the providers comes through. But they are all really good frameworks.
With AWS, if you leverage a lot of AWS services, you're going to end up with an event-driven architecture (EDA). So that is going to bias you towards operating in certain ways and leveraging certain ways of communication, whether it's Kinesis, EventBridge or SQS/SNS to connect things together. With Google, I am interested to see why they're focusing so much on SRE. Maybe there's a requirement on squads to look after resources once they are up, because it's more container-oriented. The folks at Google perhaps believe that customer success is driven through proper SRE and operational excellence; a lot of the SRE principles came from Google.

A cloud provider writes a framework in their own language, and it will have a bias towards their own products. So you need to see that and see the difference. And if you ask a cloud provider to do a well-architected review of your product, a solution architect from that cloud provider will come in and do it. And all they know is their products. So if you get an AWS solution architect to review your product, they'll start recommending AWS products.

The lesson to learn is that it's not the framework that's important; it's the process that you put around it. And the process does not need to involve the cloud provider. We can run it ourselves, and we can apply the framework ourselves. It is a big shift to use well-architected as a benchmark to self-assess: to learn, measure and improve. The great advantage of doing this yourself is that your feedback loop is established, so you're actually seeing what works, what doesn't work, where your gaps are across all the different pillars, and where you should be investing your time and your team's time across the org. If you bring in somebody external, they won't have the context, and they're not going to be part of that feedback loop. When you use Clarity of Purpose, which is Phase 1 of the Value Flywheel, with the well-architected framework, you can combat that bias.
Your focus will be on solving the problem with the tools you've got at your disposal, and you're able to leverage tools faster and more efficiently. It should not be something that your architect does to you. This is something that the team should be embracing themselves through an iterative and incremental process. We've written about this a lot in our SCORPS process, which is in the book as well. It doesn't need to be a big, scary thing that an architect from on high comes down and imposes on you. It can be a very useful learning and enablement tool.

Links:
- Grady Booch: search for 'Grady' 'Well' 'Architected' to find that tweet and discussion
- SCORPS process on TheServerlessEdge.com
- The Value Flywheel Effect book
- Twitter @ServerlessEdge

Transcribed by https://otter.ai
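The self-run review described above (use the framework as a benchmark, self-assess, and establish your own feedback loop) can be sketched as a small script. The pillar names are AWS Well-Architected's six; the question counts and answers are invented for illustration:

```python
# Illustrative self-run Well-Architected check: for each pillar, how many
# of the framework's best-practice questions the team can currently
# answer "yes" to. The counts here are made up for the sketch.
answered = {
    "operational excellence": (7, 11),   # (yes, total)
    "security":               (9, 14),
    "reliability":            (6, 13),
    "performance efficiency": (5, 10),
    "cost optimization":      (8, 10),
    "sustainability":         (3, 9),
}

# Coverage per pillar as a fraction of questions answered "yes".
coverage = {p: yes / total for p, (yes, total) in answered.items()}

# Rank pillars from weakest to strongest so the team can see where its
# gaps are: this is the self-assessed feedback loop, no provider needed.
for pillar in sorted(coverage, key=coverage.get):
    print(f"{pillar}: {coverage[pillar]:.0%}")
```

Re-running the same script after each improvement iteration shows whether the gaps are actually closing, which is exactly the learn/measure/improve loop an external reviewer can't give you.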
AWS Morning Brief for the week of September 19th, 2022 with Corey Quinn.
Today's market is living in the era of the so-called API Economy. What does that mean? Among many other things, it means that end users have quick and easy access to a universe of possibilities through increasingly rich and responsive frontends that connect smoothly with remote operations through so-called […]
About Matt
Matt is a Sr. Architect in Belfast, an AWS DevTools Hero, Serverless Architect, Author and conference speaker. He is focused on creating the right environment for empowered teams to rapidly deliver business value in a well-architected, sustainable and serverless-first way. You can usually find him sharing reusable, well-architected, serverless patterns over at cdkpatterns.com or behind the scenes bringing CDK Day to life.
Links Referenced:
Previous guest appearance: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/slinging-cdk-knowledge-with-matt-coulter/
The CDK Book: https://thecdkbook.com/
Twitter: https://twitter.com/NIDeveloper
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the best parts about, well, I guess being me, is that I can hold opinions that are… well, I'm going to be polite and call them incendiary, and that's great because I usually like to back them in data. But what happens when things change? What happens when I learn new things? Well, do I hold on to that original opinion with two hands in a death grip, or do I admit that I was wrong in my initial opinion about something? Let's find out. My guest today returns from earlier this year. Matt Coulter is a senior architect, since he has been promoted, at Liberty Mutual. Welcome back, and thanks for joining me.
Matt: Yeah, thanks for inviting me back, especially to talk about this topic.
Corey: Well, we spoke about it a fair bit at the beginning of the year.
And if you're listening to this, and you haven't heard that show, it's not that necessary to go into; mostly it was me spouting uninformed opinions about the CDK—the Cloud Development Kit, for those who are unfamiliar—I think of it more or less as what if you could just structure your cloud resources using a programming language you claim to already know, but in practice, copy and paste from Stack Overflow like the rest of us? Matt, you probably have a better description of what the CDK is in practice. Matt: Yeah, so we like to say it's imperative code written in a declarative way, or declarative code written in an imperative way. Either way, it lets you write code that produces CloudFormation. So, it doesn't really matter what you write in your script; the point is, at the end of the day, you still have the CloudFormation template that comes out of it. So, the whole piece of it is that it's a developer experience, developer speed play, that if you're from a background where you're more used to writing a programming language than YAML, you might actually enjoy using the CDK over writing straight CloudFormation or SAM. Corey: When I first kicked the tires on the CDK, my first initial obstacle—which I've struggled with in this industry for a bit—is that I'm just good enough of a programmer to get myself in trouble. Whenever I wind up having a problem that Stack Overflow doesn't immediately shine a light on, my default solution is to resort to my weapon of choice, which is brute force. That sometimes works out, sometimes doesn't. And as I went through the CDK, a couple of times in service to a project that I'll explain shortly, I made a bunch of missteps with it. The first and most obvious one is that AWS claims publicly that it has support in a bunch of languages: .NET, Python, there's obviously TypeScript, there's Go support for it—I believe that went generally available—and I'm sure I'm missing one or two, I think?
Aren't I? Matt: Yeah, it's: TypeScript, JavaScript, Python, Java, .NET, and Go. I think those are the currently supported languages. Corey: Java. That's the one that I keep forgetting. It's the block printing to the script that is basically Java cursive. The problem I run into, and this is true of most things in my experience, when a company says that we have deployed an SDK for all of the following languages, there is very clearly a first-class citizen language and then the rest that more or less drift along behind with varying degrees of fidelity. In my experience, when I tried it for the first time in Python, it was not a great experience for me. When I learned just enough JavaScript, and by extension TypeScript, to be dangerous, it worked a lot better. Or at least I could blame all the problems I ran into on my complete novice status when it comes to JavaScript and TypeScript at the time. Is that directionally aligned with what you've experienced, given that you work in a large company that uses this, and presumably, once you have more than, I don't know, two developers, you start to take on aspects of a polyglot shop no matter where you are, on some level? Matt: Yeah. So personally, I jump between Java, Python, and TypeScript whenever I'm writing projects. So, when it comes to the CDK, you'd assume I'd be using all three. I typically stick to TypeScript and that's just because personally, I've had the best experience using it. For anybody who doesn't know the way CDK works for all the languages, it's not that they have written a custom, like, SDK for each of these languages; it's a case of it uses a Node process underneath them and the language actually interacts with—it's like the compiled JavaScript version is basically what they all interact with. So, it means there are some limitations on what you can do in that language.
I can't remember the full list, but it just means that it is native in all those languages, but there are certain features that you might be like, “Ah,” whereas, in TypeScript, you can just use all of TypeScript. And my first inclination was actually, I was using the Python one and I was having issues with some compiler errors and things that are just caused by that process. And it's something that talking in the cdk.dev Slack community—there is actually a very active—Corey: Which is wonderful, I will point out. Matt: [laugh]. Thank you. There is actually, like, an awesome Python community in there, but if you ask them, they would all ask for improvements to the language. So, personally if someone's new, I always recommend they start with TypeScript and then branch out as they learn the CDK so they can understand: is this a me problem, or is this a problem caused by the implementation? Corey: From my perspective, I didn't do anything approaching that level of deep dive. I took a shortcut that I find has served me reasonably well in the course of my career, when I'm trying to do something in Python, and you pull up a tutorial—which I'm a big fan of reading experience reports, and blog posts, and here's how to get started—and they all have the same problem, which is step one, “Run npm install.” And that's “Hmm, you know, I don't recall that being a standard part of the Python tooling.” It's clearly designed and interpreted and contextualized through a lens of JavaScript. Let's remove that translation layer, let's remove any weird issues I'm going to have in that transpilation process, and just talk in the language it was written in. Will this solve my problems? Oh, absolutely not, but it will remove a subset of them that I am certain to go blundering into like a small lost child trying to cross an eight-lane freeway. Matt: Yeah. I've heard a lot of people say the same thing. Because the CDK CLI is a Node process, you need it no matter what language you use.
So, if they were distributing some kind of universal binary that just integrated with the languages, it would definitely solve a lot of people's issues with trying to combine languages at deploy time. Corey: One of the challenges that I've had as I go through the process of iterating on the project—but I guess I should probably describe it for those who have not been following along with my misadventures; I write blog posts about it from time to time because I need a toy problem to kick around sometimes, because my consulting work is all advisory and I don't want to be a talking head. I have a Twitter client called lasttweetinaws.com. It's free; go and use it. It does all kinds of interesting things for authoring Twitter threads. And I wanted to deploy that to a bunch of different AWS regions, as it turns out, 20 or so at the moment. And that led to a lot of interesting projects and having to learn how to think about these things differently, because no one sensible deploys an application simultaneously to what amounts to every AWS region without canary testing and a phased rollout in the rest. But I'm reckless, and honestly, as said earlier, a bad programmer. So, that works out. And trying to find ways to make this all work and fit together led iteratively towards me discovering that the CDK was really kind of awesome for a lot of this. That said, there were definitely some fairly gnarly things I learned as I went through it, due in no small part to help I received from generous randos in the cdk.dev Slack team.
And it's gotten to a point where it's working, and as an added bonus, I even mostly understand what it's doing, which is just kind of wild to me. Matt: It's one of those interesting things where because it's a programming language, you can use it out of the box the way it's designed to be used, where you can just write your simple logic which generates your CloudFormation, or you can do whatever crazy logic you want to do on top of that to make your app work the way you want it to work. And providing you're not in a company like Liberty, where I'm going to do a code review, if no one's stopping you, you can do your crazy experiments. And if you understand that, it's good. But I do think something like the multi-region deploy, I mean, with CDK, if you have a construct, it takes in a variable that you can just say what the region is, so you can actually just write a for loop and pass it in, which does make things a lot easier than, I don't know, trying to do it with YAML, which you can pass parameters into, but you're going to get a lot more complicated a lot quicker. Corey: The approach that I took philosophically was I wrote everything in a region-agnostic way. And it would be instantiated and be told what region to run in as an environment variable when CDK deploy was called. And then I just deploy 20 simultaneous stacks through GitHub Actions, which invoke custom runners that run inside of a Lambda function. And that's just a relatively basic YAML file, thanks to the magic of GitHub Actions matrix jobs.
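Stripped of the CDK and GitHub Actions specifics, the pattern being described here is one region-agnostic stack definition fanned out over a list of regions, with a cap on how many deploys run at once. Here is a minimal sketch of that shape; `TWEET_REGIONS`, `buildTargets`, `runPool`, and the stack-naming scheme are all hypothetical illustrations, and `deployStack` is a stand-in for whatever would actually shell out to `cdk deploy`, not the real project's code:

```typescript
// Hypothetical sketch: region is data passed in, never baked into the stack.
type DeployTarget = { stackId: string; region: string };

const TWEET_REGIONS = ["us-east-1", "us-west-2", "eu-west-1", "ap-southeast-2"]; // ~20 in practice

function buildTargets(appName: string, regions: string[]): DeployTarget[] {
  // One logical stack definition, instantiated once per target region.
  return regions.map((region) => ({ stackId: `${appName}-${region}`, region }));
}

// Stand-in for a real per-region `cdk deploy` invocation (hypothetical).
const deployStack = async (t: DeployTarget): Promise<string> => `deployed ${t.stackId}`;

// Toy concurrency pool: run all jobs with at most `limit` in flight at once.
async function runPool<T>(jobs: Array<() => Promise<T>>, limit: number): Promise<T[]> {
  const results: T[] = new Array<T>(jobs.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < jobs.length) {
      const i = next++; // single-threaded JS, so this counter is race-free
      results[i] = await jobs[i]();
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, jobs.length) }, () => worker()));
  return results;
}

const targets = buildTargets("lasttweetinaws", TWEET_REGIONS);
runPool(targets.map((t) => () => deployStack(t)), 4).then((out) => console.log(out.join("\n")));
```

In real CDK code, each target would become something like `new AppStack(app, target.stackId, { env: { region: target.region } })`; and more recent CDK releases expose a `--concurrency` flag on `cdk deploy` that covers this natively, which is worth checking against current docs before hand-rolling a pool.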
So, it fires off 20 simultaneous processes on every commit to the main branch, and then after about two-and-a-half minutes, it has been deployed globally everywhere and I get notified on anything that fails, which is always fun and exciting to learn those things. That has been, overall, just a really useful experiment and an experience because you're right, you could theoretically run this as a single CDK deploy and then wind up iterating through a list of regions. The challenge I have there is that unless I start getting into really convoluted asynchronous concurrency stuff, it feels like it'll just take forever. At two-and-a-half minutes a region times 20 regions, that's the better part of an hour on every deploy and no one's got that kind of patience. So, I wound up just parallelizing it a bit further up the stack. That said, I bet there are relatively straightforward ways, given that async is a big part of JavaScript, to do this simultaneously. Matt: One of the pieces of feedback I've seen about CDK is if you have multiple stacks in the same project, it'll deploy them one at a time. And that's just because it tries to understand the dependencies between the stacks and then it works out which one should go first. But a lot of people have said, “Well, I don't want that. If I have 20 stacks, I want all 20 to go at once the way you're saying.” And I have seen that people have been writing plugins to enable concurrent deploys with CDK out of the box. So, it may be something that it's not an out-of-the-box feature, but it might be something that you can pull in a community plug-in to actually make work. Corey: Most of my problems with it at this point are really problems with CloudFormation. CloudFormation does not support well, if at all, secure string parameters from the AWS Systems Manager Parameter Store, which is my default go-to for secret storage, and Secrets Manager is supported, but that also costs 40 cents a month per secret.
And not for nothing, I don't really want to have all five secrets deployed to Secrets Manager in every region this thing is in. I don't really want to pay $20 a month for this basically free application, just to hold some secrets. So, I wound up talking to some folks in the Slack channel and what we came up with was, I have a centralized S3 bucket that has a JSON object that lives in there. It's only accessible from the deployment role, and it grabs that at deploy time and stuffs it into environment variables when it pushes these things out. That's the only stateful part of all of this. And it felt like that is, on some level, a pattern that a lot of people would benefit from if it had better native support. But the counterargument is that if you're only deploying to one or two regions, then Secrets Manager is the right answer for a lot of this and it's not that big of a deal. Matt: Yeah. And it's another one of those things, if you're deploying in Liberty, we'll say, “Well, your secret is unencrypted at runtime, so you probably need a KMS key involved in that,” which, as you know, the cost of KMS depends on if it's a personal solution or if it's something for, like, a Fortune 100 company. And if it's a personal solution, I mean, what you're saying sounds great: it's IAM-restricted in S3, and that way it can only be read at deploy time; it actually could be a custom construct that someone can build and publish out there to the construct library—or the Construct Hub, I should say. Corey: To be clear, the reason I'm okay with this from a security perspective is one, this is in a dedicated AWS account. This is the only thing that lives in that account. And two, the only API credentials we're talking about are the application-specific credentials for this Twitter client when it winds up talking to the Twitter API.
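The deploy-time step described above (pull one JSON object from the locked-down bucket, turn it into environment variables) is mechanically simple. A sketch of that mapping, where the key names and the camelCase-to-SCREAMING_SNAKE convention are assumptions rather than details of the actual setup:

```typescript
// Hypothetical sketch of the deploy-time step: a JSON secrets object
// (fetched from the IAM-restricted S3 bucket by the deployment role)
// becomes Lambda environment variables. Key names here are made up.
function secretsToEnv(secrets: Record<string, string>): Record<string, string> {
  const env: Record<string, string> = {};
  for (const [key, value] of Object.entries(secrets)) {
    // camelCase JSON keys -> SCREAMING_SNAKE_CASE env var names
    env[key.replace(/([a-z0-9])([A-Z])/g, "$1_$2").toUpperCase()] = value;
  }
  return env;
}

const env = secretsToEnv({ twitterApiKey: "xxxx", twitterApiSecret: "yyyy" });
console.log(env); // { TWITTER_API_KEY: "xxxx", TWITTER_API_SECRET: "yyyy" }
```

The interesting part of the real pattern isn't this mapping; it's that the bucket is readable only by the deployment role, so the secrets exist in exactly one place and the stacks themselves stay stateless.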
Basically, if you get access to these and are able to steal them and deploy somewhere else, you get no access to customer data—or user data, because this does not charge for anything—you get no access to things that have been sent out; all you get to do is submit tweets to Twitter and it'll have the string ‘Last Tweet in AWS' as your client, rather than whatever normal client you would use. It's not exactly what we'd call a high-value target because all the sensitive user data lives in local storage in their browser. It is fully stateless. Matt: Yeah, so this is what I mean. Like, it's the difference in what you're using your app for. Perfect case of, you can just go into the Twitter app and just withdraw those credentials and do it again if something happens, whereas, as I say, if you're building it for Liberty, it will not pass a lot of our Well-Architected reviews, just for that reason. Corey: If I were going to go and deploy this in a more, I guess, locked-down environment, I would be tempted to find alternative approaches, such as having it stored encrypted at rest via KMS in S3 as one option. So is having global DynamoDB tables that wind up grabbing those things, even grabbing it at runtime if necessary. There are ways to make that credential more secure at rest. It's just, I look at this from a real-world perspective of what is the actual attack surface on this, and I have a really hard time identifying anything that is going to be meaningful with regard to an exploit. If you're listening to this and have a lot of thoughts on that matter, please reach out; I'm willing to learn and change my opinion on things. Matt: One thing I will say about the Dynamo approach you mentioned, I'm not sure everybody knows this, but inside the same Dynamo table, you can scope down a row.
You can be, like, “This row and this field in this row can only be accessed from this one Lambda function.” So, there's a lot of really awesome security features inside DynamoDB that I don't think most people take advantage of, but they open up a lot of options for simplicity. Corey: Is that tied to the very recent announcement about Lambda getting SourceArn as a condition key? In other words, you can say, “This specific Lambda function,” as opposed to, “A Lambda in this account”? Like, that was a relatively recent advent that I haven't fully explored the nuances of. Matt: Yeah, like, that has opened a lot of doors. I mean, the Dynamo being able to be locked down at the row level has been around for a while, but the new Lambda SourceArn is awesome because, yeah, as you say, you can literally say this thing, as opposed to having to go into tags, or into something else, to find it. Corey: So, I want to talk about something you just alluded to, which is the Well-Architected Framework. And initially, when it launched, it was a whole framework, and AWS made a lot of noise about it on keynote stages, as they are wont to do. And then later, they created a quote-unquote, “Well-Architected Tool,” which, let's be very direct, is a checkbox survey form, at least the last time I looked at it. And they now have the six pillars of the Well-Architected Framework where they talk about things like security, cost, sustainability is the new pillar, I don't know, absorbency, or whatever the remainders are. I can't think of them off the top of my head. How does that map to your experience with the CDK? Matt: Yeah, so out of the box, the CDK from day one was designed to have sensible defaults. And that's why a lot of the things you deploy have opinions. I talked to a couple of the Heroes and they were like, “I wish it had less opinions.” But that's why whenever you deploy something, it's got a bunch of configuration already in there.
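The row-level scoping Matt described a moment ago is plain IAM rather than anything CDK-specific, via DynamoDB's documented `dynamodb:LeadingKeys` and `dynamodb:Attributes` condition keys. A sketch of such a policy statement; the table ARN, partition-key value, and attribute names are placeholders, not from any real deployment:

```typescript
// Sketch: fine-grained DynamoDB access expressed as an IAM policy statement.
// All identifiers below are placeholders for illustration.
const rowScopedStatement = {
  Effect: "Allow",
  Action: ["dynamodb:GetItem", "dynamodb:Query"],
  Resource: "arn:aws:dynamodb:us-east-1:111122223333:table/AppTable",
  Condition: {
    "ForAllValues:StringEquals": {
      // Caller may only touch items whose partition key matches their own id.
      "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"],
      // ...and may only read these attributes ("fields in the row").
      "dynamodb:Attributes": ["userId", "settings"],
    },
    // Force requests to name specific attributes, not fetch whole items.
    StringEqualsIfExists: { "dynamodb:Select": "SPECIFIC_ATTRIBUTES" },
  },
};

console.log(JSON.stringify(rowScopedStatement, null, 2));
```

Attached to a single Lambda function's execution role, this is the "this row and this field, from this one function" effect; the announcement Corey asks about adds the `lambda:SourceFunctionArn` condition key on the other side of the relationship. Exact key names and semantics should be verified against the current IAM documentation.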
For me, in the CDK, whenever I use constructs, or stacks, or deploy anything in the CDK, I always build it in a well-architected way. And that's such a loaded sentence whenever you say the word 'well-architected,' that people go, “What do you mean?” And that's where I go through the six pillars. And in Liberty, we have a process; it used to be called SCORP because it was five pillars, but now SCORPS [laugh] because they added sustainability. But that's where for every stack, we'll go through it and we'll be like, “Okay, let's have the discussion.” And we will use the tool that you mentioned—I mean, the tool, as you say, is a bunch of tick boxes with a text box—but the idea is we'll get in a room and, as we build the starter patterns or these pieces of infrastructure that people are going to reuse, we'll run the well-architected review against the framework before anybody gets to generate it. And then we can say, out of the box, if you generate this thing, these are the pros and cons against the Well-Architected Framework of what you're getting. Because we can't make it a hundred percent bulletproof for your use case, because we don't know it, but we can tell you out of the box what it does. And then that way, you can keep building, so they start off with something where it's well documented how well architected it is, and it makes it a lot easier to have those conversations as they go forward. Because you just have to talk about the delta as they start adding their own code. Then you can go, “Okay, you've added these 20 lines.
Let's talk about what they do.” And that's why I always think you can draw a strong connection between infrastructure-as-code and well architected. Corey: As I look through the actual six pillars of the Well-Architected Framework: sustainability, cost optimization, performance efficiency, reliability, security, and operational excellence, as I think through the nature of what this shitpost thread Twitter client is, I am reasonably confident across all of those pillars. I mean, first off, when it comes to the cost optimization pillar, please, don't come to my house and tell me how that works. Yeah, obnoxiously, the security pillar is sort of the thing that winds up causing a problem for this because this is an account deployed by Control Tower. And when I was getting this all set up, my monthly cost for this thing was something like a dollar in charges and then another sixteen dollars for the AWS Config rule evaluations on all of the deploys, which is… it just feels like a tax on going about your business, but fine, whatever. Cost and sustainability, from my perspective, also tend to go hand-in-glove when it comes to this stuff. When no one is using the client, it is not taking up any compute resources, it has no carbon footprint of which to speak, by my understanding; it's very hard to optimize this down further from a sustainability perspective without barging my way into the middle of an AWS negotiation with one of its power companies. Matt: So, for everyone listening, watch as we do a live well-architected review because—Corey: Oh yeah, I expect—Matt: —this is what they are. [laugh]. Corey: You joke; we should do this on Twitter one of these days. I think it would be a fantastic conversation. Or Twitch, or whatever the kids are using these days. Yeah. Matt: Yeah. Corey: And again, so much of it, too, is thinking about the context. Security: you work for one of the world's largest insurance companies. I shitpost for a living.
The relative access and consequences of screwing up the security on this are nowhere near equivalent. And I think that's something that often gets lost: letting the perfect be the enemy of the good. Matt: Yeah, that's why, unfortunately, the Well-Architected Tool is quite loose. So, that's why they have the Well-Architected Framework, which is a white paper that covers everything and is quite big, and then they wrote specific lenses for, like, serverless or other use cases that are shorter. And then when you do a well-architected review, it's loose on, sort of, how you are applying the principles of well-architected. And the conversation that we just had about security, you would write that down in the box and be, like, “Okay, so I understand if anybody gets this credential, it means they can post this Last Tweet in AWS, and that's okay.” Corey: The client, not the Twitter account, to be clear. Matt: Yeah. So, that's okay. That's what you just mark down in the well-architected review. And then if we come back to it in the future, you can compare and go, “Oh. Okay, so last time, you said this,” and you can go, “Well, actually, I decided to—” or you just keep it as a note. Corey: “We pivoted. We're a bank now.” Yeah. Matt: [laugh]. So, that's where—we do more than tweets now. We decided to do microtransactions through cryptocurrency over Twitter. I don't know, but if you—Corey: And that ends this conversation. No no. [laugh]. Matt: [laugh]. But yeah, so if something changes, that's what the well-architected review is for. It's about facilitating the conversation between the architect and the engineer. That's all it is. Corey: This episode is sponsored in part by our friend EnterpriseDB. EnterpriseDB has been powering enterprise applications with PostgreSQL for 15 years.
And now EnterpriseDB has you covered wherever you deploy PostgreSQL: on-premises, private cloud, and they just announced a fully-managed service on AWS and Azure called BigAnimal, all one word. Don't leave managing your database to your cloud vendor, because they're too busy launching another half-dozen managed databases to focus on any one of them that they didn't build themselves. Instead, work with the experts over at EnterpriseDB. They can save you time and money, they can even help you migrate legacy applications—including Oracle—to the cloud. To learn more, try BigAnimal for free. Go to biganimal.com/snark, and tell them Corey sent you. Corey: And the lens is also helpful in that this is a serverless application. So, we're going to view it through that lens, which is great because the original version of the Well-Architected Tool is, “Oh, you built this thing entirely in Lambda? Have you bought some reserved instances for it?” And it's, yeah, why do I feel like I have to explain to AWS how their own systems work? This makes it a lot more streamlined. That said, it still does struggle with the concept of—in my case—a stateless app. That is still something that I think is not the common path. Imagine that: my code is also non-traditional. Who knew? Matt: Who knew? The one thing that's good about it, if anybody doesn't know, they just updated the serverless lens about, I don't know, a week or two ago. So, they added in a bunch more use cases. So, if you read it six months ago, or even three months ago, go back and reread it, because they spent a good year updating it. Corey: Thank you for telling me that. That will, of course, wind up in next week's issue of Last Week in AWS. You can go back and look at the archives and figure out what week we recorded this. Good work.
One thing that I have learned as well, as of yesterday, as it turns out, before we wound up having this recording—obviously because yesterday generally tends to come before today, that is a universal truism—is that I had to do a bit of refactoring. Because what I learned when I was in New York live-tweeting the AWS Summit is that the Route 53 latency record works based upon where your DNS server is. Yeah, that makes sense. I use Tailscale and wind up using my Pi-hole, which lives back in my house in San Francisco. Yeah, I was always getting us-west-1 from across the country. Cool. For those weird edge cases like me—because this is not the common case—how do I force a local region? Ah, I'll give it its own individual region prepend as a subdomain. Getting that to work with both the global lasttweetinaws.com domain as well as the subdomain on API Gateway through the CDK was not obvious. Randall Hunt over at Caylent was awfully generous and came up with a proof-of-concept in about three minutes because he's Randall, and that was extraordinarily helpful. But a challenge I ran into was that the CDK deploy would fail because of the way that CloudFormation was rendered and the way it was trying to do stuff: “Oh, that already has that domain affiliated in a different way.” I had to do a CDK destroy then a CDK deploy for each one. Now, not the end of the world, but it got me thinking: everything that I see around the CDK more or less distills down to either greenfield or a day one experience. That's great, but throw it all away and start over is often not what you get to do. And even though Amazon says it's always day one, those of us in, you know, real companies don't get to just treat everything as brand new and throw away everything older than 18 months. What is the day two experience looking like for you? Because you clearly have a legacy business.
By legacy, I of course use it in the condescending engineering term that means it makes actual money, rather than just telling really good stories to venture capitalists for 20 years. Matt: Yeah. We still have mainframes running that make a lot of money. So, I don't mock legacy at all. Corey: “What's that piece of crap do?” “Well, about $4 billion a year in revenue. Perhaps show some respect.” It's a common refrain. Matt: Yeah, exactly. So yeah, anyone listening, don't mock legacy because, as Corey says, it is running the business. But for us, when it comes to day two, it's something that I'm actually really passionate about in general, because it is really easy. Like I did it with CDK Patterns, it's really easy to come out and be like, “Okay, we're going to create a bunch of starter patterns, or quickstarts”—or whatever flavor that you came up with—“and then you're going to deploy this thing, and we're going to have you in production in 30 seconds.” But even day one later that day—not even necessarily day two—it depends on who it was that deployed it and how long they've been using AWS. So, you hear these stories of people who deployed something to experiment, and they either forget to delete it and it costs them a lot of money, or they try to change it and it breaks because they didn't understand what was in it. And this is where the community starts to diverge in their opinions on what AWS CDK should be. There's a lot of people who think that at the minute with CDK, even if you create an abstraction in a construct—even if I create a construct and put it in the construct library that you get to use—it still unravels and deploys as part of your deploy. So, everything that's associated with it, you don't own, and you technically need to understand that at some point because it might, in theory, break. Whereas there's a lot of people who think, “Okay, the CDK needs to go server side and an abstraction needs to stay an abstraction in the cloud.
And then that way, if somebody is looking at a 20-line CDK construct or stack, then it stays 20 lines. It never unravels to something crazy underneath.” I mean, that's one pro of that approach: it'd be awesome if that could work. I'm not sure how the support for that would work from a—if you've got something running on the cloud, I'm pretty sure AWS [laugh] aren't going to jump on a call to support some construct that I deployed, so I'm not sure how that will work in the open-source sense. But what we're doing at Liberty is the other way. So, I mean, we famously have things like the software accelerator that lets you pick a pattern or create your pipelines and you're deployed, but now what we're doing is we're building a lot of telemetry and automated information around what you deployed so that way—and it's all based on Well-Architected, common theme. So, that way, what you can do is you can go into [crosstalk 00:26:07]—Corey: It's partially [unintelligible 00:26:07], and partially, at a glance, figure out, okay, are there some things that can be easily remediated as we basically shift that whole thing left? Matt: Yeah, so if you deploy something, it should be good the second you deploy it, but then you start making changes. Because you're Corey, you just start adding some stuff and you deploy it. And if it's really bad, it won't deploy. Like, that's the Liberty setup. There's a bunch of rules that all go, “Okay, that's really bad. That'll cause damage to customers.” But there's a large gap between bad and good, and people don't really understand the difference, which can cost a lot of money or can cause a lot of grief for developers because they go down the wrong path. So, that's why what we're now building is, after you deploy, there's a dashboard that'll just come up and be like, “Hey, we've noticed that your Lambda function has too little memory. It's going to be slow.
You're going to have bad cold starts.” Or, you know, things like that. The knowledge that I have had to gain through hard fighting over the past couple of years is going into automation, and that way, combined with the well-architected reviews, you actually get me sitting in a call going, “Okay, let's talk about what you're building,” that hopefully guides people the right way. But I still think there's so much more we can do for day two, because even if you deploy the best solution today, six months from now, AWS are releasing ten new services that make it easier to do what you just did. So, someone also needs to build something that shows you the delta to get to the best. And that would involve AWS or somebody thinking cohesively, like, these are how we use our products. And I don't think there's a market for it as a third-party company, unfortunately, but I do think that's where we need to get to: that at day two somebody can give—the way we're trying to do for Liberty—automated advice that says, “I see what you're doing, but it would be better if you did this instead.” Corey: Yeah, I definitely want to spend more time thinking about these things and analyzing how we wind up addressing them and how we think about them going forward. I learned a lot of these lessons over a decade ago. I was fairly deep into using Puppet, and came to the fair and balanced conclusion that Puppet was a steaming piece of crap. So, the solution was that I was one of the very early developers behind SaltStack, which was going to do everything right. And it was, and it was awesome, and it was glorious, right up until I saw an environment deployed by someone else who was not as familiar with the tool as I was, at which point I realized hell is other people's use cases. And the way that they contextualize these things—you craft a finely balanced torque wrench, it's a thing of beauty, and people complain about the crappy hammer. “You're holding it wrong.
No, don't do it that way.” So, I have an awful lot of sympathy for people building platform-level tooling like this, where it works super well for the use case that they're in, but not necessarily… they're not necessarily aligned in other ways. It's a very hard nut to crack.

Matt: Yeah. And like, even as you mentioned earlier, if you take one piece of AWS, for example, API Gateway—and I love the API Gateway team; if you're listening, don't hate on me—but there's, like, 47,000 different ways you can deploy an API Gateway. And the CDK has to cover all of those. It would be a lot easier if there were fewer ways that you could deploy the thing, and then you could start crafting user experiences on a platform. But whenever you start thinking that every AWS component is kind of the same, like, think of the amount of ways you can deploy a Lambda function now, or think of the, like, containers. I'll not even go into [laugh] the different ways to run containers.

If you're building a platform, either you support it all and then it sort of gets quite generic-y, or you're going to do, like, what Serverless Cloud are doing, though, like, Jeremy Daly is building this unique experience that's like, “Okay, the code is going to build the infrastructure, so just build a website, and we'll do it all behind it.” And I think they're really interesting because they're sort of opposites, in that one doesn't want to support everything, but should theoretically, for their slice of customers, be awesome, and then the other one's, like, “Well, let's see what you're going to do. Let's have a go at it and I should hopefully support it.”

Corey: I think that there's so much that can be done on this. But before we wind up calling it an episode, I had one further question that I wanted to explore around the recent results of the community CDK survey that I believe is a quarterly event.
And I read the analysis on this, and I talked about it briefly in the newsletter, but it talks about adoption and a few other aspects of it. And one of the big things it looks at is the number of people who are contributing to the CDK in an open-source context. Am I just thinking about this the wrong way when I think that, well, this is a tool that helps me build out cloud infrastructure; me having to contribute code to this thing at all is something of a bug, whereas yeah, I want this thing to work out super well—Docker is open-source, but you'll never see me contributing things to Docker ever, as a pull request, because it does as it says on the tin; I don't have any problems that I'm aware of that, ooh, it should do this instead. I mean, I have opinions on that, but those aren't pull requests; those are complete, you know, shifts in product strategy, which it turns out is not quite done on GitHub.

Matt: So, it's funny: a while ago, I was talking to a lad who was the person who came up with the idea for the CDK. And CDK is pretty much the open-source project for AWS if you look at what they have. And the thought behind it is that it's meant to evolve into what people want and need. So yes, there is a product manager in AWS, and there's a team fully dedicated to building it, but the ultimate aspiration was always that it should be bigger than AWS and it should be community-driven. Now personally, I'm not sure—like you just said it—what the incentive is, given that right now CDK only works with CloudFormation, which means that you are directly helping with an AWS tool, but it does give me hope for, like, their CDK for Terraform, and their CDK for Kubernetes, and there's other flavors based on the same technology as AWS CDK that potentially could have a thriving open-source community because they work across all the clouds.
So, it might make more sense for people to jump in there.

Corey: Yeah, I don't necessarily think that there's a strong value proposition as it stands today for the idea of the CDK becoming something that works across other cloud providers. I know it technically has the capability, but if I think that Python isn't quite a first-class experience, I don't even want to imagine what other providers are going to look like from that particular context.

Matt: Yeah, and that's from what I understand; I haven't personally jumped into the CDK for Terraform, and we didn't talk about it here, but in CDK, you get your different levels of construct. L1 is, like, a CloudFormation-level construct, so everything that's in there directly maps to a property in CloudFormation, and then L2 is AWS's opinion on safe defaults, and then L3 is when someone like me comes along and turns it into something that you may find useful. So, it's a pattern. As far as I know, CDK for Terraform is still on L1. They haven't got the rich collection—

Corey: And L4 is just hiring you as a consultant—

Matt: [laugh].

Corey: —to come in and fix my nonsense for me?

Matt: [laugh]. That's it. L4 could be Pulumi. They recently announced that you can use AWS CDK constructs inside it. But I think it's one of those things where the constructs, if they can move across these different tools the way AWS CDK constructs now work inside Pulumi, and there's a beta version that works inside CDK for Terraform, then it may or may not make sense for people to contribute to this stuff because we're not building at a higher level. It's just that the vision is hard for most people to get clear in their head, because it needs to be articulated and told as a clear strategy.

And then, you know, as you said, it is an AWS product strategy, so I'm not sure what you get back by contributing to the project, other than, like, Thorsten—I should say, so Thorsten, who wrote the book with me, he is the number three contributor, I think, to the CDK.
And that's just because he is such a big user of it that if he sees something that annoys him, he just comes in and tries to fix it. So, the benefit is, he gets to use the tool. But he is a super user, so I'm not sure, outside of super users, what the use case is.

Corey: I really want to thank you for, I want to say, spending as much time talking to me about this stuff as you have, but that doesn't really go far enough. Because so much of how I think about this invariably winds up linking back to things that you have done and have been advocating for in that community for such a long time. If it's not you personally, just, like, your fingerprints are all over this thing. So, it's one of those areas where the entire software developer ecosystem is really built on the shoulders of others who have done a lot of work that came before. Often you don't get any visibility of who those people are, so it's interesting whenever I get to talk to someone whose work I have directly built upon that I get to say thank you. So, thank you for this. I really do appreciate how much more straightforward a lot of this is than my previous approach of clicking in the console and then lying about it to provision infrastructure.

Matt: Oh, no worries. Thank you for the thank you. I mean, at the end of the day, all of this stuff is just—it helps me as much as it helps everybody else, and we're all trying to make everything quicker for ourselves, at the end of the day.

Corey: If people want to learn more about what you're up to, where's the best place to find you these days? They can always take a job at Liberty; I hear good things about it.

Matt: Yeah, we're always looking for people at Liberty, so come look up our careers. But Twitter is always the best place. So, I'm @NIDeveloper on Twitter. You should find me pretty quickly, or just type Matt Coulter into Google, you'll get me.

Corey: I like it.
It's always good when it's like, “Oh, I'm the top Google result for my own name.” On some level, that becomes an interesting thing. Some folks pull it off super well, John Smith has some challenges, but you know, most people are somewhere in the middle of that.

Matt: I didn't use to be number one, but there's a guy called the Kangaroo Kid in Australia, who is, like, a stunt driver, who was number one, and [laugh] I always thought it was funny if people googled and got him and thought it was me. So, it's not anymore.

Corey: Thank you again for, I guess, all that you do. And of course, taking the time to suffer my slings and arrows as I continue to revise my opinion of the CDK upward.

Matt: No worries. Thank you for having me.

Corey: Matt Coulter, senior architect at Liberty Mutual. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice and leave an angry comment as well that will not actually work because it has to be transpiled through a JavaScript engine first.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
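The L1/L2/L3 construct levels discussed in the conversation above can be sketched schematically. This is plain Python with no aws-cdk-lib dependency; the class names echo CDK naming conventions but are illustrative stand-ins, not the real CDK API:

```python
# Schematic only: real CDK L1 constructs are generated from the
# CloudFormation spec, and L2s live in aws-cdk-lib. The names and
# properties below are invented to show the layering idea.

class CfnBucket:
    """L1: a 1:1 mapping to CloudFormation properties, no opinions."""
    def __init__(self, **cfn_properties):
        self.cfn_properties = cfn_properties

class Bucket:
    """L2: safe defaults layered on top of the L1."""
    def __init__(self, encrypted=True, versioned=False, **overrides):
        props = {
            "BucketEncryption": "aws:kms" if encrypted else None,
            "VersioningConfiguration": "Enabled" if versioned else "Suspended",
        }
        props.update(overrides)
        self.l1 = CfnBucket(**props)

class StaticWebsite:
    """L3: an opinionated pattern composing several L2s."""
    def __init__(self, domain):
        self.bucket = Bucket(versioned=True)  # content bucket with safe defaults
        self.domain = domain                  # a real pattern adds CDN, DNS, ...

site = StaticWebsite("example.com")
print(site.bucket.l1.cfn_properties["BucketEncryption"])  # aws:kms
```

The point of the layering is exactly what the episode describes: a consumer of the L3 writes a few lines, the L2 underneath supplies safe defaults, and everything still bottoms out in raw L1/CloudFormation properties.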
(00:00) - Intro and catching up.
(05:50) - Show content starts.

Show links:
- Reliability patterns (Microsoft Docs)
- Calculating Composite SLAs (AZ-900 GitHub content)
- See the Azure service SLAs (Azure Charts)

SPONSOR
This episode is sponsored by ScriptRunner. ScriptRunner is a great solution to centrally manage PowerShell Scripts and standardize and automate IT tasks via a Graphical User Interface for helpdesk or end-users. Check it out on scriptrunner.com
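One of the show links above covers calculating composite SLAs, and the arithmetic is simple enough to sketch: serial dependencies multiply, and redundant deployments combine as one minus the product of failure probabilities. The 99.95%/99.99% figures below are example numbers, not a quote of any provider's current SLAs:

```python
def serial_sla(*slas):
    """Composite SLA of components that must all be up: multiply."""
    composite = 1.0
    for s in slas:
        composite *= s
    return composite

def redundant_sla(sla, copies):
    """Composite SLA of independent redundant deployments:
    the system is down only if every copy is down."""
    return 1 - (1 - sla) ** copies

# Example: a web tier at 99.95% in front of a database at 99.99%.
stack = serial_sla(0.9995, 0.9999)
print(f"{stack:.2%}")  # 99.94%

# The same stack deployed independently in two regions
# (ignoring the traffic-routing layer for simplicity):
print(f"{redundant_sla(stack, 2):.4%}")
```

Note the counterintuitive result the episode format usually highlights: chaining services always lowers the composite SLA below the weakest link, while redundancy raises it.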
Every journey has a moment of maturity, when processes are stable, teams are confident and comfortable in how to operate them, and results are, in principle, consistent. The cloud journey is no different. Over the years, we have observed distinct moments along this public cloud adoption journey. Assessment, lift-and-shift, […]
(00:00) - Intro and catching up.
(05:23) - Show content starts.

Show links:
- WAF: Operational Excellence (Microsoft Docs)

SPONSOR
This episode is sponsored by ScriptRunner. ScriptRunner is a great solution to centrally manage PowerShell Scripts and standardize and automate IT tasks via a Graphical User Interface for helpdesk or end-users. Check it out on scriptrunner.com
This week's episode of Por Dentro da Cloud is about how Darede implements AWS best practices internally and for its clients. For this chat we welcomed the #cloudspecialists Ivan Assolant Vencato, Robson Assunção Lomba and Sandro Henrique Lacerda, joined by the podcast's new host, Rafael Guidastri! Check it out! It's unmissable! Link to the AWS Sustainability Report - https://sustainability.aboutamazon.com/
(00:00) - Intro and catching up.
(05:54) - Show content starts.

Show links:
- Performance Efficiency in WAF (Microsoft Docs)

SPONSOR
This episode is sponsored by ScriptRunner. ScriptRunner is a great solution to centrally manage PowerShell Scripts and standardize and automate IT tasks via a Graphical User Interface for helpdesk or end-users. Check it out on scriptrunner.com
smartsoulacademy.com Zen Cryar DeBrucke's Smart Soul Academy: Soul Guidance for Smart People. Zen Cryar DeBrucke Has Codified the Human Internal Guidance System and Architected a Flawless Method for Preventing, Detecting and Correcting the Thoughts That Create Stress and Short-Circuit Your Bliss! In Her Academy, You Learn How to Use Your Body to Discern and Act on "Opening" and "Closing" Thoughts That Put You Into the Flow of an Unlimited and Abundant Life. Member of the Transformational Leadership Council… Over the course of 11 years from 1987 to 1998, Zen Cryar DeBrucke, formerly a high school dropout, built a tech company of 45 employees in the Bay Area that was skyrocketing, doing business with Fortune 500 companies and global conglomerates. And then came the dot-com bust. And while all of her friends, colleagues and associates were frantic as their businesses collapsed under them, Zen stayed, well, Zen… Her company was cratering too, but she was able to reconfigure her thinking to easily stay peaceful, even happy, sanguine. They all wanted to know why… It's because she had discovered something so profound -- and perfected it over the years to the point that this discovery drove her company's success! It was how to navigate all her thoughts, emotions and actions using an Internal Guidance System (IGS) -- bodily clues that enabled her to identify and shift the beliefs and old programs that were false, disempowering and limiting. The "closings," as she calls them, are signals that your thinking is off base. By tapping into the consistent ability to "feel into" and refocus on the "opening" thoughts, she could find the thinking that easily attracted the synchronicities and -- as she said -- "the magic" that fueled her massive success and joy! Even more importantly, she discovered that these openings were reprogramming the neural pathways of her mind.
That stress, worry, fear, overwhelm, guilt, procrastination, frustration, irritation, doubt and misery are simply signals to pivot your thoughts. So that by using one's IGS you are creating a brain that is getting healthier and healthier daily. 23 years later, she experiences openings that lead her to more and more happiness and success. Here's the best part… anyone can learn to live completely stress free, too. Zen Cryar DeBrucke, a member of the illustrious Transformational Leadership Council, has worked with over 40,000 people to teach them how to listen to, respond to and honor their IGS, including training 4,000 master students. And now her online Smart Soul Academy encapsulates 30 years of teachings, resulting in 16 different trainings, in one accountability system that goes deep into the simple, short, yet profound life-changing practices that enable you to master the use of the IGS. This inexpensive membership academy is a smash worldwide, making this information accessible to millions. What have people experienced under Zen's guidance? Broken families reuniting, relationships mended, huge fears overcome, trauma released, addictions left behind, body transformations. One woman went from abhorring the thought of appearing on a dance floor to becoming a highly successful multi-titled national championship ballroom dancer!
This week we're talking about Well-Architected! Renzo Petri and Luciano Beja tell us what the Well-Architected review process was like at Natura. What was the challenge? How did it help the teams' culture? Did it improve the user experience? What are the results now in Natura's day-to-day? All these questions and many more about the process, in this edition!
It's a fact: as companies advance in their journeys toward public cloud platforms, so grows the need to invest time and money in aspects that, in the traditional on-premises colocation models of a few years ago, either existed at a smaller scale or, in most cases, […]
AWS Well-Architected Pillars - our favourites! Over the last six episodes, we have gone over the Well-Architected Framework from AWS. As a recap, we think this is a fantastic framework. It's about your workloads. Like a physical building, if the foundation is not solid, it may cause structural problems that undermine the integrity and function of the building. So you need to look at all six pillars for your workloads. And that's what you do to effectively produce sound and stable systems.

- Operational excellence: the ability to run and monitor systems to deliver business value, and continually improve support, processes and procedures.
- Security: the ability to protect information, systems and assets, while delivering business value through risk assessments and mitigation strategies.
- Reliability: the ability of systems to recover from infrastructure or service failures, acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
- Performance efficiency: the ability to use compute resources efficiently to meet system requirements, and to maintain that efficiency as demand changes.
- Cost optimization: the ability to avoid or eliminate unneeded cost.
- Sustainability: guidance on how AWS can help you reduce your footprint, and best practices to improve the sustainability of your workloads.

Mike's favourite is operational excellence. I'm a big fan of continuous improvement and getting yourself into a sustainable way of working. How do you learn from failure? How do you react to certain things? How do you have visibility of everything around you? If you can assemble that apparatus and those behaviours, then you can begin to eat into the other pillars. For example, operational excellence gets into how to observe whether something's working in production, or failing in production. If something's failing in production, how do you deal with it? Do you have run books? Do you have playbooks?
What does your playbook say about this scenario? It's fundamental and core. It's where I start. If I'm going into a new team or area, I'll always start with operational excellence. This one is very consistent across all squads and all parts of the organisation. That's probably my favourite, because I know it so well and I rely on it. Mark's favourite is the sustainability pillar. I think if you have all other things in place, the sustainability pillar will really drive you to that next level. If you're trying to deliver a sustainable solution, you can't do that without having a good handle on the other five pillars. So I think sustainability, and the questions within that pillar, are a forcing function for the good practices, processes and architectural choices that the other pillars contain. With our serverless-first mindset and approach, I think it lends itself well to the sustainability pillar. I think sustainability is probably my favourite now. Also, we want to make sure that we leave the world a better place than we found it. So if we don't deliver sustainable workloads (especially with the exponential growth of compute devices in the digital era), it's not going to be good for the long-term health of the planet and for the people on earth. Dave likes security, reliability and cost optimization. The reason why I like those three is because they're things that a different team usually does. If a team thinks they're really good, but then completes one of those pillars, they realise there's a bunch of stuff they've never thought about but that actually is their responsibility. The most shocking one is probably cost optimization. Most teams don't really think about cost. There's usually an IT manager somewhere who does it. It's magic and it happens in the background. But when you start asking teams about how they monitor or control their cost and optimise for cost, it spins their head.
I like the shock factor of that, and also the fact that it's about real money. If you make a tweak, you can actually save your organisation money. It's green dollars and not pretend money. So I always enjoy when teams are connected back to reality. I think that's interesting. Serverless Craic from The Serverless Edge theserverlessedge.com @ServerlessEdge
One of the most important Civil Rights Leaders in the 20th century, behind perhaps only the giants of the movement such as Martin Luther King Jr., W.E.B. Du Bois, or Booker T. Washington, was Walter Francis White, a Black man who led two lives: one as a leader of the NAACP and the Harlem Renaissance, and the other as a white journalist who investigated lynching crimes in the Deep South. Although White was the most powerful political Black figure in America during the 1930s and 40s, his full story has never been told until now, due to a scandal that happened at the end of his life. I'm joined today by A.J. Baime, author of White Lies: The Double Life of Walter F. White and America's Darkest Secret. We discuss…

• How Walter White was born mixed race with very fair skin and straight hair, which allowed him to "pass" as a white man and investigate 41 lynchings and 8 race riots between 1918 and 1931. As the second generation of the Ku Klux Klan incited violence across the country, White risked his life to report on the Red Summer of 1919, the Tulsa Massacre of 1921, the Marion lynchings of 1930, and more. His reports drew national attention and fueled the beginnings of the civil rights movement.
• White's rise in the NAACP to chief executive – as leader of the NAACP, he had full access to the Oval Offices of FDR and Harry Truman, and was arguably the most powerful force in the historic realignment of Black political power from the Republican to the Democratic party. He also made Black voting rights a priority of the NAACP, a fight that continues to this day.
• How White helped found the Harlem Renaissance as a famed novelist and Harlem celebrity – he hosted apartment parties where Black and white audiences alike were introduced to Paul Robeson's singing, Langston Hughes' verse, and George Gershwin's Rhapsody in Blue.
• Why White's full story has never been told until now, in part due to his controversial decision to divorce his Black wife and marry a white woman, which shattered his reputation as a Black civil rights leader.
We're continuing our conversation on the Well-Architected pillars, looking at which is our favourite pillar. Today we're talking about sustainability. What's nice about sustainability is that it rolls up a lot of good practices and it's a very simple measure. It's very hard to measure carbon, but very simple to understand when someone like AWS measures it for you. There's a list of best practices that are broken down into sections. So let's list them out: Region Selection, User Behaviour Patterns, Software and Architecture Patterns, Data Patterns, Hardware Patterns, and Development and Deployment Process.

Region Selection is quite straightforward. Some regions are supplied with greener energy than others. Some regions are using non-sustainable resources, depending on where you are in the world. If you don't have massive latency requirements, or a real need for super fast, low latency, then you're probably best putting it into a more sustainable region. So where you put your workloads can have a sustainability impact.

User Behaviour Patterns is about using assets in an elastic way with the latest technology. It will be more sustainable, efficient and cheaper on modern cloud. If you go down the legacy cloud route, and treat the public cloud like a data centre, then you're not going to be very sustainable, you're not going to be very cheap, and you're not going to be very efficient. It's about how you align your SLAs with your sustainability goals. Making sure workloads and solutions on the cloud are aligned to SLAs is what we're going to be concerned with over the next number of years.

Software and Architecture Patterns is about keeping your code base and your architecture really efficient, through things such as refactoring, optimization and more effective data access. It's good practice as it ties back into efficient design. When you work in enterprise spaces, you do question the value of older business products that are running in the background.
You've got to constantly assess whether it is worth the compute. There's an interesting question on optimising the impact on customer devices and equipment. If you have a really inefficient client-side app, with a lot of data processing on the device that it doesn't need, you could extend the lifetime of that device by having a more effective and efficient client-side app or web app or mobile app. I always think about IDEs, and the bigger IDEs in terms of auto-completion and indexing, because they get very warm very quickly! I've seen a lot of IDEs moving onto the cloud: Cloud9 and VS Code. And you're thinking that all of that should be done in the cloud too. A very thin client for all your needs. And everything is done in the cloud, server side, from your IDE to everything else.

Data Patterns: there's an awful lot of waste with data flying around the internet. There's a lot of good practice here. Data can be quite toxic for various reasons, from privacy breaches and security points of view. You should have a good handle on this. Your data classification is critical. If you don't extract value from it, get rid of it, or it's going to be unsustainable. Everything's becoming more data-centric, and the amount of compute that goes into chomping data is 90% of what IT does. I'd love to see how much electricity or energy is used on processing data. I am keen to see how organisations approach this one.

Hardware Patterns is about right-sizing our stuff correctly. We've all been in teams where the question asked is: 'what size box do you need?' And the answer back is: 'the biggest one humanly possible!'. It's a natural reaction but you don't need that. This is where a serverless-first mindset and approach really kicks up a gear. You don't even have to concern yourself with a lot of these questions. It automatically scales up and down appropriately. We don't have to worry about picking hardware or instance sizes ahead of time.
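The Region Selection trade-off described earlier can be sketched as a tiny decision function: pick the lowest-carbon region that still meets your latency budget. The region names are real, but the grid-intensity and latency numbers below are invented for illustration, not measured figures:

```python
# Hypothetical figures: (region, grid gCO2e/kWh, round-trip ms to our users).
REGIONS = [
    ("eu-north-1", 30, 120),
    ("eu-west-1", 300, 95),
    ("us-east-1", 400, 15),
]

def greenest_region(regions, max_latency_ms):
    """Among regions inside the latency budget, pick the lowest-carbon one."""
    candidates = [r for r in regions if r[2] <= max_latency_ms]
    if not candidates:
        raise ValueError("no region meets the latency budget")
    return min(candidates, key=lambda r: r[1])[0]

print(greenest_region(REGIONS, 150))  # eu-north-1: budget is loose, so go green
print(greenest_region(REGIONS, 50))   # us-east-1: the latency budget decides
```

The sketch captures the point made above: when you don't have hard low-latency needs, the sustainability choice is essentially free.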
Development and Deployment Process: how do you increase utilisation of build environments? We see this quite a lot, where environments and assets sprawl for no real benefit. So again, it's all about being smart about how you set up your clients, how you set up your pipelines and how you set up your environments, to make sure they're actually delivering value. And they're not just there because that's the way we have always done it. The question here is: 'how do you adopt methods that can rapidly introduce sustainability improvements?'. If you're on a serverless spectrum, the cloud providers are working for you and they're introducing new capabilities. Serverless Craic from The Serverless Edge theserverlessedge.com @ServerlessEdge
(00:00) - Intro and catching up.
(06:52) - Show content starts.

Show links:
- Jussi's cake
- Azure Well-architected Framework (Microsoft Docs)
- WAF assessments (Microsoft Docs)

SPONSOR
This episode is sponsored by ScriptRunner. ScriptRunner is a great solution to centrally manage PowerShell Scripts and standardize and automate IT tasks via a Graphical User Interface for helpdesk or end-users. Check it out on scriptrunner.com
In this episode we're continuing our series on the Well-Architected pillars. Today, we're going to talk about the performance pillar, which is called performance efficiency. Each of the pillars of Well-Architected usually has around 10 questions. This one has eight. And it's got four sections: Selection, Review, Monitoring, and Trade-offs. It is really about the performance efficiency of your whole system.

There are five questions about selection. The kicker question is the first one: 'how do you select the best performing architecture?'. You should go really hard and deep to make sure you understand the problem you're trying to solve for the users that are going to use the system. What are their needs? Once you have that to hand, do something like domain-driven design to break it up a little bit and make sure you have good boundaries and domains established. When you have all that, you're well informed. At the start of a project, there's always pressure to get something working. But you need to pause at the start and figure that out. The idea of the mental model of the system is really important. Can you explain to everyone in your company what it is? Is it X, Y, or Z?

Evolution is critical. It might be the best architecture to meet the needs right now, but is there scope, capacity or room for it to evolve to meet unexpected future needs as well? I want to move fast. But I reserve the right, at some point, as we scale up or the system evolves, to pivot and change reasonably quickly. This is another factor with serverless. Because it's event driven, you're forced to use event-driven style architectures, so it lends itself to that sort of evolution. You can swap things out later on. If you need a container, a SaaS product or an external vendor, it's pluggable. But there's a bunch of non-functional requirements that need to be right. This is where you get into the idea of: is this a commodity component?
Or is it something that's mission critical to your business, or a piece of IP? Do you need to build it? Or can you just rent it? Wardley mapping helps you to think about whether or not to build. For example, we need global storage, so let's try and build that. The answer is no! Just use S3. In summary: what's the managed service I can leverage? What's the serverless capability I can leverage? If it doesn't meet the needs of your use case, you can fall back to something that's further back on the serverless spectrum. That applies to compute, storage, databases and networks.

Serverless is not standing still; it is improving over the years. We've seen cold starts reduce, and we are seeing more connectivity across managed services, as opposed to going directly through Lambdas. You get those benefits without having to do anything. That's another benefit to consider when deciding to take a serverless approach with your architecture. I think one of the best things with a serverless approach to performance is that the cloud provider is constantly working at improving performance efficiency, reducing costs, speeding up and adding more horsepower to your compute. By choosing smartly with your architecture, you get a free underlying platform team that is constantly working on improving your performance. And you can just take advantage of it without having to worry. You can leverage that performance improvement.

Moving onto the next section, and relating to what you said: how do you constantly review your architecture to take advantage of new releases? Cloud providers are constantly innovating and releasing stuff every week, so you want to be in a position where you can add new stuff quickly without breaking the whole architecture. You need to operate a 'two-way door', as Amazon call it, where you go in, do something and then get back out again. You don't want a one-way door where you get trapped. EventBridge is a good example.
For a long time we've been using an SNS-SQS fan-out type approach to events. Then EventBridge was released. The team was trying to get latency reduced, constantly looking at that, and realised that when it got to a certain level, we'd make the cutover. But it's a good example of how to evolve and plan for that.

The next section is Monitoring, and how to monitor your resources for performance, which is fairly straightforward. And the last one is Trade-offs, and how to use trade-offs to improve performance. A great example is Lambda Power Tuning, where you can tune your function's memory allocation (CPU scales with it) to get that nice balance between cost and performance. Performance efficiency quickly becomes a value-based development conversation. If the business isn't bringing in loads of money, you don't need all this horsepower under the hood. It doesn't need to be that efficient or effective from a performance point of view. Serverless Craic from The Serverless Edge theserverlessedge.com @ServerlessEdge
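The trade-off Lambda Power Tuning explores can be sketched with made-up measurements. Lambda bills per GB-second, so more memory only costs more if duration stays flat; in practice duration usually drops as CPU scales with memory. The durations below are hypothetical, and the per-GB-second rate is the commonly quoted one (check current pricing):

```python
PRICE_PER_GB_SECOND = 0.0000166667  # commonly quoted rate; check current pricing

# Hypothetical measured durations for one function at each memory size.
measurements = {  # memory MB -> duration ms
    128: 2400,
    512: 620,
    1024: 310,
    2048: 290,
}

def invocation_cost(memory_mb, duration_ms, price=PRICE_PER_GB_SECOND):
    """Cost of a single invocation: GB allocated x seconds run x rate."""
    return (memory_mb / 1024) * (duration_ms / 1000) * price

for mem, dur in measurements.items():
    print(f"{mem:>5} MB: {dur:>5} ms  ${invocation_cost(mem, dur):.10f}")

# With these (invented) numbers, 1024 MB runs roughly 8x faster than
# 128 MB for about the same cost per invocation: the kind of sweet spot
# the tuner surfaces, where 2048 MB buys almost no speed for double the cost.
cheapest = min(measurements, key=lambda m: invocation_cost(m, measurements[m]))
```

This is the "trade-offs" question in miniature: latency, cost and (indirectly) energy all hang off one memory knob.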
Promise Phelon is one of Silicon Valley's 0.2%. She is one of only FIVE African American female entrepreneurs who have bought, grown, and sold companies for multiple millions of dollars. Now she is an investor and mentor with a great tale to tell.

Why you should listen:
- How to choose a mentor: go for the spark of experience with a specific skill set
- Promise's three-part journaling practice shared by extraordinary entrepreneurs worldwide: morning pages, afternoon audio notes, and a monthly 6-hour Think Time session
- The top skill you need to succeed in business: building anti-fragile relationships

We explore:
- The culture of mentorship in Silicon Valley
- Silicon Valley's addiction to disruption and the opportunity this brings for 'underdogs'
- Why hustling for self-awareness is just as important as hustling for skill development
- What's changing in leadership: from the messiah to the pilgrimage
- What she learned accompanying Tony Robbins and 50 top global 'Lions' around the world for a year
Do you know what cloud native apps are? Well, we don’t really either, but today we’re on a mission to find out! This episode is an exciting one, where we bring all of our different understandings of what cloud native apps are to the table. The topic is so interesting and diverse and can be interpreted in a myriad of ways. The term ‘cloud native app’ is not very concrete, which allows for this open interpretation. We begin by discussing what we understand cloud native apps to be. We see that while we all have similar definitions, there are still many differences in how we interpret this term. These different interpretations unlock some other important questions that we also delve into. Tied into cloud native apps is another topic we cover today – monoliths. This is a term that is used frequently but not very well understood and defined. We unpack some of the pros and cons of monoliths as well as the differences between monoliths and microservices. Finally, we discuss some principles of cloud native apps and how having these umbrella terms can be useful in defining whether an app is a cloud native one or not. These are complex ideas and we are only at the tip of the iceberg. We hope you join us on this journey as we dive into cloud native apps! Follow us: https://twitter.com/thepodlets Website: https://thepodlets.io Feedback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues Hosts: Carlisia Campos Bryan Liles Josh Rosso Nicholas Lane Key Points From This Episode: What cloud native applications mean to Carlisia, Bryan, Josh, and Nicholas. Portability is a big factor of cloud native apps. Cloud native applications can modify their infrastructure needs through API calls. Cloud native applications can work well with continuous delivery/deployment systems. A component of cloud native applications is that they can modify the cloud. An application should be thought of as multiple processes that interact and link together.
It is possible resources will begin to be requested on-demand in cloud native apps. An explanation of the commonly used term ‘monolith.’ Even as recently as five years ago, monoliths were still commonly used. The differences between a microservice approach and a monolith approach. The microservice approach requires thinking about the interface at the start, making it harder. Some of the instances when using a monolith is the logical choice for an app. A major problem with monoliths is that as functionality grows, so too does complexity. Some other benefits and disadvantages of monolith apps. In the long run, separating apps into microservices gives a greater range of flexibility. A monolith can be a cloud native application as well. Clarification on why Bryan uses the term ‘microservices’ rather than cloud native. ‘Cloud native’ is an umbrella term and a set of principles rather than a strict definition. If it can run confidently on someone else’s computer, it is likely a cloud native application. Applying cloud native principles when building an app from scratch makes it simpler. It is difficult to adapt a monolith app into one which uses cloud native principles. The applications which could never be adapted to use cloud native principles. A checklist of the key attributes of cloud native applications. Cloud native principles are flexible and can be adapted to the context. It is the responsibility of thought leaders to bring cloud native thinking into the mainstream. Kubernetes has the potential to allow us to see our data centers differently.
Quotes: “An application could be made up of multiple processes.” — @joshrosso [0:14:43] “A monolith is simply an application or a single process that is running both the UI, the front-end code and the code that fetches the state from a data store, whether that be disk or database.” — @joshrosso [0:16:36] “Separating your app is actually smarter in the long run because what it gives you is the flexibility to mix and match.” — @bryanl [0:22:10] “A cloud native application isn’t a thing. It is a set of principles that you can use to guide yourself to running apps in cloud environments.” — @bryanl [0:26:13] “All of these things that we are talking about sound daunting. But it is better that we can have these conversations and talk about things that don’t work rather than not knowing what to talk about in general.” — @bryanl [0:39:30] Links Mentioned in Today’s Episode: Red Hat — https://www.redhat.com/en IBM — https://www.ibm.com/ VMware — https://www.vmware.com/ The New Stack — https://thenewstack.io/ 10 Key Attributes of Cloud-Native Applications — https://thenewstack.io/10-key-attributes-of-cloud-native-applications/ Kubernetes — https://kubernetes.io/ Linux — https://www.linux.org/ Transcript: EPISODE 16 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.4] NL: Hello and welcome back, my name is Nicholas Lane. This time, we’ll be diving into what it’s all about: cloud native applications. Joining me this week are Bryan Liles. [0:00:53.2] BL: Hi. [0:00:54.3] NL: Carlisia Campos. [0:00:55.6] CC: Hi everybody, glad to be here.
[0:00:57.6] NL: And Josh Rosso. [0:00:58.6] JR: Hey everyone. [0:01:00.0] NL: How’s it going everyone? [0:01:01.3] JR: It’s been a great week so far. I’m just happy that I have a good job and am able to do things that make me feel whole. [0:01:08.8] NL: That’s awesome, wow. [0:01:10.0] BL: Yeah, I’ve been having a good week as well and doing a bit of fun stuff after work. Like, my soon-to-be in-laws are in town, so I’ve been visiting with them and that’s been really fun. Cloud native applications, what does that mean to you all? Because I think that’s an interesting topic. [0:01:25.0] CC: Definitely not a monolith. I think if you have a monolith running on the cloud, even if you started it out that way, I wouldn’t say it’s a cloud native app. I always think of containerized applications, and if you’re using a container system, then it’s usually because you want to have smaller systems and more of them, that sort of thing. Also, when I think of cloud native applications, I think of apps where the whole strategy of development and the whole strategy of deploying and shipping has been designed from scratch for a cloud system. [0:02:05.6] JR: I think of it as applications that were designed to run in containers. And I also think about things like services — micro services or macro services or whatever you want to call them — where we have multiple applications that are made to talk not just with themselves but with other apps, and they deliver a bigger functionality through their coordination. Then when I think of cloud native apps, I also think of apps that we are moving to the cloud. That’s a big topic in itself, but applications that we run in the cloud — all of our new fancy services and our SaaS offerings — a lot of these are cloud native apps. But then on the other side, I think about applications that are cloud native being tolerant to failure and, on the other hand, able to actually tell us about the state of their health and who they’re talking to.
[0:02:54.8] CC: Gets very complicated. [0:02:56.6] BL: Yeah. That was a side of it that I hadn’t thought about. [0:03:00.7] JR: Actually, the things that for me always come to mind are obviously portability, right? Wherever you're running this application, it can run somewhat consistently, be it on different clouds or even — a lot of people, you know, are running their own cloud, which is basically their on-prem cloud, right? That application being able to move across any of those places, and oftentimes containerization is one of the mechanisms we use to do that, right? Which is what we all stated. Then I guess the other thing too is, this whole cloud ecosystem, be it a cloud provider or your own personal one, is oftentimes very API driven, right? So, the applications maybe being able to take advantage of some of those APIs should they need to, be it for scaling purposes or otherwise. It’s a really interesting model. [0:03:43.2] NL: It’s interesting, for me, this question, because so far everyone is giving similar but also different answers. And I’m going to give a similar answer: to me, a cloud native application is a lot of the things we said, like portable. I think of micro services when I think of a cloud native application. But it’s also an application that can modify the infrastructure it needs via API calls, right? If your application needs a service or needs a networking connection, the application itself can manifest that via a cloud offering, right? That’s what I always thought of as a cloud native application. If you need, like, a database, the application can reach out to, like, AWS RDS and spin up the database. That was an aspect I always found very fascinating with cloud native applications. It isn’t necessarily the definition, but for me, that’s the part I was really focused on, which I think is quite interesting.
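Nicholas's idea of an app provisioning its own database through an API call can be sketched in a few lines. This is a minimal illustration, not a real deployment: `FakeRds` is a stand-in for a real SDK client (for instance boto3's RDS client, whose `create_db_instance` call takes parameters like these), so the sketch can run without a cloud account.

```python
from dataclasses import dataclass, field

@dataclass
class FakeRds:
    """Stand-in for a real RDS client; records calls instead of hitting AWS."""
    calls: list = field(default_factory=list)

    def create_db_instance(self, **params):
        self.calls.append(params)
        return {"DBInstance": {"DBInstanceIdentifier": params["DBInstanceIdentifier"]}}

def ensure_database(rds, app_name: str):
    """The application expresses its own infrastructure need via an API call."""
    return rds.create_db_instance(
        DBInstanceIdentifier=f"{app_name}-db",
        Engine="postgres",
        DBInstanceClass="db.t3.micro",
        AllocatedStorage=20,
    )

rds = FakeRds()
result = ensure_database(rds, "dogfood")
print(result["DBInstance"]["DBInstanceIdentifier"])  # prints dogfood-db
```

With a real client you would pass, say, `boto3.client("rds")` instead of `FakeRds()`; the point of the pattern is that the application itself states what infrastructure it needs rather than waiting on a ticket.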
[0:04:32.9] BL: Also, CI/CD. Cloud native apps are made to work well with our CI, our continuous integration, and our continuous delivery/deployment systems as well. That’s, like, another very important aspect of cloud native applications. We should be able to deploy them to production without typing anything in. It should be some kind of automated process. [0:04:56.4] NL: Yeah, that is for sure. Carlisia, you mentioned something that I think is good for us to talk about a little bit, which is terminology. I keep coming back to that. You mentioned monolithic apps — what are monoliths then? [0:05:09.0] CC: I am so hung up on what you just said, can we table that for a few minutes? You said a cloud native application, for you, is an application that can interact with the infrastructure — and maybe an example is the database. I wonder if you have an example or if you could expand on that. I want to see if everybody agrees with that; I’m not clear on what that even is. Because as a developer, which is always my point of view, it’s a lot of responsibility for the application to have. And for example, when I think cloud native — and I’m thinking now, maybe I’m going off on a tangent here — we have Kubernetes. Isn’t that what Kubernetes is supposed to do, glue it all together? So, the application only needs to know what it needs to do, but spinning up an RDS system is not one of the things it would need to do? [0:05:57.3] BL: Sure, actually, I was going to use Kubernetes as my example of a cloud native application. Because Kubernetes is what it is, an app, right? It can modify the cloud that it’s running in. And so, if you have Kubernetes running in AWS, it can create ELBs, elastic load balancers. It can create new nodes. It can create new databases if you need, as I mentioned. Kubernetes itself is my example of a cloud native application. I should say that that’s a good callout.
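Bryan's "deploy to production without typing anything in" can be pictured as a pipeline definition. A hypothetical GitHub Actions workflow — the registry, image name, and deployment target below are placeholders — might look like:

```yaml
# Hypothetical workflow: every push to main builds, pushes, and rolls out
# a new image with no human typing involved.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t registry.example.com/dogfood:${{ github.sha }} .
      - run: docker push registry.example.com/dogfood:${{ github.sha }}
      - run: kubectl set image deployment/dogfood app=registry.example.com/dogfood:${{ github.sha }}
```

A real pipeline would add tests, credentials, and environment promotion, but the shape is the same: the commit itself is the trigger, and the rollout is fully automated.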
My example of what a cloud native application is isn’t necessarily, like, a rule that all cloud native applications have to modify the cloud in which they exist. It’s more that they can modify it. That is a component of a cloud native application, Kubernetes being an example there. I don’t know, I guess things like operators inside of Kubernetes — like the Rook operator will create storage for you; when you use Rook to create a Ceph cluster, it will also spin up, like, the ELBs behind it, or at least I believe it does. Or that kind of functionality. [0:06:57.2] CC: I can see what you're saying, because for example, if I choose to use the storage inside something like Kubernetes, then it would be required of my app to call an SDK and connect to that storage somehow. So, in that sense, I guess, you are using your app — someone correct me if I’m wrong, but that’s how the connection is created, right? You just request it. But you’re not necessarily saying, I want this specific thing. You just say, I want this sort of thing — like, this kind of storage — and then you define that elsewhere. So, your application doesn’t need to know details, but it definitely needs to say, I need this. I’m talking about, again, when your data storage is running on top of Kubernetes and not outside of it. [0:07:46.4] BL: Yeah. [0:07:47.3] NL: That brings up an interesting part of this whole term, cloud native app. Because, like everything else in this space, our terms are not super concrete. And an interesting piece about this is: does an application have to map one-to-one with a running process? What is an application? [0:08:06.1] NL: That is interesting, because you could ask: is a serverless app — or serverless in general, whatever serverless really is, I guess we can get into that in another episode — are those cloud native applications? They’re not just running anywhere.
[0:08:19.8] JR: I will punt on that because I know where my boundaries are, and that is definitely not within my boundaries. But the reason I bring this up is because a little while ago, probably a year ago, in a Kubernetes [inaudible 0:08:32] apps, we actually had a conversation about what an application was. And the consensus from the community and from the group members was that, actually, an application could be made up of multiple processes. So, let’s say you were building this large SaaS service, and because you’re selling dog food online, your application could be your dog food application. But you have inventory management. You have a front end, maybe you have an ad service, you have a shipping manager and things like that. A sales tax calculator. Are those all applications? Or is it one application? The piece about cloud native applications here is that what we found in Kubernetes is that the way we’re thinking about applications is: an application is multiple processes that can be linked together, and we can tell the whole story of how all those interact and work. Just something else, another way to think about this. [0:09:23.5] NL: Yeah, that is interesting, I never really considered that before, but it makes a lot of sense. Particularly with the rise of things like gRPC and the ability to send dedicated, well-codified messages to different processes. That gives rise to things like this multi-process application. [0:09:41.8] BL: Right. But going back to your idea around cloud native applications being able to commandeer the resources that they’re needing — that’s something that we do see. We see it within Kubernetes right now. I’ll give you, above and beyond the example that you gave: whenever you create a StatefulSet.
In Kubernetes, the controller behind StatefulSets actually goes and provisions a PVC for you. You requested a resource, and whenever you change the number of instances from one to, like, five, guess what? You get four more PVCs. Just think about it: that is actually something that is happening; it’s a little transparent to people. But I can see, to the point of, we’re just requesting a new resource — and if we are using cloud services to watch our other things, or our cloud native services to watch our applications, I could see us asking for this on demand, or even a service like a database or some other type of queuing thing on demand. [0:10:39.2] CC: When I hear things like this, I think, “Wow, it sounds very complicated.” But then I start to think about it and I think it’s really neat, because it is complicated, but the alternative would have been way more complicated. I mean, it’s really hard to go into details in a one-hour episode. We can’t cover the how-it’s-done. Conceptually, we are sort of throwing a lot of words out there to conceptualize it, but we can also try to talk about, in a conceptual way, how it is done in a non-cloud native world. [0:11:15.3] NL: Yeah, I kind of want to get back to the question I posed before: what is a monolithic app, what is a non-cloud native app? And not all non-cloud native apps are monoliths, but this is actually something that I’ve heard a lot, and I’ll be honest, I have an idea of what a monolithic app is, but I think I don’t have a very good grasp of it. We talked a bit about what a cloud native app is — so what is a non-cloud native app, or what came before cloud native applications? What is a monolith? [0:11:39.8] CC: I’m personally not a big fan of monoliths. Of course, I’ve worked with them, but once micro services started becoming common, I started developing in that mode.
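Bryan's StatefulSet/PVC example corresponds to the `volumeClaimTemplates` field in a manifest: Kubernetes creates one PersistentVolumeClaim per replica, so scaling `replicas` from 1 to 5 produces four more PVCs automatically. Names, image, and sizes below are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 1            # scale to 5 and four more PVCs are created
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:  # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```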
I am much more of a fan of breaking things down, for so many different reasons. It is a controversial topic for sure. But to go back to your question: with a monolith, basically, you have an app — it sort of goes back to what Bryan was saying: what is an app? Think of an app as, like, one thing. Amazon is an app, right? It’s an app that we use to buy things as consumers. And, you know, the other part is the cloud. But let’s look at it as an app that we use to buy things as consumers. We know it’s broken down into so many different services. There is the checkout service, there is the cart service — I mean, I’m imagining; I can imagine, though, the small services that compose that one Amazon app. If it was a monolith, those services — those things that are different systems talking together — the whole thing would be in one code base. It would reside in the same code base and it would be deployed together. It would be shipped together. If you make a change in one place and you need to deploy that, you have to deploy the whole thing together. You might have teams that are working on separate aspects, but they’re working against the same code base. And maybe because of that, it will lend itself to teams not really specializing on separate aspects, because everything is together, so you might make one change that impacts another place, and then you have to know that part as well. So, there is a lot less specialization and separation of teams as well. [0:13:32.3] BL: Maybe to give an example from my experience — and I think it aligns with a lot of the details Carlisia just went over — even going five years back, my experience at least was, we’d write up a ticket and we’d ask somebody to make a server space for us, maybe run [inaudible 0:13:44] on it, right? We’d write all this Java code and we’d package it into these things that run on a JVM somewhere, right?
We would deploy this whole big application, you know? Let’s call it that dog food app, right? It would maybe even have, like, a state layer, and have the web server layer — maybe have all these different pieces all running together in this big code base, as Carlisia put it. And we’d deploy it, you know; that process took a lot of time and was very time-consuming, especially when we needed to change stuff. We didn’t have all these modern APIs and these kinds of decoupled applications, right? But then, over time, you know, we started learning more and more about the notion of isolating each of these pieces or layers, so that we could have the web server isolated in its own container or unit, and then the state layer and the other layers also isolated — you know, the micro service approach, more or less. And then we were able to scale independently, and that was really awesome, so we saw a lot of the gains in that respect. We basically moved our complexity to other areas: we took the complexity that used to all happen in the same memory space and we moved a lot of it into the network, with these new protocols that the different services use to talk to one another. It’s been an interesting thing, kind of seeing the monolith approach and the micro service approach, and how a lot of these micro service apps are, in my opinion, a lot more cloud native aligned, if that makes sense — just seeing how the complexity shifts around in that regard. [0:15:05.8] CC: Let me just say one more thing, because it’s actually the biggest aspect of micro services, the one I like the most in comparison — you know, the aspect of monoliths that I, well, I don’t hate it, I appreciate the least, let’s put it that way. It is that, when you have a monolith — because design is hard — it’s so easy to couple different parts of your app with other parts of your app and have coupled code and coupled functionality. When you break this into micro services, that is impossible.
Because you are working with separate code bases, you are forced to think about what your interface is. You’re always thinking about the interface and what people need to consume from you; your interface is the only way into your app, into your system. I really like the aspect that it forces you to think about your API. And people will argue, “Well, you can put the same amount of effort into that if you have a monolith.” Absolutely, but in reality, I don’t see it. And like Josh was saying, it is not a walk in the park, but I’d much rather deal with those issues, those complexities that micro services create, than the challenges of running a big — I’m talking about big monoliths, right? Not something trivial. [0:16:29.8] JR: I will try to distil how I look at monoliths and how it fits into this conversation. A monolith is simply an application — or a single process, in this case — that is running both the UI, the front-end code, and the code that fetches the state from a data store, whether that be disk or database. That is what a monolith is. The reasons people use monoliths are many, but I can actually think of some very good reasons. If you have code reuse — let’s say you have a website with forms and you want to be able to reuse those form libraries, or you have data access and you want to be able to reuse that data access code — a monolith is great. The problem with monoliths is, as functionality becomes larger, complexity becomes larger, and not at the same rate. I’m not going to say that it is linear, but it’s not quite exponential either; maybe it’s log-linear or something like that. But the problem is that at a certain point, you’re going to have so much functionality, you’re not going to be able to put it inside of one process. See Rails.
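Carlisia's point that separate code bases force you to design the interface becomes very concrete in, say, a gRPC setup: the only way into a service is its declared contract. A purely illustrative sketch of such a contract:

```protobuf
syntax = "proto3";

package cart;

// The cart service's entire public surface: other services can only
// interact through these RPCs, never through shared in-process code.
service Cart {
  rpc AddItem (AddItemRequest) returns (AddItemResponse);
}

message AddItemRequest {
  string user_id = 1;
  string sku = 2;
  int32 quantity = 3;
}

message AddItemResponse {
  int32 items_in_cart = 1;
}
```

Everything not declared here is private to the service, which is exactly the forcing function being described.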
Rails is actually a great example of this, where we run into the issues where we put so much application into a Rails source directory and we try to run it, and we basically end up with these huge processes. And we split them up. But what we found is that we could actually split out the front-end code into one process. We could split out the middleware — maybe multiple processes in the middle — and the data access layer into another process, and we could use those; we could actually take advantage of multiple CPU cores or multiple computers. The problem with this is that with splitting this out comes complexity. So, what if you have a [inaudible 0:18:15] — what I’m trying to say here, in a very long way, is that monoliths have their places. As a matter of fact, I still encourage people to start with a monolith. Put everything in one place. Whenever it gets too big, you split it out. But in a cloud native world, because we’re trying to take advantage of containers, we’re trying to take advantage of cores on CPUs, we’re trying to take advantage of multiple computers to do that in the most efficient way, you want to split your application up into smaller pieces, so that your front end, versus your middle layer, versus your data access layer, versus your data layer itself, can run on as many computers and as many cores as possible — therefore spreading the risk and spreading the usage, because everything should be faster. [0:19:00.1] NL: Awesome. That is some great insight into monolithic apps and also the benefits, pros, and cons of them — something I didn’t have before. Because I’ve only ever heard the phrase “monolithic apps” said in hushed tones, or with a swear word directly after it. And so, it’s interesting to hear the concept that each way you deploy your application is complex, but there are different tradeoffs, right? It’s the idea of, “Why wouldn’t you want to turn your monolith into micro services?”
Well, there’s so much more overhead, so much more yak shaving you have to do to get there to take advantage of micro services. That was awesome, thank you so much for that insight. [0:19:39.2] CC: I wanted to reiterate a couple of aspects of what Bryan and Josh said in regards to that. One huge advantage — I mean, your application needs to be substantial enough that you feel like you need to do that, so that you’re going to get some advantage from it. When you hit that point and you do that — breaking into services like Josh and Bryan were saying — you have the ability to increase your capabilities, your processing capabilities, based on the one aspect of the system that needs it. So, if you have something that requires very low processing, you run that service with a certain level of capabilities. And for something like your orders process, your orders micro service, you increase the processing power much more than for some other part. When it comes to running this in the cloud native world, I think this is more an infrastructure aspect, but my understanding is that you can automate all of that. You can determine, “Okay, I have analyzed my requirements based on history and this is what I need. So, I’m going to tell the cloud native infrastructure: this is what I need, and the automation will take care of bringing the system up to that if anything happens.” It is always going to be healing your system in an automated way, and this is something that I don’t think gets talked about enough. We talk about, “Oh, things split up this way and they run this way,” but doing it in an automated way makes all of the difference. [0:21:15.4] NL: Yeah, that makes a lot of sense, actually. So, basically, monolithic apps don’t give us the benefit of automation or automated deployment the way micro services and cloud native applications do, right?
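The automated "bring the system up to what I need" that Carlisia describes is, on Kubernetes, typically expressed as a HorizontalPodAutoscaler. A sketch for a hypothetical orders service (names and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders
spec:
  scaleTargetRef:        # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: orders
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

Only the orders service scales; the low-traffic services around it keep their small footprint, which is the per-aspect scaling advantage being described.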
[0:21:28.2] BL: Yes, and think about this: whenever you have five micro services delivering your application’s functionality, and you need to upgrade the front-end code — the HTML, or whatever generates the HTML — you can actually replace just that piece, and not bring your whole application down. And even better yet, you can replace that piece one at a time, or two at a time, still have the majority of your application running, and maybe your users won’t even know at all. Whereas, let’s say you have a monolith and you are running multiple versions of this monolith: when you take that whole application down, you literally take the whole application down. Not only do you lose front-end capacity, you also lose back-end capacity as well. So, separating your app is actually smarter in the long run, because what it gives you is the flexibility to mix and match, and you could actually scale the front end at a different level than you do the back end. And that is actually super important in [inaudible 0:22:22] land, and actually Python land and .NET land, if you’re writing monoliths: you have to scale at the level of your monolith, and when you scale that, you end up with wasted resources. So smaller micro services, smaller cloud native apps that run in containers, will actually use fewer resources. [0:22:41.4] JR: I have an interesting question for us all. So obviously a lot of cloud native applications usually look like these micro services we’re describing — can a monolith be a cloud native application as well? [0:22:54.4] BL: Yes, it can. [0:22:55.1] JR: Cool. [0:22:55.6] NL: Yeah, I think so. As long as the monolith can be deployed via the mechanisms that we described, like CI/CD, and can take advantage of the cloud, I believe a monolith can be a cloud native application, sure.
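The "replace one or two at a time" behavior Bryan describes is exactly what a Kubernetes Deployment's rolling update strategy encodes. The values below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # take down at most one old pod at a time
      maxSurge: 1         # start at most one extra new pod above the replica count
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: web
          image: registry.example.com/frontend:v2   # bump the tag to roll out
```

Because only the front-end Deployment changes, the other services keep serving throughout the rollout, and the front end can also be scaled independently of the back end.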
[0:23:08.8] CC: I am glad you brought that up, because I was going to bring it up. I hear Bryan using “micro services” and “cloud native apps” interchangeably, and it makes it really hard for me to follow: “Okay, so what is and isn’t a cloud native application, or a cloud native service, or a cloud native monolith?” So, to start this thread with the question that Josh just asked, which also became my question: if I have a monolith app running in a cloud provider, is that a cloud native app? If it is not, what piece of the puzzle needs to exist for that to be considered a cloud native app? And then the follow-up question I am going to throw out there already is: why do we care? What is the big deal if it is or if it isn’t? [0:23:55.1] BL: Wow, okay. Well, let’s see. Let’s unpack this. I have been using “micro service” and “cloud native” interchangeably, probably not to the best effect. But let me clear up something here about cloud native versus micro services. Cloud native is a big term, and it goes further than an application itself. It is not only the application. It is also the environment the application can run in. It is the process that we use to get the application to production. So, monoliths can be cloud native apps. We can run them through CI/CD. They can run in containers. They can take advantage of their environment. We can scale them independently. But if we use micro services instead, this becomes easier, because our surface area is smaller. So, what I want to do is not use the term like that. “Cloud native applications” is an umbrella term, and I will never actually say “cloud native application”; I will always say “a micro service.” And the reason why I will say “micro service” is because it is a much more accurate description of that process that is running. Cloud native applications is more of the umbrella.
[0:25:02.0] JR: It is really interesting, because a lot of the times when we are working with customers, when we go out and introduce them to Kubernetes, we are oftentimes asked, “How do I make my application cloud native?” To what you are talking about, Bryan, and to your question, Carlisia, I feel like a lot of times people are a little bit confused about it, because sometimes they are actually asking us, “How do I break this legacy app into smaller micro services,” right? But sometimes they are actually asking, “How do I make it more cloud native?” And usually our guidance, or the thing that we are working with them on, is exactly that, right? It is getting that application containerized so we can get it portable, whether it is a monolith or a micro service, right? We are containerizing it. We are making it more portable. We are maybe helping them out with health checks, so that the infrastructure environment they are running in can tap into them and know the health of that application — whether that’s to restart it, with Kubernetes as an example. We are going through and helping them understand those principles that I think fall more into the umbrella of cloud native, like you are saying, Bryan, if I am following you correctly, and helping them kind of enhance their application. But it doesn’t necessarily mean splitting it apart, right? It doesn’t mean running it in smaller services. It just means following these more cloud native principles. It is a hard topic, so I am going to continue to say “cloud native,” right? [0:26:10.5] BL: So that is actually a good way of putting it. A cloud native application isn’t a thing. It is a set of principles that you can use to guide yourself to running apps in cloud environments. And it is interesting: when I say cloud environments, I am not even really particularly talking about Kubernetes or any type of scheduler.
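The health checks Josh mentions are usually wired up as liveness and readiness probes in the pod spec; this is how Kubernetes knows when to restart a container or pull it out of rotation. A fragment of a pod spec with illustrative paths and ports:

```yaml
containers:
  - name: app
    image: registry.example.com/dogfood:v1
    ports:
      - containerPort: 8080
    livenessProbe:          # restart the container if this starts failing
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
    readinessProbe:         # stop routing traffic until this succeeds
      httpGet:
        path: /readyz
        port: 8080
      periodSeconds: 5
```

The application only has to expose these endpoints; the infrastructure does the restarting and traffic shifting, which is the cloud native principle being described.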
I am just talking about: we are running apps on other people's computers in the cloud, this is what we should think about, and it goes through those principles. We use CI/CD. Storage most likely will be ephemeral. Actually, you know what? That whole process, that whole virtual machine that we are running on, that is ephemeral too; everything will go away. So, cloud native applications is basically a theory that allows us to be strategic about running applications on other people's computers, where storage and networking and compute may go away. So, we do it this way, and this is how we get our five nines or four nines of uptime, because we can actually do this. [0:27:07.0] NL: That is actually a great point. A cloud native application is one that can confidently run on somebody else's computer. That is a good stake in the ground. [0:27:15.9] BL: I stand behind that and I like the way that you put it. I am going to steal that and say I made it up. [0:27:20.2] NL: Yeah, go ahead. We have been talking about monoliths and cloud native applications. I am curious, since you all are developers, what is your experience writing cloud native applications? [0:27:31.2] JR: I guess for green field projects, where we are starting from scratch and we are kind of building this thing, it is a really pleasant experience, because a lot of things are sort of done for us. We just need to know how to interact with the API or the contract to get the things we need. So that is kind of my blanket statement. I am not trying to say it is easy, I am just saying it has become quite convenient in a lot of respects when adopting these cloud native principles. Like the idea that I have a Dockerfile and I build this container and now I am running this app that I am writing code for all over the place – it's become such a more pleasant experience than, in my experience years and years ago, dropping things into the Tomcat instances running all over the place, right?
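The "five nines or four nines of uptime" Brian mentions is just arithmetic on the availability target; a quick worked sketch of the downtime each target actually permits:

```python
def allowed_downtime_minutes(nines, period_minutes=365 * 24 * 60):
    """Minutes of downtime per period permitted by an 'N nines' availability
    target, e.g. four nines = 99.99% available = 0.01% unavailable."""
    unavailability = 10 ** (-nines)
    return period_minutes * unavailability

# Four nines allows roughly 52.6 minutes of downtime a year;
# five nines squeezes that to roughly 5.3 minutes.
```

That gap is exactly why the design must assume compute, storage, and networking can vanish: at five nines there is no time budget for manual recovery.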
But I guess what's also been interesting is that it's been a bit hard to convert older applications into the cloud native space, right? Because, to the point Carlisia had started with around the idea of all the code being in one place, it is a massive undertaking to understand how some of these older applications work. Again, I am not saying that all older applications are monoliths, but my experience has been that they generally are. Their bigger code bases make it hard to understand how they work and where to modify things without breaking other things, right? When you go and you say, "All right, let's adopt some cloud native principles on this app that has been running on the mainframe for decades," right? That is a pretty hard thing to do. But again, with green field projects I have found it to be pretty convenient. [0:28:51.6] CC: It is actually easy, Josh. You just rewrite it. [0:28:54.0] JR: Totally, yes. That is always a piece of cake. [0:28:56.9] BL: You usually write it in Go and then it is cloud native. That is actually the secret to cloud native apps. You write it in Go, you install it, you deploy it in Kubernetes, mission accomplished, cloud native to you. [0:29:07.8] CC: Anything written in Go is cloud native. We are declaring that here, you heard it here first. [0:29:13.4] JR: That is a great question, though: how do we get there? That is a hard question, and not one that I would just wave a magic set of words over and say that we are there. But what I would say is that as we start thinking of moving applications to cloud native, first we need to identify applications that cannot be updated, and I can actually give you some. Your Windows 2003 applications – and yes, I do know some of you are running 2003 still. Those are not cloud native and they never will be, and the problem is that you won't be able to run them in a containerized environment. Microsoft says stop using 2003; you should stop using it.
Other applications that won't be cloud native are applications that require a certain level of machine or server access. We have been able to abstract GPUs. But if you're working at the IO level – like you are actually looking at IO, or you are looking at hardware interrupts, or anything like that – that application will never be cloud native. Because your application will most likely be running in a shared environment in the cloud. There is no way, first of all, that the hypervisor that is actually running your virtual machine wants to give you that access, when it is being shared with one to 200 other processes on that server. So, applications that want low-level access or have real-time requirements – you don't want to run those in the cloud. They cannot be cloud native. That still means a lot of applications can be. [0:30:44.7] CC: So, I keep thinking: if I own a tech stack, every once in a while I stop and evaluate whether I am squeezing as much tech as I can out of my system. Meaning, am I using the best technology out there to the extent that fits my needs? If I am that kind of person – say I am a decision maker, and even if I am a tech person, I still would not have an entire vision, unless I am one of the architects. And sometimes even the architects don't have an entire vision. I mean, they have to talk to other architects who have a greater vision of the whole system, because systems can be so big. But at any rate, if I am an architect or I own the tech stack one way or another, my question is: is my system a cloud native system? Is my app a cloud native app? I am not even sure that we clarified enough for people to answer that. I mean, it is so complicated. Maybe we did; hopefully we helped a little bit. So basically, this would be my question: how do I know if I am there or not?
Because my next step would be: well, if I am not there, then what am I missing? Let me look into it and see if the cost benefit is worth it. But if I don't know what is missing, what do I look at? How do I evaluate? How do I evaluate whether I am there, and if I am not, what do I need to do? So, we talked about this a little bit on episode one, where we talked about what cloud native is in general, and now we are talking about apps. And so, you know, there should be a checklist of things that a cloud native app should at least have. Like the 12-factor app: what do you need to have to be considered a 12-factor app? We have that checklist for 12-factor apps, and I think meeting it is part of being a microservice, which is part of being a cloud native app. But I think there needs to be more. I just wish we had that – not that we need to come up with that list now, but it is something to think about. Someone should do it, you know? [0:32:57.5] JR: Yeah, it would be cool. [0:32:58.0] CC: Is it reasonable or not to want to have that checklist? [0:33:00.6] BL: So, there is – that checklist exists. I know that Red Hat has one. I know that IBM has one. I would guess VMware has one on one of our web pages. Now the problem is they're all different. What I do – and this is me trying to be fair here – The New Stack basically talks about things that are happening in cloud and tech. If you search for The New Stack and cloud native applications, there is a 10-bullet list. That is what I send to people now. The reason I send that one rather than any vendor's is because a vendor is trying to sell you something. They are trying to sell you their vision of cloud native where they excel, and they will give you products that help you with that part, like CI/CD: "oh, we have a product for that." I like The New Stack list, and actually, I Googled it while you were talking, Carlisia, because I wanted to bring it up.
So, I will just go through the titles of this list, and we'll make sure that we make this link available. So, there are 10 Key Attributes of Cloud-Native Applications. Packaged as lightweight containers. Developed with best-of-breed languages and frameworks – you know, that doesn't mean much, but that is how nebulous this is. Designed as loosely coupled microservices. Centered around APIs for interaction and collaboration. Architected with a clean separation of stateless and stateful services. Isolated from server and operating system dependencies. Deployed on self-service, elastic cloud infrastructure. Managed through agile DevOps processes. Automated capabilities. And the last one, defined, policy-driven resource allocation. And as you see, those are all very much up for interpretation or implementation. So, a cloud native app from my point of view tries to target most of these items and has an opinion on most of these items. So, a cloud native app isn't just one thing. It is a mindset: I am running my software on other people's computers, how can I best do this? [0:34:58.1] CC: I added the link to our show notes. When I look at this list, I don't see observability. That word is not there. Does it fall under one of those points? Because observability is another new-ish term that seems to be part and parcel of cloud native. Correct me here, people. [0:35:19.1] JR: Actually, the eighth item, 'Managed through agile DevOps processes' – they don't talk about monitoring or observability directly. But for the person who is not developing the application – so whether you have a DevOps team or you have an SRE practice – you are going to have to be able to communicate the status of the application, whether it be through metrics, logs, or – whatever the other one is – I am thinking – traces. So that, I think, is baked in; it is just not called out.
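Carlisia's wish for a checklist could start as something as simple as scoring an app against The New Stack's ten attributes. A toy sketch, assuming paraphrased attribute keys and a scoring scheme of my own invention (the real list leaves each item open to interpretation, so this is a self-assessment, not a verdict):

```python
# The ten attributes, paraphrased from The New Stack's list.
TEN_ATTRIBUTES = [
    "packaged as lightweight containers",
    "best-of-breed languages and frameworks",
    "loosely coupled microservices",
    "centered around APIs",
    "clean separation of stateless and stateful services",
    "isolated from server and OS dependencies",
    "deployed on self-service elastic cloud infrastructure",
    "managed through agile DevOps processes",
    "automated capabilities",
    "policy-driven resource allocation",
]

def cloud_native_score(app_attributes):
    """Fraction of the ten attributes an app claims to meet."""
    met = sum(1 for attr in TEN_ATTRIBUTES if app_attributes.get(attr, False))
    return met / len(TEN_ATTRIBUTES)
```

As the discussion that follows notes, most teams adopt only parts of the list, so a partial score is the normal case, not a failure.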
So, to get proper DevOps you need some observability; that is how you get that status when you have a problem. [0:35:57.9] CC: So, this is how obscure these things can be. I just want to point this out. It is so frustrating. So literally, we have item eight, which Brian – being the main developer here, he is super knowledgeable – can look at and know what it means. But I look at that, and the words logs, metrics, observability, none of these words are there, and yet Brian knew that that is what it means. And I don't disagree with him. I can see it now, but why does it have to be so obscure? [0:36:29.7] JR: I think a big thing to consider too is that it very much lands on a spectrum, right? Like something you asked, Carlisia, is how do I qualify whether my app is cloud native, or what do I need to do? And a lot of people, in my experience, are just adopting parts of this list, and that's totally fine. Worrying about whether you fully qualify as a cloud native app – since we have talked about it as more of a set of principles – I don't know if there is too much value in worrying about whether you can slap that label onto your app, as much as saying, "Oh, I can see our organization's applications having these problems." Like lacking portability when we move them across providers, or, going back to observability, not being able to know what is going on inside of the application and where the network packets are headed, and being late to see these things happening. And as those problems come up, really look at and adopt these principles where it is appropriate. Sometimes it might not be worth the engineering effort to adopt one of the more cloud native principles. You know, you just have to pick and choose what is most valuable to you. [0:37:26.7] BL: Yes, and actually this is what we should be doing as experts, as thought leaders, as industry movers and shakers.
Our job is to make this easier for people coming behind us. At one time, it was hard to even start an application or start your operating system. Remember when we had to type in LOAD commands, you know? Remember we had to do that back in the day in BASIC, on our Commodore 64s or Apples or Apple IIs. Now you turn your computer on and it comes up instantly. We click on an application and it works. We need to actually bring this whole cloud movement to that point: if you include these libraries and you code with these APIs, you get "automatic observability" – and I am saying that with air quotes – but you get the ability to monitor this thing in some fashion. If you use this practice and you have this stack, CI/CD should be super simple for you. And we are just not quite there yet. And that is why the industry is definitely rotating around this, and that is why there has been a lot of buzz around cloud native and Kubernetes: because people are looking at this to actually solve a lot of these problems that we've had. They just haven't been solvable, because everybody's stacks are too different. The reason Linux is, I think, ultimately successful is because it allowed us to do all of these things we liked, and it worked on all sorts of computers. And it got that mindset behind it, and companies behind it. Kubernetes could also do this. It allows us to think about our data centers as potentially one big computer, or fewer computers, that allows us to make sure things are running. And once we have this, now we can develop new tools that will help us with our observability, and with getting software into production, upgraded, and where we need it. [0:39:17.1] NL: Awesome. So, on that, we are going to have to wrap up for this week. Let's go ahead and do a round of closing thoughts. [0:39:22.7] JR: I don't know if I have any closing thoughts. But it was a pleasure talking about cloud native applications with you all. Thanks.
[0:39:28.1] BL: Yeah, I have one thought, which is that all of these things that we are talking about sound kind of daunting. But it is better that we can have these conversations and talk about things that don't work, rather than not knowing what to talk about in general. So this is a journey for us, and I hope you come along for more of our journey. [0:39:46.3] CC: First I was going to follow up on Josh and say I am thoughtless. But now I want to follow up on Brian's and say, no, I have no opinions. It is very much what Brian said, for me: the bridging of what we can do using cloud native infrastructure with what we read about it and what we hear about it – for people who are not actually doing it, it is so hard to connect one with the other. I hope that by being here and asking questions and answering questions – and hopefully people will also be very interactive with us and ask us to talk about things they want to know – we all try to connect it, little by little. I am not saying it is rocket science and nobody can understand it. I am just saying that some people who don't have a multi-background experience might have big gaps. [0:40:38.7] NL: That is for sure. This was a very useful episode for me. I am glad to know that everybody else is just as confused about what cloud native applications actually mean. So that was awesome. It was a very informative episode for me and I had a lot of fun doing it. So, thank you all for having me. Thank you for joining us on this week of The Podlets Podcast. And I just want to wish our friend Brian a very happy birthday. Bye, you all. [0:41:03.2] CC: Happy birthday, Brian. [0:41:04.7] BL: Ahhhh. [0:41:05.9] NL: All right, bye everyone. [END OF EPISODE] [0:41:07.5] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing.
[END]
Sponsor: Circle CI — Episode on CI/CD with Circle CI

Show Details

In this episode, we cover the following topics:

Pillars in depth

* Performance Efficiency — "Ability to use resources efficiently to meet system requirements and maintain that efficiency as demand changes and technology evolves"
  * Design principles
    * Easy to try new advanced technologies (by letting AWS manage them, instead of standing them up yourself)
    * Go global in minutes
    * Use serverless architectures
    * Experiment more often
    * Mechanical sympathy (use the technology approach that aligns best to what you are trying to achieve)
  * Key service: CloudWatch
  * Focus areas
    * Selection — Services: EC2, EBS, RDS, DynamoDB, Auto Scaling, S3, VPC, Route53, DirectConnect
    * Review — Services: AWS Blog, AWS What's New
    * Monitoring — Services: CloudWatch, Lambda, Kinesis, SQS
    * Tradeoffs — Services: CloudFront, ElastiCache, Snowball, RDS (read replicas)
  * Best practices
    * Selection — Choose appropriate resource types (compute, storage, database, networking)
    * Tradeoffs — Proximity and caching
* Cost Optimization — "Ability to run systems to deliver business value at the lowest price point"
  * Design principles
    * Adopt a consumption model (only pay for what you use)
    * Measure overall efficiency
    * Stop spending money on data center operations
    * Analyze and attribute expenditures
    * Use managed services to reduce TCO
  * Key service: AWS Cost Explorer (with cost allocation tags)
  * Focus areas
    * Expenditure awareness — Services: Cost Explorer, AWS Budgets, CloudWatch, SNS
    * Cost-effective resources — Services: Reserved Instances, Spot Instances, Cost Explorer
    * Matching supply and demand — Services: Auto Scaling
    * Optimizing over time — Services: AWS Blog, AWS What's New, Trusted Advisor
  * Key points: Use Trusted Advisor to find ways to save $$$

The Well-Architected Review

* Centered around the question "Are you well architected?"
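The "cost-effective resources" focus area often reduces to simple break-even arithmetic between on-demand and Reserved Instance pricing. A sketch, with made-up prices for illustration (real rates vary by instance type and region):

```python
def breakeven_utilization(on_demand_hourly, reserved_hourly):
    """Fraction of hours an instance must run for a Reserved Instance
    (billed for every hour, used or not) to beat paying on-demand
    only for the hours actually used."""
    return reserved_hourly / on_demand_hourly

# Hypothetical prices: $0.10/hr on-demand vs an effective $0.06/hr reserved.
# Above 60% utilization, the reservation wins; below it, on-demand is cheaper.
```

This is the kind of analysis Cost Explorer and Trusted Advisor automate, but the underlying tradeoff is just this ratio.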
* The Well-Architected review provides a consistent approach to review a workload against current AWS best practices and gives advice on how to architect for the cloud
* Benefits of the review
  * Build and deploy faster
  * Lower or mitigate risks
  * Make informed decisions
  * Learn AWS best practices
* The AWS Well-Architected Tool
  * Cloud-based service available from the AWS console
  * Provides a consistent process for you to review and measure your architecture using the AWS Well-Architected Framework
  * Helps you: Learn, Measure, Improve
  * Improvement plan
    * Based on identified high and medium risk topics
    * Canned list of suggested action items to address each risk topic
  * Milestones — makes a read-only snapshot of completed questions and answers
  * Best practices
    * Save a milestone after initially completing the workload review
    * Then, whenever you make large changes to your workload architecture, perform a subsequent review and save it as a new milestone

Links
* AWS Well-Architected
* AWS Well-Architected Framework — Online/HTML version (includes drill-down pages for each review question, with recommended action items to address that issue)
* AWS Well-Architected Tool
* Enhanced Networking
* Amazon EBS-optimized instance
* VPC Endpoint
* Amazon S3 Transfer Acceleration
* AWS Billing and Cost Management

Whitepapers
* AWS Well-Architected Framework
* Operational Excellence Pillar
* Security Pillar
* Reliability Pillar
* Performance Efficiency Pillar
* Cost Optimization Pillar

End Song: The Shadow Gallery by Roy England

For a full transcription of this episode, please visit the episode webpage.

We'd love to hear from you! You can reach us at:
* Web: https://mobycast.fm
* Voicemail: 844-818-0993
* Email: ask@mobycast.fm
* Twitter: https://twitter.com/hashtag/mobycast
* Reddit: https://reddit.com/r/mobycast
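The tool's improvement plan is essentially a prioritized list of the risks the review flags: high-risk items first, then medium. A toy sketch of that ordering, where the finding structure (dicts with `q` and `risk` keys) is my own assumption, not the tool's actual data model:

```python
# High-risk findings sort before medium; anything else is not flagged.
RISK_ORDER = {"high": 0, "medium": 1}

def improvement_plan(findings):
    """Order review findings the way the improvement plan presents them:
    flagged risks only, high before medium."""
    flagged = [f for f in findings if f["risk"] in RISK_ORDER]
    return sorted(flagged, key=lambda f: RISK_ORDER[f["risk"]])
```

Saving a milestone before and after working the plan is what lets you see the risk counts change over time.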
In this episode, we cover the following topics:

Pillars in depth

* Security — "Ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies"
  * Design principles
    * Implement a strong identity foundation
    * Enable traceability
    * Security at all layers
    * Automate security best practices
    * Protect data in transit and at rest
    * Keep people away from data
    * Prepare for security events
  * Key service: AWS IAM
  * Focus areas
    * Identity and access management — Services: IAM, AWS Organizations, MFA
    * Detective controls — Services: CloudTrail, CloudWatch, AWS Config, GuardDuty
    * Infrastructure protection — Services: VPC, Shield, WAF
    * Data protection — Services: KMS, ELB (encryption), Macie (detect sensitive data)
    * Incident response — Services: IAM, CloudFormation
  * Best practices
    * Identity and access management — AWS Cognito
      * Acts as a broker between login providers
      * Securely access any AWS service from a mobile device
    * Data protection
      * Encrypt: encryption at rest, encryption in transit, encrypted backups
      * Versioning
      * Storage resiliency
      * Detailed logging
    * Incident response
      * Employ a strategy of templated "clean rooms"
      * Create a new trusted environment to conduct the investigation
      * Use CloudFormation to easily create the "clean room" environment
* Reliability — "Ability to recover from failures, dynamically acquire resources to meet demand and mitigate disruptions such as network issues"
  * Design principles
    * Test recovery procedures
    * Auto recover from failures
    * Scale horizontally to increase availability
    * Stop guessing capacity
    * Manage change with automation
  * Key service: CloudWatch
  * Focus areas
    * Foundations — Services: IAM, VPC, Trusted Advisor (visibility into service limits), Shield (protect from DDoS)
    * Change management — Services: CloudTrail, AWS Config, CloudWatch, Auto Scaling
    * Failure management — Services: CloudFormation, S3, Glacier, KMS
  * Best practices
    * Foundations
      * Take into account physical and service limits
    * High availability
      * No single points of failure (SPOF)
      * Multi-AZ design
      * Load balancing
      * Auto scaling
      * Redundant connectivity
      * Software resilience
    * Failure management
      * Backup and disaster recovery — RPO, RTO
      * Inject failures to test resiliency
  * Key points
    * Plan network topology
    * Manage your AWS service and rate limits
    * Monitor your system
    * Automate responses to demand
    * Backup

In the next episode, we'll cover the remaining 2 pillars and discuss how to perform a Well-Architected Review.

Links
* AWS Well-Architected
* AWS Well-Architected Framework — Online/HTML version (includes drill-down pages for each review question, with recommended action items to address that issue)
* AWS re:Invent 2018: How AWS Minimizes the Blast Radius of Failures - ARC338
* Shuffle Sharding: Massive and Magical Fault Isolation

Whitepapers
* AWS Well-Architected Framework
* Operational Excellence Pillar
* Security Pillar
* Reliability Pillar
* Performance Efficiency Pillar
* Cost Optimization Pillar

End song: The Runner (David Last Remix) - Fax

For a full transcription of this episode, please visit the episode webpage.

We'd love to hear from you! You can reach us at:
* Web: https://mobycast.fm
* Voicemail: 844-818-0993
* Email: ask@mobycast.fm
* Twitter: https://twitter.com/hashtag/mobycast
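"Scale horizontally to increase availability" and "Multi-AZ design" both rest on the same piece of arithmetic: a system of redundant, independent zones is down only when every zone is down at once. A quick sketch of that composite-availability calculation (it assumes zone failures are independent, which real correlated outages can violate):

```python
def composite_availability(per_zone, n_zones):
    """Availability of n independent redundant zones: the system fails
    only if all n zones fail simultaneously."""
    return 1 - (1 - per_zone) ** n_zones

# One zone at 99.9% availability -> two zones reach roughly 99.9999%,
# i.e. three extra nines from a single extra zone, assuming independence.
```

This is why removing single points of failure buys so much more than hardening any one component: availability multiplies across redundant paths.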
In this episode, we cover the following topics:

AWS Well-Architected Framework
* Provides a consistent approach to evaluating systems against cloud best practices
* Helps advise the changes necessary to make a specific architecture align with best practices
* Comprised of 3 components:
  * Design Principles
  * Pillars
    * Operational Excellence
    * Security
    * Reliability
    * Performance Efficiency
    * Cost Optimization
  * Questions

General design principles
* Cloud-native has changed everything. In the cloud, you can:
  * Stop guessing capacity needs
  * Test at scale
  * Automate all the things to make experimentation easier
  * Allow for evolutionary architectures (you are never stuck with a particular technology)
  * Drive architectures using data (allows you to make fact-based decisions on how to improve your workload)
  * Improve through game days

Pillars in depth
* Operational Excellence — "Ability to run and monitor systems to deliver business value and to continuously improve supporting processes and procedures"
  * Design principles
    * Perform operations as code
    * Annotate documentation
    * Make frequent, small, reversible changes
    * Refine operations procedures frequently
    * Anticipate failure
    * Learn from all operational failures
  * Key service: CloudFormation
  * Focus areas
    * Prepare — Services: AWS Config, AWS Config Rules
    * Operate — Services: CloudWatch, X-Ray, CloudTrail, VPC Flow Logs
    * Evolve — Services: Elasticsearch (for searching log data to gain insights), CloudWatch Insights
  * Best practices
    * Prepare
      * Implement telemetry for: application, workload, user activity, dependencies
      * Implement transaction traceability
    * Operate
      * Any event for which you raise an alert should have an associated runbook; the runbook defines triggers for escalations
      * Users should be notified when the system is impacted
      * Communicate status through dashboards — provide dashboards that communicate the current operating status of the business and provide metrics of interest
    * Evolve — feedback loops
      * Identify areas for improvement
      * Gauge the impact of changes to the system (i.e. did it make an improvement?)
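The rule that "any event for which you raise an alert should have an associated runbook" can itself be treated as operations-as-code: make alert-to-runbook resolution fail loudly, so an alert can never ship without a procedure behind it. A toy sketch, with alert and runbook names invented for illustration:

```python
# Hypothetical alert -> runbook registry; paths are illustrative only.
RUNBOOKS = {
    "HighErrorRate": "runbooks/high-error-rate.md",
    "DiskNearlyFull": "runbooks/disk-nearly-full.md",
}

def runbook_for(alert_name):
    """Resolve an alert to its runbook, refusing unknown alerts so
    every alert that can fire has a documented response."""
    try:
        return RUNBOOKS[alert_name]
    except KeyError:
        raise ValueError(f"alert {alert_name!r} has no runbook") from None
```

A check like this can run in CI against the alert definitions, turning the best practice into an enforced invariant rather than a convention.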
      * Perform operations metrics reviews — retrospective analysis of operations metrics; use these reviews to identify opportunities for improvement and potential courses of action, and to share lessons learned
  * Key points
    * Runbooks, playbooks
    * Document environments
    * Make small changes through automation
    * Monitor the workload with business metrics
    * Exercise your response to failures
    * Have well-defined escalation management

In future episodes, we'll cover the remaining 4 pillars.

Links
* AWS Well-Architected Framework — Online/HTML version (includes drill-down pages for each review question, with recommended action items to address that issue)
* Are You Well-Architected?
* AWS re:Invent 2016 Keynote: Werner Vogels (see 25:45 through 31:25)
* Runbooks
* Playbooks
* AWS Service Health Dashboard
* AWS Personal Health Dashboard

Whitepapers
* AWS Well-Architected Framework
* Operational Excellence Pillar
* Security Pillar
* Reliability Pillar
* Performance Efficiency Pillar
* Cost Optimization Pillar

End Song: 30 Days & 30 Nights by Fortune Finder

For a full transcription of this episode, please visit the episode webpage.

We'd love to hear from you! You can reach us at:
* Web: https://mobycast.fm
* Voicemail: 844-818-0993
* Email: ask@mobycast.fm
* Twitter: https://twitter.com/hashtag/mobycast
* Reddit: https://reddit.com/r/mobycast