Podcasts about CLI

  • 403PODCASTS
  • 1,423EPISODES
  • 51mAVG DURATION
  • 1DAILY NEW EPISODE
  • Sep 29, 2022LATEST

POPULARITY

20152016201720182019202020212022

Categories



Best podcasts about CLI

Show all podcasts related to cli

Latest podcast episodes about CLI

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 29, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

The Legalpreneurs Sandbox
Episode 150: LegalTech Around the World: The Americas

The Legalpreneurs Sandbox

Play Episode Listen Later Sep 29, 2022 59:37


In this podcast, the seventh session in CLI's Legaltech Around the World Series, David Bushby, Managing Director, InCounsel facilitated a discussion about the local legaltech market in Latin America. David was joined by amazing guest panellists: Andrés Jara, Co-Founder & CEO, Kea Technology Inc (Chile) Bibiana Martinez Camelo, Head of Legal Operations, Bancolombia (Colombia) Silvana Stochetti, Founder & CEO, Legalify Latam (Argentina) Maxime Troubat, CEO, Juridoc (Brazil) Agustin Velazquez G.L, Managing Partner, AVA Firm (Mexico) Topics covered in this session included: An overview of the legaltech market in Latin America The drivers/agents of change and impact of legaltech in the B2B and B2C markets (and how they are connected) The challenges and opportunities for legaltech   The impact of COVID on the legaltech market The importance of legaltech communities Who is funding legaltech development in Latin America and are tech developers staying, going, or returning What legaltech adoption REALLY looks like (separating the hype and hope from reality) The growing importance of digital literacy and the role of law schools in that education You'll find information about the other episodes in this series here. The series is presented in association with InCounsel. If you would prefer to watch rather than listen to this episode, you'll find the video in our CLI-Collaborate (CLIC) free Resource Hub here. Additional Resources referred to in this session You'll find the Legal Hackers website here Don't forget to subscribe to: InCounsel's Weekly Newsletter CLI's Newsletter

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 28, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 27, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 26, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 23, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

AWS Bites
52. Authentication for a CLI app with Cognito - Live coding PART 5

AWS Bites

Play Episode Listen Later Sep 23, 2022 129:25


This is a special episode recorded live during a live coding session on YouTube (2022-09-21). The audio-only experience might not be the best one, so if you are curious to see the video and enjoy our diagrams and screen sharing, please check this episode on YouTube: https://www.youtube.com/watch?v=0TzfkbisMEA. How can you build a WeTransfer or a Dropbox Transfer clone on AWS? This is our fifth live coding stream. In this episode, we continued adding some security to our application. Specifically, we implemented 75% of the OAuth 2 device flow on top of AWS Cognito to allow our file upload CLI application to get some credentials. In order to implement this flow, we need to store some secrets. We decided to use DynamoDB and spent a lot of time discussing our data design and how and why we used the famous and controversial DynamoDB single table design principle. All our code is available in this repository: https://github.com/awsbites/weshare.click In this episode we mentioned the following resources: OAuth 2 Device Auth flow RFC8628: https://www.rfc-editor.org/rfc/rfc8628 The DynamoDB book by Alex DeBrie: https://www.dynamodbbook.com/ LevelDB: https://github.com/google/leveldb OAuth 2 Authorization framework RFC6749: https://www.rfc-editor.org/rfc/rfc6749 You can listen to AWS Bites wherever you get your podcasts: - Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-bites/id1585489017 - Spotify: https://open.spotify.com/show/3Lh7PzqBFV6yt5WsTAmO5q - Google: https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy82YTMzMTJhMC9wb2RjYXN0L3Jzcw== - Breaker: https://www.breaker.audio/aws-bites - RSS: https://anchor.fm/s/6a3312a0/podcast/rss Do you have any AWS questions you would like us to address? Leave a comment here or connect with us on Twitter: - https://twitter.com/eoins - https://twitter.com/loige #AWS #livecoding #transfer

The Legalpreneurs Sandbox
Episode 149 - Future 50 Series – Neural Nets - The Future of Legal Search

The Legalpreneurs Sandbox

Play Episode Listen Later Sep 23, 2022 32:11


Neural nets…sounds exciting, perhaps a little scary, and a long way from anything to do with legal practice, right? Wrong! It has everything to do with how the legal ecosystem is being challenged to reconceive and deliver legal work differently from start to finish and…beyond! It's easy to get caught up in jargon in all of this stuff – AI, machine learning, natural language processing, and so the list goes on – that's why we spoke with Pablo Arredondo, a Co-Founder & the Chief Innovation Officer at Casetext. Pablo works in this world every day, he really knows this stuff, understands legal practice, and seamlessly translates what he knows into the work he does with legal practitioners. That's no doubt a big part of why Casetext is leading the way in this area with its outstanding, user-friendly products. Our conversation in this session was a journey. We discussed what neural nets are and are not; how they apply to the legal ecosystem, especially when it comes to legal search; the difference between keyword searches, neural nets, and how you choose between them; some great practical applications of these tools in legal practice like e.g. Casetext's product AllSearch; how neural nets promise to change everything in the legal world (and other worlds too); and, the benefits that have already emerged from implementing this AI for legal practitioners and, their clients too! If you would prefer to watch rather than listen to this podcast, you'll find the video here.  About the Future 50 Series In the Future 50 Series we're chatting with legalpreneurs who, through their ideas and actions, are challenging and transforming legal BAU all around the world. If you would like to recommend people for this Series, please contact us at: CLI@collaw.edu.au.

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 22, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

Screaming in the Cloud
How Data Discovery is Changing the Game with Shinji Kim

Screaming in the Cloud

Play Episode Listen Later Sep 22, 2022 32:58


About ShinjiShinji Kim is the Founder & CEO of Select Star, an automated data discovery platform that helps you to understand & manage your data. Previously, she was the Founder & CEO of Concord Systems, a NYC-based data infrastructure startup acquired by Akamai Technologies in 2016. She led the strategy and execution of Akamai IoT Edge Connect, an IoT data platform for real-time communication and data processing of connected devices. Shinji studied Software Engineering at University of Waterloo and General Management at Stanford GSB.Links Referenced: Select Star: https://www.selectstar.com/ LinkedIn: https://www.linkedin.com/company/selectstarhq/ Twitter: https://twitter.com/selectstarhq TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire-drill at 4 in the morning. Software problems should drive innovation and collaboration, NOT stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production, but hide that that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your Features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.Corey: I come bearing ill tidings. Developers are responsible for more than ever these days. Not just the code that they write, but also the containers and the cloud infrastructure that their apps run on. Because serverless means it's still somebody's problem. And a big part of that responsibility is app security from code to cloud. And that's where our friend Snyk comes in. Snyk is a frictionless security platform that meets developers where they are - Finding and fixing vulnerabilities right from the CLI, IDEs, Repos, and Pipelines. Snyk integrates seamlessly with AWS offerings like code pipeline, EKS, ECR, and more! As well as things you're actually likely to be using. Deploy on AWS, secure with Snyk. Learn more at Snyk.co/scream That's S-N-Y-K.co/screamCorey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while, I encounter a company that resonates with something that I've been doing on some level. In this particular case, that is what's happened here, but the story is slightly different. My guest today is Shinji Kim, who's the CEO and founder at Select Star.And the joke that I was making a few months ago was that Select Stars should have been the name of the Oracle ACE program instead. Shinji, thank you for joining me and suffering my ridiculous, basically amateurish and sophomore database-level jokes because I am bad at databases. Thanks for taking the time to chat with me.Shinji: Thanks for having me here, Corey. Good to meet you.Corey: So, Select Star despite being the only query pattern that I've ever effectively been able to execute from memory, what you do as a company is described as an automated data discovery platform. So, I'm going to start at the beginning with that baseline definition. I think most folks can wrap their heads around what the idea of automated means, but the rest of the words feel like it might mean different things to different people. What is data discovery from your point of view?Shinji: Sure. The way that we define data discovery is finding and understanding data. In other words, think about how discoverable your data is in your company today. How easy is it for you to find datasets, fields, KPIs of your organization data? And when you are looking at a table, column, dashboard, report, how easy is it for you to understand that data underneath? Encompassing on that is how we define data discovery.Corey: When you talk about data lurking around the company in various places, that can mean a lot of different things to different folks. For the more structured data folks—which I tend to think of as the organized folks who are nothing like me—that tends to mean things that live inside of, for example, traditional relational databases or things that closely resemble that. I come from a grumpy old sysadmin perspective, so I'm thinking, oh, yeah, we have a Jira server in the closet and that thing's logging to its own disk, so that's going to be some information somewhere. Confluence is another source of data in an organization; it's usually where insight and a knowledge of what's going on goes to die. It's one of those write once, read never type of things.And when I start thinking about what data means, it feels like even that is something of a squishy term. From the perspective of where Select Start starts and stops, is it bounded to data that lives within relational databases? Does it go beyond that? Where does it start? Where does it stop?Shinji: So, we started the company with an intention of increasing the discoverability of data and hence providing automated data discovery capability to organizations. And the part where we see this as the most effective is where the data is currently being consumed today. So, this is, like, where the data consumption happens. So, this can be a data warehouse or data lake, but this is where your data analysts, data scientists are querying data, they are building dashboards, reports on top of, and this is where your main data mart lives.So, for us, that is primarily a cloud data warehouse today, usually has a relational data structure. On top of that, we also do a lot of deep integrations with BI tools. So, that includes tools like Tableau, Power BI, Looker, Mode. Wherever these queries from the business stakeholders, BI engineers, data analysts, data scientists run, this is a point of reference where we use to auto-generate documentation, data models, lineage, and usage information, to give it back to the data team and everyone else so that they can learn more about the dataset they're about to use.Corey: So, given that I am seeing an increased number of companies out there talking about data discovery, what is it the Select Star does that differentiates you folks from other folks using similar verbiage in how they describe what they do?Shinji: Yeah, great question. There are many players that popping up, and also, traditional data catalog's definitely starting to offer more features in this area. The main differentiator that we have in the market today, we call it fast time-to-value. Any customer that is starting with Select Star, they get to set up their instance within 24 hours, and they'll be able to get all the analytics and data models, including column-level lineage, popularity, ER diagrams, and how other people are—top users and how other people are utilizing that data, like, literally in few hours, max to, like, 24 hours. And I would say that is the main differentiator.And most of the customers I have pointed out that setup and getting started has been super easy, which is primarily backed by a lot of automation that we've created underneath the platform. On top of that, just making it super easy and simple to use. It becomes very clear to the users that it's not just for the technical data engineers and DBAs to use; this is also designed for business stakeholders, product managers, and ops folks to start using as they are learning more about how to use data.Corey: Mapping this a little bit toward the use cases that I'm the most familiar with, this big source of data that I tend to stumble over is customer AWS bills. And that's not exactly a big data problem, given that it can fit in memory if you have a sufficiently exciting computer, but using Tableau don't wind up slicing and dicing that because at some point, Excel falls down. From my perspective, problem with Excel is that it doesn't tend to work on huge datasets very well, and from the position of Salesforce, the problem with Excel is that it doesn't cost a giant pile of money every month. So, those two things combined, Tableau is the answer for what we do. But that's sort of the end-all for us of, that's where it stops.At that point, we have dashboards that we build and queries that we run that spit out the thing we're looking at, and then that goes back to inform our analysis. We don't inherently feed that back into anything else that would then inform the rest of what we do. Now, for our use case, that probably makes an awful lot of sense because we're here to help our customers with their billing challenges, not take advantage of their data to wind up informing some giant model and mispurposing that data for other things. But if we were generating that data ourselves as a part of our operation, I can absolutely see the value of tying that back into something else. You wind up almost forming a reinforcing cycle that improves the quality of data over time and lets you understand what's going on there. What are some of the outcomes that you find that customers get to by going down this particular path?Shinji: Yeah, so just to double-click on what you just talked about, the way that we see this is how we analyze the metadata and the activity logs—system logs, user logs—of how that data has been used. So, part of our auto-generated documentation for each table, each column, each dashboard, you're going to be able to see the full data lineage: where it came from, how it was transformed in the past, and where it's going to. You will also see what we call popularity score: how many unique users are utilizing this data inside the organization today, how often. And utilizing these two core models and analysis that we create, you can start looking at first mapping out the data flow, and then determining whether or not this dataset is something that you would want to continue keeping or running the data pipelines for. Because once you start mapping these usage models of tables versus dashboards, you may find that there are recurring jobs that creates all these materialized views and tables that are feeding dashboards that are not being looked at anymore.So, with this mechanism by looking initially data lineage as a concept, a lot of companies use data lineage in order to find dependencies: what is going to break if I make this change in the column or table, as well as just debugging any of issues that is currently happening in their pipeline. So, especially when you will have to debug a SQL query or pipeline that you didn't build yourself but you need to find out how to fix it, this is a really easy way to instantly find out, like, where the data is coming from. But on top of that, if you start adding this usage information, you can trace through where the main compute is happening, which largest route table is still being queried, instead of the more summarized tables that should be used, versus which are the tables and datasets that is continuing to get created, feeding the dashboards and is those dashboards actually being used on the business side. So, with that, we have customers that have saved thousands of dollars every month just by being able to deprecate dashboards and pipelines that they were afraid of deprecating in the past because they weren't sure if anyone's actually using this or not. But adopting Select Star was a great way to kind of do a full spring clean of their data warehouse as well as their BI tool. And this is an additional benefit to just having to declutter so many old, duplicated, and outdated dashboards and datasets in their data warehouse.Corey: That is, I guess, a recurring problem that I see in many different pockets of the industry as a whole. You see it in the user visibility space, you see it in the cost control space—I even made a joke about Confluence that alludes to it—this idea that you build a whole bunch of dashboards and use it to inform all kinds of charts and other systems, but then people are busy. It feels like there's no ‘and then.' Like, one of the most depressing things in the universe that you can see after having spent a fair bit of effort to build up those dashboards is the analytics for who internally has looked at any of those dashboards since the demo you gave showing it off to everyone else. It feels like in many cases, we put all these projects and amount of effort into building these things out that then don't get used.People don't want to be informed by data they want to shoot from their gut. Now, sometimes that's helpful when we're talking about observability tools that you use to trace down outages, and, “Well, our site's really stable. We don't have to look at that.” Very awesome, great, awesome use case. The business insight level of dashboard just feels like that's something you should really be checking a lot more than you are. How do you see that?Shinji: Yeah, for sure. I mean, this is why we also update these usage metrics and lineage every 24 hours for all of our customers automatically, so it's just up-to-date. And the part that more customers are asking for where we are heading to—earlier, I mentioned that our main focus has been on analyzing data consumption and understanding the consumption behavior to drive better usage of your data, or making data usage much easier. The part that we are starting to now see is more customers wanting to extend those feature capabilities to their staff of where the data is being generated. So, connecting the similar amount of analysis and metadata collection for production databases, Kafka Queues, and where the data is first being generated is one of our longer-term goals. And then, then you'll really have more of that, up to the source level, of whether the data should be even collected or whether it should even enter the data warehouse phase or not.Corey: One of the challenges I see across the board in the data space is that so many products tend to have a very specific point of the customer lifecycle, where bringing them in makes sense. Too early and it's, “Data? What do you mean data? All I have are these logs, and their purpose is basically to inflate my AWS bill because I'm bad at removing them.” And on the other side, it's, “Great. We pioneered some of these things and have built our own internal enormous system that does exactly what we need to do.” It's like, “Yes, Google, you're very smart. Good job.” And most people are somewhere between those two extremes. Where are customers on that lifecycle or timeline when using Select Star makes sense for them?Shinji: Yeah, I think that's a great question. Also the time, the best place where customers would use Select Star for is that after they have their cloud data warehouse set up. Either they have finished their migration, they're starting to utilize it with their BI tools, and they're starting to notice that it's not just, like, you know, ten to fifty tables that they're starting with; most of them have more than hundreds of tables. And they're feeling that this is starting to go out of control because we have all these data, but we are not a hundred percent sure what exactly is in our database. And this usually just happens more in larger companies, companies at thousand-plus employees, and they usually find a lot of value out of Select Star right away because, like, we will start pointing out many different things.But we also see a lot of, like, forward-thinking, fast-growing startups that are at the size of a few hundred employees, you know, they now have between five to ten-person data team, and they are really creating the right single source of truth of their data knowledge through a Select Star. So, I think you can start anywhere from when your data team size is, like, beyond five and you're continuing to grow because every time you're trying to onboard a data analyst, data scientist, you will have to go through, like, basically the same type of training of your data model, and it might actually look different because the data models and the new features, new apps that you're integrating this changes so quickly. So, I would say it's important to have that base early on and then continue to grow. But we do also see a lot of companies coming to us after having thousands of datasets or tens of thousands of datasets that it's really, like, very hard to operate and onboard anyone. And this is a place where we really shine to help their needs, as well.Corey: Sort of the, “I need a database,” to the, “Help, I have too many databases,” pipeline, where [laugh] at some point people start to—wanting to bring organization to the chaos. One thing I like about your model is that you don't seem to be making the play that every other vendor in the data space tends to, which is, “Oh, we want you to move your data onto our systems. The end.” You operate on data that is in place, which makes an awful lot of sense for the kinds of things that we're talking about. Customers are flat out not going to move their data warehouse over to your environment, just because the data gravity is ludicrous. Just the sheer amount of money it would take to egress that data from a cloud provider, for example, is monstrous.Shinji: Exactly. [laugh]. And security concerns. We don't want to be liable for any of the data—and this is, like, a very specific decision we've made very early on the company—to not access data, to not egress any of the real data, and to provide as much value as possible just utilizing the metadata and logs. And depending on the types of data warehouses, it also can be really efficient because the query history or the metadata systems tables are indexed separately. Usually, it's much lighter load on the compute side. And that definitely has, like, worked well for our advantage, especially being a SaaS tool.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig secures your cloud from source to run. They believe, as do I, that DevOps and security are inextricably linked. If you wanna learn more about how they view this, check out their blog, it's definitely worth the read. To learn more about how they are absolutely getting it right from where I sit, visit Sysdig.com and tell them that I sent you. That's S Y S D I G.com. And my thanks to them for their continued support of this ridiculous nonsense.Corey: What I like is just how straightforward the integrations are. It's clear you're extraordinarily agnostic as far as where the data itself lives. You integrate with Google's BigQuery, with Amazon Redshift, with Snowflake, and then on the other side of the world with Looker, and Tableau, and other things as well. And one of the example use cases you give is find the upstream table in BigQuery that a Looker dashboard depends on. That's one of those areas where I see something like that, and, oh, I can absolutely see the value of that.I have two or three DynamoDB tables that drive my newsletter publication system that I built—because I have deep-seated emotional problems and I take it out and everyone else via code—but as a small, contained system that I can still fit in my head. Mostly. And I still forget which table is which in some cases. Down the road, especially at scale, “Okay, where is the actual data source that's informing this because it doesn't necessarily match what I'm expecting,” is one of those incredibly valuable bits of insight. It seems like that is something that often gets lost; the provenance of data doesn't seem to work.And ideally, you know, you're staffing a company with reasonably intelligent people who are going to look at the results of something and say, “That does not align with my expectations. I'm going to dig.” As opposed to the, “Oh, yeah, that seems plausible. I'll just go with whatever the computer says.” There's an ocean of nuance between those two, but it's nice to be able to establish the validity of the path that you've gone down in order to set some of these things up.Shinji: Yeah, and this is also super helpful if you're tasked to debug a dashboard or pipeline that you did not build yourself. Maybe the person has left the company, or maybe they're out-of-office, but this dashboard has been broken and you're quote-unquote, “On call,” for data. What are you going to do? You're going to—without a tool that can show you a full lineage, you will have to start digging through somebody else's SQL code and try to map out, like, where the data is coming from, if this is calculating correctly. Usually takes, you know, few hours to just get to the bottom of the issue. And this is one of the main use cases that our customers bring up every single time, as more of, like, this is now the go-to place every time there is any data questions or data issues.Corey: The first and golden rule of cloud economics is step one, turn that shit off.Shinji: [laugh].Corey: When people are using something, you can optimize the hell out of it however you want, but nothing's going to beat turning it off. One challenge is when we're looking at various accounts and we see a Redshift cluster, and it's, “Okay. That thing's costing a few million bucks a year and no one seems to know anything about it.” They keep pointing to other teams, and it turns into this giant, like, finger-pointing exercise where no one seems to have responsibility for it. And very often, our clients will choose not to turn that thing off because on the one hand, if you don't turn it off, you're going to spend a few million bucks a year that you otherwise would not have had to.On the other, if you delete the data warehouse, and it turns out, oh, yeah, that was actually kind of important, now we don't have a company anymore. It's a question of which is the side you want to be wrong on. And in some levels, leaving something as it is and doing something else is always a more defensible answer, just because the first time your cost-saving exercises take out production, you're generally not allowed to save money anymore. This feels like it helps get to that source of truth a heck of a lot more effectively than tracing individual calls and turning into basically data center archaeologists.Shinji: [laugh]. Yeah, for sure. I mean, this is why from the get go, we try to give you all your tables, all of your database, just ordered by popularity. So, you can also see overall, like, from all the tables, whether that's thousands or tens of thousands, you're seeing the most used, has the most number of dependencies on the top, and you can also filter it by all the database tables that hasn't been touched in the last 90 days. And just having this, like, high-level view gives a lot of ideas to the data platform team about how they can optimize usage of their data warehouse.Corey: From where I tend to sit, an awful lot of customers are still relatively early in their data journey. An awful lot of the marketing that I receive from various AWS mailing lists that I found myself on because I've had the temerity to open accounts has been along the lines of oh, data discovery is super important, but first, they presuppose that I've already bought into this idea that oh, every company must be a completely data-driven company. The end. Full stop.And yeah, we're a small bespoke services consultancy. I don't necessarily know that that's the right answer here. But then it takes it one step further and starts to define the idea of data discovery as, ah, you will use it to find a PII or otherwise sensitive or restricted data inside of your datasets so you know exactly where it lives. And sure, okay, that's valuable, but it also feels like a very narrow definition compared to how you view these things.Shinji: Yeah. Basically, the way that we see data discovery is it's starting to become more of an essential capability in order for you to monitor and understand how your data is actually being used internally. It basically gives you the insights around sure, like, what are the duplicated datasets, what are the datasets that have that descriptions or not, what are something that may contain sensitive data, so on and so forth, but that's still around the characteristics of the physical datasets. Whereas I think the part that's really important around data discovery that is not being talked about as much is how the data can actually be used better. So, have it as more of a forward-thinking mechanism and in order for you to actually encourage more people to utilize data or use the data correctly, instead of trying to contain this within just one team is really where I feel like data discovery can help.And in regards to this, the other big part around data discovery is really opening up and having that transparency just within the data team. So, just within the data team, they always feel like they do have that access to the SQL queries and you can just go to GitHub and just look at the database itself, but it's so easy to get lost in the sea of metadata that is just laid out as just the list; there isn't much context around the data itself. And that context and with along with the analytics of the metadata is what we're really trying to provide automatically. So eventually, like, this can be also seen as almost like a way to, like, monitor the datasets, like, how you're currently monitoring your applications through Datadog or your website with your Google Analytics, this is something that can be also used as more of a go-to source of truth around what your state of the data is, how that's defined, and how that's being mapped to different business processes, so that there isn't much confusion around data. Everything can be called the same, but underneath it actually can mean very different things. Does that make sense?Corey: No, it absolutely does. I think that this is part of the challenge in trying to articulate value that is, I guess, specific to this niche across an entire industry. The context that drives data is going to be incredibly important, and it feels like so much of the marketing in the space is aimed at one or two pre-imagined customer profiles. And that has the side effect of making customers for whom that model doesn't align, look and feel like either doing something wrong, or makes it look like the vendor who's pitching this is somewhat out of touch. I know that I work in a relatively bounded problem space, but I still learn new things about AWS billing on virtually every engagement that I go on, just because you always get to learn more about how customers view things and how they view not just their industry, but also the specificities of their own business and their own niche.I think that is one of the challenges historically, with the idea of letting software do everything. Do you find the problems that you're solving tend to be global in nature or are you discovering strange depths of nuance on a customer-by-customer basis at this point?Shinji: Overall, a lot of the problems that we solve and the customers that we work with is very industry agnostic. As long as you are having many different datasets that you need to manage, there are common problems that arises, regardless of the industry that you're in. We do observe some industry-specific issues because your data is either, it's an unstructured data, or your data is primarily events, or you know, depending on how the data looks like, but primarily because of most of the BI solutions and data warehouses are operating as a relational databases, this is a part where we really try to build a lot of best practices, and the common analytics that we can apply to every customer that's using Select Star.Corey: I really want to thank you for taking so much time to go through the ins and outs of what it is you're doing these days. If people want to learn more, where's the best place to find you?Shinji: Yeah, I mean, it's been fun [laugh] talking here. So, we are at selectstar.com. That's our website. You can sign up for a free trial. It's completely self-service, so you don't need to get on a demo but, like, we'll also help you onboard and happy to give a free demo to whoever that is interested.We are also on LinkedIn and Twitter under selectstarhq. Yeah, I mean, we're happy to help for any companies that have these issues around wanting to increase their discoverability of data, and want to help their data team and the rest of the company to be able to utilize data better.Corey: And we will, of course, put links to all of that in the [show notes 00:28:58]. Thank you so much for your time today. I really appreciate it.Shinji: Great. Thanks for having me, Corey.Corey: Shinji Kim, CEO and founder at Select Star. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment that I won't be able to discover because there are far too many podcast platforms out there, and I have no means of discovering where you've said that thing unless you send it to me.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

The Legalpreneurs Sandbox
Episode 148- Future 50 Series - Advancing Diversity and Inclusion in Law – The Mansfield Rule and Move the Needle Initiative

The Legalpreneurs Sandbox

Play Episode Listen Later Sep 21, 2022 34:48


In 2013, after many years working in large law firms and co-founding one of the first legal metrics start-ups, Caren Ulrich Stacy took time out to reflect on how she could bring together what she had done with what she had experienced to solve a problem that keeps knocking at her door – how do women who had left legal practice for a variety of different reasons, find their way back to it? The answer was the OnRamp Fellowship, the first returnship program for law firms that later extended to legal departments. It grew into the Diversity Lab, an incubator for innovative ways of boosting diversity and inclusion in the legal profession. The Lab ran a series of Women in Law Hackathons from 2016 – The Mansfield Rule was an idea from the first Hackathon that grew wings in US, Canadian and UK law firms and legal departments too. Named after the first woman lawyer in the US, Arabella Mansfield, its focus is on boosting and sustaining diversity in leadership and the pipeline to leadership. The Lab also subsequently established the Move the Needle initiative, a wonderful collaborative experiment between the Lab and four founding law firms, resourced over 5 years, the outcome being to produce empirical data (which will include a Report) about diversity and inclusion in hiring, retention and advancement. The Report, when released, promises to provide a tried and tested blueprint for advancing diversity and inclusion in the legal industry. What Caren has accomplished with the Diversity Lab and these two amazing initiatives (of many) we discussed, is remarkable and outstanding! What's also critically important is that the work and outcomes are supported, every step of the way, with data, data analysis, and metrics – they provide an empirical and quantifiable foundation that differentiates these initiatives from others and, tells a story that is both deeply personal but also objectively verifiable!  Caren is the Founder & CEO of the Diversity Lab; the Founder of its On Ramp Fellowship; was recently appointed as the Lead Diversity, Equity, Inclusion & Accessibility Advisor to the USPTO; holds roles with the UN Women initiative, and holds Fellowships at The College of Law Practice Management and the Tory Burch Foundation. If you would prefer to watch rather than listen to this podcast, you'll find the video here. About the Future 50 Series In the Future 50 Series we're chatting with legalpreneurs who, through their ideas and actions, are challenging and transforming legal BAU all around the world. If you would like to recommend people for this Series, please contact us at: CLI@collaw.edu.au.

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 21, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

Yalla To The Cloud
Episode 115: ECS Part 6

Yalla To The Cloud

Play Episode Listen Later Sep 20, 2022 11:30


בפינה זו, נגיש לכם מידע על העבודה היומיומית בסביבת ענן מנקודת המבט שלנו.דוברי הפרק: אבי קינן ומייש סיידל-קיסינג. בפרק הקודם, דיברנו על מה זה deployments ואיך עושים rolling deployments. כמו כן, מאייש הרצאה לנו איך מתקינים גרסא ב-2 קליקים. בפרק זה, שהוא הפרק ה-6 והאחרון בסדרת פרקי ה-ECS, נדבר על כלי אוטומציות (CLI, ECS), מה הצורך של כלים אלו ומדוע אנחנו משתמשים בהם? רוצים להתעדכן בתכנים נוספים בנושאי ענן וטכנולוגיות מתקדמות? הירשמו עכשיו לניוזלטר שלנו ותמיד תישארו בעניינים. להרשמה: https://www.israelclouds.com/newslettersignup

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 20, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

Screaming in the Cloud
Azul and the Current State of the Java Ecosystem with Scott Sellers

Screaming in the Cloud

Play Episode Listen Later Sep 20, 2022 36:35


About ScottWith more than 28 years of successful leadership in building high technology companies and delivering advanced products to market, Scott provides the overall strategic leadership and visionary direction for Azul Systems.Scott has a consistent proven track record of vision, leadership, and success in enterprise, consumer and scientific markets. Prior to co-founding Azul Systems, Scott founded 3dfx Interactive, a graphics processor company that pioneered the 3D graphics market for personal computers and game consoles. Scott served at 3dfx as Vice President of Engineering, CTO and as a member of the board of directors and delivered 7 award-winning products and developed 14 different graphics processors. After a successful initial public offering, 3dfx was later acquired by NVIDIA Corporation.Prior to 3dfx, Scott was a CPU systems architect at Pellucid, later acquired by MediaVision. Before Pellucid, Scott was a member of the technical staff at Silicon Graphics where he designed high-performance workstations.Scott graduated from Princeton University with a bachelor of science, earning magna cum laude and Phi Beta Kappa honors. Scott has been granted 8 patents in high performance graphics and computing and is a regularly invited keynote speaker at industry conferences.Links Referenced:Azul: https://www.azul.com/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: I come bearing ill tidings. Developers are responsible for more than ever these days. Not just the code that they write, but also the containers and the cloud infrastructure that their apps run on. Because serverless means it's still somebody's problem. And a big part of that responsibility is app security from code to cloud. And that's where our friend Snyk comes in. Snyk is a frictionless security platform that meets developers where they are - Finding and fixing vulnerabilities right from the CLI, IDEs, Repos, and Pipelines. Snyk integrates seamlessly with AWS offerings like code pipeline, EKS, ECR, and more! As well as things you're actually likely to be using. Deploy on AWS, secure with Snyk. Learn more at Snyk.co/scream That's S-N-Y-K.co/screamCorey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire-drill at 4 in the morning. Software problems should drive innovation and collaboration, NOT stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production, but hide that that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your Features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest on this promoted episode today is Scott Sellers, CEO and co-founder of Azul. Scott, thank you for joining me.Scott: Thank you, Corey. I appreciate the opportunity in talking to you today.Corey: So, let's start with what you're doing these days. What is Azul? What do you folks do over there?Scott: Azul is an enterprise software and SaaS company that is focused on delivering more efficient Java solutions for our customers around the globe. We've been around for 20-plus years, and as an entrepreneur, we've really gone through various stages of different growth and different dynamics in the market. But at the end of the day, Azul is all about adding value for Java-based enterprises, Java-based applications, and really endearing ourselves to the Java community.Corey: This feels like the sort of space where there are an awful lot of great business cases to explore. When you look at what's needed in that market, there are a lot of things that pop up. The surprising part to me is that this is the direction that you personally went in. You started your career as a CPU architect, to my understanding. You were then one of the co-founders of 3dfx before it got acquired by Nvidia.You feel like you've spent your career more as a hardware guy than working on the SaaS side of the world. Is that a misunderstanding of your path, or have things changed, or is this just a new direction? Help me understand how you got here from where you were.Scott: I'm not exactly sure what the math would say because I continue to—can't figure out a way to stop time. But you're correct that my academic background, I was an electrical engineer at Princeton and started my career at Silicon Graphics. And that was when I did a lot of fantastic and fascinating work building workstations and high-end graphics systems, you know, back in the day when Silicon Graphics really was the who's who here in Silicon Valley. And so, a lot of my career began in the context of hardware. As you mentioned, I was one of the founders of graphics company called 3dfx that was one of, I think, arguably the pioneer in terms of bringing 3d graphics to the masses, if you will.And we had a great run of that. That was a really fun business to be a part of just because of what was going on in the 3d world. And we took that public and eventually sold that to Nvidia. And at that point, my itch, if you will, was really learning more about the enterprise segment. I'd been involved with professional graphics with SGI, I had been involved with consumer graphics with 3dfx.And I was fascinated just to learn about the enterprise segment. And met a couple people through a mutual friend around the 2001 timeframe, and they started talking about this thing called Java. And you know, I had of course heard about Java, but as a consumer graphics guy, didn't have a lot of knowledge about it or experience with it. And the more I learned about it, recognized that what was going on in the Java world—and credit to Sun for really creating, obviously, not only language, but building a community around Java—and recognized that new evolutions of developer paradigms really only come around once a decade if then, and was convinced and really got excited about the opportunity to ride the wave of Java and build a company around that.Corey: One of the blind spots that I have throughout the entire world of technology—and to be fair, I have many of them, but the one most relevant to this conversation, I suppose, is the Java ecosystem as a whole. I come from a background of being a grumpy Unix sysadmin—because I've never met a happy one of those in my entire career—and as a result, scripting languages is where everything that I worked with started off. And on the rare occasions, I worked in Java shops, it was, “Great. We're going to go—here's a WAR file. Go ahead and deploy this with Tomcat,” or whatever else people are going to use. But basically, “Don't worry your pretty little head about that.”At most, I have to worry about how to configure a heap or whatnot. But it's from the outside looking in, not having to deal with that entire ecosystem as a whole. And what I've seen from that particular perspective is that every time I start as a technologist, or even as a consumer trying to install some random software package in the depths of the internet, and I have to start thinking about Java, it always feels like I'm about to wind up in a confusing world. There are a number of software packages that I installed back in, I want to say the early-2010s or whatnot. “Oh, you need to have a Java runtime installed on your Mac,” for example.And okay, going through Oracle site, do I need the JRE? Do I need the JDK? Oh, there's OpenJDK, which kind of works, kind of doesn't. Amazon got into the space with Corretto, which because that sounds nothing whatsoever, like Java, but strange names coming from Amazon is basically par for the course for those folks. What is the current state of the Java ecosystem, for those of us who have—basically the closest we've ever gotten is JavaScript, which is nothing alike except for the name.Scott: And you know, frankly, given the protection around the name Java—and you know, that is a trademark that's owned by Oracle—it's amazing to me that JavaScript has been allowed to continue to be called JavaScript because as you point out, JavaScript has nothing to do with Java per se.Corey: Well, one thing they do have in common I found out somewhat recently is that Oracle also owns the trademark for JavaScript.Scott: Ah, there you go. Maybe that's why it continues.Corey: They're basically a law firm—three law firms in a trench coat, masquerading as a tech company some days.Scott: Right. But anyway, it is a confusing thing because you know, I think, arguably, JavaScript, by the numbers, probably has more programmers than any other language in the world, just given its popularity as a web language. But to your question about Java specifically, it's had an evolving life, and I think the state where it is today, I think it's in the most exciting place it's ever been. And I'll walk you through kind of why I believe that to be the case.But Java has evolved over time from its inception back in the days when it was called, I think it was Oak when it was originally conceived, and Sun had eventually branded it as Java. And at the time, it truly was owned by Sun, meaning it was proprietary code; it had to be licensed. And even though Sun gave it away, in most cases, it still at the end of the day, it was a commercially licensed product, if you will, and platform. And if you think about today's world, it would not be conceivable to create something that became so popular with programmers that was a commercially licensed product today. It almost would be mandated that it would be open-source to be able to really gain the type of traction that Java has gained.And so, even though Java was really garnering interest, you know, not only within the developer community, but also amongst commercial entities, right, everyone—and the era now I'm talking about is around the 2000 era—all of the major software vendors, whether it was obviously Sun, but then you had Oracle, you had IBM, companies like BEA, were really starting to blossom at that point. It was a—you know, you could almost not find a commercial software entity that was not backing Java. But it was still all controlled by Sun. And all that success ultimately led to a strong outcry from the community saying this has to be open-source; this is too important to be beholden to a single vendor. And that decision was made by Sun prior to the Oracle acquisition, they actually open-sourced the Java runtime code and they created an open-source project called OpenJDK.And to Oracle's credit, when they bought Sun—which I think at the time when you really look back, Oracle really did not have a lot of track record, if you will, of being involved with an open-source community—and I think when Oracle acquired Sun, there was a lot of skepticism as to what's going to happen to Java. Is Oracle going to make this thing, you know, back to the old days, proprietary Oracle, et cetera? And really—Corey: I was too busy being heartbroken over Solaris at that point to pay much attention to the Java stuff, but it felt like it was this—sort of the same pattern, repeated across multiple ecosystems.Scott: Absolutely. And even though Sun had also open-sourced Solaris, with the OpenSolaris project, that was one of the kinds of things that it was still developed very much in a closed environment, and then they would kind of throw some code out into the open world. And no one really ran OpenSolaris because it wasn't fully compatible with Solaris. And so, that was a faint attempt, if you will.But Java was quite different. It was truly all open-sourced, and the big difference that—and again, I give Oracle a lot of credit for this because this was a very important time in the evolution of Java—that Oracle, maintained Sun's commitment to not only continue to open-source Java but most importantly, develop it in the open community. And so, you know, again, back and this is the 2008, ‘09, ‘10 timeframe, the evolution of Java, the decisions, the standards, you know, what goes in the platform, what doesn't, decisions about updates and those types of things, that truly became a community-led world and all done in the open-source. And credit to Oracle for continuing to do that. And that really began the transition away from proprietary implementations of Java to one that, very similar to Linux, has really thrived because of the true open-source nature of what Java is today.And that's enabled more and more companies to get involved with the evolution of Java. If you go to the OpenJDK page, you'll see all of the not only, you know, incredibly talented individuals that are involved with the evolution of Java, but again, a who's who in pretty much every major commercial entities in the enterprise software world is also somehow involved in the OpenJDK community. And so, it really is a very vibrant, evolving standard. And some of the tactical things that have happened along the way in terms of changing how versions of Java are released still also very much in the context of maintaining compatibility and finding that careful balance of evolving the platform, but at the same time, recognizing that there is a lot of Java applications out there, so you can't just take a right-hand turn and forget about the compatibility side of things. But we as a community overall, I think, have addressed that very effectively, and the result has been now I think Java is more popular than ever and continues to—we liken it kind of to the mortar and the brick walls of the enterprise. It's a given that it's going to be used, certainly by most of the enterprises worldwide today.Corey: There's a certain subset of folk who are convinced the Java, “Oh, it's this a legacy programming language, and nothing modern or forward-looking is going to be built in it.” Yeah, those people generally don't know what the internal language stack looks like at places like oh, I don't know, AWS, Google, and a few others, it is very much everywhere. But it also feels, on some level, like, it's a bit below the surface-level of awareness for the modern full-stack developer in some respects, right up until suddenly it's very much not. How is Java evolving in a cloud these days?Scott: Well, what we see happening—you know, this is true for—you know, I'm a techie, so I can talk about other techies. I mean as techies, we all like the new thing, right? I mean, it's not that exciting to talk about a language that's been around for 20-plus years. But that doesn't take away from the fact that we still all use keyboards. I mean, no one really talks about what keyboard they use anymore—unless you're really into keyboards—but at the end of the day, it's still a fundamental tool that you use every single day.And Java is kind of in the same situation. The reason that Java continues to be so fundamental is that it really comes back to kind of reinventing the wheel problem. Are there are other languages that are more efficient to code in? Absolutely. Are there other languages that, you know, have some capabilities that the Java doesn't have? Absolutely.But if you have the ability to reinvent everything from scratch, sure, go for it. And you also don't have to worry about well, can I find enough programmers in this, you know, new hot language, okay, good luck with that. You might be able to find dozens, but when you need to really scale a company into thousands or tens of thousands of developers, good luck finding, you know, everyone that knows, whatever your favorite hot language of the day is.Corey: It requires six years experience in a four-year-old language. Yeah, it's hard to find that, sometimes.Scott: Right. And you know, the reality is, is that really no application ever is developed from scratch, right? Even when an application is, quote, new, immediately, what you're using is frameworks and other things that have written long ago and proven to be very successful.Corey: And disturbing amounts of code copied and pasted from Stack Overflow.Scott: Absolutely.Corey: But that's one of those impolite things we don't say out loud very often.Scott: That's exactly right. So, nothing really is created from scratch anymore. And so, it's all about building blocks. And this is really where this snowball of Java is difficult to stop because there is so much third-party code out there—and by that, I mean, you know, open-source, commercial code, et cetera—that is just so leveraged and so useful to very quickly be able to take advantage of and, you know, allow developers to focus on truly new things, not reinventing the wheel for the hundredth time. And that's what's kind of hard about all these other languages is catching up to Java with all of the things that are immediately available for developers to use freely, right, because most of its open-source. That's a pretty fundamental Catch-22 about when you start talking about the evolution of new languages.Corey: I'm with you so far. The counterpoint though is that so much of what we're talking about in the world of Java is open-source; it is freely available. The OpenJDK, for example, says that right on the tin. You have built a company and you've been in business for 20 years. I have to imagine that this is not one of those stories where, “Oh, all the things we do, we give away for free. But that's okay. We make it up in volume.” Even the venture capitalist mindset tends to run out of patience on those kinds of timescales. What is it you actually do as a business that clearly, obviously delivers value for customers but also results in, you know, being able to meet payroll every week?Scott: Right? Absolutely. And I think what time has shown is that, with one very notable exception and very successful example being Red Hat, there are very, very few pure open-source companies whose business is only selling support services for free software. Most successful businesses that are based on open-source are in one-way shape or form adding value-added elements. And that's our strategy as well.The heart of everything we do is based on free code from OpenJDK, and we have a tremendous amount of business that we are following the Red Hat business model where we are selling support and long-term access and a huge variety of different operating system configurations, older Java versions. Still all free software, though, right, but we're selling support services for that. And that is, in essence, the classic Red Hat business model. And that business for us is incredibly high growth, very fast-moving, a lot of that business is because enterprises are tired of paying the very high price to Oracle for Java support and they're looking for an open-source alternative that is exactly the same thing, but comes in pure open-source form and with a vendor that is as reputable as Oracle. So, a lot of our businesses based on that.However, on top of that, we also have value-added elements. And so, our product that is called Azul Platform Prime is rooted in OpenJDK—it is OpenJDK—but then we've added value-added elements to that. And what those value-added elements create is, in essence, a better Java platform. And better in this context means faster, quicker to warm up, elimination of some of the inconsistencies of the Java runtime in terms of this nasty problem called garbage collection which causes applications to kind of bounce around in terms of performance limitations. And so, creating a better Java is another way that we have monetized our company is value-added elements that are built on top of OpenJDK. And I'd say that part of the business is very typical for the majority of enterprise software companies that are rooted in open-source. They're typically adding value-added components on top of the open-source technology, and that's our similar strategy as well.And then the third evolution for us, which again is very tried-and-true, is evolving the business also to add SaaS offerings. So today, the majority of our customers, even though they deploy in the cloud, they're stuck customer-managed and so they're responsible for where do I want to put my Java runtime on building out my stack and cetera, et cetera. And of course, that could be on-prem, but like I mentioned, the majority are in the cloud. We're evolving our product offerings also to have truly SaaS-based solutions so that customers don't even need to manage those types of stacks on their own anymore.Corey: On some level, it feels like we're talking about two different things when we talk about cloud and when we talk about programming languages, but increasingly, I'm starting to see across almost the entire ecosystem that different languages and different cloud providers are in many ways converging. How do you see Java changing as cloud-native becomes the default rather than the new thing?Scott: Great question. And I think the thing to recognize about, really, most popular programming languages today—I can think of very few exceptions—these languages were created, envisioned, implemented if you will, in a day when cloud was not top-of-mind, and in many cases, certainly in the case of Java, cloud didn't even exist when Java was originally conceived, nor was that the case when you know, other languages, such as Python, or JavaScript, or on and on. So, rethinking how these languages should evolve in very much the context of a cloud-native mentality is a really important initiative that we certainly are doing and I think the Java community is doing overall. And how you architect not only the application, but even the Java runtime itself can be fundamentally different if you know that the application is going to be deployed in the cloud.And I'll give you an example. Specifically, in the world of any type of runtime-based language—and JavaScript is an example of that; Python is an example of that; Java is an example of that—in all of those runtime-based environments, what that basically means is that when the application is run, there's a piece of software that's called the runtime that actually is running that application code. And so, you can think about it as a middleware piece of software that sits between the operating system and the application itself. And so, that runtime layer is common across those languages and those platforms that I mentioned. That runtime layer is evolving, and it's evolving in a way that is becoming more and more cloud-native in it's thinking.The process itself of actually taking the application, compiling it into whatever underlying architecture it may be running on—it could be an x86 instance running on Amazon; it could be, you know, for example, an ARM64, which Amazon has compute instances now that are based on an ARM64 processor that they call Graviton, which is really also kind of altering the price-performance of the compute instances on the AWS platform—that runtime layer magically takes an application that doesn't have to be aware of the underlying hardware and transforms that into a way that can be run. And that's a very expensive process; it's called just-in-time compiling, and that just-in-time compilation, in today's world—which wasn't really based on cloud thinking—every instance, every compute instance that you deploy, that same JIT compilation process is happening over and over again. And even if you deploy 100 instances for scalability, every one of those 100 instances is doing that same work. And so, it's very inefficient and very redundant. Contrast that to a cloud-native thinking: that compilation process should be a service; that service should be done once.The application—you know, one instance of the application is actually run and there are the other ninety-nine should just reuse that compilation process. And that shared compiler service should be scalable and should be able to scale up when applications are launched and you need more compilation resources, and then scaled right back down when you're through the compilation process and the application is more moving into the—you know, to the runtime phase of the application lifecycle. And so, these types of things are areas that we and others are working on in terms of evolving the Java runtime specifically to be more cloud-native.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig secures your cloud from source to run. They believe, as do I, that DevOps and security are inextricably linked. If you wanna learn more about how they view this, check out their blog, it's definitely worth the read. To learn more about how they are absolutely getting it right from where I sit, visit Sysdig.com and tell them that I sent you. That's S Y S D I G.com. And my thanks to them for their continued support of this ridiculous nonsense.Corey: This feels like it gets even more critical when we're talking about things like serverless functions across basically all the cloud providers these days, where there's the whole setup, everything in the stack, get it running, get it listening, ready to go, to receive a single request and then shut itself down. It feels like there are a lot of operational efficiencies possible once you start optimizing from a starting point of yeah, this is what that environment looks like, rather than us big metal servers sitting in a rack 15 years ago.Scott: Yeah. I think the evolution of serverless appears to be headed more towards serverless containers as opposed to serverless functions. Serverless functions have a bunch of limitations in terms of when you think about it in the context of a complex, you know, microservices-based deployment framework. It's just not very efficient, to spin up and spin down instances of a function if that actually is being—it is any sort of performance or latency-sensitive type of applications. If you're doing something very rarely, sure, it's fine; it's efficient, it's elegant, et cetera.But any sort of thing that has real girth to it—and girth probably means that's what's driving your application infrastructure costs, that's what's driving your Amazon bill every month—those types of things typically are not going to be great for starting and stopping functional instances. And so, serverless is evolving more towards thinking about the container itself not having to worry about the underlying operating system or the instance on Amazon that it's running on. And that's where, you know, we see more and more of the evolution of serverless is thinking about it at a container-level as opposed to a functional level. And that appears to be a really healthy steady state, so it gets the benefits of not having to worry about all the underlying stuff, but at the same time, doesn't have the downside of trying to start and stop functional influences at a given point in time.Corey: It seems to me that there are really two ways of thinking about cloud. The first is what I think a lot of companies do their first outing when they're going into something like AWS. “Okay, we're going to get a bunch of virtual machines that they call instances in AWS, we're going to run things just like it's our data center except now data transfer to the internet is terrifyingly expensive.” The more quote-unquote, “Cloud-native” way of thinking about this is what you're alluding to where there's, “Here's some code that I wrote. I want to throw it to my cloud provider and just don't tell me about any of the infrastructure parts. Execute this code when these conditions are met and leave me alone.”Containers these days seem to be one of our best ways of getting there with a minimum of fuss and friction. What are you seeing in the enterprise space as far as adoption of those patterns go? Or are we seeing cloud repatriation showing up as a real thing and I'm just not in the right place to see it?Scott: Well, I think as a cloud journey evolves, there's no question that—and in fact it's even silly to say that cloud is here to stay because I think that became a reality many, many years ago. So really, the question is, what are the challenges now with cloud deployments? Cloud is absolutely a given. And I think you stated earlier, it's rare that, whether it's a new company or a new application, at least in most businesses that don't have specific regulatory requirements, that application is highly, highly likely to be envisioned to be initially and only deployed in the cloud. That's a great thing because you have so many advantages of not having to purchase infrastructure in advance, being able to tap into all of the various services that are available through the cloud providers. No one builds databases anymore; you're just tapping into the service that's provided by Azure or AWS, or what have you.And, you know, just that specific example is a huge amount of savings in terms of just overhead, and license costs, and those types of stuff, and there's countless examples of that. And so, the services that are available in the cloud are unquestioned. So, there's countless advantages of why you want to be in the cloud. The downside, however, the cloud that is, if at the end of the day, AWS, Microsoft with Azure, Google with GCP, they are making 30% margin on that cloud infrastructure. And in the days of hardware, when companies would actually buy their servers from Dell, or HP, et cetera, those businesses are 5% margin.And so, where's that 25% going? Well, the 25% is being paid for by the users of cloud, and as a result of that, when you look at it purely from an operational cost perspective, it is more expensive to run in the cloud than it is back in the legacy days, right? And that's not to say that the industry has made the wrong choice because there's so many advantages of being in cloud, there's no doubt about it. And there should be—you know, and the cloud providers deserve to take some amount of margin to provide the services that they provide; there's no doubt about that. The question is, how do you do the best of all worlds?And you know, there is a great blog by a couple of the partners in Andreessen Horowitz, they called this the Cloud Paradox. And the Cloud Paradox really talks about the challenges. It's really a Catch-22; how do you get all the benefits of cloud but do that in a way that is not overly taxing from a cost perspective? And a lot of it comes down to good practices and making sure that you have the right monitoring and culture within an enterprise to make sure that cloud cost is a primary thing that is discussed and metric, but then there's also technologies that can help so that you don't have to even think about what you really don't ever want to do: repatriating, which is about the concept of actually moving off the cloud back to the old way of doing things. So certainly, I don't believe repatriation is a practical solution for ongoing and increasing cloud costs. I believe technology is a solution to that.And there are technologies such as our product, Azul Platform Prime, that in essence, allows you to do more with less, right, get all the benefits of cloud, deploy in your Amazon environment, deploy in your Azure environment, et cetera, but imagine if instead of needing a hundred instances to handle your given workload, you could do that with 50 or 60. Tomorrow, that means that you can start savings and being able to do that simply by changing your JVM from a standard OpenJDK or Oracle JVM to something like Platform Prime, you can immediately start to start seeing the benefits from that. And so, a lot of our business now and our growth is coming from companies that are screaming under the ongoing cloud costs and trying to keep them in line, and using technology like Azul Platform Prime to help mitigate those costs.Corey: I think that there is a somewhat foolish approach that I'm seeing taken by a lot of folks where there are some companies that are existentially anti-cloud, if for no other reason than because if the cloud wins, then they don't really have a business anymore. The problem I see with that is that it seems that their solution across the board is to turn back the clock where if I'm going to build a startup, it's time for me to go buy some servers and a rack somewhere and start negotiating with bandwidth providers. I don't see that that is necessarily viable for almost anyone. We aren't living in 1995 anymore, despite how much some people like to pretend we are. It seems like if there are workloads—for which I agree, cloud is not necessarily an economic fit, first, I feel like the market will fix that in the fullness of time, but secondly, on an individual workload belonging in a certain place is radically different than, “Oh, none of our stuff should live on cloud. Everything belongs in a data center.” And I just think that companies lose all credibility when they start pretending that it's any other way.Scott: Right. I'd love to see the reaction of the venture capitalists' face when an entrepreneur walks in and talks about how their strategy for deploying their SaaS service is going to be buying hardware and renting some space in the local data center.Corey: Well, there is a good cost control method, if you think about it. I mean very few engineers are going to accidentally spin up an $8 million cluster in a data center a second time, just because there's no space left for it.Scott: And you're right; it does happen in the cloud as well. It's just, I agree with you completely that as part of the evolution of cloud, in general, is an ever-improving aspect of cost and awareness of cost and building in technologies that help mitigate that cost. So, I think that will continue to evolve. I think, you know, if you really think about the cloud journey, cost, I would say, is still in early phases of really technologies and practices and processes of allowing enterprises to really get their head around cost. I'd still say it's a fairly immature industry that is evolving quickly, just given the importance of it.And so, I think in the coming years, you're going to see a radical improvement in terms of cost awareness and technologies to help with costs, that again allows you to the best of all worlds. Because, you know, if you go back to the Dark Ages and you start thinking about buying servers and infrastructure, then you are really getting back to a mentality of, “I've got to deploy everything. I've got to buy software for my database. I've got to deploy it. What am I going to do about my authentication service? So, I got to buy this vendor's, you know, solution, et cetera.” And so, all that stuff just goes away in the world of cloud, so it's just not practical, in this day and age I think, to think about really building a business that's not cloud-native from the beginning.Corey: I really want to thank you for spending so much time talking to me about how you view the industry, the evolution we've seen in the Java ecosystem, and what you've been up to. If people want to learn more, where's the best place for them to find you?Scott: Well, there's a thing called a website that you may not have heard of, it's really cool.Corey: Can I build it in Java?Scott: W-W-dot—[laugh]. Yeah. Azul website obviously has an awful lot of information about that, Azul is spelled A-Z-U-L, and we sometimes get the question, “How in the world did you name a company—why did you name it Azul?”And it's kind of a funny story because back in the days of Azul when we thought about, hey, we want to be big and successful, and at the time, IBM was the gold standard in terms of success in the enterprise world. And you know, they were Big Blue, so we said, “Hey, we're going to be a little blue. Let's be Azul.” So, that's where we began. So obviously, go check out our site.We're very present, also, in the Java community. We're, you know, many developer conferences and talks. We sponsor and run many of what's called the Java User Groups, which are very popular 10-, 20-person meetups that happen around the globe on a regular basis. And so, you know, come check us out. And I appreciate everyone's time in listening to the podcast today.Corey: No, thank you very much for spending as much time with me as you have. It's appreciated.Scott: Thanks, Corey.Corey: Scott Sellers, CEO and co-founder of Azul. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an entire copy of the terms and conditions from Oracle's version of the JDK.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

The Legalpreneurs Sandbox
Episode 146 - Future 50 Series - The Legal Team of the Future: Law+ Skills

The Legalpreneurs Sandbox

Play Episode Listen Later Sep 19, 2022 41:57


What will the legal team of the future look like and, do we have a blueprint for it? That's the question Adam Curphey answers in his recently published book: The Legal Team of the Future: Law+ Skills. You can purchase Adam's book here. How, where and why the legal ecosystem is transforming is reflected in the new and different way legal work is now being conceived and the capabilities needed to deliver it better, cheaper and faster. It's creating and driving a whole new war for talent! The workforce of the future, evolving now, is a celebration of the multis – multi-disciplinary, multi-cultural, multi-generational and multi-talented and much, much more! In this podcast in the Future 50 Series of the CLI Legalpreneurs Spotlight, we chatted with Adam about his book. We discussed the changing legal ecosystem and, in particular, his Law+ model – Law at the core but plus people, business, change and technology; the challenges and opportunities for law firms, legal departments, and law schools to be discovered and derived from the framework and blueprint in the book; the business case for change; and, where to start on the Law+ journey. Adam is the Senior Manager of Innovation at Mayer Brown in the UK. He has an amazing history in innovation, legal education and capability development as a former practising lawyer, an academic, in law firms, and in advisory capacities for Lawtech UK and the O-Shaped Lawyer. This deep, lived experience is what makes this discussion and his book practical and candid…and his quirky sense of humour is what makes it fun! If you would prefer to watch rather than listen to this podcast, you'll find the video here. About the Future 50 Series In the Future 50 Series we're chatting with legalpreneurs who, through their ideas and actions, are challenging and transforming legal BAU all around the world. If you would like to recommend people for this Series, please contact us at: CLI@collaw.edu.au.

The Legalpreneurs Sandbox
Episode 147 - Future 50 Series - The LegalTech Fund

The Legalpreneurs Sandbox

Play Episode Listen Later Sep 19, 2022 25:30


In this podcast in the Future 50 Series of the CLI Legalpreneurs Spotlight, we chatted with Zach Posner, the General Partner and Co-Founder of The LegalTech Fund (TLTF). You can see why Zach does what he does. His 15+ years experience working with early-stage companies as a CEO and in the Venture and BOD space is layered into a background in finance, strategy, and business development – it's hard to imagine a better set of capabilities for his role or more compatible with the work of TLTF. The TLTF is unique in a number of ways and in the same ways, stands out in a crowded tech funding marketplace - it was established during the pandemic, it invests in start-ups (and beyond), and it's driven by a desire on the part of the founders and investors to proactively support all entrepreneurs (even if they don't invest in them) and, importantly, it's very focussed on building a collaborative community along the way. For TLTF, it's not just about the money, it is about helping entrepreneurs and their companies succeed, supporting what they need, and being there at the beginning, middle, end, and back again!  We discussed TLTF; how investment and development of legaltech has progressed (or not) pre, during and now hopefully post COVID; what barriers have held it back; how these are changing but understanding that there is a lot of room for things to grow in the legal ecosystem, gain momentum, and catch up to where other industries find themselves; when we'll know if legaltech has made a difference; and the future of the legaltech industry in the near future. We also discussed TLTF's inaugural Summit in Miami in December 2022. The Summit, like the Fund, is also unique – it will be a meeting of legaltech thought leaders, doers, and stakeholders who are looking to the future, want to play their part in it, and know the value and mutual benefit derived from working on that together. The Summit will also feature a start-up challenge with the focus again on sharing experiences and building community. If you would prefer to watch rather than listen to this podcast, you'll find the video here. About the Future 50 Series In the Future 50 Series we're chatting with legalpreneurs who, through their ideas and actions, are challenging and transforming legal BAU all around the world. If you would like to recommend people for this Series, please contact us at: CLI@collaw.edu.au.

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 19, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

.NET in pillole
Usare gli alias per aumentare la produttività con git

.NET in pillole

Play Episode Listen Later Sep 19, 2022 10:51


Ecco un valido modo per aumentare la propria produttività con la CLI di git, sfruttare gli alias.A questo link trovate gli alias che utilizzo io: https://gist.github.com/andreadottor/70ec9e63f812a0a331748a695184da26Qui le risorse da qui sono partito:https://git-scm.com/book/en/v2/Git-Basics-Git-Aliaseshttps://haacked.com/archive/2014/07/28/github-flow-aliases/https://haacked.com/archive/2017/01/04/git-alias-open-url/https://opensource.com/article/20/11/git-aliases

AWS Bites
51. Authentication for a CLI app with Cognito - Live coding PART 4

AWS Bites

Play Episode Listen Later Sep 17, 2022 91:06


This is a special episode recorded live during a live coding session on YouTube (2022-09-16). The audio-only experience might not be the best one, so if you are curious to see the video and enjoy our diagrams and screen sharing, please check this episode on YouTube: https://www.youtube.com/watch?v=vVic3oqqqfY. How can you build a WeTransfer or a Dropbox Transfer clone on AWS? This is our fourth live coding stream. In this episode, we started looking into adding some security to our application. Specifically, we started implementing a device auth flow on top of AWS Cognito to allow our file upload CLI application to get some credentials. All our code is available in this repository: https://github.com/awsbites/weshare.click In this episode we mentioned the following resources: Content-Disposition Header on MDN: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition OAuth 2 Device Auth flow RFC8628: https://www.rfc-editor.org/rfc/rfc8628 XKCD Comic about password security: https://xkcd.com/936/ crypto-random-string package: https://www.npmjs.com/package/crypto-random-string Dash offline documentation app: https://kapeli.com/dash You can listen to AWS Bites wherever you get your podcasts: - Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-bites/id1585489017 - Spotify: https://open.spotify.com/show/3Lh7PzqBFV6yt5WsTAmO5q - Google: https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy82YTMzMTJhMC9wb2RjYXN0L3Jzcw== - Breaker: https://www.breaker.audio/aws-bites - RSS: https://anchor.fm/s/6a3312a0/podcast/rss Do you have any AWS questions you would like us to address? Leave a comment here or connect with us on Twitter: - https://twitter.com/eoins - https://twitter.com/loige #AWS #livecoding #transfer

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 16, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

Unofficial SAP on Azure podcast
#110 - The one with the Azure Center for SAP Solutions (Aron Stern) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Sep 16, 2022 53:35


In episode 110 of our SAP on Azure video podcast we talk about a Bridge Framework to integrate SAP systems with Teams, support for NFS shares with Azure Files, Integrating Signavio with SAP solution Manager hosted on Azure using Azure Application Gateway and SAP on Azure high availability – change from SPN to MSI for Pacemaker clusters using Azure fencing. Then Aron Stern joins us to talk about making Azure SAP-aware. The Azure Center for SAP Solutions (ACSS) allows customers to deploy and manager their SAP systems directly from within Azure. With a few clicks -- or via APIs, CLI, ... -- you can install your SAP system, start and stop it or integrate it with other Azure PaaS services. Think of it as the Rosetta Stone for Azure and SAP. http://aka.ms/acss https://www.saponazurepodcast.de/episode110 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #SAPonAzure

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 15, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

Screaming in the Cloud
The Future of Serverless with Allen Helton

Screaming in the Cloud

Play Episode Listen Later Sep 15, 2022 39:06


About AllenAllen is a cloud architect at Tyler Technologies. He helps modernize government software by creating secure, highly scalable, and fault-tolerant serverless applications.Allen publishes content regularly about serverless concepts and design on his blog - Ready, Set Cloud!Links Referenced: Ready, Set, Cloud blog: https://readysetcloud.io Tyler Technologies: https://www.tylertech.com/ Twitter: https://twitter.com/allenheltondev Linked: https://www.linkedin.com/in/allenheltondev/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire-drill at 4 in the morning. Software problems should drive innovation and collaboration, NOT stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production, but hide that that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your Features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.Corey: I come bearing ill tidings. Developers are responsible for more than ever these days. Not just the code that they write, but also the containers and the cloud infrastructure that their apps run on. Because serverless means it's still somebody's problem. And a big part of that responsibility is app security from code to cloud. And that's where our friend Snyk comes in. Snyk is a frictionless security platform that meets developers where they are - Finding and fixing vulnerabilities right from the CLI, IDEs, Repos, and Pipelines. Snyk integrates seamlessly with AWS offerings like code pipeline, EKS, ECR, and more! As well as things you're actually likely to be using. Deploy on AWS, secure with Snyk. Learn more at Snyk.co/scream That's S-N-Y-K.co/screamCorey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while I wind up stumbling into corners of the internet that I previously had not traveled. Somewhat recently, I wound up having that delightful experience again by discovering readysetcloud.io, which has a whole series of, I guess some people might call it thought leadership, I'm going to call it instead how I view it, which is just amazing opinion pieces on the context of serverless, mixed with APIs, mixed with some prognostications about the future.Allen Helton by day is a cloud architect at Tyler Technologies, but that's not how I encountered you. First off, Allen, thank you for joining me.Allen: Thank you, Corey. Happy to be here.Corey: I was originally pointed towards your work by folks in the AWS Community Builder program, of which we both participate from time to time, and it's one of those, “Oh, wow, this is amazing. I really wish I'd discovered some of this sooner.” And every time I look through your back catalog, and I click on a new post, I see things that are either I've really agree with this or I can't stand this opinion, I want to fight about it, but more often than not, it's one of those recurring moments that I love: “Damn, I wish I had written something like this.” So first, you're absolutely killing it on the content front.Allen: Thank you, Corey, I appreciate that. The content that I make is really about the stuff that I'm doing at work. It's stuff that I'm passionate about, stuff that I'd spend a decent amount of time on, and really the most important thing about it for me, is it's stuff that I'm learning and forming opinions on and wants to share with others.Corey: I have to say, when I saw that you were—oh, your Tyler Technologies, which sounds for all the world like, oh, it's a relatively small consultancy run by some guy presumably named Tyler, and you know, it's a petite team of maybe 20, 30 people on the outside. Yeah, then I realized, wait a minute, that's not entirely true. For example, for starters, you're publicly traded. And okay, that does change things a little bit. First off, who are you people? Secondly, what do you do? And third, why have I never heard of you folks, until now?Allen: Tyler is the largest company that focuses completely on the public sector. We have divisions and products for pretty much everything that you can imagine that's in the public sector. We have software for schools, software for tax and appraisal, we have software for police officers, for courts, everything you can think of that runs the government can and a lot of times is run on Tyler software. We've been around for decades building our expertise in the domain, and the reason you probably haven't heard about us is because you might not have ever been in trouble with the law before. If you [laugh] if you have been—Corey: No, no, I learned very early on in the course of my life—which will come as a surprise to absolutely no one who spent more than 30 seconds with me—that I have remarkably little filter and if ten kids were the ones doing something wrong, I'm the one that gets caught. So, I spent a lot of time in the principal's office, so this taught me to keep my nose clean. I'm one of those squeaky-clean types, just because I was always terrified of getting punished because I knew I would get caught. I'm not saying this is the right way to go through life necessarily, but it did have the side benefit of, no, I don't really engage with law enforcement going throughout the course of my life.Allen: That's good. That's good. But one exposure that a lot of people get to Tyler is if you look at the bottom of your next traffic ticket, it'll probably say Tyler Technologies on the bottom there.Corey: Oh, so you're really popular in certain circles, I'd imagine?Allen: Super popular. Yes, yes. And of course, you get all the benefits of writing that code that says ‘if defendant equals Allen Helton then return.'Corey: I like that. You get to have the exception cases built in that no one's ever going to wind up looking into.Allen: That's right. Yes.Corey: The idea of what you're doing makes an awful lot of sense. There's a tremendous need for a wide variety of technical assistance in the public sector. What surprises me, although I guess it probably shouldn't, is how much of your content is aimed at serverless technologies and API design, which to my way of thinking, isn't really something that public sector has done a lot with. Clearly I'm wrong.Allen: Historically, you're not wrong. There's an old saying that government tends to run about ten years behind on technology. Not just technology, but all over the board and runs about ten years behind. And until recently, that's really been true. There was a case last year, a situation last year where one of the state governments—I don't remember which one it was—but they were having a crisis because they couldn't find any COBOL developers to come in and maintain their software that runs the state.And it's COBOL; you're not going to find a whole lot of people that have that skill. A lot of those people are retiring out. And what's happening is that we're getting new people sitting in positions of power and government that want innovation. They know about the cloud and they want to be able to integrate with systems quickly and easily, have little to no onboarding time. You know, there are people in power that have grown up with technology and understand that, well, with everything else, I can be up and running in five or ten minutes. I cannot do this with the software I'm consuming now.Corey: My opinion on it is admittedly conflicted because on the one hand, yeah, I don't think that governments should be running on COBOL software that runs on mainframes that haven't been supported in 25 years. Conversely, I also don't necessarily want them being run like a seed series startup, where, “Well, I wrote this code last night, and it's awesome, so off I go to production with it.” Because I can decide not to do business anymore with Twitter for Pets, and I could go on to something else, like PetFlicks, or whatever it is I choose to use. I can't easily opt out of my government. The decisions that they make stick and that is going to have a meaningful impact on my life and everyone else's life who is subject to their jurisdiction. So, I guess I don't really know where I believe the proper, I guess, pace of technological adoption should be for governments. Curious to get your thoughts on this.Allen: Well, you certainly don't want anything that's bleeding edge. That's one of the things that we kind of draw fine lines around. Because when we're dealing with government software, we're dealing with, usually, critically sensitive information. It's not medical records, but it's your criminal record, and it's things like your social security number, it's things that you can't have leaking out under any circumstances. So, the things that we're building on are things that have proven out to be secure and have best practices around security, uptime, reliability, and in a lot of cases as well, and maintainability. You know, if there are issues, then let's try to get those turned around as quickly as we can because we don't want to have any sort of downtime from the software side versus the software vendor side.Corey: I want to pivot a little bit to some of the content you've put out because an awful lot of it seems to be, I think I'll call it variations on a theme. For example, I just read some recent titles, and to illustrate my point, “Going API First: Your First 30 Days,” “Solutions Architect Tips how to Design Applications for Growth,” “3 Things to Know Before Building A Multi-Tenant Serverless App.” And the common thread that I see running through all of these things are these are things that you tend to have extraordinarily strong and vocal opinions about only after dismissing all of them the first time and slapping something together, and then sort of being forced to live with the consequences of the choices that you've made, in some cases you didn't realize you were making at the time. Are you one of those folks that has the wisdom to see what's coming down the road, or did you do what the rest of us do and basically learn all this stuff by getting it hilariously wrong and having to careen into rebound situations as a result?Allen: [laugh]. I love that question. I would like to say now, I feel like I have the vision to see something like that coming. Historically, no, not at all. Let me talk a little bit about how I got to where I am because that will shed a lot of context on that question.A few years ago, I was put into a position at Tyler that said, “Hey, go figure out this cloud thing.” Let's figure out what we need to do to move into the cloud safely, securely, quickly, all that rigmarole. And so, I did. I got to hand-select team of engineers from people that I worked with at Tyler over the past few years, and we were basically given free rein to learn. We were an R&D team, a hundred percent R&D, for about a year's worth of time, where we were learning about cloud concepts and theory and building little proof of concepts.CI/CD, serverless, APIs, multi-tenancy, a whole bunch of different stuff. NoSQL was another one of the things that we had to learn. And after that year of R&D, we were told, “Okay, now go do something with that. Go build this application.” And we did, building on our theory our cursory theory knowledge. And we get pretty close to go live, and then the business says, “What do you do in this scenario? What do you do in that scenario? What do you do here?”Corey: “I update my resume and go work somewhere else. Where's the hard part here?”Allen: [laugh].Corey: Turns out, that's not a convincing answer.Allen: Right. So, we moved quickly. And then I wouldn't say we backpedaled, but we hardened for a long time before the—prior to the go-live, with the lessons that we've learned with the eyes of Tyler, the mature enterprise company, saying, “These are the things that you have to make sure that you take into consideration in an actual production application.” One of the things that I always pushed—I was a manager for a few years of all these cloud teams—I always push do it; do it right; do it better. Right?It's kind of like crawl, walk, run. And if you follow my writing from the beginning, just looking at the titles and reading them, kind of like what you were doing, Corey, you'll see that very much. You'll see how I talk about CI/CD, you'll see me how I talk about authorization, you'll see me how I talk about multi-tenancy. And I kind of go in waves where maybe a year passes and you see my content revisit some of the topics that I've done in the past. And they're like, “No, no, no, don't do what I said before. It's not right.”Corey: The problem when I'm writing all of these things that I do, for example, my entire newsletter publication pipeline is built on a giant morass of Lambda functions and API Gateways. It's microservices-driven—kind of—and each microservice is built, almost always, with a different framework. Lately, all the new stuff is CDK. I started off with the serverless framework. There are a few other things here and there.And it's like going architecting, back in time as I have to make updates to these things from time to time. And it's the problem with having done all that myself is that I already know the answer to, “What fool designed this?” It's, well, you're basically watching me learn what I was, doing bit by bit. I'm starting to believe that the right answer on some level, is to build an inherent shelf-life into some of these things. Great, in five years, you're going to come back and re-architect it now that you know how this stuff actually works rather than patching together 15 blog posts by different authors, not all of whom are talking about the same thing and hoping for the best.Allen: Yep. That's one of the things that I really like about serverless, I view that as a giant pro of doing Serverless is that when we revisit with the lessons learned, we don't have to refactor everything at once like if it was just a big, you know, MVC controller out there in the sky. We can refactor one Lambda function at a time if now we're using a new version of the AWS SDK, or we've learned about a new best practice that needs to go in place. It's a, “While you're in there, tidy up, please,” kind of deal.Corey: I know that the DynamoDB fanatics will absolutely murder me over this one, but one of the reasons that I have multiple Dynamo tables that contain, effectively, variations on the exact same data, is because I want to have the dependency between the two different microservices be the API, not, “Oh, and under the hood, it's expecting this exact same data structure all the time.” But it just felt like that was the wrong direction to go in. That is the justification I use for myself why I run multiple DynamoDB tables that [laugh] have the same content. Where do you fall on the idea of data store separation?Allen: I'm a big single table design person myself, I really like the idea of being able to store everything in the same table and being able to create queries that can return me multiple different types of entity with one lookup. Now, that being said, one of the issues that we ran into, or one of the ambiguous areas when we were getting started with serverless was, what does single table design mean when you're talking about microservices? We were wondering does single table mean one DynamoDB table for an entire application that's composed of 15 microservices? Or is it one table per microservice? And that was ultimately what we ended up going with is a table per microservice. Even if multiple microservices are pushed into the same AWS account, we're still building that logical construct of a microservice and one table that houses similar entities in the same domain.Corey: So, something I wish that every service team at AWS would do as a part of their design is draw the architecture of an application that you're planning to build. Great, now assume that every single resource on that architecture diagram lives in its own distinct AWS account because somewhere in some customer, there's going to be an account boundary at every interconnection point along the way. And so, many services don't do that where it's, “Oh, that thing and the other thing has to be in the same account.” So, people have to write their own integration shims, and it makes doing the right thing of putting different services into distinct bounded AWS accounts for security or compliance reasons way harder than I feel like it needs to be.Allen: [laugh]. Totally agree with you on that one. That's one of the things that I feel like I'm still learning about is the account-level isolation. I'm still kind of early on, personally, with my opinions in how we're structuring things right now, but I'm very much of a like opinion that deploying multiple things into the same account is going to make it too easy to do something that you shouldn't. And I just try not to inherently trust people, in the sense that, “Oh, this is easy. I'm just going to cross that boundary real quick.”Corey: For me, it's also come down to security risk exposure. Like my lasttweetinaws.com Twitter shitposting thread client lives in a distinct AWS account that is separate from the AWS account that has all of our client billing data that lives within it. The idea being that if you find a way to compromise my public-facing Twitter client, great, the blast radius should be constrained to, “Yay, now you can, I don't know, spin up some cryptocurrency mining in my AWS account and I get to look like a fool when I beg AWS for forgiveness.”But that should be the end of it. It shouldn't be a security incident because I should not have the credit card numbers living right next to the funny internet web thing. That sort of flies in the face of the original guidance that AWS gave at launch. And right around 2008-era, best practices were one customer, one AWS account. And then by 2012, they had changed their perspective, but once you've made a decision to build multiple services in a single account, unwinding and unpacking that becomes an incredibly burdensome thing. It's about the equivalent of doing a cloud migration, in some ways.Allen: We went through that. We started off building one application with the intent that it was going to be a siloed application, a one-off, essentially. And about a year into it, it's one of those moments of, “Oh, no. What we're building is not actually a one-off. It's a piece to a much larger puzzle.”And we had a whole bunch of—unfortunately—tightly coupled things that were in there that we're assuming that resources were going to be in the same AWS account. So, we ended up—how long—I think we took probably two months, which in the grand scheme of things isn't that long, but two months, kind of unwinding the pieces and decoupling what was possible at the time into multiple AWS accounts, kind of, segmented by domain, essentially. But that's hard. AWS puts it, you know, it's those one-way door decisions. I think this one was a two-way door, but it locked and you could kind of jimmy the lock on the way back out.Corey: And you could buzz someone from the lobby to let you back in. Yeah, the biggest problem is not necessarily the one-way door decisions. It's the one-way door decisions that you don't realize you're passing through at the time that you do them. Which, of course, brings us to a topic near and dear to your heart—and I only recently started have opinions on this myself—and that is the proper design of APIs, which I'm sure will incense absolutely no one who's listening to this. Like, my opinions on APIs start with well, probably REST is the right answer in this day and age. I had people, like, “Well, I don't know, GraphQL is pretty awesome.” Like, “Oh, I'm thinking SOAP,” and people look at me like I'm a monster from the Black Lagoon of centuries past in XML-land. So, my particular brand of strangeness side, what do you see that people are doing in the world of API design that is the, I guess, most common or easy to make mistakes that you really wish they would stop doing?Allen: If I could boil it down to one word, fundamentalism. Let me unpack that for you.Corey: Oh, please, absolutely want to get a definition on that one.Allen: [laugh]. I approach API design from a developer experience point of view: how easy is it for both internal and external integrators to consume and satisfy the business processes that they want to accomplish? And a lot of times, REST guidelines, you know, it's all about entity basis, you know, drill into the appropriate entities and name your endpoints with nouns, not verbs. I'm actually very much onto that one.But something that you could easily do, let's say you have a business process that given a fundamentally correct RESTful API design takes ten API calls to satisfy. You could, in theory, boil that down to maybe three well-designed endpoints that aren't, quote-unquote, “RESTful,” that make that developer experience significantly easier. And if you were a fundamentalist, that option is not even on the table, but thinking about it pragmatically from a developer experience point of view, that might be the better call. So, that's one of the things that, I know feels like a hot take. Every time I say it, I get a little bit of flack for it, but don't be a fundamentalist when it comes to your API designs. Do something that makes it easier while staying in the guidelines to do what you want.Corey: For me the problem that I've kept smacking into with API design, and it honestly—let me be very clear on this—my first real exposure to API design rather than API consumer—which of course, I complain about constantly, especially in the context of the AWS inconsistent APIs between services—was when I'm building something out, and I'm reading the documentation for API Gateway, and oh, this is how you wind up having this stage linked to this thing, and here's the endpoint. And okay, great, so I would just populate—build out a structure or a schema that has the positional parameters I want to use as variables in my function. And that's awesome. And then I realized, “Oh, I might want to call this a different way. Aw, crap.” And sometimes it's easy; you just add a different endpoint. Other times, I have to significantly rethink things. And I can't shake the feeling that this is an entire discipline that exists that I just haven't had a whole lot of exposure to previously.Allen: Yeah, I believe that. One of the things that you could tie a metaphor to for what I'm saying and kind of what you're saying, is AWS SAM, the Serverless Application Model, all it does is basically macros CloudFormation resources. It's just a transform from a template into CloudFormation. CDK does same thing. But what the developers of SAM have done is they've recognized these business processes that people do regularly, and they've made these incredibly easy ways to satisfy those business processes and tie them all together, right?If I want to have a Lambda function that is backed behind a endpoint, an API endpoint, I just have to add four or five lines of YAML or JSON that says, “This is the event trigger, here's the route, here's the API.” And then it goes and does four, five, six different things. Now, there's some engineers that don't like that because sometimes that feels like magic. Sometimes a little bit magic is okay.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig secures your cloud from source to run. They believe, as do I, that DevOps and security are inextricably linked. If you wanna learn more about how they view this, check out their blog, it's definitely worth the read. To learn more about how they are absolutely getting it right from where I sit, visit Sysdig.com and tell them that I sent you. That's S Y S D I G.com. And my thanks to them for their continued support of this ridiculous nonsense.Corey: I feel like one of the benefits I've had with the vast majority of APIs that I've built is that because this is all relatively small-scale stuff for what amounts to basically shitposting for the sake of entertainment, I'm really the only consumer of an awful lot of these things. So, I get frustrated when I have to backtrack and make changes and teach other microservices to talk to this thing that has now changed. And it's frustrating, but I have the capacity to do that. It's just work for a period of time. I feel like that equation completely shifts when you have published this and it is now out in the world, and it's not just users, but in many cases paying customers where you can't really make those changes without significant notice, and every time you do you're creating work for those customers, so you have to be a lot more judicious about it.Allen: Oh, yeah. There is a whole lot of governance and practice that goes into production-level APIs that people integrate with. You know, they say once you push something out the door into production that you're going to support it forever. I don't disagree with that. That seems like something that a lot of people don't understand.And that's one of the reasons why I push API-first development so hard in all the content that I write is because you need to be intentional about what you're letting out the door. You need to go in and work, not just with the developers, but your product people and your analysts to say, what does this absolutely need to do, and what does it need to do in the future? And you take those things, and you work with analysts who want specifics, you work with the engineers to actually build it out. And you're very intentional about what goes out the door that first time because once it goes out with a mistake, you're either going to version it immediately or you're going to make some people very unhappy when you make a breaking change to something that they immediately started consuming.Corey: It absolutely feels like that's one of those things that AWS gets astonishingly right. I mean, I had the privilege of interviewing, at the time, Jeff Barr and then Ariel Kelman, who was their head of marketing, to basically debunk a bunch of old myths. And one thing that they started talking about extensively was the idea that an API is fundamentally a promise to your customers. And when you make a promise, you'd better damn well intend on keeping it. It's why API deprecations from AWS are effectively unique whenever something happens.It's the, this is a singular moment in time when they turn off a service or degrade old functionality in favor of new. They can add to it, they can launch a V2 of something and then start to wean people off by calling the old one classic or whatnot, but if I built something on AWS in 2008 and I wound up sleeping until today, and go and try and do the exact same thing and deploy it now, it will almost certainly work exactly as it did back then. Sure, reliability is going to be a lot better and there's a crap ton of features and whatnot that I'm not taking advantage of, but that fundamental ability to do that is awesome. Conversely, it feels like Google Cloud likes to change around a lot of their API stories almost constantly. And it's unplanned work that frustrates the heck out of me when I'm trying to build something stable and lasting on top of it.Allen: I think it goes to show the maturity of these companies as API companies versus just vendors. It's one of the things that I think AWS does [laugh]—Corey: You see the similar dichotomy with Microsoft and Apple. Microsoft's new versions of Windows generally still have functionalities in them to support stuff that was written in the '90s for a few use cases, whereas Apple's like, “Oh, your computer's more than 18-months old? Have you tried throwing it away and buying a new one? And oh, it's a new version of Mac OS, so yeah, maybe the last one would get security updates for a year and then get with the times.” And I can't shake the feeling that the correct answer is in some way, both of those, depending upon who your customer is and what it is you're trying to achieve.If Microsoft adopted the Apple approach, their customers would mutiny, and rightfully so; the expectation has been set for decades that isn't what happens. Conversely, if Apple decided now we're going to support this version of Mac OS in perpetuity, I don't think a lot of their application developers wouldn't quite know what to make of that.Allen: Yeah. I think it also comes from a standpoint of you better make it worth their while if you're going to move their cheese. I'm not a Mac user myself, but from what I hear for Mac users—and this could be rose-colored glasses—but is that their stuff works phenomenally well. You know, when a new thing comes out—Corey: Until it doesn't, absolutely. It's—whenever I say things like that on this show, I get letters. And it's, “Oh, yeah, really? They'll come up with something that is a colossal pain in the ass on Mac.” Like, yeah, “Try building a system-wide mute key.”It's yeah, that's just a hotkey away on windows and here in Mac land. It's, “But it makes such beautiful sounds. Why would you want them to be quiet?” And it's, yeah, it becomes this back-and-forth dichotomy there. And you can even explain it to iPhones as well and the Android ecosystem where it's, oh, you're going to support the last couple of versions of iOS.Well, as a developer, I don't want to do that. And Apple's position is, “Okay, great.” Almost half of the mobile users on the planet will be upgrading because they're in the ecosystem. Do you want us to be able to sell things those people are not? And they're at a point of scale where they get to dictate those terms.On some level, there are benefits to it and others, it is intensely frustrating. I don't know what the right answer is on the level of permanence on that level of platform. I only have slightly better ideas around the position of APIs. I will say that when AWS deprecates something, they reach out individually to affected customers, on some level, and invariably, when they say, “This is going to be deprecated as of August 31,” or whenever it is, yeah, it is going to slip at least twice in almost every case, just because they're not going to turn off a service that is revenue-bearing or critical-load-bearing for customers without massive amounts of notice and outreach, and in some cases according to rumor, having engineers reach out to help restructure things so it's not as big of a burden on customers. That's a level of customer focus that I don't think most other companies are capable of matching.Allen: I think that comes with the size and the history of Amazon. And one of the things that they're doing right now, we've used Amazon Cloud Cams for years, in my house. We use them as baby monitors. And they—Corey: Yea, I saw this I did something very similar with Nest. They didn't have the Cloud Cam at the right time that I was looking at it. And they just announced that they're going to be deprecating. They're withdrawing them for sale. They're not going to support them anymore. Which, oh at Amazon—we're not offering this anymore. But you tell the story; what are they offering existing customers?Allen: Yeah, so slightly upset about it because I like my Cloud Cams and I don't want to have to take them off the wall or wherever they are to replace them with something else. But what they're doing is, you know, they gave me—or they gave all the customers about eight months head start. I think they're going to be taking them offline around Thanksgiving this year, just mid-November. And what they said is as compensation for you, we're going to send you a Blink Cam—a Blink Mini—for every Cloud Cam that you have in use, and then we are going to gift you a year subscription to the Pro for Blink.Corey: That's very reasonable for things that were bought years ago. Meanwhile, I feel like not to be unkind or uncharitable here, but I use Nest Cams. And that's a Google product. I half expected if they ever get deprecated, I'll find out because Google just turns it off in the middle of the night—Allen: [laugh].Corey: —and I wake up and have to read a blog post somewhere that they put an update on Nest Cams, the same way they killed Google Reader once upon a time. That's slightly unfair, but the fact that joke even lands does say a lot about Google's reputation in this space.Allen: For sure.Corey: One last topic I want to talk with you about before we call it a show is that at the time of this recording, you recently had a blog post titled, “What does the Future Hold for Serverless?” Summarize that for me. Where do you see this serverless movement—if you'll forgive the term—going?Allen: So, I'm going to start at the end. I'm going to work back a little bit on what needs to happen for us to get there. I have a feeling that in the future—I'm going to be vague about how far in the future this is—that we'll finally have a satisfied promise of all you're going to write in the future is business logic. And what does that mean? I think what can end up happening, given the right focus, the right companies, the right feedback, at the right time, is we can write code as developers and have that get pushed up into the cloud.And a phrase that I know Jeremy Daly likes to say ‘infrastructure from code,' where it provisions resources in the cloud for you based on your use case. I've developed an application and it gets pushed up in the cloud at the time of deploying it, optimized resource allocation. Over time, what will happen—with my future vision—is when you get production traffic going through, maybe it's spiky, maybe it's consistently at a scale that outperforms the resources that it originally provisioned. We can have monitoring tools that analyze that and pick that out, find the anomalies, find the standard patterns, and adjust that infrastructure that it deployed for you automatically, where it's based on your production traffic for what it created, optimizes it for you. Which is something that you can't do on an initial deployment right now. You can put what looks best on paper, but once you actually get traffic through your application, you realize that, you know, what was on paper might not be correct.Corey: You ever noticed that whiteboard diagrams never show the reality, and they're always aspirational, and they miss certain parts? And I used to think that this was the symptom I had from working at small, scrappy companies because you know what, those big tech companies, everything they build is amazing and awesome. I know it because I've seen their conference talks. But I've been a consultant long enough now, and for a number of those companies, to realize that nope, everyone's infrastructure is basically a trash fire at any given point in time. And it works almost in spite of itself, rather than because of it.There is no golden path where everything is shiny, new and beautiful. And that, honestly, I got to say, it was really [laugh] depressing when I first discovered it. Like, oh, God, even these really smart people who are so intelligent they have to have extra brain packs bolted to their chests don't have the magic answer to all of this. The rest of us are just screwed, then. But we find ways to make it work.Allen: Yep. There's a quote, I wish I remembered who said it, but it was a military quote where, “No battle plan survives impact with the enemy—first contact with the enemy.” It's kind of that way with infrastructure diagrams. We can draw it out however we want and then you turn it on in production. It's like, “Oh, no. That's not right.”Corey: I want to mix the metaphors there and say, yeah, no architecture survives your first fight with a customer. Like, “Great, I don't think that's quite what they're trying to say.” It's like, “What, you don't attack your customers? Pfft, what's your customer service line look like?” Yeah, it's… I think you're onto something.I think that inherently everything beyond the V1 design of almost anything is an emergent property where this is what we learned about it by running it and putting traffic through it and finding these problems, and here's how it wound up evolving to account for that.Allen: I agree. I don't have anything to add on that.Corey: [laugh]. Fair enough. I really want to thank you for taking so much time out of your day to talk about how you view these things. If people want to learn more, where is the best place to find you?Allen: Twitter is probably the best place to find me: @AllenHeltonDev. I have that username on all the major social platforms, so if you want to find me on LinkedIn, same thing: AllenHeltonDev. My blog is always open as well, if you have any feedback you'd like to give there: readysetcloud.io.Corey: And we will, of course, put links to that in the show notes. Thanks again for spending so much time talking to me. I really appreciate it.Allen: Yeah, this was fun. This was a lot of fun. I love talking shop.Corey: It shows. And it's nice to talk about things I don't spend enough time thinking about. Allen Helton, cloud architect at Tyler Technologies. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment that I will reject because it was not written in valid XML.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 14, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 13, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 12, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

The Cloud Pod
180: Azure Data Explorer Says ‘All Your S3 Data are Belong to Us'

The Cloud Pod

Play Episode Listen Later Sep 9, 2022 46:00


On The Cloud Pod this week, Amazon adds the ability to embed fine-grained visualizations directly onto web pages, Google offers pay-as-you-go pricing for Apigee customers, and Microsoft launches Arm-based Azure VMs that are powered by ampere chips. Thank you to our sponsor, Foghorn Consulting, which provides top notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you're having trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week. Episode Highlights ⏰  Fine-grained visualizations can now be embedded directly into your webpages and applications ⏰  Google is now offering pay-as-you-go pricing for its Apigee API customers ⏰  Microsoft launches Arm-based Azure VMs powered by ampere chips Top Quote

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 9, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

Being CocoaB
Collaboration Over Competition - Jodi-Ann Campbell

Being CocoaB

Play Episode Listen Later Sep 8, 2022 35:55


Jodi-Ann Campbell is the CEO of Malcolm's Choice which is a digital directory for black businesses in Toronto, Canada. Jodi-Ann details the genesis of Malcolm's choice, the opportunities it has created for her, and its impact on black businesses. And shares her insight on how we can create a more environment as entrepreneurs. She also shares her insight on how entrepreneurs can change the business landscape in Canada, lessons she has learned about herself as well as the legacy she hopes to leave. CLI podcast is available on Apple Podcast, Spotify, iTunes, Youtube, Google Play, Anchor, and on your favourite podcast platforms. Click the link above to listen to the full episode! Listen, Subscribe, Review & Share

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 8, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 7, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

Screaming in the Cloud
Trivy and Open Source Communities with Anaïs Urlichs

Screaming in the Cloud

Play Episode Listen Later Sep 6, 2022 36:15


About AnaïsAnaïs is a Developer Advocate at Aqua Security, where she contributes to Aqua's cloud native open source projects. When she is not advocating DevOps best practices, she runs her own YouTube Channel centered around cloud native technologies. Before joining Aqua, Anais worked as SRE at Civo, a cloud native service provider, where she helped enhance the infrastructure for hundreds of tenant clusters. As CNCF ambassador of the year 2021, her passion lies in making tools and platforms more accessible to developers and community members.Links Referenced: Aqua Security: https://www.aquasec.com/ Aqua Open Source YouTube channel: https://www.youtube.com/c/AquaSecurityOpenSource Personal blog: https://anaisurl.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire-drill at 4 in the morning. Software problems should drive innovation and collaboration, NOT stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production, but hide that that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your Features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig That's snark.cloud/appconfig.Corey: This episode is sponsored in part by Honeycomb. When production is running slow, it's hard to know where problems originate. Is it your application code, users, or the underlying systems? I've got five bucks on DNS, personally. Why scroll through endless dashboards while dealing with alert floods, going from tool to tool to tool that you employ, guessing at which puzzle pieces matter? Context switching and tool sprawl are slowly killing both your team and your business. You should care more about one of those than the other; which one is up to you. Drop the separate pillars and enter a world of getting one unified understanding of the one thing driving your business: production. With Honeycomb, you guess less and know more. Try it for free at honeycomb.io/screaminginthecloud. Observability: it's more than just hipster monitoring.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while, when I start trying to find guests to chat with me and basically suffer my various slings and arrows on this show, I encounter something that I've never really had the opportunity to explore further. And today's guest leads me in just such a direction. Anaïs is an open-source developer advocate at Aqua Security, and when I was asking her whether or not she wanted to talk about various topics, one of the first thing she said was, “Don't ask me much about AWS because I've never used it,” which, oh my God. Anaïs, thank you for joining me. You must be so very happy never to have dealt with the morass of AWS.Anaïs: [laugh]. Yes, I'm trying my best to stay away from it. [laugh].Corey: Back when I got into the cloud space, for lack of a better term, AWS was sort of really the only game in town unless you wanted to start really squinting hard at what you define cloud as. I mean yes, I could have gone into Salesforce or something, but I was already sad and angry all the time. These days, you can very much go all in-on cloud. In fact, you were a CNCF ambassador, if I'm not mistaken. So, you absolutely are in the infrastructure cloud space, but you haven't dealt with AWS. That is just an interesting path. Have you found others who have gone down that same road, or are you sort of the first of a new breed?Anaïs: I think to find others who are in a similar position or have a similar experience, as you do, you first have to talk about your experience, and this is the first time, or maybe the second, that I'm openly [laugh] saying it on something that will be posted live, like, to the internet. Before I, like, I tried to stay away from mentioning it at all, do the best that I can because I'm at this point where I'm so far into my cloud-native Kubernetes journey that I feel like I should have had to deal with AWS by now, and I just didn't. And I'm doing my best and I'm very successful in avoiding it. [laugh]. So, that's where I am. Yeah.Corey: We're sort of on opposite sides of a particular fence because I spend entirely too much time being angry at AWS, but I've never really touched Kubernetes and anger. I mean, I see it in a lot of my customer accounts and I get annoyed at its data transfer bills and other things that it causes in an economic sense, but as far as the care and feeding of a production cluster, back in my SRE days, I had very old-school architectures. It's, “Oh, this is an ancient system, just like grandma used to make,” where we had the entire web tier, then a job applic—or application server tier, and then a database at the end, and everyone knew where everything was. And then containers came out of nowhere, and it seemed like okay, this solves a bunch of problems and introduces a whole bunch more. How do I orchestrate them? How do I ensure that they're healthy?And then ah, Kubernetes was the answer. And for a while, it seemed like no matter what the problem was, Kubernetes was going to be the answer because people were evangelizing it pretty hard. And now I see it almost everywhere that I turn. What's your journey been like? How did you get into the weeds of, “You know what I want to do when I grow up? That's right. I want to work on container orchestration systems.” I have a five-year-old. She has never once said that because I don't abuse my children by making them learn how clouds work. How did you wind up doing what you do?Anaïs: It's funny that you mention that. So, I'm actually of the generation of engineers who doesn't know anything else but Kubernetes. So, when you mentioned that you used to use something before, I don't really know what that looks like. I know that you can still deploy systems without Kubernetes, but I have no idea how. My journey into the cloud-native space started out of frustration from the previous industry that I was working at.So, I was working for several years as developer advocate in the open-source blockchain cryptocurrency space and it's highly similar to all of the cliches that you hear online and across the news. And out of this frustration, [laugh] I was looking at alternatives. One of them was either going into game development, into the gaming industry, or the cloud-native space and infrastructure development and deployment. And yeah, that's where I ended up. So, at the end of 2020, I joined a startup in the cloud-native space and started my social media journey.Corey: One of the things that I found that Kubernetes solved for—and to be clear, Kubernetes really came into its own after I was doing a lot more advisory work and a lot more consulting style activity rather than running my own environments, but there's an entire universe of problems that the modern day engineer never has to think about due to, partially cloud and also Kubernetes as well, which is the idea of hardware or node failure. I've had middle of the night driving across Los Angeles in a panic getting to the data center because the disk array on the primary database had degraded because the drive failed. That doesn't happen anymore. And clouds have mostly solved that. It's okay, drives fail, but yeah, that's the problem for some people who live in Virginia or Oregon. I don't have to think about it myself.But you do have to worry about instances failing; what if the primary database instance dies? Well, when everything lives in a container then that container gets moved around in the stateless way between things, well great, you really only have to care instead about okay, what if all of my instances die? Or, what if my code is really crappy? To which my question is generally, what do you mean, ‘if?' All of us write crappy code.That's the nature of the universe. We open-source only the small subset that we are not actively humiliated by, which is, in a lot of ways, what you're focusing on now, over at Aqua Sec, you are an advocate for open-source. One of the most notable projects that come out of that is Trivy, if I'm pronouncing that correctly.Anaïs: Yeah, that's correct. Yeah. So, Trivy is our main open-source project. It's an all-in-one cloud-native security scanner. And it's actually—it's focused on misconfiguration issues, so it can help you to build more robust infrastructure definitions and configurations.So ideally, a lot of the things that you just mentioned won't happen, but it obviously, highly depends on so many different factors in the cloud-native space. But definitely misconfigurations of one of those areas that can easily go wrong. And also, not just that you have data might cease to exist, but the worst thing or, like, as bad might be that it's completely exposed online. And they are databases of different exposures where you can see all the kinds of data of information from just health data to dating apps, just being online available because the IP address is not protected, right? Things like that. [laugh].Corey: We all get those emails that start with, “Your security is very important to us,” and I know just based on that opening to an email, that the rest of that email is going to explain how security was not very important to you folks. And it's the apology, “Oops, we have messed up,” email. Now, the whole world of automated security scanners is… well, it's crowded. There are a number of different services out there that the cloud providers themselves offer a bunch of these, a whole bunch of scareware vendors at the security conferences do as well. Taking a quick glance at Trivy, one of the problems I see with it, from a cloud provider perspective, is that I see nothing that it does that winds up costing extra money on your cloud bill that you then have to pay to the cloud provider, so maybe they'll put a pull request in for that one of these days. But my sarcasm aside, what is it that differentiates Trivy from a bunch of other offerings in various spaces?Anaïs: So, there are multiple factors. If we're looking from an enterprise perspective, you could be using one of the in-house scanners from any of the cloud providers available, depending which you're using. The thing is, they are not generally going to be the ones who have a dedicated research team that provides the updates based on the vulnerabilities they find across the space. So, with an open-source security scanner or from a dedicated company, you will likely have more up-to-date information in your scans. Also, lots of different companies, they're using Trivy under the hood ultimately, or for their own scans.I can link a few where you can also find them in a Trivy repository. But ultimately, a lot of companies rely on Trivy and other open-source security scanners under the hood because they are from dedicated companies. Now, the other part to Trivy and why you might want to consider using Trivy is that in larger teams, you will have different people dealing with different components of your infrastructure, of your deployments, and you could end up having to use multiple different security scanners for all your different components from your container images that you're using, whether or not they are secure, whether or not they're following best practices that you defined to your infrastructure-as-code configurations, to you're running deployments inside of your cluster, for instance. So, each of those different stages across your lifecycle, from development to runtime, will maybe either need different security scanners, or you could use one security scanner that does it all. So, you could have in a team more knowledge sharing, you could have dedicated people who know how to use the tool and who can help out across a team across the lifecycle, and similar. So, that's one of the components that you might want to consider.Another thing is how mature is a tool, right? A lot of cloud providers, what they end up doing is they provide you with a solution, but it's nice to decoupled from anything else that you're using. And especially in the cloud-native space, you're heavily reliant on open-source tools, such as for your observability stack, right? Coming from Site Reliability Engineering also myself, I love using metrics and Grafana. And for me, if anything open-source from Loki to accessing my logs, to Grafana to dashboards, and all their integrations.I love that and I want to use the same tools that I'm using for everything else, also for my security tools. I don't want to have the metrics for my security tools visualized in a different solution to my reliability metrics for my application, right? Because that ultimately makes it more difficult to correlate metrics. So, those are, like, some of the factors that you might want to consider when you're choosing a security scanner.Corey: When you talk about thinking about this, from the perspective of an SRE is—I mean, this is definitely an artifact of where you come from and how you approach this space. Because in my world, when you have ten web servers, five application servers, and two database servers and you wind up with a problem in production, how do you fix this? Oh, it's easy. You log into one of those nodes and poke around and start doing diagnostics in production. In a containerized world, you generally can't do that, or there's a problem on a container, and by the time you're aware of that, that container hasn't existed for 20 minutes.So, how do you wind up figuring out what happens? And instrumenting for telemetry and metrics and observability, particularly at scale becomes way more important than it ever was, for me. I mean, my version of monitoring was always Nagios, which was the original Call of Duty that wakes you up at two in the morning when the hard drive fails. The world has thankfully moved beyond that and a bunch of ways. But it's not first nature for me. It's always, “Oh, yeah, that's right. We have a whole telemetry solution where I can go digging into.” My first attempt is always, oh, how do I get into this thing and poke it with a stick? Sometimes that's helpful, but for modern applications, it really feels like it's not.Anaïs: Totally. When we're moving to an infrastructure to an environment where we can deploy multiple times a day, right, and update our application multiple times a day, multiple times a day, we can introduce new security issues or other things can go wrong, right? So, I want to see—as much as I want to see all of the other failures, I want to see any security-related issues that might be deployed alongside those updates at the same frequency, right?Corey: The problem that I see across all this stuff, though, is there are a bunch of tools out there that people install, but then don't configure because, “Oh, well, I bought the tool. The end.” I mean, I think it was reported almost ten years ago or so on the big Target breach that they wound up installing some tool—I want to say FireEye, but please don't quote me on that—and it wound up firing off a whole bunch of alerts, and they figured was just noise, so they turned it all off. And it turned out no, no, this was an actual breach in progress. But people are so used to all the alarms screaming at them, that they don't dig into this.I mean, one of the original security scanners was Nessus. And I seen a lot of Nessus reports because for a long time, what a lot of crappy consultancies would do is they would white-label the output of whatever it was that Nessus said and deliver that in as the report. So, you'd wind up with 700 pages of quote-unquote, “Security issues.” And you'd have to flip through to figure out that, ah, this supports a somewhat old SSL negotiation protocol, and you're focusing on that instead of the oh, and by the way, the primary database doesn't have a password set. Like, it winds up just obscuring it because there is so much. How does Trivy approach avoiding the information overload problem?Anaïs: That's a great question because everybody's complaining about vulnerability fatigue, of them, for the first time, scanning their container images and workloads and seeing maybe even hundreds of vulnerabilities. And one of the things that can be done to counteract that right from the beginning is investing your time into looking at the different flags and configurations that you can do before actually deploying Trivy to, for example, your cluster. That's one part of it. The other part is I mentioned earlier, you would use a security scan at different parts of your deployment. So, it's really about integrating scanning not just once you—like, in your production environment, once you've deployed everything, but using it already before and empowering engineers to actually use it on their machines.Now, they can either decide to do it or not; it's not part of most people's job to do security scanning, but as you move along, the more you do, the more you can reduce the noise and then ultimately, when you deploy Trivy, for example, inside of your cluster, you can do a lot of configuration such as scanning just for critical vulnerabilities, only scanning for vulnerabilities that already have a fix available, and everything else should be ignored. Those are all factors and flags that you can place into Trivy, for instance, and make it easier. Now, with Trivy, you won't have automated PRs and everything out of the box; you would have to set up the actions or, like, the ways to mitigate those vulnerabilities manually by yourself with tools, as well as integrating Trivy with your existing stack, and similar. But then obviously, if you want to have something more automated, if you want to have something that does more for you in the background, that's when you want to use to an enterprise solution and shift to something like Aqua Security Enterprise Platform that actually provides you with the automated way of mitigating vulnerabilities where you don't have to know much about it and it just gives you the solution and provides you with a PR with the updates that you need in your infrastructure-as-code configurations to mitigate the vulnerability [unintelligible 00:15:52]?Corey: I think that's probably a very fair answer because let's be serious when you're running a bank or someone for whom security matters—and yes, yes, I know, security should matter for everyone, but let's be serious, I care a little bit less about the security impact of, for example, I don't know, my Twitter for Pets nonsense, than I do a dating site where people are not out about their orientation or whatnot. Like, there is a world of difference between the security concerns there. “Oh, no, you might be able to shitpost as me if you compromise my lasttweetinaws.com Twitter client that I put out there for folks to use.” Okay, great. That is not the end of the world compared to other stuff.By the time you're talking about things that are critically important, yeah, you want to spend money on this, and you want to have an actual full-on security team. But open-source tools like this are terrific for folks who are just getting started or they're building something for fun themselves and as it turns out, don't have a full security budget for their weird late-night project. I think that there's a beautiful, I guess, spectrum, as far as what level of investment you can make into security. And it's nice to see the innovation continued happening in the space.Anaïs: And you just mentioned that dedicated security companies, they likely have a research team that's deploying honeypots and seeing what happens to them, right? Like, how are attackers using different vulnerabilities and misconfigurations and what can be done to mitigate them. And that ultimately translates into the configurations of the open-source tool as well. So, if you're using, for instance, a security scanner that doesn't have an enterprise company with a research team behind it, then you might have different input into the data of that security scanner than if you do, right? So, these are, like, additional considerations that you might want to take when choosing a scanner. And also that obviously depends on what scanning you want to do, on the size of your company, and similar, right?Corey: This episode is sponsored in part by our friend EnterpriseDB. EnterpriseDB has been powering enterprise applications with PostgreSQL for 15 years. And now EnterpriseDB has you covered wherever you deploy PostgreSQL on-premises, private cloud, and they just announced a fully-managed service on AWS and Azure called BigAnimal, all one word. Don't leave managing your database to your cloud vendor because they're too busy launching another half-dozen managed databases to focus on any one of them that they didn't build themselves. Instead, work with the experts over at EnterpriseDB. They can save you time and money, they can even help you migrate legacy applications—including Oracle—to the cloud. To learn more, try BigAnimal for free. Go to biganimal.com/snark, and tell them Corey sent you.Corey: Something that I do find fairly interesting is that you started off, as you say, doing DevRel in the open-source blockchain world, then you went to work as an SRE, and then went back to doing DevRel-style work. What got you into SRE and what got you out of SRE, other than the obvious having worked in SRE myself and being unhappy all the time? I kid, but what was it that got you into that space and then out of it?Anaïs: Yeah. Yeah, but no, it's a great question. And it's, I guess, also was shaped my perspective on different tools and, like, the user experience of different tools. But ultimately, I first worked in the cloud-native space for an enterprise tool as developer advocate. And I did not like the experience of working for a paid solution. Doing developer advocacy for it, it felt wrong in a lot of ways. A lot of times you were required to do marketing work in those situations.And that kind of got me out of developer advocacy into SRE work. And now I was working partially or mainly as SRE, and then on the side, I was doing some presentations in developer advocacy. However, that split didn't quite work, either. And I realized that the value that I add to a project is really the way I convey information, which I can't do if I'm busy fixing the infrastructure, right? I can't convey the information of as much of how the infrastructure has been fixed as I can if I'm working with an engineering team and then doing developer advocacy, solely developer advocacy within the engineering team.So, how I ultimately got back into developer advocacy was just simply by being reached out to by my manager at Aqua Security, and Itay telling me, him telling me that he has a role available and if I want to join his team. And it was open-source-focused. Given that I started my career for several years working in the open-source space and working with engineers, contributing to open-source tools, it was kind of what I wanted to go back to, what I really enjoy doing. And yeah, that's how that came about [laugh].Corey: For me, I found that I enjoy aspects of the technology part, but I find I enjoy talking to people way more. And for me, the gratifying moment that keeps me going, believe it or not, is not necessarily helping giant companies spend slightly less money on another giant company. It's watching people suddenly understand something they didn't before, it's watching the light go on in their eyes. And that's been addictive to me for a long time. I've also found that the best way for me to learn something is to teach someone else.I mean, the way I learned Git was that I foolishly wound up proposing a talk, “Terrible Ideas in Git”—we'll teach it by counterexample—four months before the talk. And they accepted it, and crap, I'd better learn enough get to give this talk effectively. I don't recommend this because if you miss the deadline, I checked, they will not move the conference for you. But there really is something to be said for watching someone learn something by way of teaching it to them.Anaïs: It's actually a common strategy for a lot of developer advocates of making up a talk and then waiting whether or not it will get accepted. [laugh] and once it gets accepted, that's when you start learning the tool and trying to figure it out. Now, it's not a good strategy, obviously, to do that because people can easily tell that you just did that for a conference. And—Corey: Sounds to me, like, you need to get better at bluffing. I kid.Anaïs: [laugh].Corey: I kid. Don't bluff your way through conference talks as a general rule. It tends not to go well. [laugh].Anaïs: No. It's a bad idea. It's a really bad idea. And so, I ultimately started learning the technologies or, like, the different tools and projects in the cloud-native space. And there are lots, if you look at the CNCF landscape, right? But just trying to talk myself through them on my YouTube channel. So, my early videos on my channel, it's just very much on the go of me looking for the first time at somebody's documentation and not making any sense out of them.Corey: It's surprising to me how far that gets you. I mean, I guess I'm always reminded of that Tom Hanks movie from my childhood Big where he wakes up—the kid wakes up as an adult one day, goes to work, and bluffs his way into working at a toy company. He's in a management meeting and just they're showing their new toy they're going to put out there and he's, “I don't get it.” Everyone looks at him like how dare you say it? And, “I don't get it. What's fun about this?” Because he's a kid.And he wants to getting promoted to vice president because wow, someone pointed out the obvious thing. And so often, it feels like using a tool or a product, be it open-source or enterprise, it is clearly something different in my experience of it when I try to use this thing than the person who developed it. And very often it's that I don't see the same things or think of the problem space the same way that the developers did, but also very often—and I don't mean to call anyone in particular out here—it's a symptom of a terrible user interface or user experience.Anaïs: What you've just said, a lot of times, it's just about saying the thing that nobody that dares to say or nobody has thought of before, and that gets you obviously, easier, further [laugh] then repeating what other people have already mentioned, right? And a lot of what you see a lot of times in these—also an open-source projects, but I think more even in closed-source enterprise organizations is that people just repeat whatever everybody else is saying in the room, right? You don't have that as much in the open-source world because you have more input or easier input in public than you do otherwise, but it still happens that I mean, people are highly similar to each other. If you're contributing to the same project, you probably have a similar background, similar expertise, similar interests, and that will get you to think in a similar way. So, if there's somebody like, like a high school student maybe, somebody just graduated, somebody from a completely different industry who's looking at those tools for the first time, it's like, “Okay, I know what I'm supposed to do, but I don't understand why I should use this tool for that.” And just pointing that out, gets you a response, most of the time. [laugh].Corey: I use Twitter and use YouTube. And obviously, I bias more for short, pithy comments that are dripping in sarcasm, whereas in a long-form video, you can talk a lot more about what you're seeing. But the problem I have with bad user experience, particularly bad developer experience, is that when it happens to me—and I know at a baseline level, that I am reasonably competent in technical spaces, but when I encounter a bad interface, my immediate instinctive reaction is, “Oh, I'm dumb. And this thing is for smart people.” And that is never, ever true, except maybe with quantum computing. Great, awesome. The Hello World tutorial for that stuff is a PhD from Berkeley. Good luck if you can get into that. But here in the real world where the rest of us play, it's just a bad developer experience, but my instinctive reaction is that there's stuff I don't know, and I'm not good enough to use this thing. And I get very upset about that.Anaïs: That's one of the things that you want to do with any technical documentation is that the first experience that anybody has, no matter the background, with your tool should be a success experience, right? Like people should look at it, use maybe one command, do one thing, one simple thing, and be like, “Yeah, this makes sense,” or, like, this was fun to do, right? Like, this first positive interaction. And it doesn't have to be complex. And that's what many people I think get wrong, that they try to show off how powerful a tool is, of like, oh, “My God, you can do all those things. It's so exciting, right?” But [laugh] ultimately, if nobody can use it or if most of the people, 99% of the people who try it for the first time have a bad experience, it makes them feel uncomfortable or any negative emotion, then it's really you're approaching it from the wrong perspective, right?Corey: That's very apt. I think it's so much of whether people stick with something long enough to learn it and find the sharp edges has to do with what their experience looks like. I mean, back when I was more or less useless when it comes to anything that looked like programming—because I was a sysadmin type—I started contributing to SaltStack. And what was amazing about that was Tom Hatch, the creator of the project had this pattern that he kept up for way too long, where whenever anyone submitted an issue, he said, “Great, well, how about you fix it?” And because we had a patch, like, “Well, I'm not good at programming.” He's like, “That's okay. No one is. Try it and we'll see.”And he accepted every patch and then immediately, you'd see another patch come in ten minutes later that fixed the problems in your patch. But it was the most welcoming and encouraging experience, and I'm not saying that's a good workflow for an open-source maintainer, but he still remains one of the best humans I know, just from that perspective alone.Anaïs: That's amazing. I think it's really about pointing out that there are different ways of doing open-source [laugh] and there is no one way to go about it. So, it's really about—I mean, it's about the community, ultimately. That's what it boils down to, of you are dependent, as an open-source project, on the community, so what is the best experience that you can give them? If that's something that you want to and can invest in, then yeah [laugh] that's probably the best outcome for everybody.Corey: I do have one more question, specifically around things that are more timely. Now, taking a quick look at Trivy and recent features, it seems like you've just now—now-ish—started supporting cloud scanning as well. Previously, it was effectively, “Oh, this scans configuration and containers. Okay, great.” Now, you're targeting actually scanning cloud providers themselves. What does this change and what brought you to this place, as someone who very happily does not deal with AWS?Anaïs: Yeah, totally. So, I just started using AWS, specifically to showcase this feature. So, if you look at the Aqua Open Source YouTube channel, you will find several tutorials that show you how to use that feature, among others.Now, what I mentioned earlier in the podcast already is that Trivy is really versatile, it allows you to scan different aspects of your stack at different stages of your development lifecycle. And that's made possible because Trivy is ultimately using different open-source projects under the hood. For example, if you want to scan your infrastructure-as-code misconfigurations, it's using a tool called tfsec, specifically for Terraform. And then other tools for other scanning, for other security scanning. Now, we have—or had; it's going to be probably deprecated—a tool called CloudSploit in the Aqua open-source project suite.Now, that's going to, kind of like, the functionality that CloudSploit was providing is going to get converted to become part of Trivy, so everything scanning-related is going to become part of Trivy that really, like, once you understand how Trivy works and all of the CLI commands in Trivy have exactly the same structure, it's really easy to scan from container images to infrastructure-as-code, to generating s-bombs to scanning also now, your cloud infrastructure and Trivy can scan any of your AWS services for misconfigurations, and it's using basically the AWS client under the hood to connect with the services of everything you have set up there, and then give you the list of misconfigurations. And once it has done the scan, you can then drill down further into the different aspects of your misconfigurations without performing the entire scan again, since you likely have lots and lots of resources, so you wouldn't want to scan them every time again, right, when you perform the scan. So, once something has been scanned, Trivy will know whether the resource changed or not, it won't scan it again. That's the same way that in-classes scanning works right now. Once a container image has been scanned for vulnerabilities, it won't scan the same container image again because that would just waste time. [laugh]. So yeah, do check it out. It's our most recent feature, and it's going to come out also to the other cloud providers out there. But we're starting with AWS and this kind of forced me to finally [laugh] look at it for the sake of it. But I'm not going to be happy. [laugh].Corey: No, I don't think anyone is. It's every time I see on a resume that someone says, “Oh, I'm an expert in AWS,” it's, “No you're not.” They have 400-some-odd services now. We have crossed the point long ago, where I can very convincingly talk about AWS services that do not exist to Amazonians and not get called out for it because who in the world knows what they run? And half of their services sound like something I made up to be funny, but they're very real. It's wild to me that it is a sprawling as it is and apparently continues to work as a viable business.But no one knows all of it and everyone feels confused, lost, and overwhelmed every time they look at the AWS console. This has been my entire career in life for the last six years, and I still feel that way. So, I'm sure everyone else does, too.Anaïs: And this is how misconfigurations happen, right? You're confused about what you're actually supposed to do and how you're supposed to do it. And that's, for example, with all the access rights in Google Cloud, something that I'm very familiar with, that completely overwhelms you and you get super frustrated by, and you don't even know what you give access to. It's like, if you've ever had to configure Discord user roles, it's a similar disaster. You will not know which user has access to which. They kind of changed it and try to improve it over the past year, but it's a similar issue that you face in cloud providers, just on a much larger-scale, not just on one chat channel. [laugh]. So.Corey: I think that is probably a fair place to leave it. I really want to thank you for spending as much time with me as you have talking about the trials and travails of, well, this industry, for lack of a better term. If people want to learn more, where's the best place to find you?Anaïs: So, I have a weekly DevOps newsletter on my blog, which is anaisurl—like, how you spell U-R-L—and then dot com. anaisurl.com. That's where I have all the links to my different channels, to all of the resources that are published where you can find out more as well. So, that's probably the best place. Yeah.Corey: And we will, of course, put a link to that in the show notes. I really want to thank you for being as generous with your time as you have been. Thank you.Anaïs: Thank you for having me. It was great.Corey: Anaïs, open-source developer advocate at Aqua Security. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment that I will never see because it's buried under a whole bunch of minor or false-positive vulnerability reports.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 6, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

Python Bytes
#300 A Jupyter merge driver for git

Python Bytes

Play Episode Listen Later Sep 6, 2022 55:21


Watch the live stream: Watch on YouTube About the show Sponsored by Microsoft for Startups Founders Hub. Special guest: Seth Larson Brian #1: Test your packages and wheels I've been building some wheels the last couple of weeks with various tools: flit, flit-core, and flit build hatch, hatchling, and hatch build setuptools, build_meta, and python -m build There are a few projects I've used to make sure my projects are in good shape wheel-inspect - you can inspect within Python code through inspect_wheel() function that converts to json. Or use on the command line with wheel2json check-wheel-contents - a linter for wheels tox - easily test the building, installation, and running of a package locally I actually start here, then utilize the other two tools Should have been obvious, but it wasn't to me Projects saved on git (such as gitHub) don't keep wheels in git. (this was obvious) When installing from git using pip install git+https://path/to/git/repo.git Your local pip will run the packaging backend to build the wheel before installing. Yet another way to test packaging. Michael #2: The Jupyter+git problem is now solved Jupyter notebooks don't work with git by default (they inherently have meaningless conflicts). With nbdev2, the Jupyter+git problem has been totally solved. Uses a set of hooks which provide clean git diffs, solve most git conflicts automatically, and ensure that any remaining conflicts can be resolved entirely within the standard Jupyter notebook environment. The techniques used to make the merge driver work are quite fascinating Seth #3: Help us test system trust stores in Python Package aiming to replace certifi called “truststore”, use system trust stores for HTTPS instead of a static list of certificates. Problem truststore is solving usually manifests in corporate networks: “unable to get local issuer certificate”. Experimental support added to pip to prove the implementation Users can try out the functionality and report issues. Brian #4: Making plots in your terminal with plotext Bob Belderbos Tutorial on using plotext - that's one t in the middle With the rise of CLI usage, plots are a nice addition. Bob's plot is great, but check out the options in the plotext docs lots-o-plots streaming data images subplots so fun Michael #5: jinja2-fragments Carson from HTMX (see podcast and course) wrote about template fragments. My jinja_partials project sorta fulfills this, but not really. I had a nice discussion with Sergi Pons Freixes who uses jinja_partials about this. He created Jinja2 fragments Seth #6: SLSA 3 Generic Builder for GitHub Actions GA Supply chain Levels for Software Artifacts, or SLSA (“salsa”) Tools to attest to and verify “provenance” of artifacts, ie “where it came from” Prove cryptographically that artifacts are built from a specific GitHub repository, commit, tag. Another future defense against stolen PyPI credentials/accounts. Generic builder means you can sign anything, like wheels/sdists Extras Brian: Bring your pytest books to PyBay, if you want them signed. I'm only bringing a small amount. I'll be presenting "Sharing is Caring - pytest fixture edition” at 3:05 “Experts Panel on Testing in Python” at 7:00 And be a zombie on my 8 am flight back unless I can change my reservation. That's this weekend, Sat Sept 10, in SF Michael: Heroku announces plans to eliminate free plans Banned paywalls PyPI phisher identified: Actor Phishing PyPI Users Identified and Actors behind PyPI supply chain attack have been active since late 2021 Major Python CVE: CVE-2020-10735: Prevent DoS by large int[HTML_REMOVED]str conversions Seth: Pyxel, retro game engine for Python, v1.8.0 added experimental web support with WASM Joke: Dev just after work

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 5, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 2, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Sep 1, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Aug 31, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Aug 30, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Aug 29, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Aug 26, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Aug 25, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

Being CocoaB
Trap Cardio Queen - Ashley Redwood

Being CocoaB

Play Episode Listen Later Aug 25, 2022 39:18


Ashley Redwood is the owner of Trap cardio and Elite Nutrition RVA wellness centre. In the full episode, Ashley shares how she became known as the queen of trap cardio, what led her to create her wellness space and how her business has impacted her community. She also addresses mindset and dieting as well as provides tips on how to make your business unique. CLI podcast is available on Apple Podcast, Spotify, iTunes, Youtube, Google Play, Anchor, and on your favourite podcast platforms. Click the link above to listen to the full episode! Listen, Subscribe, Review & Share

ClnicaAbierta
Cli­nica Abierta

ClnicaAbierta

Play Episode Listen Later Aug 24, 2022 60:00


Cli­nica Abierta con Dr. Elmo Rodriguez

Laravel News Podcast
Profiling your apps, scheduling email, and JSON API resources

Laravel News Podcast

Play Episode Listen Later Aug 23, 2022 45:54


Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community.This episode is sponsored by Honeybadger - combining error monitoring, uptime monitoring and check-in monitoring into a single, easy to use platform and making you a DevOps hero. Show links Laravel 9.24 released Nagios Grafana Nginx Amplify Laravel 9.25 released Profile your Laravel applicatio with Xhprof Email scheduler package for Laravel Zero hassle CLI application with Laravel Zero JSON API resources in Laravel How I develop applications with Laravel Event sourcing in Laravel Detect slow queries before they hit your production database

Screaming in the Cloud
How to Leverage AWS for Web Developers with Adam Elmore

Screaming in the Cloud

Play Episode Listen Later Aug 23, 2022 34:24


About AdamAdam is an independent cloud consultant that helps startups build products on AWS. He's also the host of AWS FM, a podcast with guests from around the AWS community, and an AWS DevTools Hero.Adam is passionate about open source and has made a handful of contributions to the AWS CDK over the years. In 2020 he created Ness, an open source CLI tool for deploying web sites and apps to AWS.Previously, Adam co-founded StatMuse—a Disney backed startup building technology that answers sports questions—and served as CTO for five years. He lives in Nixa, Missouri, with his wife and two children.Links Referenced: 17 Ways to Run Containers On AWS: https://www.lastweekinaws.com/blog/the-17-ways-to-run-containers-on-aws/ Twitter: https://twitter.com/aeduhm Twitch: https://www.twitch.tv/adamelmore TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while, I encounter someone in the wild that… well, I'll just be direct, makes me feel a little bit uneasy, almost like someone's walking over my grave. And I think I've finally figured out elements of what that is. It feels sometimes like I run into people—ideally not while driving—who are trying to occupy sort of the same space in the universe, and I never quite know how to react to that.Today's guest is just one such person. Adam Elmore is an independent AWS consultant, has been all over the Twitters for a while, recently started live streaming basically his every waking moment because he is just that interesting. Adam, thank you for suffering my slings and arrows—Adam: [laugh].Corey: —and agreeing to chat with me today.Adam: I would say first of all, you don't need to be worried about anyone walking over your grave. [laugh]. That was very flattering.Corey: No, honestly, I have big enterprise companies looking to put me in my grave, but that's a separate threat model. We're good on that, for now.Adam: [laugh]. I got to set myself up here to—I'm just going to laugh a lot, and your editor or somebody's going to have to deal with that. And maybe the audience will see—[laugh].Corey: Hey, I prefer that as opposed to talking to people who have absolutely no sense of humor of which they are aware. Awesome, I have a list of companies that they should apply for immediately. So, when I say that we're trying to occupy elements of the same space in the universe, let me talk a little bit about what I mean by that. You are independent as a consultant, which is how I started this whole nonsense, and then I started gathering a company around me almost accidentally. You are an AWS Dev Tools Hero, whereas I am an AWS community villain, which is kind of a polar opposite slash anti-hero approach, and it's self-granted in my case. How did you stumble into the universe of AWS? You just realized one day you were too happy and what can you do to make yourself miserable, and this was the answer, or what?Adam: Yeah, I guess. So. I mean, I've been a software developer for 15 years, like, my whole career, that's kind of what I've done. And at some point, I started a startup called StatMuse. And I was able, as sort of a co-founder there, with venture backing, like, I was able to just kind of play with the cloud.And we deployed everything on AWS, so that was—like, I was there five years; it was sort of five years of running this, I would call it like a Digital Media Studio. Like, we built technology, but we did lots of experiments, so it felt like playing on AWS. Because we built kind of weird one-offs, these digital experiences for various organizations. The Hall of Fame was one of them. We did, like, a, like, a 3-D Talking bust of John Madden, so it was like all kinds of weird technology involved.But that was sort of five years of, I guess, spending venture money [laugh] to play on AWS. And some of that was Google money; I guess I never thought about that, but Google was an investor in StatMuse. [laugh]. Yeah, so we sort of like—I ran that for five years and was able to learn just a lot of AWS stuff that really excited me. I guess, coming from normal web development stuff, it was exciting just how much leverage you have with AWS, so I sort of dove in pretty hard. And then yeah, when I left StatMuse in 2019 I've just been, I guess, going even harder into that direction. I just really enjoy it.Corey: My first real exposure to AWS was at a company where the CTO was a, I guess we'll call him an extraordinarily early cloud evangelist. I was there as a contractor, and he was super excited and would tweet nonsensical things like, “I'm never going to rack a server ever again.” And I was a grumpy sysadmin type; I came from the ops world where anything that is new shouldn't be treated with disdain and suspicion because once you've been a sysadmin for 20 minutes, you've been there long enough to see today's shiny new shit become tomorrow's legacy garbage that you're stuck supporting. So, “Oh, great. What now?”I was very down on Cloud in those days and I encountered it with increasing frequency as I stumbled my way through my career. And at the end of 2016, I wound up deciding to go out independent and fix… well, what problems am I good at fixing that I can articulate in a sentence, and well, I'd gotten surprised by AWS bills from time to time—fortunately with someone else's money; the best kind of mistake to make—and well I know a few things. Let's get really into it. In time, I came to learn that cost and architecture the same thing in cloud, and now I don't know how the hell to describe myself. Other people love to describe me, usually with varying forms of profanity, but here we are. It really turns into the idea of forging something of your own path. And you've absolutely been doing that for at least the last three years as you become someone who's increasingly well known and simultaneously harder to describe.Adam: Yeah, I would say if you figure it out, if you know how to describe me, I would love to know because just coming up with the title—for this episode you needed, like, my title, I don't know what my title is. I'm also—like, we talked about independent, so nobody sort of gives me a title. I would love to just receive one if you think of one, [laugh] if anyone listening thinks of one… it's increasingly hard to, sort of like, even decide what I care most about. I know I need to, like, probably niche down, I feel like you've kind of niched into the billing stuff. I can't just be like, “I'm an AWS guy,” because AWS is so big. But yeah, I have no idea.Corey: Anyone who claims, “Oh, I'm an expert in AWS,” is lying or trying to sell something.Adam: [laugh]. Exactly.Corey: I love that. It's, “Really? I have some questions to establish that for you.” As far as naming what it is, you do, first piece of advice, never ever, ever, ever listen to someone who works at AWS; those people are awful at naming things, as evidenced by basically every service they've ever launched. But you are actually fairly close to being an AWS expert. You did a six-week speed-run through every certification that they offer and that is nothing short of astonishing. How'd it come about?Adam: It's a unique intersection of skills that I think I have. And I'm not very self-aware, I don't know all my strengths and weaknesses and I struggle to sort of nail those down, but I think one of my strengths is just ability to, like, consume information, I guess at a high volume. So, I'm like an auditory learner; I can listen to content really fast and sort of retain enough. And then I think the other skill I have is just I'm good at tests. I've always said that, like, going back to school, like, high school, I always felt like I was really good at multiple-choice tests. I don't know if that's a skill or some kind of innate talent.But I think those two things combined, and then, like, eight years of building on AWS, and that sort of frames how I was able to take all that on. And I don't know that I really set out thinking I will do it in six weeks. I took the first few and then did them pretty fast and thought, “I wonder how quickly I could do all of them.” And I just kind of at that point, it became this sort of goal. I have to take on certain challenges occasionally that just sound fun for no reason other than they sound fun and that was kind of the thing for those six weeks. [laugh].Corey: I have two certifications: Cloud Practitioner and the SysOps Administrator Associate. Those were interesting.Adam: You took the new one, right? The new SysOps with the labs and stuff I'd love to hear about that.Corey: I did, back when it was in beta. That was a really interesting experience and I'll definitely get to that, but I wound up, for example, getting a question wrong in the Cloud Practitioner exam four years ago or so, when it was, “How long does it take to restore an RDS instance from backup?” And I gave the honest answer instead of the by-the-book, correct answer. That's part of the problem is that I've been doing this stuff too long and I know how these things break and what the real world looks like. Certifications are also very much a snapshot at a point in time.Because I write the Last Week in AWS newsletter, I'm generally up-to-the-minute on what has changed, and things that were not possible yesterday, suddenly are possible today, so I need to know when was this certification launched. Oh, it was in early 2021. Yeah, I needed to be a lot more specific; which week? And then people look at me very strangely and here we are.The Systems Administrator Certification was interesting because this is the first one, to my knowledge, where they started doing a live lab as a—Adam: Yeah.Corey: Component of this. And I don't think it's a breach of the NDA to point out that one of the exams was, “Great. Configure CloudWatch out of the box to do this thing that it's supposed to do out of the box.” And I've got to say that making the service do what it's supposed to do with no caveats is probably the sickest shade I've ever seen anyone throw at AWS, like, configuring the service is so bad that it is going to be our test to prove you know what you're doing. That is amazing.Adam: [laugh]. Yeah, I don't have any shade through I'm not as good with the, like, ability to come off, like, witty and kind while still criticizing things. So, I generally just try not to because I'm bad at it. [laugh].Corey: It's why I generally advise people don't try, in seriousness. It's not that people can't be clever; it's that the failure mode of clever is ‘asshole' and I'm not a big fan of making people feel worse based upon the things that I say and do. It's occasionally I wind up getting yelled at by Amazonians saying that the people who built a service didn't feel great about something I said, and my instinctive immediate reaction is, “Oh, shit, that wasn't my intention. How did I screw this up?” Given a bit of time, I realized that well hang on a minute because I'm not—they're not my target audience. I'm trying to explain this to other customers.And, on some level, if you're going to charge tens of millions of dollars a month for a service or more, maybe make a better one, not for nothing. So, I see both sides of it. I'm not intentionally trying to cause pain, but I'm also not out here insulting people individually. Like, sometimes people make bad decisions, sometimes individually, sometimes in a group. And then we have a service name we have to live with, and all right, I guess I'm going to make fun of that forever. It's fun that keeps it engaging for me because otherwise, it's boring.Adam: No, I hear you. No, and somebody's got to do it. I'm glad you do it and do it so well because, I mean, you got to keep them honest. Like, that's the thing. Keep AWS in check.Corey: Something that I went through somewhat recently was a bit of an awakening. I have no problem revisiting old opinions and discovering that huh, I no longer agree with it; it's time to evolve that opinion. The CDK specifically was one of those where I looked at it and thought this thing looks a little hokey. So, I started using it in Python and sure enough, the experience was garbage. So cool, the CDK is a piece of crap. There we go. My job is easy.I was convinced to take a second look at it via TypeScript, a language I do not know and did not have any previous real experience with. So, I spent a few days just powering through it, and now I'm a convert. I think it's amazing. It is my default go-to for building AWS infrastructure. And all it took was a little bit of poking and prodding to get me to change my mind on that. You've taken it to another level and you started actively contributing to the AWS CDK. What was your journey with that, honestly, remarkable piece of software?Adam: Yeah, so I started contributing to CDK when I was actually doing a lot of Python development. So, I worked with a company that was doing—there was a Python shop. So actually, the first thing I contributed was a Python function construct, which is sort of the equivalent of the Node.js function construct, which like, you can just basically point at a TypeScript file and it transpiles it, bundles it, and does all that, right? So, it makes it easy to deploy TypeScript as a Lambda function.Well, I mean, it ends up being a JavaScript Lambda function, but anyway, that was the Python function construct. And then I sort of got really into it. So, I got pretty hooked on using the CDK in every place that I could. I'm a huge fan, and I do primarily write in TypeScript these days. I love being able to write TypeScript front-end and back, so built a lot of, like, Next.JS front-ends, and then I'm building back-ends with CDK TypeScript.Yeah, I've had, like, a lot of conversations about CDK. I think there's definitely a group that's sort of, against the CDK, if you're thinking in terms of, like, beginners. And I do see where, for people who aren't as familiar with AWS, or maybe this is their entry point into cloud development, it does a lot of things that maybe you're not aware of that, you know, you're now kind of responsible for. So, it's deploying—like, it makes it really easy to write, like, three lines of TypeScript that stand up an entire VPC with all this configuration and Managed NAT Gateways and [laugh] everything else. And you may not be aware of all the things you just stood up.So, CloudFormation maybe is a little more—sort of gives you that better visibility into what you're creating. So, I've definitely seen that pushback. But I think for people who really, like, have built a lot of applications on AWS, I think the CDK is just such a time-saver. I mean, I spend so much less time building the same things in the CDK versus CloudFormation. I'm a big fan.Corey: For me, I've learned enough about JavaScript to be dangerous and it seems like TypeScript is more or less trying to automate a bunch of people's jobs away, which is basically, from I can tell, their job is to go on the internet and complain about someone's JavaScript. So great, that that's really all it does is it complains, “Oh, this ambiguous. You should be more specific about it.” And great. Awesome. I still haven't gotten into scenarios where I've been caught out by typing issues, and very often I find that it just feels like sheer bloodymindedness, but I smile, nod, bend the knee and life goes on.Adam: [laugh]. When you've got a project that's, like, I don't know, a few months old—or better, a few years old—and you need to do, like, major refactoring, that's when TypeScript really saves you just a ton of time. Like, when you can make a change in a type or in actual implementation stuff and then see the ripple effects and then sort of go around the codebase and fix those things, it's just a lot easier than doing it in JavaScript and discovering stuff at runtime. So, I'm a big TypeScript fan. I don't know where it's all headed. I know there's people that are not fans of, like, transpiling your Lambda functions, for instance. Like, why not just ship good JavaScript? And I get that case, too. Yeah, but I've definitely—I felt the productivity boost, I guess—if that's the thing—from TypeScript.Corey: For me, I'm still at a point where I'm learning the edges of where things start and where they stop. But one of the big changes I made was that I finally, after 15 years, gave up my beloved Vim as my editor for this and started using VS Code. Because the reasons that I originally went with Vi were understandable when you realize what I was. I'm always going to be remoting into network gear or random—on maintained Unix boxes. Vi is going to be everywhere on everything and that's fine.Yeah, I don't do that anymore, and increasingly, I find that everything I'm writing is local. It is not something that is tied to a remote thing that I need to login and edit by hand. At that point, we are in disaster area. And suddenly it's nice. I mean things like tab completion, where it just winds up completing the rest of the variable name or, once you enable Copilot and absolutely not CodeWhisperer yet, it winds up you tab complete your entire application. Why not? It's just outsourcing it to Stack Overflow without that pesky copy and paste step.Adam: Yeah, I don't know how in the weeds you want to get on your p—I don't know, in terms of technical stuff, but Copilot both blows me away—there are days where it autocompletes something that I just, I can't fathom how—it pulled in not just, like, the patterns that it found, obviously, in training, but, like, the context in the file I'm working and sort of figured out what I was trying to do. Sometimes it blows me away. A lot of times, though, it frustrates me because of TypeScript. Like, I'm used to Typescript and types saving me from typing a lot. Like, I can tab-complete stuff because I have good types defined, right, or it's just inferred from the libraries I'm using.It's tough though when GitHub is fighting with TypeScript and VS Code. But it's funny that you came from Vim and you now live in VS Code. I really am trying to move from VS Code to, like, the Vim world, mostly because of Twitch streamers that blow my mind with what they can do in Vim [laugh] and how fast they can move. I do—every time I move my hand, like, over to the arrow keys, I feel a little sad and I wish I just did Vim.Corey: This episode is sponsored in part by our friends at Lambda Cloud. They offer GPU instances with pricing that's not only scads better than other cloud providers, but is also accessible and transparent. Also, check this out, they get a lot more granular in terms of what's available. AWS offers NVIDIA A100 GPUs on instances that only come in one size and cost $32/hour. Lambda offers instances that offer those GPUs as single card instances for $1.10/hour. That's 73% less per GPU. That doesn't require any long term commitments or predicting what your usage is gonna look like years down the road. So if you need GPUs, check out Lambda. In beta, they're offering 10TB of free storage and, this is key, data ingress and egress are both free. Check them out at lambdalabs.com/cloud. That's l-a-m-b-d-a-l-a-b-s.com/cloud.Corey: There are people who have just made it into an entire lifestyle, on some level. And I'm fair to middling; I've known people who are dark wizards at it. In practice, I found that my productivity was never constrained by how quickly I can type. It's one of those things where it's, I actually want to stop and have my brain catch up sometimes, believe it or not, for those who follow me on Twitter. It's the idea of wanting to make sure that I am able to intelligently and rationally wrap my head around what it is I'm doing.And okay, just type out a whole bunch of boilerplate is, like, the least valuable use of anything and that is where I find things like Copilot working super well, where I, if I'm doing CloudFormation, for example, the fact that it tab-completes all the necessary attributes and can go back and change them or whatnot, that's an enormous time saver. Same story with the CDK, although with some constructs, it doesn't quite understand which ones get certain values to it. And I really liked the idea behind it. I think this is in some ways, the future of IDEs, to a point.Adam: Oh, for sure. I think, like, the case, you call that with CloudFormation, you don't have really typeahead in VS Code, at least I'm not using anything. Maybe there are extensions that give you that in VS Code. But to have Copilot fill in required prompts on a CloudFormation template, that's a lifesaver. Because I just, every time I write CloudFormation, I've just got the docs up and I'm copying stuff I've done before or whatever; like, to save that time it's huge. But CodeWhisperer, not so much? Is it not, I guess, up to snuff? I haven't seen it or played with it at all.Corey: It's still very early days and it hasn't had exposure outside of Amazonian codebases to my understanding, so it's, like, “Learn to code like an Amazonian.” And you can fill in your own joke here on that one. I imagine it's like—isn't that—aren't they primarily a Java shop, for one? And all right. It turns out most of my code doesn't need to operate the way that there's does.Adam: I didn't know that they were training it just internally. Like, I'm assuming Copilot is trained on, like, Stack Overflow or something, right? Or just all of GitHub, I guess.Corey: And GitHub and a bunch of other things, and people are yelling at them for it, and I haven't been tracking that. But honestly, the CodeWhisperer announcement taught me things about Copilot, which is weird, which tells me that none of these companies are great at explaining this. Like I can just write a comment in this of, “Add an S3 bucket,” and then Copilot will tab-complete the entirety of adding an S3 bucket, usually even secure, which is awesome. They also fix the early Copilot teething problems of tab-completing people's AWS API credentials. You know, the—yeah, they've fixed a lot of that, thankfully.Adam: Yeah.Corey: But it's still one of those neat things that you can just basically start—it gets a little bit closer to describe what you want the application to do and then it'll automatically write it for you on the back-end. Sure, sometimes it makes naive decisions that do not bear out, but again, it's still early days. I'm optimistic.Adam: Yeah, that reminds me of, like, the, I mean, the serverless cloud, so serverless framework folks, like, what they're doing where they're sort of inferring your infrastructure based on you just write an app and it sort of creates the infrastructure as code for you, or just sort of infers it all from your code. So, if you start using a bucket, it'll create a bucket for that. That definitely seems to be a movement as well, where just do less as a developer [laugh] seems to be the theme.Corey: Yeah, just move up the stack. We see this time and time again. I mean, look at the—I use this analogy from time to time from the sysadmin world, but in the late-90s, if you wanted to build a web server, you needed a spare week and an intimate knowledge of GCC compiler flags. In time, it became oh, great, now it's rpm install, then yum install, then ensure present with something like Puppet, and then Docker has it, and now it's just a checkbox on the S3 page, and you're running a static site. Things don't get harder with time, and I don't think that as a developer, your time is best spent writing by hand the proper syntax for a for loop or whatnot.It's not the differentiated value. Talk to me instead about what you want that thing to do. That was my big problem with Lambda when it first came out and I spent two weeks writing my first Lambda function—because I'm bad at programming—where I had to learn the exact format of expected for input and output, and now any Lambda function I write takes me a couple of minutes to write because I'm also bad at programming and don't know what tests are.Adam: [laugh]. Tests are overrated, I don't spend a lot of time writing t—I mean, I do a lot of stuff alone and I do a lot of stuff for myself, so in those contexts, I'm not writing tests if I'm being honest. I stream now and everyone on the stream is constantly asking, “Where are the tests?” Like, there are no tests. I'm sorry. [laugh]. Was someone else's stream.Corey: Oh yeah, it used to be though, that you had to be a little sneakier to have other people do work for you. Copilot makes it easier and presumably CodeWhisperer will, too. Used to be that if AWS launched new service and I didn't know how to configure it, all I would do is restrict a role down to only being able to work with that service, attach that to a user and then just drop the credentials on Twitter or GitHub. And I waited 20 minutes and I came back and sure enough, someone configured it and was already up and mining Bitcoin. So, turn that off, take what they built, and off the production with it. Problem solved. Oh, and rotate those credentials, unless you enjoy pain. Problem solved. The end. And I don't know if it's a best practice, but it sure was effective.Adam: Yeah, that would do it. Well, they're just like scanners now, right, like they're just scanning GitHub public repos for any credentials that are leaked like that, and they're available within seconds. You can literally, like, push a public repo with credentials and it is being [laugh] used within minutes. It's nuts.Corey: GitHub has some automatic back channel thing—I believe; I haven't done an experiment lately, but I believe that AWS will intentionally shoot down the credential as soon as it gets reported, which is kind of amazing. I really should do some more experiments with it just to see how disastrous this can get.Adam: Yeah. No, I'd be curious. Please let me know. I guess you'll tweet about it so I'll see it.Corey: Can I borrow your account for a few minutes?Adam: Yeah. [laugh].Corey: Yeah, it's fun. Now, the secret to my 17 Ways to Run Containers On AWS is in almost every case, those containers can be crypto miners, so it's not just about having too many services do the same thing; it's the attack surface continues to grow and expand in the fullness of time. I'm not saying this is right or wrong; it is what it is, but it's also something that I think people have an understated appreciation for.Let's change topic a little bit. Something you've been doing lately and talking about is the idea of building a course on AWS. You're clearly capable of doing the engineering work. That's not in question. You've been a successful consultant for years, which tells me you also know how to deliver software that meets customer requirements, as opposed to, “Well, the spec was shitty, but I wrote it anyway,” because you don't last long as a consultant if you enjoy being able to afford to eat if that's the direction you go in. Now, you're drifting toward becoming a teacher. Tell me about that. First, what makes you think that's something you're good at?Adam: So, I don't know. I don't know that I'm good at it and I guess I'll find out. I've been streaming, like, on Twitch just my work days, and that's been early signs that I think I'm okay at it, at least. I think it's very different, obviously, like, a self-paced course are going to be very different from streaming for hours, so there's a lot more editing and thoughtfulness involved, but I do think, like, I've always wanted to teach. So, even before I got into technology—I was pretty late into technology; it was after high school. Like back in high school, I always thought I wanted to be a professor.I just enjoyed, I guess the idea of presenting ideas in ways that people understood. And I live in an area—so I live in the Ozarks, it's not a very tech literate area. It became, like, this thing where I felt like I could really explain technology to people who are non-technical. And that's not necessarily what my course—what I'm aiming to do. I'm trying to teach web developers how to leverage AWS, and then sort of get out of the maybe front-end only or maybe traditional web frameworks—like, they've only worked with stuff that they deploy to Heroku or whatever—trying to teach that crowd, how to leverage AWS and all these wonderful primitives that we have.So, that's not exactly the same thing, but that's sort of like, I feel like I do have the ability to translate technology to non-technical folks. And then I guess, like, for me, at this stage of my career, you know, I've done a lot of work for a company, for startups, for individual clients, and it feels very, like—I just always feel like I'm going in a hole. Like, I feel like, I'm doing this little thing and I'm serving this one customer, but the idea of being able to, I guess, serve more people and sort of spread my reach, the idea of creating something that I can share with a lot of developers who would maybe benefit from it, it just feels better, I guess. [laugh]. I don't know exactly all the reasons why that feels better, but like, at the end of the day, my consulting kind of feels like this thing I do because I just need money.And now that I need money less and less, I just feel like I'd rather do stuff that I actually am excited about. I'm actually really excited about the outcomes for creating a course where, you know, I think I can maybe—my style of teaching or something could resonate with some group of people. Yeah, so that's it. It's AWS for web devs. The thought is that I'm going to create courses after this. Like, I hope to move into more education, less consulting. That's where I'm at.Corey: I would say you're probably selling yourself fairly short. I've seen a lot of the content you've put out over the years and I learned a lot from it every time. I think that there are some folks who put courses out where, one, they don't have the baseline knowledge around what it is that they're teaching, it just feels like a grift, and another failure mode is that people know how to do the thing, but they have no idea how to teach it to someone who isn't them. And there's nothing inherently wrong with not knowing how to teach; it is its own distinct skill. The problem is when you don't recognize that about yourself and in turn, wind up having some somewhat significant challenges.Adam: Yeah. No, I know that one of the struggles is, I work with pretty obscure technologies on AWS. Not obscure, but like, I have a very specific way I build APIs on AWS and I don't know that's generally, if you're taking a bunch of web developers and trying to move them into AWS is probably not the stack that I use. So, that is part of it, but that's also kind of to my benefit, I guess. It works for me a little bit in that I'm less familiar with maybe the more beginner-friendly way to enter into AWS.It's been years, so I think I can kind of come at it a little fresh and that'll help me produce a course that maybe meets them where they're at better. Yeah, the grifting thing, I'm definitely sensitive to just this idea of putting out a course. It was hard for me to really go out there and say I was making a course, even on Twitter, because I just feel like there's, like, some stereotype—I don't know, there's an association with that, for me at least, for my perception of course creation. But I know that there are people who've done it right and do it for the right reasons. And I think to the extent that I could hit that, you know, both those things, do it right and do it for the right reasons, then it's exciting to me. And if I can't, and it turned out not good at teaching, then I'll move on and do more consulting, I guess, [laugh] or streaming on Twitch.Corey: You are very clearly self-aware enough that if you put something out and it isn't effective, I have zero doubt that you won't just stop selling it, you'll take it down and reach out to people. Because you, more so than most, seem very cognizant of the fact that a poor experience learning something does not in most people's cases, translate to, “Oh, my teacher is shitty.” Instead, it's, “Oh, I'm bad at this and I'm not smart enough to figure it out.” That's still the problem I run into with bad developer experience on a bunch of things that get launched. If I have a bad time, I assume it's, “Oh, I'm stupid. I wish someone had told me.”And first, they did, secondly, it's the sense that no, it's just not being very clearly explained and the folks who wrote the documentation or talking about it are too close to what they've built to understand what it's like to look at this thing from fresh eyes. They're doing a poor job of setting the stage to explain the value it brings and in what scenario, you should be using this.Adam: It's a long process. I want to launch the course in the fall, but in the process of building out the course, I'm really going to be doing workshops and individual—like, I just have a lot of friends that are web developers and I'm going to be kind of getting on with them and teaching them this material and just trying to see what resonates. I'm going to a lot of trouble, I guess, to make sure I'm not just putting out a thing just to say I made a course. Like, I don't actually want to say I made a course, so if I'm going to do it, it's like most things I do I really kind of throw myself into. And I know if I spend enough energy and effort, I think I can make something that at least helps some people. I guess we'll see.Corey: I look forward to it. Any idea as far as rough timeline goes?Adam: Yeah, I hope to launch in the fall. But if it takes longer, I don't know. I've heard people say, to do a course right, you should spend a year on it. And maybe that's what I do.Corey: No, I love that answer. It's great. You're just saying I want to launch in the fall, which is sufficiently vague, and if that winds up not being vague enough, you could always qualify with, “Well, I didn't say what year.”Adam: [laugh].Corey: So, great you know, it's always going to be the fall somewhere.Adam: [laugh]. I just know, like, when someone says you should spend a year I just do things very hard. Like I really, like, throw a lot of time and obsess, like, I'm very obsessive. And when I do something, it's hard for me imagine doing any one thing for a year because I burn myself out. Like, I obsess very hard for usually, like, three months, it's usually, like, a quarter, and then I fall off the face of the earth for three months and I basically mope around the house and I'm just too tired to do anything else. So, I think right now I'm streaming and that's kind of been my obsession. I'm three weeks in so we got a few more months and then we'll see, [laugh] we'll see how I maintain it.Corey: Well, I look forward to seeing how it comes out. You'll have to come back and let us know when it's ready for launch.Adam: Yeah, that sounds great.Corey: I really want to thank you for being so generous with your time and taking me through what you're up to. If people want to learn more, what's the best place for them to find you?Adam: Yeah, I think Twitter. I mean, I mostly hang out on Twitter, and these days Twitch. So, Twitter my handle—I guess you'll put it, like, in the thing description or something. It's like the phonetic—Corey: Oh, we will absolutely toss it into the show notes, where useful content goes to linger.Adam: [laugh]. It's like A-E-D-U-H-M. It's like a—it's the phonetic way of saying Adam, I guess. And then on Twitch, I'm adamelmore. So, those are the two places I spend most my time.Corey: And off to the show notes it goes. Thank you so much for being so generous with your time. I really appreciate it, Adam.Adam: Thank you so much for having me, Corey. I really appreciate it.Corey: Adam Elmore, independent AWS consultant. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an insulting comment that attempts to teach us exactly what we got wrong, but fails utterly because you're terrible at teaching things.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

Clicks and Bricks Podcast
The Revolutionary Way to Reduce Your Companies Environmental Impact -- EP #255

Clicks and Bricks Podcast

Play Episode Listen Later Aug 22, 2022 41:00


Episode #255 In today's episode of Clicks and Bricks Podcast, Ken interviews Jasveen Kaur, the founder of Clime DAO. Ken and Jasveen discuss web3. And how Clime DAO is providing a revolutionary way for Businesses to reduce their carbon footprint. About Jasveen: Currently, building Clime DAO, which is a Web3 protocol. Clime DAO allows web3 developers, Individual or consumer or household (ICH) users, Environmentalists/Climate Advisors, and Green Champions to participate in Carbon footprint reduction through CLIM token (NFT- ERC 721) and CLI token (ERC20). Her Recent stint with Robert Bosch includes building a Web3 SaaS-based Supply chain Platform(built on Quorum) and Product Autotrace. Built platform and product from scratch and experienced complete product lifecycle from creating product strategy to feature identification to product development. Prior experience with Fidelity includes supporting on-prem proprietary financial services applications. Experience includes working directly with multiple North American clients for Product implementation. Participated in multiple internal hackathons and short internal rotation, including using AI/ML to solve business problems. Overall, with 15+ years of experience creating B2B SaaS-based product and platform strategies. Successful in creating product vision and converting it into a prioritized quarterly roadmap. Led two products from Discovery, Product Market Fit to early growth stages. Experienced with using data to make data-driven decisions and emerging technologies such as Blockchain, AI, and IOT to create unique solutions. Well-versed in implementing Agile product development through cross-functional geographically dispersed teams. Exposure to many verticals/domains such as Supply Chain, Automotive, Manufacturing, Mobility, Health Care, and Retirement Wealth. Contact Clime DAO: https://www.climedao.com Contact Ken: inlink.com/ken hello@kencox.com Text: 314-370-2871 #GetToWork Connect with Us: Instagram: https://www.instagram.com/clicksandbrickspodcast/ Facebook: https://www.facebook.com/clicksandbrickspodcast/ YouTube: https://www.youtube.com/c/ClicksBricksPodcast Website: https://clickandbrickspodcast.com #businesspodcast #founderstories #entrepreneurship #entrepreneur Learn more about your ad choices. Visit megaphone.fm/adchoices