In this podcast, the seventh session in CLI's Legaltech Around the World Series, David Bushby, Managing Director, InCounsel, facilitated a discussion about the local legaltech market in Latin America. David was joined by amazing guest panellists:
- Andrés Jara, Co-Founder & CEO, Kea Technology Inc (Chile)
- Bibiana Martinez Camelo, Head of Legal Operations, Bancolombia (Colombia)
- Silvana Stochetti, Founder & CEO, Legalify Latam (Argentina)
- Maxime Troubat, CEO, Juridoc (Brazil)
- Agustin Velazquez G.L, Managing Partner, AVA Firm (Mexico)
Topics covered in this session included:
- An overview of the legaltech market in Latin America
- The drivers/agents of change and the impact of legaltech in the B2B and B2C markets (and how they are connected)
- The challenges and opportunities for legaltech
- The impact of COVID on the legaltech market
- The importance of legaltech communities
- Who is funding legaltech development in Latin America, and whether tech developers are staying, going, or returning
- What legaltech adoption REALLY looks like (separating the hype and hope from reality)
- The growing importance of digital literacy and the role of law schools in that education
You'll find information about the other episodes in this series here. The series is presented in association with InCounsel. If you would prefer to watch rather than listen to this episode, you'll find the video in our CLI-Collaborate (CLIC) free Resource Hub here.
Additional resources referred to in this session:
- You'll find the Legal Hackers website here
Don't forget to subscribe to:
- InCounsel's Weekly Newsletter
- CLI's Newsletter
This is a special episode recorded live during a live coding session on YouTube (2022-09-21). The audio-only experience might not be the best one, so if you are curious to see the video and enjoy our diagrams and screen sharing, please check this episode on YouTube: https://www.youtube.com/watch?v=0TzfkbisMEA

How can you build a WeTransfer or a Dropbox Transfer clone on AWS? This is our fifth live coding stream. In this episode, we continued adding some security to our application. Specifically, we implemented 75% of the OAuth 2 device flow on top of AWS Cognito to allow our file upload CLI application to get some credentials. In order to implement this flow, we need to store some secrets. We decided to use DynamoDB and spent a lot of time discussing our data design and how and why we used the famous and controversial DynamoDB single-table design principle. All our code is available in this repository: https://github.com/awsbites/weshare.click

In this episode we mentioned the following resources:
- OAuth 2 Device Authorization Grant, RFC 8628: https://www.rfc-editor.org/rfc/rfc8628
- The DynamoDB Book by Alex DeBrie: https://www.dynamodbbook.com/
- LevelDB: https://github.com/google/leveldb
- OAuth 2 Authorization Framework, RFC 6749: https://www.rfc-editor.org/rfc/rfc6749

You can listen to AWS Bites wherever you get your podcasts:
- Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-bites/id1585489017
- Spotify: https://open.spotify.com/show/3Lh7PzqBFV6yt5WsTAmO5q
- Google: https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy82YTMzMTJhMC9wb2RjYXN0L3Jzcw==
- Breaker: https://www.breaker.audio/aws-bites
- RSS: https://anchor.fm/s/6a3312a0/podcast/rss

Do you have any AWS questions you would like us to address? Leave a comment here or connect with us on Twitter:
- https://twitter.com/eoins
- https://twitter.com/loige
#AWS #livecoding #transfer
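The single-table design discussed in the episode can be sketched with a hypothetical key scheme. The entity names, key formats, and attributes below are illustrative assumptions, not taken from the weshare.click repository:

```python
# Illustrative DynamoDB single-table key design for storing OAuth 2
# device-flow state. All items share one table; related items share a
# partition key (PK) and are distinguished by the sort key (SK), so a
# single Query by PK fetches the whole device authorization.

def device_code_item(user_code: str, device_code: str, status: str) -> dict:
    """One item per pending device authorization, keyed by user code."""
    return {
        "PK": f"DEVICEAUTH#{user_code}",  # partition key: entity type + id
        "SK": "METADATA",                 # sort key: the metadata record
        "deviceCode": device_code,
        "status": status,                 # e.g. "pending" or "authorized"
    }

def token_item(user_code: str, access_token: str) -> dict:
    """Tokens live under the same partition as their device authorization."""
    return {
        "PK": f"DEVICEAUTH#{user_code}",
        "SK": "TOKEN",
        "accessToken": access_token,
    }
```

The point of the single-table approach is visible in the shared `PK`: the pending request and its eventual token can be read together with one query, at the cost of more opaque key naming.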
Neural nets…sounds exciting, perhaps a little scary, and a long way from anything to do with legal practice, right? Wrong! It has everything to do with how the legal ecosystem is being challenged to reconceive and deliver legal work differently from start to finish and…beyond! It's easy to get caught up in the jargon in all of this – AI, machine learning, natural language processing, and so the list goes on – that's why we spoke with Pablo Arredondo, Co-Founder & Chief Innovation Officer at Casetext. Pablo works in this world every day; he really knows this stuff, understands legal practice, and seamlessly translates what he knows into the work he does with legal practitioners. That's no doubt a big part of why Casetext is leading the way in this area with its outstanding, user-friendly products. Our conversation in this session was a journey. We discussed what neural nets are and are not; how they apply to the legal ecosystem, especially when it comes to legal search; the difference between keyword searches and neural net searches, and how you choose between them; some great practical applications of these tools in legal practice, such as Casetext's product AllSearch; how neural nets promise to change everything in the legal world (and other worlds too); and the benefits that have already emerged from implementing this AI for legal practitioners and their clients too! If you would prefer to watch rather than listen to this podcast, you'll find the video here. About the Future 50 Series: In the Future 50 Series we're chatting with legalpreneurs who, through their ideas and actions, are challenging and transforming legal BAU all around the world. If you would like to recommend people for this Series, please contact us at: CLI@collaw.edu.au.
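To make the keyword-versus-neural-search distinction concrete, here is a deliberately tiny Python sketch. The two-dimensional "embeddings" are hand-made for illustration and have nothing to do with Casetext's actual models; real systems use vectors learned by a trained neural network:

```python
import math

# Keyword search hits only on the literal term; vector ("neural") search
# scores by closeness in an embedding space, so synonyms still match.

EMBEDDINGS = {
    "car":        [1.0, 0.1],
    "automobile": [0.9, 0.2],  # deliberately close to "car"
    "contract":   [0.0, 1.0],  # deliberately far from both
}

def keyword_match(query: str, document: str) -> bool:
    """Keyword search: a hit only if the exact word appears."""
    return query in document.split()

def cosine(a, b) -> float:
    """Cosine similarity between two 2-D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def semantic_score(query: str, term: str) -> float:
    """Vector search: nearby embeddings score high without word overlap."""
    return cosine(EMBEDDINGS[query], EMBEDDINGS[term])
```

A search for "car" finds nothing by keyword in a document that only says "automobile", while the vector comparison scores the pair as near-identical, which is the core of what neural search adds to legal research.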
About Shinji
Shinji Kim is the Founder & CEO of Select Star, an automated data discovery platform that helps you to understand & manage your data. Previously, she was the Founder & CEO of Concord Systems, a NYC-based data infrastructure startup acquired by Akamai Technologies in 2016. She led the strategy and execution of Akamai IoT Edge Connect, an IoT data platform for real-time communication and data processing of connected devices. Shinji studied Software Engineering at the University of Waterloo and General Management at Stanford GSB.

Links Referenced:
- Select Star: https://www.selectstar.com/
- LinkedIn: https://www.linkedin.com/company/selectstarhq/
- Twitter: https://twitter.com/selectstarhq

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire-drill at 4 in the morning. Software problems should drive innovation and collaboration, NOT stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production, but hide that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.

Corey: I come bearing ill tidings.
Developers are responsible for more than ever these days. Not just the code that they write, but also the containers and the cloud infrastructure that their apps run on. Because serverless means it's still somebody's problem. And a big part of that responsibility is app security from code to cloud. And that's where our friend Snyk comes in. Snyk is a frictionless security platform that meets developers where they are - finding and fixing vulnerabilities right from the CLI, IDEs, repos, and pipelines. Snyk integrates seamlessly with AWS offerings like CodePipeline, EKS, ECR, and more! As well as things you're actually likely to be using. Deploy on AWS, secure with Snyk. Learn more at Snyk.co/scream. That's S-N-Y-K.co/scream.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while, I encounter a company that resonates with something that I've been doing on some level. In this particular case, that is what's happened here, but the story is slightly different. My guest today is Shinji Kim, who's the CEO and founder at Select Star. And the joke that I was making a few months ago was that Select Stars should have been the name of the Oracle ACE program instead. Shinji, thank you for joining me and suffering my ridiculous, basically amateurish and sophomore database-level jokes because I am bad at databases. Thanks for taking the time to chat with me.

Shinji: Thanks for having me here, Corey. Good to meet you.

Corey: So, Select Star, despite being the only query pattern that I've ever effectively been able to execute from memory: what you do as a company is described as an automated data discovery platform. So, I'm going to start at the beginning with that baseline definition. I think most folks can wrap their heads around what the idea of automated means, but the rest of the words feel like they might mean different things to different people. What is data discovery from your point of view?

Shinji: Sure.
The way that we define data discovery is finding and understanding data. In other words, think about how discoverable your data is in your company today. How easy is it for you to find datasets, fields, KPIs of your organization's data? And when you are looking at a table, column, dashboard, or report, how easy is it for you to understand the data underneath? Encompassing all of that is how we define data discovery.

Corey: When you talk about data lurking around the company in various places, that can mean a lot of different things to different folks. For the more structured data folks—which I tend to think of as the organized folks who are nothing like me—that tends to mean things that live inside of, for example, traditional relational databases or things that closely resemble that. I come from a grumpy old sysadmin perspective, so I'm thinking, oh, yeah, we have a Jira server in the closet and that thing's logging to its own disk, so that's going to be some information somewhere. Confluence is another source of data in an organization; it's usually where insight and knowledge of what's going on go to die. It's one of those write once, read never type of things.

And when I start thinking about what data means, it feels like even that is something of a squishy term. From the perspective of where Select Star starts and stops, is it bounded to data that lives within relational databases? Does it go beyond that? Where does it start? Where does it stop?

Shinji: So, we started the company with an intention of increasing the discoverability of data and hence providing automated data discovery capability to organizations. And the part where we see this as the most effective is where the data is currently being consumed today. So, this is, like, where the data consumption happens.
So, this can be a data warehouse or data lake, but this is where your data analysts and data scientists are querying data, where they are building dashboards and reports on top of it, and where your main data mart lives.

So, for us, that is primarily a cloud data warehouse today, which usually has a relational data structure. On top of that, we also do a lot of deep integrations with BI tools. So, that includes tools like Tableau, Power BI, Looker, Mode. Wherever these queries from the business stakeholders, BI engineers, data analysts, and data scientists run, this is the point of reference we use to auto-generate documentation, data models, lineage, and usage information, to give back to the data team and everyone else so that they can learn more about the dataset they're about to use.

Corey: So, given that I am seeing an increased number of companies out there talking about data discovery, what is it that Select Star does that differentiates you folks from other folks using similar verbiage in how they describe what they do?

Shinji: Yeah, great question. There are many players popping up, and also, traditional data catalogs are definitely starting to offer more features in this area. The main differentiator that we have in the market today, we call it fast time-to-value. Any customer that is starting with Select Star gets to set up their instance within 24 hours, and they'll be able to get all the analytics and data models, including column-level lineage, popularity, ER diagrams, and how other people are—top users and how other people are utilizing that data, like, literally in a few hours, max to, like, 24 hours. And I would say that is the main differentiator.

And most of our customers have pointed out that setup and getting started has been super easy, which is primarily backed by a lot of automation that we've created underneath the platform. On top of that, just making it super easy and simple to use.
It becomes very clear to the users that it's not just for the technical data engineers and DBAs to use; this is also designed for business stakeholders, product managers, and ops folks to start using as they are learning more about how to use data.

Corey: Mapping this a little bit toward the use cases that I'm the most familiar with, this big source of data that I tend to stumble over is customer AWS bills. And that's not exactly a big data problem, given that it can fit in memory if you have a sufficiently exciting computer, but we wind up using Tableau to slice and dice it because at some point, Excel falls down. From my perspective, the problem with Excel is that it doesn't tend to work on huge datasets very well, and from the position of Salesforce, the problem with Excel is that it doesn't cost a giant pile of money every month. So, those two things combined, Tableau is the answer for what we do. But that's sort of the end-all for us; that's where it stops.

At that point, we have dashboards that we build and queries that we run that spit out the thing we're looking at, and then that goes back to inform our analysis. We don't inherently feed that back into anything else that would then inform the rest of what we do. Now, for our use case, that probably makes an awful lot of sense because we're here to help our customers with their billing challenges, not take advantage of their data to wind up informing some giant model and mispurposing that data for other things. But if we were generating that data ourselves as a part of our operation, I can absolutely see the value of tying that back into something else. You wind up almost forming a reinforcing cycle that improves the quality of data over time and lets you understand what's going on there.
What are some of the outcomes that you find customers get to by going down this particular path?

Shinji: Yeah, so just to double-click on what you just talked about, the way that we see this is how we analyze the metadata and the activity logs—system logs, user logs—of how that data has been used. So, as part of our auto-generated documentation for each table, each column, each dashboard, you're going to be able to see the full data lineage: where it came from, how it was transformed in the past, and where it's going to. You will also see what we call a popularity score: how many unique users are utilizing this data inside the organization today, and how often. And utilizing these two core models and analyses that we create, you can start by first mapping out the data flow, and then determining whether or not a dataset is something that you would want to continue keeping or running the data pipelines for. Because once you start mapping these usage models of tables versus dashboards, you may find that there are recurring jobs that create all these materialized views and tables feeding dashboards that are not being looked at anymore.

So, with this mechanism of looking initially at data lineage as a concept, a lot of companies use data lineage in order to find dependencies: what is going to break if I make this change in the column or table, as well as just debugging any issues that are currently happening in their pipeline. So, especially when you have to debug a SQL query or pipeline that you didn't build yourself but need to find out how to fix, this is a really easy way to instantly find out, like, where the data is coming from.
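The popularity score described here can be approximated from warehouse query logs. This sketch is an illustration, not Select Star's actual implementation: the log format and the naive FROM/JOIN regex are assumptions, and a real parser would handle CTEs, subqueries, and quoting:

```python
import re
from collections import defaultdict

# Count how many distinct users reference each table in a query log.
# Matches identifiers that follow FROM or JOIN, e.g. "sales.orders".
TABLE_REF = re.compile(r"\b(?:FROM|JOIN)\s+([a-zA-Z_][\w.]*)", re.IGNORECASE)

def popularity(query_log):
    """query_log: iterable of (user, sql_text) -> {table: distinct users}."""
    users_per_table = defaultdict(set)
    for user, sql in query_log:
        for table in TABLE_REF.findall(sql):
            users_per_table[table.lower()].add(user)
    return {table: len(users) for table, users in users_per_table.items()}

# Made-up log entries for illustration.
log = [
    ("ana", "SELECT * FROM sales.orders o JOIN sales.customers c ON o.id = c.id"),
    ("ben", "SELECT count(*) FROM sales.orders"),
    ("ana", "SELECT * FROM sales.orders"),
]
```

Sorting the resulting counts descending gives exactly the "ordered by popularity" view discussed later in the episode: tables nobody queries sink to the bottom and become deprecation candidates.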
But on top of that, if you start adding this usage information, you can trace through where the main compute is happening, which large raw tables are still being queried instead of the more summarized tables that should be used, versus which tables and datasets are continuing to get created, feeding the dashboards, and whether those dashboards are actually being used on the business side. So, with that, we have customers that have saved thousands of dollars every month just by being able to deprecate dashboards and pipelines that they were afraid of deprecating in the past because they weren't sure if anyone was actually using them or not. And adopting Select Star was a great way to kind of do a full spring clean of their data warehouse as well as their BI tool. And this is an additional benefit: getting to declutter so many old, duplicated, and outdated dashboards and datasets in their data warehouse.

Corey: That is, I guess, a recurring problem that I see in many different pockets of the industry as a whole. You see it in the user visibility space, you see it in the cost control space—I even made a joke about Confluence that alludes to it—this idea that you build a whole bunch of dashboards and use it to inform all kinds of charts and other systems, but then people are busy. It feels like there's no 'and then.' Like, one of the most depressing things in the universe that you can see, after having spent a fair bit of effort to build up those dashboards, is the analytics for who internally has looked at any of those dashboards since the demo you gave showing it off to everyone else. It feels like in many cases, we put all these projects and all this effort into building these things out, and then they don't get used.

People don't want to be informed by data; they want to shoot from their gut. Now, sometimes that's helpful when we're talking about observability tools that you use to trace down outages, and, "Well, our site's really stable.
We don't have to look at that." Very awesome, great, awesome use case. The business insight level of dashboard just feels like something you should really be checking a lot more than you are. How do you see that?

Shinji: Yeah, for sure. I mean, this is why we also update these usage metrics and lineage every 24 hours for all of our customers automatically, so it's just up-to-date. And the part that more customers are asking for, where we are heading to—earlier, I mentioned that our main focus has been on analyzing data consumption and understanding the consumption behavior to drive better usage of your data, or making data usage much easier. The part that we are starting to now see is more customers wanting to extend those feature capabilities to the stack where the data is being generated. So, connecting a similar amount of analysis and metadata collection for production databases, Kafka queues, and wherever the data is first being generated is one of our longer-term goals. And then you'll really have more of that, up to the source level, of whether the data should even be collected or whether it should even enter the data warehouse phase at all.

Corey: One of the challenges I see across the board in the data space is that so many products tend to have a very specific point of the customer lifecycle where bringing them in makes sense. Too early and it's, "Data? What do you mean data? All I have are these logs, and their purpose is basically to inflate my AWS bill because I'm bad at removing them." And on the other side, it's, "Great. We pioneered some of these things and have built our own internal enormous system that does exactly what we need to do." It's like, "Yes, Google, you're very smart. Good job." And most people are somewhere between those two extremes. Where are customers on that lifecycle or timeline when using Select Star makes sense for them?

Shinji: Yeah, I think that's a great question.
The best time and place for customers to start using Select Star is after they have their cloud data warehouse set up. Either they have finished their migration or they're starting to utilize it with their BI tools, and they're starting to notice that it's not just, like, you know, ten to fifty tables that they're starting with; most of them have more than hundreds of tables. And they're feeling that this is starting to go out of control because we have all this data, but we are not a hundred percent sure what exactly is in our database. This usually happens more in larger companies, companies at a thousand-plus employees, and they usually find a lot of value out of Select Star right away because, like, we will start pointing out many different things.

But we also see a lot of, like, forward-thinking, fast-growing startups that are at the size of a few hundred employees, you know, they now have a five- to ten-person data team, and they are really creating the right single source of truth of their data knowledge through Select Star. So, I think you can start anywhere from when your data team size is, like, beyond five and you're continuing to grow, because every time you're trying to onboard a data analyst or data scientist, you will have to go through, like, basically the same type of training on your data model, and it might actually look different because the data models, and the new features and new apps that you're integrating, change so quickly. So, I would say it's important to have that base early on and then continue to grow. But we do also see a lot of companies coming to us after having thousands or tens of thousands of datasets, when it's really, like, very hard to operate and onboard anyone.
And this is a place where we really shine to help their needs as well.

Corey: Sort of the "I need a database" to the "Help, I have too many databases" pipeline, where [laugh] at some point people start wanting to bring organization to the chaos. One thing I like about your model is that you don't seem to be making the play that every other vendor in the data space tends to, which is, "Oh, we want you to move your data onto our systems. The end." You operate on data that is in place, which makes an awful lot of sense for the kinds of things that we're talking about. Customers are flat out not going to move their data warehouse over to your environment, just because the data gravity is ludicrous. Just the sheer amount of money it would take to egress that data from a cloud provider, for example, is monstrous.

Shinji: Exactly. [laugh]. And security concerns. We don't want to be liable for any of the data—and this is, like, a very specific decision we've made very early on in the company—to not access data, to not egress any of the real data, and to provide as much value as possible just utilizing the metadata and logs. And depending on the type of data warehouse, it can also be really efficient because the query history or the metadata system tables are indexed separately. Usually, it's a much lighter load on the compute side. And that has definitely, like, worked well to our advantage, especially being a SaaS tool.

Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig secures your cloud from source to run. They believe, as do I, that DevOps and security are inextricably linked. If you wanna learn more about how they view this, check out their blog, it's definitely worth the read. To learn more about how they are absolutely getting it right from where I sit, visit Sysdig.com and tell them that I sent you. That's S Y S D I G.com.
And my thanks to them for their continued support of this ridiculous nonsense.

Corey: What I like is just how straightforward the integrations are. It's clear you're extraordinarily agnostic as far as where the data itself lives. You integrate with Google's BigQuery, with Amazon Redshift, with Snowflake, and then on the other side of the world with Looker, and Tableau, and other things as well. And one of the example use cases you give is: find the upstream table in BigQuery that a Looker dashboard depends on. That's one of those areas where I see something like that, and, oh, I can absolutely see the value of that.

I have two or three DynamoDB tables that drive my newsletter publication system that I built—because I have deep-seated emotional problems and I take it out on myself and everyone else via code—but as a small, contained system that I can still fit in my head. Mostly. And I still forget which table is which in some cases. Down the road, especially at scale, "Okay, where is the actual data source that's informing this, because it doesn't necessarily match what I'm expecting," is one of those incredibly valuable bits of insight. It seems like that is something that often gets lost; the provenance of data doesn't seem to survive.

And ideally, you know, you're staffing a company with reasonably intelligent people who are going to look at the results of something and say, "That does not align with my expectations. I'm going to dig." As opposed to the, "Oh, yeah, that seems plausible. I'll just go with whatever the computer says." There's an ocean of nuance between those two, but it's nice to be able to establish the validity of the path that you've gone down in order to set some of these things up.

Shinji: Yeah, and this is also super helpful if you're tasked to debug a dashboard or pipeline that you did not build yourself. Maybe the person has left the company, or maybe they're out of office, but this dashboard has been broken and you're quote-unquote, "On call," for data.
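The "find the upstream table a dashboard depends on" use case boils down to walking a lineage graph. Here is a minimal sketch with made-up asset names; a real lineage model (Select Star's included) is far more detailed, tracking columns and transformations rather than just table-level edges:

```python
from collections import deque

# Lineage edges: each asset maps to the assets it reads from.
# Asset names are hypothetical, for illustration only.
LINEAGE = {
    "looker:revenue_dashboard": ["bq:reporting.daily_revenue"],
    "bq:reporting.daily_revenue": ["bq:raw.orders", "bq:raw.refunds"],
    "bq:raw.orders": [],
    "bq:raw.refunds": [],
}

def upstream(asset, edges):
    """Breadth-first walk returning every transitive upstream dependency."""
    seen, queue = set(), deque([asset])
    while queue:
        for parent in edges.get(queue.popleft(), []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen
```

Calling `upstream("looker:revenue_dashboard", LINEAGE)` returns every table feeding the dashboard, which is exactly the question an on-call data engineer starts with when a dashboard breaks.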
What are you going to do? Without a tool that can show you the full lineage, you will have to start digging through somebody else's SQL code and try to map out, like, where the data is coming from and whether it is calculating correctly. It usually takes, you know, a few hours to just get to the bottom of the issue. And this is one of the main use cases that our customers bring up every single time, as more of, like, this is now the go-to place every time there is any data question or data issue.

Corey: The first and golden rule of cloud economics is step one: turn that shit off.

Shinji: [laugh].

Corey: When people are using something, you can optimize the hell out of it however you want, but nothing's going to beat turning it off. One challenge is when we're looking at various accounts and we see a Redshift cluster, and it's, "Okay. That thing's costing a few million bucks a year and no one seems to know anything about it." They keep pointing to other teams, and it turns into this giant, like, finger-pointing exercise where no one seems to have responsibility for it. And very often, our clients will choose not to turn that thing off because on the one hand, if you don't turn it off, you're going to spend a few million bucks a year that you otherwise would not have had to.

On the other, if you delete the data warehouse, and it turns out, oh, yeah, that was actually kind of important, now we don't have a company anymore. It's a question of which side you want to be wrong on. And on some level, leaving something as it is and doing something else is always a more defensible answer, just because the first time your cost-saving exercises take out production, you're generally not allowed to save money anymore. This feels like it helps get to that source of truth a heck of a lot more effectively than tracing individual calls and turning into basically data center archaeologists.

Shinji: [laugh]. Yeah, for sure.
I mean, this is why, from the get-go, we try to give you all your tables, all of your databases, just ordered by popularity. So, you can also see overall, like, from all the tables, whether that's thousands or tens of thousands, the most used, with the most dependencies, at the top, and you can also filter by all the database tables that haven't been touched in the last 90 days. And just having this, like, high-level view gives a lot of ideas to the data platform team about how they can optimize usage of their data warehouse.

Corey: From where I tend to sit, an awful lot of customers are still relatively early in their data journey. An awful lot of the marketing that I receive from various AWS mailing lists that I found myself on because I've had the temerity to open accounts has been along the lines of: oh, data discovery is super important. But first, they presuppose that I've already bought into this idea that every company must be a completely data-driven company. The end. Full stop.

And yeah, we're a small bespoke services consultancy. I don't necessarily know that that's the right answer here. But then it takes it one step further and starts to define the idea of data discovery as: ah, you will use it to find PII or otherwise sensitive or restricted data inside of your datasets so you know exactly where it lives. And sure, okay, that's valuable, but it also feels like a very narrow definition compared to how you view these things.

Shinji: Yeah. Basically, the way that we see data discovery is that it's starting to become more of an essential capability for you to monitor and understand how your data is actually being used internally. It basically gives you insights around, sure, like, what are the duplicated datasets, what are the datasets that have descriptions or not, what might contain sensitive data, so on and so forth, but that's still around the characteristics of the physical datasets.
Whereas I think the part that's really important around data discovery that is not being talked about as much is how the data can actually be used better. So, having it as more of a forward-thinking mechanism, in order to actually encourage more people to utilize data, or use the data correctly, instead of trying to contain this within just one team, is really where I feel like data discovery can help.

And in regards to this, the other big part around data discovery is really opening up and having that transparency, even just within the data team. Just within the data team, they always feel like they do have access to the SQL queries, and you can just go to GitHub and look at the database itself, but it's so easy to get lost in the sea of metadata that is just laid out as a list; there isn't much context around the data itself. And that context, along with the analytics of the metadata, is what we're really trying to provide automatically. So eventually, like, this can also be seen as almost a way to monitor your datasets, like how you're currently monitoring your applications through Datadog or your website with Google Analytics. This is something that can also be used as more of a go-to source of truth around what the state of your data is, how that's defined, and how that's being mapped to different business processes, so that there isn't much confusion around data. Everything can be called the same, but underneath it can actually mean very different things. Does that make sense?

Corey: No, it absolutely does. I think that this is part of the challenge in trying to articulate value that is, I guess, specific to this niche across an entire industry. The context that drives data is going to be incredibly important, and it feels like so much of the marketing in the space is aimed at one or two pre-imagined customer profiles.
And that has the side effect of making customers for whom that model doesn't align feel like they're doing something wrong, or making the vendor who's pitching this look somewhat out of touch. I know that I work in a relatively bounded problem space, but I still learn new things about AWS billing on virtually every engagement that I go on, just because you always get to learn more about how customers view things, and how they view not just their industry, but also the specificities of their own business and their own niche.

I think that is one of the challenges, historically, with the idea of letting software do everything. Do you find the problems that you're solving tend to be global in nature, or are you discovering strange depths of nuance on a customer-by-customer basis at this point?

Shinji: Overall, a lot of the problems that we solve and the customers that we work with are very industry-agnostic. As long as you have many different datasets that you need to manage, there are common problems that arise, regardless of the industry that you're in. We do observe some industry-specific issues, because your data is either unstructured, or primarily events, or, you know, depending on what the data looks like. But primarily, because most of the BI solutions and data warehouses operate as relational databases, this is a part where we really try to build a lot of best practices and the common analytics that we can apply to every customer that's using Select Star.

Corey: I really want to thank you for taking so much time to go through the ins and outs of what it is you're doing these days. If people want to learn more, where's the best place to find you?

Shinji: Yeah, I mean, it's been fun [laugh] talking here. So, we are at selectstar.com. That's our website. You can sign up for a free trial.
It's completely self-service, so you don't need to get on a demo, but we'll also help you onboard and are happy to give a free demo to whoever is interested. We are also on LinkedIn and Twitter under selectstarhq. Yeah, I mean, we're happy to help any companies that have these issues around wanting to increase the discoverability of their data, and want to help their data team and the rest of the company utilize data better.

Corey: And we will, of course, put links to all of that in the [show notes 00:28:58]. Thank you so much for your time today. I really appreciate it.

Shinji: Great. Thanks for having me, Corey.

Corey: Shinji Kim, CEO and founder at Select Star. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment that I won't be able to discover because there are far too many podcast platforms out there, and I have no means of discovering where you've said that thing unless you send it to me.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
In 2013, after many years working in large law firms and co-founding one of the first legal metrics start-ups, Caren Ulrich Stacy took time out to reflect on how she could bring together what she had done with what she had experienced to solve a problem that kept knocking at her door: how do women who had left legal practice, for a variety of different reasons, find their way back to it? The answer was the OnRamp Fellowship, the first returnship program for law firms, later extended to legal departments. It grew into the Diversity Lab, an incubator for innovative ways of boosting diversity and inclusion in the legal profession. The Lab has run a series of Women in Law Hackathons since 2016. The Mansfield Rule was an idea from the first Hackathon that grew wings in US, Canadian, and UK law firms and legal departments too. Named after the first woman lawyer in the US, Arabella Mansfield, its focus is on boosting and sustaining diversity in leadership and the pipeline to leadership. The Lab also subsequently established the Move the Needle initiative, a wonderful collaborative experiment between the Lab and four founding law firms, resourced over five years, with the aim of producing empirical data (including a Report) about diversity and inclusion in hiring, retention, and advancement. The Report, when released, promises to provide a tried and tested blueprint for advancing diversity and inclusion in the legal industry. What Caren has accomplished with the Diversity Lab and the two amazing initiatives (of many) we discussed is remarkable and outstanding! What's also critically important is that the work and outcomes are supported, every step of the way, by data, data analysis, and metrics. They provide an empirical and quantifiable foundation that differentiates these initiatives from others and tells a story that is both deeply personal and objectively verifiable!
Caren is the Founder & CEO of the Diversity Lab; the Founder of its OnRamp Fellowship; was recently appointed as the Lead Diversity, Equity, Inclusion & Accessibility Advisor to the USPTO; holds roles with the UN Women initiative; and holds Fellowships at The College of Law Practice Management and the Tory Burch Foundation. If you would prefer to watch rather than listen to this podcast, you'll find the video here.

About the Future 50 Series
In the Future 50 Series we're chatting with legalpreneurs who, through their ideas and actions, are challenging and transforming legal BAU all around the world. If you would like to recommend people for this Series, please contact us at: CLI@collaw.edu.au.
In this segment, we bring you a look at day-to-day work in a cloud environment from our point of view. Episode speakers: Avi Keinan and Maish Saidel-Keesing. In the previous episode, we talked about what deployments are and how to do rolling deployments, and Maish showed us how to install a new version in two clicks. In this episode, the sixth and final one in the ECS series, we talk about automation tools (CLI, ECS): what the need for these tools is, and why we use them. Want to keep up with more content on cloud and advanced technologies? Sign up for our newsletter and stay in the loop. To register: https://www.israelclouds.com/newslettersignup
What will the legal team of the future look like, and do we have a blueprint for it? That's the question Adam Curphey answers in his recently published book: The Legal Team of the Future: Law+ Skills. You can purchase Adam's book here. How, where, and why the legal ecosystem is transforming is reflected in the new and different ways legal work is now being conceived and the capabilities needed to deliver it better, cheaper, and faster. It's creating and driving a whole new war for talent! The workforce of the future, evolving now, is a celebration of the multis – multi-disciplinary, multi-cultural, multi-generational, multi-talented, and much, much more! In this podcast in the Future 50 Series of the CLI Legalpreneurs Spotlight, we chatted with Adam about his book. We discussed the changing legal ecosystem and, in particular, his Law+ model (Law at the core, plus people, business, change, and technology); the challenges and opportunities for law firms, legal departments, and law schools to be discovered and derived from the framework and blueprint in the book; the business case for change; and where to start on the Law+ journey. Adam is the Senior Manager of Innovation at Mayer Brown in the UK. He has an amazing history in innovation, legal education, and capability development as a former practising lawyer, an academic, in law firms, and in advisory capacities for Lawtech UK and the O-Shaped Lawyer. This deep, lived experience is what makes this discussion and his book practical and candid…and his quirky sense of humour is what makes it fun! If you would prefer to watch rather than listen to this podcast, you'll find the video here.

About the Future 50 Series
In the Future 50 Series we're chatting with legalpreneurs who, through their ideas and actions, are challenging and transforming legal BAU all around the world. If you would like to recommend people for this Series, please contact us at: CLI@collaw.edu.au.
In this podcast in the Future 50 Series of the CLI Legalpreneurs Spotlight, we chatted with Zach Posner, the General Partner and Co-Founder of The LegalTech Fund (TLTF). You can see why Zach does what he does: his 15+ years' experience working with early-stage companies as a CEO, and in the venture and board space, is layered onto a background in finance, strategy, and business development. It's hard to imagine a better set of capabilities for his role, or one more compatible with the work of TLTF. TLTF is unique in a number of ways, and in the same ways stands out in a crowded tech funding marketplace: it was established during the pandemic, it invests in start-ups (and beyond), it's driven by a desire on the part of the founders and investors to proactively support all entrepreneurs (even those they don't invest in), and, importantly, it's very focussed on building a collaborative community along the way. For TLTF, it's not just about the money; it's about helping entrepreneurs and their companies succeed, supporting what they need, and being there at the beginning, middle, end, and back again! We discussed TLTF; how investment in and development of legaltech has progressed (or not) pre, during, and now hopefully post COVID; what barriers have held it back; how these are changing, while understanding that there is a lot of room for things to grow in the legal ecosystem, gain momentum, and catch up to where other industries find themselves; when we'll know if legaltech has made a difference; and where the legaltech industry is headed in the near future. We also discussed TLTF's inaugural Summit in Miami in December 2022. The Summit, like the Fund, is also unique: it will be a meeting of legaltech thought leaders, doers, and stakeholders who are looking to the future, want to play their part in it, and know the value and mutual benefit derived from working on that together.
The Summit will also feature a start-up challenge, with the focus again on sharing experiences and building community. If you would prefer to watch rather than listen to this podcast, you'll find the video here.

About the Future 50 Series
In the Future 50 Series we're chatting with legalpreneurs who, through their ideas and actions, are challenging and transforming legal BAU all around the world. If you would like to recommend people for this Series, please contact us at: CLI@collaw.edu.au.
Here's a great way to boost your productivity with the git CLI: use aliases. At this link you'll find the aliases I use: https://gist.github.com/andreadottor/70ec9e63f812a0a331748a695184da26 And here are the resources I started from: https://git-scm.com/book/en/v2/Git-Basics-Git-Aliases https://haacked.com/archive/2014/07/28/github-flow-aliases/ https://haacked.com/archive/2017/01/04/git-alias-open-url/ https://opensource.com/article/20/11/git-aliases
This is a special episode recorded live during a live coding session on YouTube (2022-09-16). The audio-only experience might not be the best one, so if you are curious to see the video and enjoy our diagrams and screen sharing, please check this episode on YouTube: https://www.youtube.com/watch?v=vVic3oqqqfY. How can you build a WeTransfer or a Dropbox Transfer clone on AWS? This is our fourth live coding stream. In this episode, we started looking into adding some security to our application. Specifically, we started implementing a device auth flow on top of AWS Cognito to allow our file upload CLI application to get some credentials. All our code is available in this repository: https://github.com/awsbites/weshare.click In this episode we mentioned the following resources: Content-Disposition Header on MDN: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition OAuth 2 Device Auth flow RFC8628: https://www.rfc-editor.org/rfc/rfc8628 XKCD Comic about password security: https://xkcd.com/936/ crypto-random-string package: https://www.npmjs.com/package/crypto-random-string Dash offline documentation app: https://kapeli.com/dash You can listen to AWS Bites wherever you get your podcasts: - Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-bites/id1585489017 - Spotify: https://open.spotify.com/show/3Lh7PzqBFV6yt5WsTAmO5q - Google: https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy82YTMzMTJhMC9wb2RjYXN0L3Jzcw== - Breaker: https://www.breaker.audio/aws-bites - RSS: https://anchor.fm/s/6a3312a0/podcast/rss Do you have any AWS questions you would like us to address? Leave a comment here or connect with us on Twitter: - https://twitter.com/eoins - https://twitter.com/loige #AWS #livecoding #transfer
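For listeners curious what the device auth flow mentioned above actually does, here is a minimal Python sketch of the RFC 8628 polling loop. This is not the episode's code: the server here is a self-contained stub standing in for Cognito, and all names and endpoints are illustrative.

```python
import secrets
import string

class StubAuthServer:
    """Toy stand-in for an OAuth 2 device-authorization server (RFC 8628).
    It only mimics the shape of the flow so the CLI's polling logic is visible."""

    def __init__(self):
        self._approved = {}  # device_code -> has the user approved yet?

    def device_authorization(self):
        # RFC 8628 section 3.2: the server issues a long device code for the
        # CLI and a short, human-typeable user code for the browser.
        device_code = secrets.token_urlsafe(32)
        user_code = "-".join(
            "".join(secrets.choice(string.ascii_uppercase) for _ in range(4))
            for _ in range(2)
        )
        self._approved[device_code] = False
        return {
            "device_code": device_code,
            "user_code": user_code,
            "verification_uri": "https://auth.example.com/activate",
            "interval": 5,
        }

    def approve(self, device_code):
        # Simulates the user entering the user code in a browser and approving.
        self._approved[device_code] = True

    def token(self, device_code):
        # RFC 8628 sections 3.4/3.5: the CLI polls; until approval it gets
        # an "authorization_pending" error response.
        if not self._approved.get(device_code):
            return {"error": "authorization_pending"}
        return {"access_token": secrets.token_urlsafe(32), "token_type": "Bearer"}

def cli_login(server, user_approves_on_poll=2, max_polls=10):
    """Polling loop a CLI would run; a real client sleeps `interval` seconds
    between polls instead of looping immediately."""
    grant = server.device_authorization()
    print(f"Visit {grant['verification_uri']} and enter code {grant['user_code']}")
    for poll in range(1, max_polls + 1):
        response = server.token(grant["device_code"])
        if "access_token" in response:
            return response
        if poll == user_approves_on_poll:
            server.approve(grant["device_code"])  # the "user" approves mid-loop
    return None
```

The point of the flow is that the CLI never handles the user's password: it displays a short code, the user approves in a browser, and the CLI polls until credentials appear.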
In episode 110 of our SAP on Azure video podcast we talk about a Bridge Framework to integrate SAP systems with Teams, support for NFS shares with Azure Files, integrating Signavio with SAP Solution Manager hosted on Azure using Azure Application Gateway, and SAP on Azure high availability – changing from SPN to MSI for Pacemaker clusters using Azure fencing. Then Aron Stern joins us to talk about making Azure SAP-aware. The Azure Center for SAP Solutions (ACSS) allows customers to deploy and manage their SAP systems directly from within Azure. With a few clicks -- or via APIs, CLI, ... -- you can install your SAP system, start and stop it, or integrate it with other Azure PaaS services. Think of it as the Rosetta Stone for Azure and SAP. http://aka.ms/acss https://www.saponazurepodcast.de/episode110 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #SAPonAzure
About Allen
Allen is a cloud architect at Tyler Technologies. He helps modernize government software by creating secure, highly scalable, and fault-tolerant serverless applications. Allen publishes content regularly about serverless concepts and design on his blog, Ready, Set, Cloud!

Links Referenced: Ready, Set, Cloud blog: https://readysetcloud.io Tyler Technologies: https://www.tylertech.com/ Twitter: https://twitter.com/allenheltondev LinkedIn: https://www.linkedin.com/in/allenheltondev/

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire-drill at 4 in the morning. Software problems should drive innovation and collaboration, NOT stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production, but hide that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.

Corey: I come bearing ill tidings. Developers are responsible for more than ever these days. Not just the code that they write, but also the containers and the cloud infrastructure that their apps run on.
Because serverless means it's still somebody's problem. And a big part of that responsibility is app security, from code to cloud. And that's where our friend Snyk comes in. Snyk is a frictionless security platform that meets developers where they are, finding and fixing vulnerabilities right from the CLI, IDEs, repos, and pipelines. Snyk integrates seamlessly with AWS offerings like CodePipeline, EKS, ECR, and more! As well as things you're actually likely to be using. Deploy on AWS, secure with Snyk. Learn more at Snyk.co/scream. That's S-N-Y-K.co/scream.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while I wind up stumbling into corners of the internet that I previously had not traveled. Somewhat recently, I wound up having that delightful experience again by discovering readysetcloud.io, which has a whole series of, I guess some people might call it thought leadership, I'm going to call it instead how I view it, which is just amazing opinion pieces on the context of serverless, mixed with APIs, mixed with some prognostications about the future. Allen Helton by day is a cloud architect at Tyler Technologies, but that's not how I encountered you. First off, Allen, thank you for joining me.

Allen: Thank you, Corey. Happy to be here.

Corey: I was originally pointed towards your work by folks in the AWS Community Builder program, of which we both participate from time to time, and it's one of those, “Oh, wow, this is amazing. I really wish I'd discovered some of this sooner.” And every time I look through your back catalog and click on a new post, I see things that either I really agree with, or I can't stand the opinion and want to fight about it. But more often than not, it's one of those recurring moments that I love: “Damn, I wish I had written something like this.” So first, you're absolutely killing it on the content front.

Allen: Thank you, Corey, I appreciate that.
The content that I make is really about the stuff that I'm doing at work. It's stuff that I'm passionate about, stuff that I spend a decent amount of time on, and, really the most important thing about it for me, it's stuff that I'm learning and forming opinions on and want to share with others.

Corey: I have to say, when I saw that you were—oh, you're at Tyler Technologies, which sounds for all the world like, oh, it's a relatively small consultancy run by some guy presumably named Tyler, and, you know, it's a petite team of maybe 20, 30 people on the outside. Yeah, then I realized, wait a minute, that's not entirely true. For example, for starters, you're publicly traded. And okay, that does change things a little bit. First off, who are you people? Secondly, what do you do? And third, why have I never heard of you folks until now?

Allen: Tyler is the largest company that focuses completely on the public sector. We have divisions and products for pretty much everything that you can imagine that's in the public sector. We have software for schools, software for tax and appraisal, we have software for police officers, for courts, everything you can think of that runs the government can be, and a lot of times is, run on Tyler software. We've been around for decades building our expertise in the domain, and the reason you probably haven't heard about us is because you might not have ever been in trouble with the law before. If you [laugh] if you have been—

Corey: No, no, I learned very early on in the course of my life—which will come as a surprise to absolutely no one who has spent more than 30 seconds with me—that I have remarkably little filter, and if ten kids were doing something wrong, I'm the one that gets caught. So, I spent a lot of time in the principal's office, and this taught me to keep my nose clean. I'm one of those squeaky-clean types, just because I was always terrified of getting punished because I knew I would get caught.
I'm not saying this is the right way to go through life necessarily, but it did have the side benefit that, no, I don't really engage with law enforcement throughout the course of my life.

Allen: That's good. That's good. But one exposure that a lot of people get to Tyler is if you look at the bottom of your next traffic ticket, it'll probably say Tyler Technologies on the bottom there.

Corey: Oh, so you're really popular in certain circles, I'd imagine?

Allen: Super popular. Yes, yes. And of course, you get all the benefits of writing that code that says ‘if defendant equals Allen Helton then return.'

Corey: I like that. You get to have the exception cases built in that no one's ever going to wind up looking into.

Allen: That's right. Yes.

Corey: The idea of what you're doing makes an awful lot of sense. There's a tremendous need for a wide variety of technical assistance in the public sector. What surprises me, although I guess it probably shouldn't, is how much of your content is aimed at serverless technologies and API design, which, to my way of thinking, isn't really something that the public sector has done a lot with. Clearly I'm wrong.

Allen: Historically, you're not wrong. There's an old saying that government tends to run about ten years behind on technology. Not just technology, but across the board, it runs about ten years behind. And until recently, that's really been true. There was a situation last year where one of the state governments—I don't remember which one it was—was having a crisis because they couldn't find any COBOL developers to come in and maintain the software that runs the state. And it's COBOL; you're not going to find a whole lot of people that have that skill. A lot of those people are retiring out. And what's happening is that we're getting new people sitting in positions of power in government that want innovation.
They know about the cloud and they want to be able to integrate with systems quickly and easily, with little to no onboarding time. You know, there are people in power that have grown up with technology and understand that, well, with everything else, I can be up and running in five or ten minutes. I cannot do this with the software I'm consuming now.

Corey: My opinion on it is admittedly conflicted, because on the one hand, yeah, I don't think that governments should be running on COBOL software that runs on mainframes that haven't been supported in 25 years. Conversely, I also don't necessarily want them being run like a seed-series startup, where, “Well, I wrote this code last night, and it's awesome, so off I go to production with it.” Because I can decide not to do business anymore with Twitter for Pets and go on to something else, like PetFlicks, or whatever it is I choose to use. I can't easily opt out of my government. The decisions that they make stick, and that is going to have a meaningful impact on my life and everyone else's life who is subject to their jurisdiction. So, I guess I don't really know where I believe the proper pace of technological adoption should be for governments. Curious to get your thoughts on this.

Allen: Well, you certainly don't want anything that's bleeding edge. That's one of the things that we kind of draw fine lines around. Because when we're dealing with government software, we're dealing with, usually, critically sensitive information. It's not medical records, but it's your criminal record, and it's things like your social security number, things that you can't have leaking out under any circumstances. So, the things that we're building on are things that have proven to be secure and have best practices around security, uptime, reliability, and, in a lot of cases, maintainability.
You know, if there are issues, then let's try to get those turned around as quickly as we can, because we don't want to have any sort of downtime on the software side versus the software vendor side.

Corey: I want to pivot a little bit to some of the content you've put out, because an awful lot of it seems to be, I think I'll call it, variations on a theme. For example, I just read some recent titles, and to illustrate my point: “Going API First: Your First 30 Days,” “Solutions Architect Tips: How to Design Applications for Growth,” “3 Things to Know Before Building a Multi-Tenant Serverless App.” And the common thread that I see running through all of these things is that these are things you tend to have extraordinarily strong and vocal opinions about only after dismissing all of them the first time, slapping something together, and then being forced to live with the consequences of choices that, in some cases, you didn't realize you were making at the time. Are you one of those folks that has the wisdom to see what's coming down the road, or did you do what the rest of us do and basically learn all this stuff by getting it hilariously wrong and having to careen into rebound situations as a result?

Allen: [laugh]. I love that question. I would like to say that now, I feel like I have the vision to see something like that coming. Historically, no, not at all. Let me talk a little bit about how I got to where I am, because that will shed a lot of context on that question. A few years ago, I was put into a position at Tyler that said, “Hey, go figure out this cloud thing.” Let's figure out what we need to do to move into the cloud safely, securely, quickly, all that rigmarole. And so, I did. I got to hand-select a team of engineers from people that I had worked with at Tyler over the past few years, and we were basically given free rein to learn.
We were an R&D team, a hundred percent R&D, for about a year's worth of time, where we were learning about cloud concepts and theory and building little proofs of concept: CI/CD, serverless, APIs, multi-tenancy, a whole bunch of different stuff. NoSQL was another one of the things that we had to learn. And after that year of R&D, we were told, “Okay, now go do something with that. Go build this application.” And we did, building on our cursory theory knowledge. And we got pretty close to go-live, and then the business says, “What do you do in this scenario? What do you do in that scenario? What do you do here?”

Corey: “I update my resume and go work somewhere else. Where's the hard part here?”

Allen: [laugh].

Corey: Turns out, that's not a convincing answer.

Allen: Right. So, we moved quickly. And then, I wouldn't say we backpedaled, but we hardened for a long time prior to the go-live, with the lessons that we'd learned, with the eyes of Tyler, the mature enterprise company, saying, “These are the things that you have to make sure that you take into consideration in an actual production application.” One of the things that I always pushed—I was a manager for a few years of all these cloud teams—was: do it; do it right; do it better. It's kind of like crawl, walk, run. And if you follow my writing from the beginning, just looking at the titles and reading them, kind of like what you were doing, Corey, you'll see that very much. You'll see how I talk about CI/CD, how I talk about authorization, how I talk about multi-tenancy. And I kind of go in waves where maybe a year passes and you see my content revisit some of the topics that I've done in the past. And they're like, “No, no, no, don't do what I said before.
It's not right.”

Corey: The problem when I'm writing all of these things that I do, for example, my entire newsletter publication pipeline, is built on a giant morass of Lambda functions and API Gateways. It's microservices-driven—kind of—and each microservice is built, almost always, with a different framework. Lately, all the new stuff is CDK. I started off with the Serverless Framework. There are a few other things here and there. And it's like going architecting back in time as I have to make updates to these things from time to time. And the problem with having done all that myself is that I already know the answer to, “What fool designed this?” It's, well, you're basically watching me learn what I was doing, bit by bit. I'm starting to believe that the right answer, on some level, is to build an inherent shelf-life into some of these things. Great, in five years, you're going to come back and re-architect it now that you know how this stuff actually works, rather than patching together 15 blog posts by different authors, not all of whom are talking about the same thing, and hoping for the best.

Allen: Yep. That's one of the things that I really like about serverless: I view it as a giant pro of doing serverless that when we revisit with the lessons learned, we don't have to refactor everything at once, like if it was just a big, you know, MVC controller out there in the sky. We can refactor one Lambda function at a time if now we're using a new version of the AWS SDK, or we've learned about a new best practice that needs to go in place.
It's a, “While you're in there, tidy up, please,” kind of deal.

Corey: I know that the DynamoDB fanatics will absolutely murder me over this one, but one of the reasons that I have multiple Dynamo tables that contain, effectively, variations on the exact same data is because I want the dependency between the two different microservices to be the API, not, “Oh, and under the hood, it's expecting this exact same data structure all the time.” It just felt like that was the wrong direction to go in. That is the justification I use for myself for why I run multiple DynamoDB tables that [laugh] have the same content. Where do you fall on the idea of data store separation?

Allen: I'm a big single-table design person myself. I really like the idea of being able to store everything in the same table and being able to create queries that can return me multiple different types of entity with one lookup. Now, that being said, one of the issues that we ran into, or one of the ambiguous areas when we were getting started with serverless, was: what does single-table design mean when you're talking about microservices? We were wondering, does single table mean one DynamoDB table for an entire application that's composed of 15 microservices? Or is it one table per microservice? What we ultimately ended up going with is a table per microservice. Even if multiple microservices are pushed into the same AWS account, we're still building that logical construct of a microservice and one table that houses similar entities in the same domain.

Corey: So, something I wish that every service team at AWS would do as a part of their design is draw the architecture of an application that you're planning to build. Great, now assume that every single resource on that architecture diagram lives in its own distinct AWS account, because somewhere, in some customer, there's going to be an account boundary at every interconnection point along the way.
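[An aside on the single-table design Allen describes: "multiple entity types in one lookup" can be sketched in a few lines of plain Python, with dicts standing in for DynamoDB items. The key layout and entities here are illustrative only, not anyone's actual schema.]

```python
# In single-table design, every item shares generic key attributes
# (PK = partition key, SK = sort key); different entity types are
# distinguished purely by how those key strings are composed.
items = [
    {"PK": "CUSTOMER#123", "SK": "PROFILE",          "type": "customer", "name": "Ada"},
    {"PK": "CUSTOMER#123", "SK": "ORDER#2022-09-01", "type": "order", "total": 42},
    {"PK": "CUSTOMER#123", "SK": "ORDER#2022-09-15", "type": "order", "total": 17},
    {"PK": "CUSTOMER#456", "SK": "PROFILE",          "type": "customer", "name": "Grace"},
]

def query(pk, sk_prefix=""):
    """Mimics a DynamoDB Query: exact match on PK, begins_with on SK."""
    return [item for item in items
            if item["PK"] == pk and item["SK"].startswith(sk_prefix)]

# One lookup returns multiple entity types: the profile AND all its orders.
customer_and_orders = query("CUSTOMER#123")
orders_only = query("CUSTOMER#123", sk_prefix="ORDER#")
```

Because the profile and its orders share a partition key, a single query fetches the whole aggregate; narrowing the sort-key prefix filters down to one entity type.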
And so many services don't do that, where it's, “Oh, that thing and the other thing have to be in the same account.” So, people have to write their own integration shims, and it makes doing the right thing of putting different services into distinct, bounded AWS accounts for security or compliance reasons way harder than I feel like it needs to be.

Allen: [laugh]. Totally agree with you on that one. That's one of the things that I feel like I'm still learning about: account-level isolation. I'm still kind of early on, personally, with my opinions on how we're structuring things right now, but I'm very much of the opinion that deploying multiple things into the same account is going to make it too easy to do something that you shouldn't. And I just try not to inherently trust people, in the sense that, “Oh, this is easy. I'm just going to cross that boundary real quick.”

Corey: For me, it also comes down to security risk exposure. Like, my lasttweetinaws.com Twitter shitposting thread client lives in a distinct AWS account that is separate from the AWS account that has all of our client billing data in it. The idea being that if you find a way to compromise my public-facing Twitter client, great, the blast radius should be constrained to, “Yay, now you can, I don't know, spin up some cryptocurrency mining in my AWS account and I get to look like a fool when I beg AWS for forgiveness.” But that should be the end of it. It shouldn't be a security incident, because I should not have the credit card numbers living right next to the funny internet web thing. That sort of flies in the face of the original guidance that AWS gave at launch. Right around the 2008 era, best practice was one customer, one AWS account. And then by 2012, they had changed their perspective, but once you've made a decision to build multiple services in a single account, unwinding and unpacking that becomes an incredibly burdensome thing.
It's about the equivalent of doing a cloud migration, in some ways.

Allen: We went through that. We started off building one application with the intent that it was going to be a siloed application, a one-off, essentially. And about a year into it, it was one of those moments of, “Oh, no. What we're building is not actually a one-off. It's a piece of a much larger puzzle.” And we had a whole bunch of, unfortunately, tightly coupled things in there that were assuming that resources would be in the same AWS account. So, we ended up taking, I think, probably two months, which in the grand scheme of things isn't that long, but two months kind of unwinding the pieces and decoupling what was possible at the time into multiple AWS accounts, segmented by domain, essentially. But that's hard. As AWS puts it, you know, those are one-way door decisions. I think this one was a two-way door, but it locked and you could kind of jimmy the lock on the way back out.

Corey: And you could buzz someone from the lobby to let you back in. Yeah, the biggest problem is not necessarily the one-way door decisions. It's the one-way door decisions that you don't realize you're passing through at the time that you make them. Which, of course, brings us to a topic near and dear to your heart—and I only recently started to have opinions on this myself—and that is the proper design of APIs, which I'm sure will incense absolutely no one who's listening to this. Like, my opinions on APIs start with, well, probably REST is the right answer in this day and age. I had people say, “Well, I don't know, GraphQL is pretty awesome.” Like, “Oh, I'm thinking SOAP,” and people look at me like I'm a monster from the Black Lagoon of centuries past in XML-land.
So, my particular brand of strangeness aside, what do you see people doing in the world of API design that is, I guess, the most common or easiest mistake to make that you really wish they would stop making?

Allen: If I could boil it down to one word: fundamentalism. Let me unpack that for you.

Corey: Oh, please, I absolutely want to get a definition on that one.

Allen: [laugh]. I approach API design from a developer experience point of view: how easy is it for both internal and external integrators to consume and satisfy the business processes that they want to accomplish? And a lot of times, REST guidelines, you know, it's all about entity basis: drill into the appropriate entities and name your endpoints with nouns, not verbs. I'm actually very much onto that one.

But something that you could easily do: let's say you have a business process that, given a fundamentally correct RESTful API design, takes ten API calls to satisfy. You could, in theory, boil that down to maybe three well-designed endpoints that aren't, quote-unquote, "RESTful," but that make the developer experience significantly easier. And if you were a fundamentalist, that option is not even on the table, but thinking about it pragmatically from a developer experience point of view, that might be the better call. So, that's one of the things that, I know, feels like a hot take. Every time I say it, I get a little bit of flack for it, but don't be a fundamentalist when it comes to your API designs.
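To make Allen's consolidation point concrete, here is a minimal Python sketch. The endpoints and the checkout flow are invented for illustration, not taken from any real API: a strictly entity-based design costs the client one round trip per resource, while a use-case-shaped endpoint satisfies the whole business process in a single call.

```python
# Sketch of the same business process served two ways. All endpoint
# names and the data model are invented for illustration; this is not
# any real API.

# --- Entity-based ("fundamentalist") design: one call per resource ---
def post_cart(db):
    cart_id = len(db["carts"]) + 1
    db["carts"][cart_id] = {"items": [], "address": None}
    return cart_id

def post_cart_item(db, cart_id, sku):
    db["carts"][cart_id]["items"].append(sku)

def put_cart_address(db, cart_id, address):
    db["carts"][cart_id]["address"] = address

def post_order(db, cart_id):
    order_id = len(db["orders"]) + 1
    db["orders"][order_id] = dict(db["carts"][cart_id])
    return order_id

# --- Use-case-shaped design: one endpoint for the whole process ---
def post_checkout(db, skus, address):
    """Each helper above costs the client a round trip; here the server
    composes them, so the integrator makes a single call."""
    cart_id = post_cart(db)
    for sku in skus:
        post_cart_item(db, cart_id, sku)
    put_cart_address(db, cart_id, address)
    return post_order(db, cart_id)

db = {"carts": {}, "orders": {}}
order_id = post_checkout(db, ["sku-1", "sku-2"], "123 Main St")
print(order_id)  # the client made one request instead of five
```

The entity endpoints still exist for callers who need fine-grained control; the consolidated endpoint is an addition for the common path, which is the pragmatic middle ground being described.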
Do something that makes it easier, while staying in the guidelines, to do what you want.

Corey: For me, the problem I've kept smacking into with API design—and let me be very clear on this—my first real exposure to API design, rather than being an API consumer (which, of course, I complain about constantly, especially in the context of AWS's inconsistent APIs between services), was when I was building something out, reading the documentation for API Gateway: oh, this is how you wind up having this stage linked to this thing, and here's the endpoint. And okay, great, so I would just build out a structure or a schema that has the positional parameters I want to use as variables in my function. And that's awesome. And then I realized, "Oh, I might want to call this a different way. Aw, crap." And sometimes it's easy; you just add a different endpoint. Other times, I have to significantly rethink things. And I can't shake the feeling that this is an entire discipline that exists that I just haven't had a whole lot of exposure to previously.

Allen: Yeah, I believe that. One of the things you could tie a metaphor to, for what I'm saying and kind of what you're saying, is AWS SAM, the Serverless Application Model. All it does is basically macro CloudFormation resources. It's just a transform from a template into CloudFormation. CDK does the same thing. But what the developers of SAM have done is they've recognized the business processes that people do regularly, and they've made incredibly easy ways to satisfy those business processes and tie them all together, right?

If I want to have a Lambda function that sits behind an endpoint, an API endpoint, I just have to add four or five lines of YAML or JSON that say, "This is the event trigger, here's the route, here's the API." And then it goes and does four, five, six different things. Now, there are some engineers that don't like that, because sometimes that feels like magic.
Sometimes a little bit of magic is okay.

Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig secures your cloud from source to run. They believe, as do I, that DevOps and security are inextricably linked. If you wanna learn more about how they view this, check out their blog; it's definitely worth the read. To learn more about how they are absolutely getting it right from where I sit, visit sysdig.com and tell them that I sent you. That's S-Y-S-D-I-G dot com. And my thanks to them for their continued support of this ridiculous nonsense.

Corey: I feel like one of the benefits I've had with the vast majority of APIs I've built is that, because this is all relatively small-scale stuff for what amounts to basically shitposting for the sake of entertainment, I'm really the only consumer of an awful lot of these things. So, I get frustrated when I have to backtrack and make changes and teach other microservices to talk to this thing that has now changed. And it's frustrating, but I have the capacity to do that. It's just work for a period of time. I feel like that equation completely shifts when you have published this and it is now out in the world, and it's not just users, but in many cases paying customers, where you can't really make those changes without significant notice, and every time you do, you're creating work for those customers. So, you have to be a lot more judicious about it.

Allen: Oh, yeah. There is a whole lot of governance and practice that goes into production-level APIs that people integrate with. You know, they say once you push something out the door into production, you're going to support it forever. I don't disagree with that. That seems like something that a lot of people don't understand.

And that's one of the reasons why I push API-first development so hard in all the content that I write: you need to be intentional about what you're letting out the door.
You need to go in and work, not just with the developers, but with your product people and your analysts, to say: what does this absolutely need to do, and what does it need to do in the future? And you take those things, you work with the analysts who want specifics, and you work with the engineers to actually build it out. And you're very intentional about what goes out the door that first time, because once it goes out with a mistake, you're either going to version it immediately or you're going to make some people very unhappy when you make a breaking change to something that they immediately started consuming.

Corey: It absolutely feels like that's one of those things that AWS gets astonishingly right. I mean, I had the privilege of interviewing, at the time, Jeff Barr and then Ariel Kelman, who was their head of marketing, to basically debunk a bunch of old myths. And one thing they started talking about extensively was the idea that an API is fundamentally a promise to your customers. And when you make a promise, you'd better damn well intend on keeping it. It's why API deprecations from AWS are effectively unique events whenever something happens.

It's the, this is a singular moment in time when they turn off a service or degrade old functionality in favor of new. They can add to it, they can launch a V2 of something and then start to wean people off by calling the old one "classic" or whatnot, but if I built something on AWS in 2008 and I wound up sleeping until today, and went to try and do the exact same thing and deploy it now, it would almost certainly work exactly as it did back then. Sure, reliability is going to be a lot better, and there's a crap-ton of features and whatnot that I'm not taking advantage of, but that fundamental ability to do that is awesome. Conversely, it feels like Google Cloud likes to change around a lot of their API stories almost constantly.
And it's unplanned work that frustrates the heck out of me when I'm trying to build something stable and lasting on top of it.

Allen: I think it goes to show the maturity of these companies as API companies versus just vendors. It's one of the things that I think AWS does [laugh]—

Corey: You see a similar dichotomy with Microsoft and Apple. Microsoft's new versions of Windows generally still have functionality in them to support stuff that was written in the '90s for a few use cases, whereas Apple's like, "Oh, your computer's more than 18 months old? Have you tried throwing it away and buying a new one? And oh, it's a new version of Mac OS, so yeah, maybe the last one will get security updates for a year, and then get with the times." And I can't shake the feeling that the correct answer is, in some way, both of those, depending upon who your customer is and what it is you're trying to achieve.

If Microsoft adopted the Apple approach, their customers would mutiny, and rightfully so; the expectation has been set for decades that that isn't what happens. Conversely, if Apple decided now we're going to support this version of Mac OS in perpetuity, I don't think a lot of their application developers would quite know what to make of that.

Allen: Yeah. I think it also comes from a standpoint of: you'd better make it worth their while if you're going to move their cheese. I'm not a Mac user myself, but what I hear from Mac users—and this could be rose-colored glasses—is that their stuff works phenomenally well. You know, when a new thing comes out—

Corey: Until it doesn't, absolutely. It's—whenever I say things like that on this show, I get letters. And it's, "Oh, yeah, really? They'll come up with something that is a colossal pain in the ass on a Mac." Like, yeah, "Try building a system-wide mute key."

It's, yeah, that's just a hotkey away on Windows, and here in Mac land. It's, "But it makes such beautiful sounds.
Why would you want them to be quiet?" And it's, yeah, it becomes this back-and-forth dichotomy there. And you can even extend it to iPhones as well and the Android ecosystem, where it's, oh, you're going to support the last couple of versions of iOS.

Well, as a developer, I don't want to do that. And Apple's position is, "Okay, great." Almost half of the mobile users on the planet will be upgrading because they're in the ecosystem. Do you want to be able to sell things to those people or not? And they're at a point of scale where they get to dictate those terms.

On some level, there are benefits to it; in others, it is intensely frustrating. I don't know what the right answer is on that level of permanence for that level of platform. I only have slightly better ideas around the position of APIs. I will say that when AWS deprecates something, they reach out individually to affected customers, on some level, and invariably, when they say, "This is going to be deprecated as of August 31," or whenever it is, yeah, it is going to slip at least twice in almost every case, just because they're not going to turn off a service that is revenue-bearing or critical-load-bearing for customers without massive amounts of notice and outreach, and in some cases, according to rumor, having engineers reach out to help restructure things so it's not as big of a burden on customers. That's a level of customer focus that I don't think most other companies are capable of matching.

Allen: I think that comes with the size and the history of Amazon. And one of the things they're doing right now—we've used Amazon Cloud Cams for years in my house. We use them as baby monitors. And they—

Corey: Yeah, I saw this. I did something very similar with Nest. They didn't have the Cloud Cam at the right time that I was looking at it. And they just announced that they're going to be deprecating them. They're withdrawing them from sale. They're not going to support them anymore.
Which, oh, at Amazon it's, "we're not offering this anymore." But you tell the story; what are they offering existing customers?

Allen: Yeah, so I'm slightly upset about it because I like my Cloud Cams and I don't want to have to take them off the wall, or wherever they are, to replace them with something else. But what they're doing is, you know, they gave me—they gave all the customers—about an eight-month head start. I think they're going to be taking them offline around Thanksgiving this year, mid-November. And what they said is, as compensation, we're going to send you a Blink Cam—a Blink Mini—for every Cloud Cam that you have in use, and then we're going to gift you a year's subscription to Pro for Blink.

Corey: That's very reasonable for things that were bought years ago. Meanwhile, I feel like, not to be unkind or uncharitable here, but I use Nest Cams. And that's a Google product. I half expect that if they ever get deprecated, I'll find out because Google just turns them off in the middle of the night—

Allen: [laugh].

Corey: —and I wake up and have to read a blog post somewhere that they put an update on Nest Cams, the same way they killed Google Reader once upon a time. That's slightly unfair, but the fact that the joke even lands does say a lot about Google's reputation in this space.

Allen: For sure.

Corey: One last topic I want to talk with you about before we call it a show: at the time of this recording, you recently had a blog post titled, "What Does the Future Hold for Serverless?" Summarize that for me. Where do you see this serverless movement—if you'll forgive the term—going?

Allen: So, I'm going to start at the end, and I'm going to work back a little bit on what needs to happen for us to get there. I have a feeling that in the future—I'm going to be vague about how far in the future this is—we'll finally have a satisfied promise of: all you're going to write in the future is business logic. And what does that mean?
I think what can end up happening, given the right focus, the right companies, the right feedback, at the right time, is we can write code as developers and have that get pushed up into the cloud.

And there's a phrase that I know Jeremy Daly likes to use, 'infrastructure from code,' where it provisions resources in the cloud for you based on your use case. I've developed an application, and it gets pushed up to the cloud at the time of deploying it, with optimized resource allocation. Over time, what will happen—in my future vision—is when you get production traffic going through, maybe it's spiky, maybe it's consistently at a scale that outgrows the resources that were originally provisioned. We can have monitoring tools that analyze that, pick that out, find the anomalies, find the standard patterns, and adjust the infrastructure that was deployed for you automatically, optimizing it based on your production traffic. Which is something that you can't do on an initial deployment right now. You can put down what looks best on paper, but once you actually get traffic through your application, you realize that what was on paper might not be correct.

Corey: Have you ever noticed that whiteboard diagrams never show the reality? They're always aspirational, and they miss certain parts. And I used to think that this was a symptom I had from working at small, scrappy companies, because you know what, those big tech companies, everything they build is amazing and awesome. I know it because I've seen their conference talks. But I've been a consultant long enough now, and for a number of those companies, to realize that nope, everyone's infrastructure is basically a trash fire at any given point in time. And it works almost in spite of itself, rather than because of it.

There is no golden path where everything is shiny, new, and beautiful. And that, honestly, I've got to say, was really [laugh] depressing when I first discovered it.
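The "adjust infrastructure from production traffic" loop described above could be sketched like this. This is a toy heuristic; the tier values imitate Lambda-style memory sizes, and the function and its inputs are invented for illustration, not any real provisioning API.

```python
# Hypothetical sketch of right-sizing a resource from observed
# production metrics instead of an up-front guess.

def recommended_memory(samples_mb, headroom=1.2,
                       tiers=(128, 256, 512, 1024, 2048)):
    """Pick the smallest tier that covers peak observed usage plus headroom."""
    target = max(samples_mb) * headroom
    for tier in tiers:
        if tier >= target:
            return tier
    return tiers[-1]  # traffic outgrew every tier; flag for a human

# Observed per-invocation memory (MB) from production monitoring:
print(recommended_memory([95, 110, 140, 180]))  # 180 * 1.2 = 216 -> 256
```

The interesting part of the vision is that this decision runs continuously against live telemetry rather than once at deploy time, which no amount of whiteboarding up front can replicate.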
Like, oh, God, even these really smart people who are so intelligent they have to have extra brain packs bolted to their chests don't have the magic answer to all of this. The rest of us are just screwed, then. But we find ways to make it work.

Allen: Yep. There's a quote—I wish I remembered who said it, but it was a military quote—"No battle plan survives impact with the enemy—first contact with the enemy." It's kind of that way with infrastructure diagrams. We can draw it out however we want, and then you turn it on in production. It's like, "Oh, no. That's not right."

Corey: I want to mix the metaphors there and say, yeah, no architecture survives your first fight with a customer. Like, "Great, I don't think that's quite what they're trying to say." It's like, "What, you don't attack your customers? Pfft, what does your customer service line look like?" Yeah, it's… I think you're onto something.

I think that inherently everything beyond the V1 design of almost anything is an emergent property, where this is what we learned about it by running it and putting traffic through it and finding these problems, and here's how it wound up evolving to account for that.

Allen: I agree. I don't have anything to add to that.

Corey: [laugh]. Fair enough. I really want to thank you for taking so much time out of your day to talk about how you view these things. If people want to learn more, where is the best place to find you?

Allen: Twitter is probably the best place to find me: @AllenHeltonDev. I have that username on all the major social platforms, so if you want to find me on LinkedIn, it's the same thing: AllenHeltonDev. My blog is always open as well, if you have any feedback you'd like to give there: readysetcloud.io.

Corey: And we will, of course, put links to that in the show notes. Thanks again for spending so much time talking to me. I really appreciate it.

Allen: Yeah, this was fun. This was a lot of fun. I love talking shop.

Corey: It shows.
And it's nice to talk about things I don't spend enough time thinking about. Allen Helton, cloud architect at Tyler Technologies. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment that I will reject because it was not written in valid XML.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
On The Cloud Pod this week, Amazon adds the ability to embed fine-grained visualizations directly onto web pages, Google offers pay-as-you-go pricing for Apigee customers, and Microsoft launches Arm-based Azure VMs powered by Ampere chips. Thank you to our sponsor, Foghorn Consulting, which provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you're having trouble hiring? Foghorn can be burning down your DevOps and cloud backlogs as soon as next week.

Episode Highlights

⏰ Fine-grained visualizations can now be embedded directly into your webpages and applications
⏰ Google is now offering pay-as-you-go pricing for its Apigee API customers
⏰ Microsoft launches Arm-based Azure VMs powered by Ampere chips

Top Quote
Jodi-Ann Campbell is the CEO of Malcolm's Choice, a digital directory for black businesses in Toronto, Canada. Jodi-Ann details the genesis of Malcolm's Choice, the opportunities it has created for her, and its impact on black businesses. She also shares her insight on how entrepreneurs can change the business landscape in Canada, lessons she has learned about herself, and the legacy she hopes to leave. The CLI podcast is available on Apple Podcasts, Spotify, iTunes, YouTube, Google Play, Anchor, and your favourite podcast platforms. Click the link above to listen to the full episode! Listen, Subscribe, Review & Share
About Anaïs

Anaïs is a Developer Advocate at Aqua Security, where she contributes to Aqua's cloud native open-source projects. When she is not advocating DevOps best practices, she runs her own YouTube channel centered around cloud native technologies. Before joining Aqua, Anaïs worked as an SRE at Civo, a cloud native service provider, where she helped enhance the infrastructure for hundreds of tenant clusters. As CNCF Ambassador of the Year 2021, her passion lies in making tools and platforms more accessible to developers and community members.

Links Referenced:

Aqua Security: https://www.aquasec.com/
Aqua Open Source YouTube channel: https://www.youtube.com/c/AquaSecurityOpenSource
Personal blog: https://anaisurl.com

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire drill at 4 in the morning. Software problems should drive innovation and collaboration, not stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature flags let developers push code to production, but hide that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your features with excitement, not trepidation and fear.
To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.

Corey: This episode is sponsored in part by Honeycomb. When production is running slow, it's hard to know where problems originate. Is it your application code, users, or the underlying systems? I've got five bucks on DNS, personally. Why scroll through endless dashboards while dealing with alert floods, going from tool to tool to tool that you employ, guessing at which puzzle pieces matter? Context switching and tool sprawl are slowly killing both your team and your business. You should care more about one of those than the other; which one is up to you. Drop the separate pillars and enter a world of getting one unified understanding of the one thing driving your business: production. With Honeycomb, you guess less and know more. Try it for free at honeycomb.io/screaminginthecloud. Observability: it's more than just hipster monitoring.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while, when I start trying to find guests to chat with me and basically suffer my various slings and arrows on this show, I encounter something that I've never really had the opportunity to explore further. And today's guest leads me in just such a direction. Anaïs is an open-source developer advocate at Aqua Security, and when I was asking her whether or not she wanted to talk about various topics, one of the first things she said was, "Don't ask me much about AWS because I've never used it," which, oh my God. Anaïs, thank you for joining me. You must be so very happy never to have dealt with the morass of AWS.

Anaïs: [laugh]. Yes, I'm trying my best to stay away from it. [laugh].

Corey: Back when I got into the cloud space, for lack of a better term, AWS was sort of really the only game in town unless you wanted to start really squinting hard at what you define cloud as. I mean, yes, I could have gone into Salesforce or something, but I was already sad and angry all the time.
These days, you can very much go all-in on cloud. In fact, you were a CNCF ambassador, if I'm not mistaken. So, you absolutely are in the infrastructure cloud space, but you haven't dealt with AWS. That is just an interesting path. Have you found others who have gone down that same road, or are you sort of the first of a new breed?

Anaïs: I think to find others who are in a similar position, or have a similar experience as you do, you first have to talk about your experience. And this is the first time, or maybe the second, that I'm openly [laugh] saying it on something that will be posted live to the internet. Before, I tried to stay away from mentioning it at all, doing the best that I can, because I'm at this point where I'm so far into my cloud-native Kubernetes journey that I feel like I should have had to deal with AWS by now, and I just didn't. And I'm doing my best, and I'm very successful at avoiding it. [laugh]. So, that's where I am. Yeah.

Corey: We're sort of on opposite sides of a particular fence, because I spend entirely too much time being angry at AWS, but I've never really touched Kubernetes in anger. I mean, I see it in a lot of my customer accounts and I get annoyed at its data transfer bills and other things that it causes in an economic sense, but as far as the care and feeding of a production cluster goes, back in my SRE days, I had very old-school architectures. It's, "Oh, this is an ancient system, just like grandma used to make," where we had the entire web tier, then a job applic—or application server tier, and then a database at the end, and everyone knew where everything was. And then containers came out of nowhere, and it seemed like, okay, this solves a bunch of problems and introduces a whole bunch more. How do I orchestrate them? How do I ensure that they're healthy?

And then, ah, Kubernetes was the answer.
And for a while, it seemed like no matter what the problem was, Kubernetes was going to be the answer, because people were evangelizing it pretty hard. And now I see it almost everywhere that I turn. What's your journey been like? How did you get into the weeds of, "You know what I want to do when I grow up? That's right. I want to work on container orchestration systems." I have a five-year-old. She has never once said that, because I don't abuse my children by making them learn how clouds work. How did you wind up doing what you do?

Anaïs: It's funny that you mention that. So, I'm actually of the generation of engineers who doesn't know anything else but Kubernetes. So, when you mention that you used to use something before, I don't really know what that looks like. I know that you can still deploy systems without Kubernetes, but I have no idea how. My journey into the cloud-native space started out of frustration with the previous industry that I was working in.

So, I was working for several years as a developer advocate in the open-source blockchain cryptocurrency space, and it's highly similar to all of the cliches that you hear online and across the news. And out of this frustration, [laugh] I was looking at alternatives. One of them was either going into game development, into the gaming industry, or the cloud-native space and infrastructure development and deployment. And yeah, that's where I ended up. So, at the end of 2020, I joined a startup in the cloud-native space and started my social media journey.

Corey: One of the things that I found that Kubernetes solved for—and to be clear, Kubernetes really came into its own after I was doing a lot more advisory work and a lot more consulting-style activity rather than running my own environments—is that there's an entire universe of problems that the modern-day engineer never has to think about, due partially to cloud and also to Kubernetes, which is the idea of hardware or node failure.
I've had middle-of-the-night drives across Los Angeles in a panic, getting to the data center because the disk array on the primary database had degraded after a drive failed. That doesn't happen anymore. And clouds have mostly solved that. It's okay, drives fail, but yeah, that's a problem for some people who live in Virginia or Oregon. I don't have to think about it myself.

But you do have to worry about instances failing: what if the primary database instance dies? Well, when everything lives in a container, then that container gets moved around in a stateless way between things. Well, great, you really only have to care instead about, okay, what if all of my instances die? Or, what if my code is really crappy? To which my question is generally, what do you mean, 'if?' All of us write crappy code.

That's the nature of the universe. We open-source only the small subset that we are not actively humiliated by, which is, in a lot of ways, what you're focusing on now over at Aqua Sec: you are an advocate for open source. One of the most notable projects to come out of that is Trivy, if I'm pronouncing that correctly.

Anaïs: Yeah, that's correct. Yeah. So, Trivy is our main open-source project. It's an all-in-one cloud-native security scanner. And it's focused on misconfiguration issues, so it can help you to build more robust infrastructure definitions and configurations.

So ideally, a lot of the things that you just mentioned won't happen, but it obviously highly depends on so many different factors in the cloud-native space. But misconfigurations are definitely one of those areas that can easily go wrong. And it's not just that your data might cease to exist; as bad, or worse, is that it's completely exposed online.
And there are databases of different exposures where you can see all kinds of information, from health data to dating apps, just available online because an IP address is not protected, right? Things like that. [laugh].

Corey: We all get those emails that start with, "Your security is very important to us," and I know, just based on that opening to an email, that the rest of that email is going to explain how security was not very important to you folks. And it's the apology, "Oops, we have messed up," email. Now, the whole world of automated security scanners is… well, it's crowded. There are a number of different services out there; the cloud providers themselves offer a bunch of these, and a whole bunch of scareware vendors at the security conferences do as well. Taking a quick glance at Trivy, one of the problems I see with it, from a cloud provider perspective, is that I see nothing it does that winds up costing extra money on your cloud bill that you then have to pay to the cloud provider, so maybe they'll put in a pull request for that one of these days. But my sarcasm aside, what is it that differentiates Trivy from a bunch of other offerings in various spaces?

Anaïs: So, there are multiple factors. If we're looking from an enterprise perspective, you could be using one of the in-house scanners from any of the cloud providers, depending on which you're using. The thing is, they are generally not going to be the ones who have a dedicated research team that provides updates based on the vulnerabilities they find across the space. So, with an open-source security scanner, or one from a dedicated company, you will likely have more up-to-date information in your scans. Also, lots of different companies are using Trivy under the hood for their own scans.

I can link a few; you can also find them in the Trivy repository.
But ultimately, a lot of companies rely on Trivy and other open-source security scanners under the hood because they are from dedicated companies. Now, the other reason you might want to consider using Trivy is that in larger teams, you will have different people dealing with different components of your infrastructure and your deployments, and you could end up having to use multiple different security scanners for all of your different components: from the container images that you're using, whether or not they are secure, whether or not they're following the best practices that you defined, to your infrastructure-as-code configurations, to your running deployments inside of your cluster, for instance. So, each of those different stages across your lifecycle, from development to runtime, will maybe need different security scanners, or you could use one security scanner that does it all. That way, you could have more knowledge sharing in a team; you could have dedicated people who know how to use the tool and who can help out across the team, across the lifecycle, and similar. So, that's one of the things you might want to consider.

Another thing is, how mature is the tool, right? A lot of cloud providers, what they end up doing is providing you with a solution, but it's decoupled from anything else that you're using. And especially in the cloud-native space, you're heavily reliant on open-source tools, such as for your observability stack, right? Coming from Site Reliability Engineering myself, I love using metrics and Grafana. I love anything open-source, from Loki for accessing my logs, to Grafana for dashboards, and all their integrations.

I love that, and I want to use the same tools that I'm using for everything else for my security tools as well. I don't want the metrics for my security tools visualized in a different solution from my reliability metrics for my application, right?
Because that ultimately makes it more difficult to correlate metrics. So, those are, like, some of the factors that you might want to consider when you're choosing a security scanner.Corey: When you talk about thinking about this from the perspective of an SRE—I mean, this is definitely an artifact of where you come from and how you approach this space. Because in my world, when you have ten web servers, five application servers, and two database servers and you wind up with a problem in production, how do you fix this? Oh, it's easy. You log into one of those nodes and poke around and start doing diagnostics in production. In a containerized world, you generally can't do that, or there's a problem on a container, and by the time you're aware of that, that container hasn't existed for 20 minutes.So, how do you wind up figuring out what happened? And instrumenting for telemetry and metrics and observability, particularly at scale, becomes way more important than it ever was, for me. I mean, my version of monitoring was always Nagios, which was the original Call of Duty that wakes you up at two in the morning when the hard drive fails. The world has thankfully moved beyond that in a bunch of ways. But it's not second nature for me. It's always, “Oh, yeah, that's right. We have a whole telemetry solution that I can go digging into.” My first attempt is always, oh, how do I get into this thing and poke it with a stick? Sometimes that's helpful, but for modern applications, it really feels like it's not.Anaïs: Totally. When we're moving to an environment where we can deploy multiple times a day, right, and update our application multiple times a day, we can introduce new security issues or other things can go wrong, right?
So, I want to see—as much as I want to see all of the other failures, I want to see any security-related issues that might be deployed alongside those updates at the same frequency, right?Corey: The problem that I see across all this stuff, though, is there are a bunch of tools out there that people install, but then don't configure because, “Oh, well, I bought the tool. The end.” I mean, I think it was reported almost ten years ago or so on the big Target breach that they wound up installing some tool—I want to say FireEye, but please don't quote me on that—and it wound up firing off a whole bunch of alerts, and they figured it was just noise, so they turned it all off. And it turned out no, no, this was an actual breach in progress. But people are so used to all the alarms screaming at them, that they don't dig into this.I mean, one of the original security scanners was Nessus. And I've seen a lot of Nessus reports because for a long time, what a lot of crappy consultancies would do is they would white-label the output of whatever it was that Nessus said and deliver that as the report. So, you'd wind up with 700 pages of quote-unquote, “Security issues.” And you'd have to flip through to figure out that, ah, this supports a somewhat old SSL negotiation protocol, and you're focusing on that instead of the oh, and by the way, the primary database doesn't have a password set. Like, it winds up just obscuring it because there is so much. How does Trivy approach avoiding the information overload problem?Anaïs: That's a great question because everybody's complaining about vulnerability fatigue, of scanning their container images and workloads for the first time and seeing maybe even hundreds of vulnerabilities. And one of the things that can be done to counteract that right from the beginning is investing your time into looking at the different flags and configurations that you can do before actually deploying Trivy to, for example, your cluster.
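The information-overload problem Corey describes usually comes down to triage: drop the low-severity noise and surface what is both severe and actionable. A minimal sketch in Python; the report shape below is invented for illustration and is not Trivy's (or Nessus's) actual output schema:

```python
# Hypothetical triage of a scanner report. The finding dicts below are an
# invented shape, not any real scanner's JSON schema.

def triage(findings, min_severity="HIGH", fixed_only=True):
    """Keep findings at or above min_severity, optionally only those with
    a fix available, sorted most-severe first (stable within a severity)."""
    order = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}
    threshold = order[min_severity]
    kept = [
        f for f in findings
        if order[f["severity"]] >= threshold
        and (f.get("fixed_version") is not None or not fixed_only)
    ]
    return sorted(kept, key=lambda f: order[f["severity"]], reverse=True)

report = [
    {"id": "CVE-2022-0001", "severity": "LOW", "fixed_version": "1.2.3"},
    {"id": "CVE-2022-0002", "severity": "CRITICAL", "fixed_version": None},
    {"id": "CVE-2022-0003", "severity": "CRITICAL", "fixed_version": "4.5.6"},
    {"id": "CVE-2022-0004", "severity": "HIGH", "fixed_version": "7.8.9"},
]

for f in triage(report):
    print(f["id"], f["severity"], "->", f["fixed_version"])
```

With the defaults, the 700-page report collapses to two lines: the fixable CRITICAL and HIGH findings; the unfixable CRITICAL and the LOW are filtered out.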
That's one part of it. The other part is, as I mentioned earlier, you would use a security scan at different parts of your deployment. So, it's really about integrating scanning not just once you—like, in your production environment, once you've deployed everything, but using it already beforehand and empowering engineers to actually use it on their machines.Now, they can either decide to do it or not; it's not part of most people's job to do security scanning, but as you move along, the more you do, the more you can reduce the noise and then ultimately, when you deploy Trivy, for example, inside of your cluster, you can do a lot of configuration such as scanning just for critical vulnerabilities, only scanning for vulnerabilities that already have a fix available, and everything else should be ignored. Those are all factors and flags that you can place into Trivy, for instance, and make it easier. Now, with Trivy, you won't have automated PRs and everything out of the box; you would have to set up the actions or, like, the ways to mitigate those vulnerabilities manually by yourself with tools, as well as integrating Trivy with your existing stack, and similar.
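The two filters just mentioned (critical-only, fix-available-only) map directly onto Trivy CLI flags. As a sketch, a small wrapper might build the invocation like this; the flag names match Trivy's documented `--severity` and `--ignore-unfixed` options, but double-check them against `trivy image --help` for your version, and note the wrapper itself is hypothetical:

```python
import shutil
import subprocess

def trivy_image_cmd(image, severities=("CRITICAL",), ignore_unfixed=True):
    """Build a `trivy image` invocation that pre-filters noise:
    only the given severities, optionally only fixable findings."""
    cmd = ["trivy", "image", "--severity", ",".join(severities),
           "--format", "json"]
    if ignore_unfixed:
        cmd.append("--ignore-unfixed")
    cmd.append(image)
    return cmd

cmd = trivy_image_cmd("alpine:3.16")
print(" ".join(cmd))

# Only actually run the scan if trivy is installed on this machine.
if shutil.which("trivy"):
    subprocess.run(cmd, check=False)
```

The same argv-building pattern extends to `trivy config` or `trivy fs` targets, since, as noted later in the episode, the subcommands share the same structure.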
But then obviously, if you want to have something more automated, if you want to have something that does more for you in the background, that's when you want to move to an enterprise solution and shift to something like Aqua Security Enterprise Platform that actually provides you with the automated way of mitigating vulnerabilities, where you don't have to know much about it and it just gives you the solution and provides you with a PR with the updates that you need in your infrastructure-as-code configurations to mitigate the vulnerability [unintelligible 00:15:52]?Corey: I think that's probably a very fair answer because, let's be serious, when you're running a bank or you're someone for whom security matters—and yes, yes, I know, security should matter for everyone, but let's be serious, I care a little bit less about the security impact of, for example, I don't know, my Twitter for Pets nonsense, than I do a dating site where people are not out about their orientation or whatnot. Like, there is a world of difference between the security concerns there. “Oh, no, you might be able to shitpost as me if you compromise my lasttweetinaws.com Twitter client that I put out there for folks to use.” Okay, great. That is not the end of the world compared to other stuff.By the time you're talking about things that are critically important, yeah, you want to spend money on this, and you want to have an actual full-on security team. But open-source tools like this are terrific for folks who are just getting started or they're building something for fun themselves and, as it turns out, don't have a full security budget for their weird late-night project. I think that there's a beautiful, I guess, spectrum, as far as what level of investment you can make into security.
And it's nice to see the innovation continue happening in the space.Anaïs: And you just mentioned that dedicated security companies likely have a research team that's deploying honeypots and seeing what happens to them, right? Like, how are attackers using different vulnerabilities and misconfigurations and what can be done to mitigate them. And that ultimately translates into the configurations of the open-source tool as well. So, if you're using, for instance, a security scanner that doesn't have an enterprise company with a research team behind it, then you might have different input into the data of that security scanner than if you do, right? So, these are, like, additional considerations that you might want to take when choosing a scanner. And that also obviously depends on what scanning you want to do, on the size of your company, and similar, right?Corey: This episode is sponsored in part by our friend EnterpriseDB. EnterpriseDB has been powering enterprise applications with PostgreSQL for 15 years. And now EnterpriseDB has you covered wherever you deploy PostgreSQL: on-premises, private cloud, and they just announced a fully-managed service on AWS and Azure called BigAnimal, all one word. Don't leave managing your database to your cloud vendor because they're too busy launching another half-dozen managed databases to focus on any one of them that they didn't build themselves. Instead, work with the experts over at EnterpriseDB. They can save you time and money, they can even help you migrate legacy applications—including Oracle—to the cloud. To learn more, try BigAnimal for free. Go to biganimal.com/snark, and tell them Corey sent you.Corey: Something that I do find fairly interesting is that you started off, as you say, doing DevRel in the open-source blockchain world, then you went to work as an SRE, and then went back to doing DevRel-style work.
What got you into SRE and what got you out of SRE, other than the obvious, having worked in SRE myself, of being unhappy all the time? I kid, but what was it that got you into that space and then out of it?Anaïs: Yeah. Yeah, but no, it's a great question. And it, I guess, also shaped my perspective on different tools and, like, the user experience of different tools. But ultimately, I first worked in the cloud-native space for an enterprise tool as a developer advocate. And I did not like the experience of working for a paid solution. Doing developer advocacy for it, it felt wrong in a lot of ways. A lot of times you were required to do marketing work in those situations.And that kind of got me out of developer advocacy into SRE work. And now I was working partially or mainly as an SRE, and then on the side, I was doing some presentations in developer advocacy. However, that split didn't quite work, either. And I realized that the value that I add to a project is really the way I convey information, which I can't do if I'm busy fixing the infrastructure, right? I can't convey as much information about how the infrastructure has been fixed as I can if I'm working with an engineering team and doing developer advocacy, solely developer advocacy, within the engineering team.So, how I ultimately got back into developer advocacy was just simply my manager at Aqua Security, Itay, reaching out and telling me that he had a role available and asking if I wanted to join his team. And it was open-source-focused. Given that I started my career working for several years in the open-source space, working with engineers, contributing to open-source tools, it was kind of what I wanted to go back to, what I really enjoy doing. And yeah, that's how that came about [laugh].Corey: For me, I found that I enjoy aspects of the technology part, but I find I enjoy talking to people way more.
And for me, the gratifying moment that keeps me going, believe it or not, is not necessarily helping giant companies spend slightly less money on another giant company. It's watching people suddenly understand something they didn't before, it's watching the light go on in their eyes. And that's been addictive to me for a long time. I've also found that the best way for me to learn something is to teach someone else.I mean, the way I learned Git was that I foolishly wound up proposing a talk, “Terrible Ideas in Git”—we'll teach it by counterexample—four months before the talk. And they accepted it, and crap, I'd better learn enough Git to give this talk effectively. I don't recommend this because if you miss the deadline, I checked, they will not move the conference for you. But there really is something to be said for watching someone learn something by way of teaching it to them.Anaïs: It's actually a common strategy for a lot of developer advocates, making up a talk and then waiting to see whether or not it will get accepted, [laugh] and once it gets accepted, that's when you start learning the tool and trying to figure it out. Now, it's not a good strategy, obviously, to do that because people can easily tell that you just did that for a conference. And—Corey: Sounds to me, like, you need to get better at bluffing. I kid.Anaïs: [laugh].Corey: I kid. Don't bluff your way through conference talks as a general rule. It tends not to go well. [laugh].Anaïs: No. It's a bad idea. It's a really bad idea. And so, I ultimately started learning the technologies or, like, the different tools and projects in the cloud-native space. And there are lots, if you look at the CNCF landscape, right? But just trying to talk myself through them on my YouTube channel. So, my early videos on my channel, it's very much me looking for the first time at somebody's documentation and not making any sense out of it.
I mean, I guess I'm always reminded of that Tom Hanks movie from my childhood, Big, where he wakes up—the kid wakes up as an adult one day, goes to work, and bluffs his way into working at a toy company. He's in a management meeting and they're showing their new toy they're going to put out there and he's, “I don't get it.” Everyone looks at him like how dare you say it? And, “I don't get it. What's fun about this?” Because he's a kid.And he winds up getting promoted to vice president because, wow, someone pointed out the obvious thing. And so often, it feels like with a tool or a product, be it open-source or enterprise, my experience of it when I try to use the thing is clearly something different from that of the person who developed it. And very often it's that I don't see the same things or think of the problem space the same way that the developers did, but also very often—and I don't mean to call anyone in particular out here—it's a symptom of a terrible user interface or user experience.Anaïs: What you've just said, a lot of times, it's just about saying the thing that nobody dares to say or nobody has thought of before, and that obviously gets you further, more easily, [laugh] than repeating what other people have already mentioned, right? And a lot of what you see a lot of times in these—also in open-source projects, but I think even more in closed-source enterprise organizations, is that people just repeat whatever everybody else is saying in the room, right? You don't have that as much in the open-source world because you have more input, or easier input, in public than you do otherwise, but it still happens. I mean, people are highly similar to each other. If you're contributing to the same project, you probably have a similar background, similar expertise, similar interests, and that will get you to think in a similar way.
So, if there's somebody like a high school student maybe, somebody who just graduated, somebody from a completely different industry who's looking at those tools for the first time, it's like, “Okay, I know what I'm supposed to do, but I don't understand why I should use this tool for that.” And just pointing that out gets you a response, most of the time. [laugh].Corey: I use Twitter and YouTube. And obviously, I bias more for short, pithy comments that are dripping in sarcasm, whereas in a long-form video, you can talk a lot more about what you're seeing. But the problem I have with bad user experience, particularly bad developer experience, is that when it happens to me—and I know, at a baseline level, that I am reasonably competent in technical spaces—but when I encounter a bad interface, my immediate instinctive reaction is, “Oh, I'm dumb. And this thing is for smart people.” And that is never, ever true, except maybe with quantum computing. Great, awesome. The Hello World tutorial for that stuff is a PhD from Berkeley. Good luck if you can get into that. But here in the real world where the rest of us play, it's just a bad developer experience, but my instinctive reaction is that there's stuff I don't know, and I'm not good enough to use this thing. And I get very upset about that.
It's so exciting, right?” But [laugh] ultimately, if nobody can use it, or if most of the people, 99% of the people, who try it for the first time have a bad experience, it makes them feel uncomfortable or feel any negative emotion, then really you're approaching it from the wrong perspective, right?Corey: That's very apt. I think so much of whether people stick with something long enough to learn it and find the sharp edges has to do with what their experience looks like. I mean, back when I was more or less useless when it comes to anything that looked like programming—because I was a sysadmin type—I started contributing to SaltStack. And what was amazing about that was Tom Hatch, the creator of the project, had this pattern that he kept up for way too long, where whenever anyone submitted an issue, he said, “Great, well, how about you fix it with a patch?” And people were like, “Well, I'm not good at programming.” He's like, “That's okay. No one is. Try it and we'll see.”And he accepted every patch and then immediately, you'd see another patch come in ten minutes later that fixed the problems in your patch. But it was the most welcoming and encouraging experience, and I'm not saying that's a good workflow for an open-source maintainer, but he still remains one of the best humans I know, just from that perspective alone.Anaïs: That's amazing. I think it's really about pointing out that there are different ways of doing open-source [laugh] and there is no one way to go about it. So, it's really about—I mean, it's about the community, ultimately. That's what it boils down to: you are dependent, as an open-source project, on the community, so what is the best experience that you can give them? If that's something that you want to and can invest in, then yeah [laugh] that's probably the best outcome for everybody.Corey: I do have one more question, specifically around things that are more timely.
Now, taking a quick look at Trivy and recent features, it seems like you've just now—now-ish—started supporting cloud scanning as well. Previously, it was effectively, “Oh, this scans configuration and containers. Okay, great.” Now, you're targeting actually scanning cloud providers themselves. What does this change and what brought you to this place, as someone who very happily does not deal with AWS?Anaïs: Yeah, totally. So, I just started using AWS, specifically to showcase this feature. So, if you look at the Aqua Open Source YouTube channel, you will find several tutorials that show you how to use that feature, among others.Now, what I mentioned earlier in the podcast already is that Trivy is really versatile; it allows you to scan different aspects of your stack at different stages of your development lifecycle. And that's made possible because Trivy is ultimately using different open-source projects under the hood. For example, if you want to scan your infrastructure-as-code misconfigurations, it's using a tool called tfsec, specifically for Terraform. And then other tools for other scanning, for other security scanning. Now, we have—or had; it's probably going to be deprecated—a tool called CloudSploit in the Aqua open-source project suite.Now, the functionality that CloudSploit was providing is going to get converted to become part of Trivy, so everything scanning-related is going to become part of Trivy. And really, once you understand how Trivy works (all of the CLI commands in Trivy have exactly the same structure), it's really easy to scan everything from container images, to infrastructure-as-code, to generating SBOMs, to scanning, now, your cloud infrastructure as well. Trivy can scan any of your AWS services for misconfigurations, and it's using basically the AWS client under the hood to connect with the services of everything you have set up there, and then give you the list of misconfigurations.
And once it has done the scan, you can then drill down further into the different aspects of your misconfigurations without performing the entire scan again, since you likely have lots and lots of resources, so you wouldn't want to scan them all again every time you perform the scan, right. So, once something has been scanned, Trivy will know whether the resource changed or not; if it hasn't, it won't scan it again. That's the same way that in-cluster scanning works right now. Once a container image has been scanned for vulnerabilities, it won't scan the same container image again because that would just waste time. [laugh]. So yeah, do check it out. It's our most recent feature, and it's going to come out also for the other cloud providers out there. But we're starting with AWS and this kind of forced me to finally [laugh] look at it for the sake of it. But I'm not going to be happy. [laugh].Corey: No, I don't think anyone is. Every time I see on a resume that someone says, “Oh, I'm an expert in AWS,” it's, “No you're not.” They have 400-some-odd services now. We crossed the point long ago where I can very convincingly talk about AWS services that do not exist to Amazonians and not get called out for it because who in the world knows what they run? And half of their services sound like something I made up to be funny, but they're very real. It's wild to me that it is as sprawling as it is and apparently continues to work as a viable business.But no one knows all of it and everyone feels confused, lost, and overwhelmed every time they look at the AWS console. This has been my entire career in life for the last six years, and I still feel that way. So, I'm sure everyone else does, too.Anaïs: And this is how misconfigurations happen, right? You're confused about what you're actually supposed to do and how you're supposed to do it.
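The skip-unchanged-resources behaviour described above boils down to caching scan results by content digest: if the bytes hash the same, serve the stored result instead of rescanning. A minimal sketch of that idea, not Trivy's actual implementation:

```python
import hashlib

class ScanCache:
    """Skip re-scanning artifacts whose content hasn't changed, in the
    spirit of the caching described above (the real thing is more involved)."""

    def __init__(self, scanner):
        self.scanner = scanner    # callable: bytes -> scan result
        self.results = {}         # sha256 digest -> cached result
        self.scans_performed = 0

    def scan(self, artifact: bytes):
        digest = hashlib.sha256(artifact).hexdigest()
        if digest not in self.results:
            # Unseen content: do the (expensive) scan once and remember it.
            self.scans_performed += 1
            self.results[digest] = self.scanner(artifact)
        return self.results[digest]

cache = ScanCache(lambda blob: f"{len(blob)} bytes scanned")
print(cache.scan(b"layer-1"))   # real scan
print(cache.scan(b"layer-1"))   # identical content: served from cache
print(cache.scans_performed)    # only one scan actually ran
```

Keying on the content digest rather than the name is what makes this safe: a retagged but unchanged image hits the cache, while any byte-level change produces a new digest and forces a fresh scan.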
And that's, for example, with all the access rights in Google Cloud, something that I'm very familiar with, that completely overwhelms you and you get super frustrated by, and you don't even know what you give access to. It's like, if you've ever had to configure Discord user roles, it's a similar disaster. You will not know which user has access to what. They kind of changed it and tried to improve it over the past year, but it's a similar issue that you face in cloud providers, just on a much larger scale, not just in one chat channel. [laugh]. So.Corey: I think that is probably a fair place to leave it. I really want to thank you for spending as much time with me as you have talking about the trials and travails of, well, this industry, for lack of a better term. If people want to learn more, where's the best place to find you?Anaïs: So, I have a weekly DevOps newsletter on my blog, which is anaisurl—like, how you spell U-R-L—and then dot com. anaisurl.com. That's where I have all the links to my different channels, to all of the resources that are published where you can find out more as well. So, that's probably the best place. Yeah.Corey: And we will, of course, put a link to that in the show notes. I really want to thank you for being as generous with your time as you have been. Thank you.Anaïs: Thank you for having me. It was great.Corey: Anaïs, open-source developer advocate at Aqua Security. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment that I will never see because it's buried under a whole bunch of minor or false-positive vulnerability reports.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group.
We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
Watch the live stream: Watch on YouTube
About the show
Sponsored by Microsoft for Startups Founders Hub. Special guest: Seth Larson
Brian #1: Test your packages and wheels
I've been building some wheels the last couple of weeks with various tools:
- flit, flit-core, and flit build
- hatch, hatchling, and hatch build
- setuptools, build_meta, and python -m build
There are a few projects I've used to make sure my projects are in good shape:
- wheel-inspect - inspect wheels from Python code through the inspect_wheel() function, which converts to JSON, or on the command line with wheel2json
- check-wheel-contents - a linter for wheels
- tox - easily test the building, installation, and running of a package locally. I actually start here, then utilize the other two tools.
Should have been obvious, but it wasn't to me: projects saved in git (such as on GitHub) don't keep wheels in git (this was obvious). When installing from git using pip install git+https://path/to/git/repo.git, your local pip will run the packaging backend to build the wheel before installing. Yet another way to test packaging.
Michael #2: The Jupyter+git problem is now solved
Jupyter notebooks don't work with git by default (they inherently have meaningless conflicts). With nbdev2, the Jupyter+git problem has been totally solved. It uses a set of hooks which provide clean git diffs, solve most git conflicts automatically, and ensure that any remaining conflicts can be resolved entirely within the standard Jupyter notebook environment. The techniques used to make the merge driver work are quite fascinating.
Seth #3: Help us test system trust stores in Python
A package aiming to replace certifi, called “truststore”: use system trust stores for HTTPS instead of a static list of certificates. The problem truststore is solving usually manifests in corporate networks: “unable to get local issuer certificate”. Experimental support has been added to pip to prove the implementation. Users can try out the functionality and report issues.
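Brian's wheel-inspection tip can be illustrated with the standard library alone, since a wheel is just a zip file with a conventional layout. This stand-in linter is far cruder than check-wheel-contents or wheel-inspect, and the "wheel" it inspects is a hand-built fake:

```python
import io
import zipfile

# Build a minimal wheel-shaped zip in memory. A real wheel is produced by a
# build backend (flit, hatchling, setuptools); this stand-in only exists so
# the inspection below has something to look at.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("demo/__init__.py", "__version__ = '0.1.0'\n")
    zf.writestr("demo-0.1.0.dist-info/METADATA", "Name: demo\nVersion: 0.1.0\n")
    zf.writestr("demo-0.1.0.dist-info/RECORD", "")
    zf.writestr("demo/tests/__init__.py", "")  # oops: tests shipped in the wheel

def lint_wheel(data: bytes):
    """Flag a couple of common wheel-contents mistakes, loosely in the
    spirit of check-wheel-contents (which checks far more than this)."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        names = zf.namelist()
    problems = []
    if not any(n.endswith(".dist-info/METADATA") for n in names):
        problems.append("missing METADATA")
    problems.extend(f"test code shipped: {n}" for n in names if "/tests/" in n)
    return problems

for problem in lint_wheel(buf.getvalue()):
    print(problem)
```

Running the real linters against a wheel built by `python -m build` (or installed via `pip install git+…`, which builds one locally) catches exactly this class of packaging slip before it reaches PyPI.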
Brian #4: Making plots in your terminal with plotext
Bob Belderbos wrote a tutorial on using plotext (that's one t in the middle). With the rise of CLI usage, plots are a nice addition. Bob's plot is great, but check out the options in the plotext docs: lots-o-plots, streaming data, images, subplots. So fun.
Michael #5: jinja2-fragments
Carson from HTMX (see podcast and course) wrote about template fragments. My jinja_partials project sorta fulfills this, but not really. I had a nice discussion about this with Sergi Pons Freixes, who uses jinja_partials. He created jinja2-fragments.
Seth #6: SLSA 3 Generic Builder for GitHub Actions GA
Supply-chain Levels for Software Artifacts, or SLSA (“salsa”): tools to attest to and verify “provenance” of artifacts, i.e., “where it came from”. Prove cryptographically that artifacts are built from a specific GitHub repository, commit, or tag. Another future defense against stolen PyPI credentials/accounts. Generic builder means you can sign anything, like wheels/sdists.
Extras
Brian: Bring your pytest books to PyBay if you want them signed; I'm only bringing a small number. I'll be presenting “Sharing is Caring - pytest fixture edition” at 3:05 and “Experts Panel on Testing in Python” at 7:00, and be a zombie on my 8 am flight back unless I can change my reservation. That's this weekend, Sat Sept 10, in SF.
Michael: Heroku announces plans to eliminate free plans. Banned paywalls. PyPI phisher identified: Actor Phishing PyPI Users Identified, and Actors behind PyPI supply chain attack have been active since late 2021. Major Python CVE: CVE-2020-10735: Prevent DoS by large int <-> str conversions.
Seth: Pyxel, retro game engine for Python, v1.8.0 added experimental web support with WASM.
Joke: Dev just after work
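Seth's SLSA item rests on a simple primitive: comparing an artifact's cryptographic digest against the digest recorded in its provenance. Real SLSA verification also checks a signature over the provenance document and its builder identity; this sketch covers only the digest-comparison step:

```python
import hashlib

def verify_artifact(artifact: bytes, attested_sha256: str) -> bool:
    """Check that an artifact's sha256 digest matches the digest recorded
    in its provenance. Only one step of real SLSA verification, which also
    validates a signature over the provenance itself."""
    return hashlib.sha256(artifact).hexdigest() == attested_sha256

# Simulate a build step publishing a digest alongside its artifact.
wheel = b"fake wheel bytes"
attested = hashlib.sha256(wheel).hexdigest()

print(verify_artifact(wheel, attested))                 # untampered: True
print(verify_artifact(wheel + b"tampered", attested))   # modified: False
```

This is also why provenance helps against stolen PyPI credentials: an attacker who uploads a swapped artifact cannot make its digest match the one attested by the trusted builder.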
Ashley Redwood is the owner of Trap cardio and the Elite Nutrition RVA wellness centre. In the full episode, Ashley shares how she became known as the queen of trap cardio, what led her to create her wellness space, and how her business has impacted her community. She also addresses mindset and dieting, as well as providing tips on how to make your business unique. The CLI podcast is available on Apple Podcasts, Spotify, iTunes, YouTube, Google Play, Anchor, and on your favourite podcast platforms. Click the link above to listen to the full episode! Listen, Subscribe, Review & Share
Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community. This episode is sponsored by Honeybadger - combining error monitoring, uptime monitoring and check-in monitoring into a single, easy to use platform and making you a DevOps hero.
Show links
- Laravel 9.24 released
- Nagios
- Grafana
- Nginx Amplify
- Laravel 9.25 released
- Profile your Laravel application with Xhprof
- Email scheduler package for Laravel
- Zero hassle CLI application with Laravel Zero
- JSON API resources in Laravel
- How I develop applications with Laravel
- Event sourcing in Laravel
- Detect slow queries before they hit your production database
Episode #255 In today's episode of Clicks and Bricks Podcast, Ken interviews Jasveen Kaur, the founder of Clime DAO. Ken and Jasveen discuss web3 and how Clime DAO is providing a revolutionary way for businesses to reduce their carbon footprint. About Jasveen: Currently building Clime DAO, which is a Web3 protocol. Clime DAO allows web3 developers, Individual or consumer or household (ICH) users, Environmentalists/Climate Advisors, and Green Champions to participate in carbon footprint reduction through the CLIM token (NFT, ERC-721) and CLI token (ERC-20). Her recent stint with Robert Bosch includes building a Web3 SaaS-based Supply Chain Platform (built on Quorum) and the product Autotrace. She built the platform and product from scratch and experienced the complete product lifecycle, from creating product strategy to feature identification to product development. Prior experience with Fidelity includes supporting on-prem proprietary financial services applications, working directly with multiple North American clients on product implementation, and participating in multiple internal hackathons and a short internal rotation, including using AI/ML to solve business problems. Overall, she has 15+ years of experience creating B2B SaaS-based product and platform strategies. Successful in creating product vision and converting it into a prioritized quarterly roadmap. Led two products from Discovery and Product Market Fit to early growth stages. Experienced with using data to make data-driven decisions and emerging technologies such as Blockchain, AI, and IoT to create unique solutions. Well-versed in implementing Agile product development through cross-functional, geographically dispersed teams. Exposure to many verticals/domains such as Supply Chain, Automotive, Manufacturing, Mobility, Health Care, and Retirement Wealth.
Contact Clime DAO: https://www.climedao.com Contact Ken: inlink.com/ken email@example.com Text: 314-370-2871 #GetToWork Connect with Us: Instagram: https://www.instagram.com/clicksandbrickspodcast/ Facebook: https://www.facebook.com/clicksandbrickspodcast/ YouTube: https://www.youtube.com/c/ClicksBricksPodcast Website: https://clickandbrickspodcast.com #businesspodcast #founderstories #entrepreneurship #entrepreneur Learn more about your ad choices. Visit megaphone.fm/adchoices