Podcasts about OIDC

  • 29 PODCASTS
  • 41 EPISODES
  • 36m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • May 20, 2025 LATEST

POPULARITY (trend chart, 2017–2024)


Best podcasts about OIDC

Latest podcast episodes about OIDC

Oracle University Podcast
Oracle GoldenGate 23ai Security Strategies

May 20, 2025 · 16:13


GoldenGate 23ai takes security seriously, and this episode unpacks everything you need to know. GoldenGate expert Nick Wagner breaks down how authentication, access roles, and encryption protect your data.   Learn how GoldenGate integrates with identity providers, secures communication, and keeps passwords out of storage. Understand how trail files work, why they only store committed data, and how recovery processes prevent data loss.   Whether you manage replication or just want to tighten security, this episode gives you the details to lock things down without slowing operations.   Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.  Nikita: Welcome, everyone! This is our fourth episode on Oracle GoldenGate 23ai. Last week, we discussed the terminology, different processes and what they do, and the architecture of the product at a high level. Today, we have Nick Wagner back with us to talk about the security strategies of GoldenGate. 00:56 Lois: As you know by now, Nick is a Senior Director of Product Management for GoldenGate at Oracle. He's played a key role as one of the product designers behind the latest version of GoldenGate. Hi Nick! Thank you for joining us again. Can you tell us how GoldenGate takes care of data security? Nick: So GoldenGate authentication and authorization is done in a couple of different ways. First, we have user credentials for GoldenGate for not only the source and target databases, but also for GoldenGate itself. We have integration with third-party identity management products, and everything that GoldenGate does can be secured. 01:32 Nikita: And we must have some access roles, right? Nick: There's four roles built into the GoldenGate product. You have your security role, administrator, operator, and user. They're all hierarchical. The most important one is the security user. This user is going to be the one that provides the administrative tasks. This user is able to actually create additional users and assign roles within the product. So do not lose this password and this user is extremely important. You probably don't want to use this security user as your everyday user. That would be your administrator. The administrator role is able to perform all administrative tasks within GoldenGate. So not only can they go in and create new extracts, create new replicats, create new distribution services, but they can also start and stop them. And that's where the operator role is and the user role. So the operator role allows you to go in and start/stop processes, but you can't create any new ones, which is kind of important. 
So this user would be the one that could go in and suspend activity. They could restart activity. But they can't actually add objects to replication. The user role is really a read-only role. They can come in. They can see what's going on. They can look at the log files. They can look at the alerts. They can look at all the watches and see exactly what GoldenGate is doing. But they're unable to make any changes to the product itself. 02:54 Lois: You mentioned the roles are hierarchical in nature. What does that mean? Nick: So anything that the user role does can be done by the operator. Anything that the operator and user roles can do can be done by the administrator. And anything that the user, operator, and administrator roles do can be done by the security role. 03:11 Lois: Ok. So, is there a single sign-on available for GoldenGate? Nick: We also have a password plugin for GoldenGate Connections. A lot of customers have asked for integration with whatever their single sign-on utility is, and so GoldenGate now has that with GoldenGate 23ai. So these are customer-created entities. So, we have some examples that you can use in our documentation on how to set up an identity provider or a third-party identity provider with GoldenGate. And this allows you to ensure that your corporate standards are met. As we started looking into this, as we started designing it, every single customer wanted something different. And so instead of trying to meet the needs for every customer and every possible combination of security credentials, we want you to be able to design it the way you need it. The passwords are never stored. They're only retrieved from the identity provider by the plugin itself. 04:05 Nikita: That's a pretty important security aspect…that when it's time to authenticate a user, we go to the identity provider. Nick: We're going to connect in and see if that password is matching. And only then do we use it. And as soon as we detect that it's matched, that password is removed. And then for the extract and replicats themselves, you can also use it for the database, data source, and data target connections, as well as for the GoldenGate users. So, it is a full-featured plugin. So, our identity provider plugin works with IAM as well as OAM. These are your standard identity manager authentication methods. The standard one is OAuth 2, as well as OIDC. And any Identity Manager that uses that is able to integrate with GoldenGate. 04:52 Lois: And how does this work? Nick: The way that it works is pretty straightforward. Once the user logs into the database, we're going to hand off authentication to the identity provider. Once the identity provider has validated that user's identity and their credentials, then it comes back to GoldenGate and says that user is able to log in to either GoldenGate or the application or the database. Once the user is logged in, we get that confirmation that's been sent out and they can continue working through GoldenGate. So, it's very straightforward on how it works. There's also a nice little UI that will help set up each additional user within those systems. All the communication is also secured as well. So any communication done through any of the GoldenGate services is encrypted using HTTPS. All the REST calls themselves are all done using HTTPS as well. All the data protection calls and all the communication across the network when we send data across a distribution service is encrypted using a secure WebSocket. 
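The identity provider handoff Nick describes is standard OAuth 2.0/OIDC: the application never stores the password, it only verifies a token issued by the provider. As a rough illustration of that verification step (a minimal Python sketch, not GoldenGate's actual plugin API; the issuer and client ID below are hypothetical):

    import jwt            # PyJWT
    import requests

    ISSUER = "https://idp.example.com"     # hypothetical identity provider
    CLIENT_ID = "goldengate-demo"          # hypothetical client/audience

    def validate_id_token(id_token: str) -> dict:
        # Standard OIDC discovery: fetch the provider's metadata and signing keys.
        config = requests.get(f"{ISSUER}/.well-known/openid-configuration", timeout=5).json()
        jwks_client = jwt.PyJWKClient(config["jwks_uri"])
        signing_key = jwks_client.get_signing_key_from_jwt(id_token)
        # Verify signature, issuer, audience, and expiry; no password ever touches disk.
        return jwt.decode(id_token, signing_key.key, algorithms=["RS256"],
                          audience=CLIENT_ID, issuer=ISSUER)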
And there's also trail file encryption at the operating system level for data at REST. So, this really gives you the full level of encryption for customers that need that high-end security. GoldenGate does have an option for FIPS 140-2 compliance as well. So that's even a further step for most of those customers. 06:12 Nikita: That's impressive! Because we want to maintain the highest security standards, right? Especially when dealing with sensitive information. I now want to move on to trail files. In our last episode, we briefly spoke about how they serve as logs that record and track changes made to data. But what more can you tell us about them, Nick? Nick: There's two different processes that write to the trail files. The extract process will write to the trail file and the receiver service will write to the trail file. The extract process is going to write to the trail file as it's pulling data out of that source database. Now, the extract process is controlled by a parameter file, that says, hey, here's the exact changes that I'm going to be pulling out. Here's the tables. Here's the rows that I want. As it's pulling that data out and writing it to the trail files, it's ensuring that those trail files have enough information so that the replicat process can actually construct a SQL statement and apply that change to that target platform. And so there's a lot of ways to change what's actually stored in those trail files and how it's handled. The trail files can also be used for initial loads. So when we do the initial load through GoldenGate, we can grab and write out the data for those tables, and that excludes the change data. So initial loads is pulling the data directly from the tables themselves, whereas ongoing replication is pulling it from the transaction logs. 07:38 Lois: But do we need to worry about rollbacks? Nick: Our trail files contain committed data only and all data is sequential. So this is two important things. Because it contains committed data only, we don't need to worry about rollbacks. We also don't need to worry about position within that trail file because we know all data is sequential. And so as we're reading through the trail file, we know that anything that's written in a prior location in that trial file was committed prior to something else. And as we get into the recovery aspects of GoldenGate, this will all make a lot more sense. 08:13 Lois: Before we do that, can you tell us about the naming of trail files? Nick: The trail files as far as naming, because these do reside on the operating system, you start with a two-letter trail file abbreviation and then a nine-digit sequential value. So, you almost look at it as like an archive log from Oracle, where we have a prefix and then an affix, which is numeric. Same kind of thing. So, we have our two-letter, in this case, an ab, and then we have a nine-digit number. 08:47 Transform the way you work with Oracle Database 23ai! This cutting-edge technology brings the power of AI directly to your data, making it easier to build powerful applications and manage critical workloads. Want to learn more about Database 23ai? Visit mylearn.oracle.com to pick from our range of courses and enroll today! 09:12 Nikita: Welcome back! Ok, Nick. Let's get into the GoldenGate recovery process. Nick: When we start looking at the GoldenGate recovery process, it essentially makes GoldenGate kind of point-in-time like. 
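The trail file naming convention mentioned above (a two-letter prefix followed by a nine-digit sequence number) is easy to picture with a small sketch; the "ab" prefix comes from Nick's example, everything else is illustrative:

    def trail_file_name(prefix: str, seqno: int) -> str:
        # Two-letter prefix plus a zero-padded nine-digit sequence number,
        # e.g. trail_file_name("ab", 6) -> "ab000000006".
        if len(prefix) != 2:
            raise ValueError("trail file prefix is two letters")
        return f"{prefix}{seqno:09d}"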
So on that source database, you have your extract process that's going to be capturing data from the transaction logs. In the case of Oracle, the Oracle Database is actually going to be reading those transaction logs from us and passing the change records directly to GoldenGate. We call them an LCR, Logical Change Record. And so the integrated extract and GoldenGate, the extract portion tells the database, hey, I'm now going to be interested in the following list of tables. And it gives a list of tables to that internal component, the log mining engine within the database. And it says, OK, I'm now pulling data for those tables and I'm going to send you those table changes. And so as the extract process gets sent those changes, it's going to have checkpoint information. So not only does it know where it was pulling data from out of that source database, but what it's also writing to the trail file. The trail files themselves are all sequential and they have only committed data, as we talked about earlier. The distribution service has checkpoint information that says, hey, I know where I'm reading from in the previous trail file, and I know what I've sent across the network. The receiver service is the same thing. It knows what it's receiving, as well as what it's written to the trail file and the target system. The replicat also has a checkpoint. It knows where it's reading from in the trail file, and then it knows what it's been applying into that target database.  This is where things start to become a little complicated. Our replicat process in most cases are parallel, so it'll have multiple threads applying data into that target database. Each of those threads is applying different transactions. And because of the way that the parallelism works in the replicat process, you can actually get situations where one replicat thread might be applying a transaction higher than another thread. And so you can eliminate that sequential or serial aspect of it, and we can get very high throughput speeds to the replicat. But it means that the checkpoint needs to be kind of smart enough to know how to rebuild itself if something fails. 11:32 Lois: Ok, sorry Nick, but can you go through that again? Maybe we can work backwards this time?  Nick: If the replicat process fails, when it comes back up, it's going to look to its checkpoint tables inside that target database. These checkpoint tables keep track of where each thread was at when it crashed. And so when the replicat process restarts, it goes, oh, I was applying these threads at this location in these SCNs. It'll then go and read from the trail file and say, hey, let me rebuild that data and it only applies transactions that it hasn't applied yet to that target system. There is a synchronized replicat command as well that will tell a crashed replicat to say, hey, bring all your threads up to the same high watermark. It does that process automatically as it restarts and continues normal replication. But there is an option to do it just by itself too. So that's how the replicat kind of repairs and recovers itself. It'll simply look at the trail files. Now, let's say that the replicat crashed, and it goes to read from the trail files when it restarts and that trail profile is missing. It'll actually communicate to the distribution, or excuse me, to the receiver service and say, hey, receiver service, I don't have this trail file. Can you bring it back for me? 
And the receiver service will communicate downstream and say, hey, distribution service, I need you to resend me trail find number 6. And so the distribution service will resend that trail file so that the replicat can reprocess it. So it's often nice to have redundant environments with GoldenGate so we can have those trail files kind of around for availability. 13:13 Nikita: What if one of these files gets corrupted? Nick: If one of those trail files is corrupt, let's say that a trail file on the target site became corrupt and the replicat can't read from it for one reason or another. Simply stop the replicat process, delete the corrupt trail file, restart the replicat process, and now it's going to rebuild that trail file from scratch based on the information from the source GoldenGate environment. And so it's very recoverable. Handles it all very well. 13:40 Nikita: And can the extract process bounce back in the same way? Nick: The extract process can also recover in a similar way. So if the extract process crashes, when it restarts itself, there's a number of things that it does. The first thing is it has to rebuild any open transactions. So it keeps all sorts of checkpoint information about the oldest transaction that it's keeping track of, any open transactions that haven't been committed, and any other transactions that have been committed that it's already written to the trail file. So as it's reprocessing that data, it knows exactly what it's committed to trail and what hasn't been committed. And there's a number of ways that it does this.  There's two main components here. One of them is called bounded recovery. Bounded recovery will allow you to set a time limit on transactions that span a certain length of time that they'll actually get flushed out to disk on that GoldenGate Hub. And that way it'll reduce the amount of time it takes GoldenGate to restart the extract process. And the other component is cache manager. Cache manager stores uncommitted transactions. And so it's a very elegant way of rebuilding itself from any kind of failure. You can also set up restart profiles so that if any process does crash, the GoldenGate service manager can automatically restart that service an x number of times across y time span. So if I say, hey, if my extract crashes, then attempt to restart it 100 times every 5 seconds. So there's a lot of things that you can do there to make it really nice and automatic repair itself and automatically resilient.  15:18 Lois: Well, that brings us to the end of this episode. Thank you, Nick, for going through the security strategies and recovery processes in such detail. Next week, we'll look at the installation of GoldenGate. Nikita: And if you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course. Until next time, this is Nikita Abraham… Lois: And Lois Houston signing off! 15:44 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
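The restart profiles Nick describes near the end (retry a crashed process up to 100 times, every 5 seconds) amount to a bounded retry loop. A minimal sketch of that idea, not GoldenGate's actual service manager:

    import subprocess
    import time

    MAX_RETRIES = 100   # "attempt to restart it 100 times"
    RETRY_DELAY = 5     # "...every 5 seconds"

    def run_with_restart_profile(cmd: list[str]) -> int:
        returncode = 0
        for attempt in range(1, MAX_RETRIES + 1):
            returncode = subprocess.run(cmd).returncode
            if returncode == 0:
                return 0    # clean exit, nothing to restart
            print(f"process exited with {returncode} (attempt {attempt}); restarting in {RETRY_DELAY}s")
            time.sleep(RETRY_DELAY)
        return returncode   # give up once the retry budget is spent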

Surveillance Report
Q&A: What Social Media Do We Use?

Mar 20, 2025 · 33:10


Q&A218: What social media apps are we currently using the most? Questions about virtual machines and fingerprinting, as well as our recommendations for Android operating systems. Join our next Q&A on Patreon: https://www.patreon.com/collection/415684?view=expanded or XMR Chat: https://xmrchat.com/surveillancepod Welcome to the Surveillance Report Q&A - featuring Techlore & The New Oil answering your questions about privacy and security. ❤️ Support us on Patreon: https://www.patreon.com/surveillancepod

PANCast™
How to Set up SP-Initiated SSO in Prisma Cloud (SaaS) with OIDC using Okta (IdP)

Feb 20, 2025 · 6:38


This episode talks about how to set up SP-Initiated SSO in Prisma Cloud using OIDC with Okta as the Identity Provider.
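As background for the episode: in SP-initiated SSO the service provider (Prisma Cloud here) builds the OIDC authorization request and redirects the browser to the IdP (Okta). A rough sketch of what that request looks like, using a placeholder Okta domain, client ID, and redirect URI rather than a real Prisma Cloud configuration:

    from urllib.parse import urlencode

    # Placeholders only; substitute your own Okta org and app registration.
    OKTA_ISSUER = "https://acme.okta.com/oauth2/default"
    params = {
        "client_id": "0oaEXAMPLEclientid",
        "response_type": "code",
        "scope": "openid profile email",
        "redirect_uri": "https://sso.example.com/callback",
        "state": "random-anti-csrf-value",
    }
    authorize_url = f"{OKTA_ISSUER}/v1/authorize?{urlencode(params)}"
    print(authorize_url)   # the SP redirects the user's browser here to start SSO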

Open Source Security Podcast
Modern day authentication with Marc Boorshtein

Feb 3, 2025 · 26:17


In this discussion with Tremolo Security CTO Marc Boorshtein, we explore what modern day Single Sign-On (SSO) looks like. Everyone likes to talk about zero trust, but how does that work? We talk about some of the history of authentication that got us here, and some technical details on how you should be implementing authentication into your application. We finish up with some passkey details and realize every authentication discussion really just turns into complaining how hard identity is. The blog post for this episode can be found at https://opensourcesecurity.io/2025/2025-02-modern_day_authentication_with_marc_boorshtein/

Datacenter Technical Deep Dives
GitHub Actions: Tips, Tricks, and More

Jan 9, 2025


Welcome to the New Year and the first episode of the year: We are excited to be talking with Developer Advocate Jessica Deen about GitHub Actions and how you can use them to automate your workflows! 00:00 - Intro 05:49 - What are we going to talk about? 08:23 - Reusable Workflows 14:21 - Job Summaries 18:45 - Matrix Jobs 34:58 - Artifact Attestations 41:47 - OIDC 50:47 - Additional Resources How to find Jessica: Twitter: @jldeen Bluesky: @jldeen.dev Website: jessicadeen.com GitHub: github.com/jldeen Show Resources: Keylight CLI: github.com/jldeen/keylight-cli Excalidraw: https://excalidraw.com/ Learn GitHub Actions: https://github.com/features/actions Actions Marketplace: https://github.com/marketplace?type=actions GitHub Actions Docs: https://docs.github.com/en/actions
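The OIDC chapter refers to GitHub Actions' short-lived identity tokens for authenticating to cloud providers without stored secrets. Inside a job that grants the id-token: write permission, the runner exposes a request URL and bearer token through environment variables; a hedged sketch of fetching such a token (the audience value is just an example):

    import os
    import requests

    def fetch_github_oidc_token(audience: str = "sts.amazonaws.com") -> str:
        # These variables are only set when the workflow grants id-token: write.
        url = os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"]
        bearer = os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]
        resp = requests.get(f"{url}&audience={audience}",
                            headers={"Authorization": f"bearer {bearer}"}, timeout=10)
        resp.raise_for_status()
        return resp.json()["value"]   # a short-lived JWT the cloud provider can verify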

KuppingerCole Analysts
Building CIAM With Open Standards

Dec 11, 2024 · 13:40


In this videocast, Tom Bruggeman from DPG Media shares how his team tackled the challenges of user authentication in a fast-changing media landscape. He highlights the role of open standards like OAuth and OIDC and explains how Authlete helped create a seamless and secure user experience. Tom also offers insights into future plans, including efforts to enhance user privacy and explore data wallet solutions.

KuppingerCole Analysts Videos
Building CIAM With Open Standards

Dec 11, 2024 · 13:40


In this videocast, Tom Bruggeman from DPG Media shares how his team tackled the challenges of user authentication in a fast-changing media landscape. He highlights the role of open standards like OAuth and OIDC and explains how Authlete helped create a seamless and secure user experience. Tom also offers insights into future plans, including efforts to enhance user privacy and explore data wallet solutions.

Crypto Hipster Podcast
How to Best Empower Users to Manage Their Non-Custodial Wallets Intuitively, with Zhen Yu Yong @ Web3Auth

Nov 4, 2024 · 33:15


Zhen Yu, CEO & Co-Founder, Web3Auth. Zhen Yu Yong is the CEO and co-founder of Web3Auth, the leading non-custodial auth infrastructure that enables Web3 wallets and applications to provide a seamless user login experience to both mainstream and native Web3 users. Prior to Web3Auth, Zhen worked on various Ethereum Foundation projects as a researcher on off-chain scalability, where he built one of the first cross-chain bridges, The Peace Bridge, between ETH and ETC. He met Vitalik Buterin face-to-face in 2016 and then decided a "decentralized computer" made sense. He was previously a firefighter in the Singapore army and started learning programming on his days off. He studied finance at Singapore Management University. Twitter | LinkedIn. About Web3Auth: Web3Auth is the leading Wallet-as-a-Service (WaaS) provider that empowers every user to manage a non-custodial wallet intuitively. It leverages enterprise-grade Multi-Party Computation and Account Abstraction tooling, alongside social logins, biometrics, OIDC, and FIDO, for a familiar yet seamless user experience. Web3Auth works with Fortune 500 brands (including NBCUniversal, Fox.com, McDonald's), leading Asian conglomerates (SK Planet, Square Enix), and Web3 pioneers like Trust Wallet, Metamask, Sky Mavis, Kukai, and Skyweaver, among others. To date, it is proud to be supporting thousands of Web3 projects with more than 20 million monthly users. The organization is growing beyond Series A and is backed by Sequoia Capital, Union Square Ventures, Binance, and more.
 Website | Twitter | Discord | Blog | Docs | Github --- Support this podcast: https://podcasters.spotify.com/pod/show/crypto-hipster-podcast/support

AWS Bites
134. Eliminate the IAM User

Nov 1, 2024 · 28:15


In this episode, we discuss why IAM users and long-lived credentials are dangerous and should be avoided. We share war stories of compromised credentials and overprivileged access. We then explore solutions like centralizing IAM users, using tools like AWS Vault for temporary credentials, integrating with AWS SSO, and fully eliminating IAM users when possible.
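The episode's core recommendation, replacing long-lived IAM user keys with temporary credentials, comes down to asking STS for short-lived keys. A minimal boto3 sketch, assuming a hypothetical role ARN:

    import boto3

    # Hypothetical role; in practice this comes from IAM Identity Center or your tooling.
    ROLE_ARN = "arn:aws:iam::123456789012:role/DeployRole"

    sts = boto3.client("sts")
    creds = sts.assume_role(RoleArn=ROLE_ARN,
                            RoleSessionName="short-lived-session",
                            DurationSeconds=3600)["Credentials"]

    # These keys expire automatically, unlike IAM user access keys.
    session = boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(session.client("sts").get_caller_identity()["Arn"])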

AWS Morning Brief
AWS Prime Day Cost ~$135 million

Aug 19, 2024 · 3:32


AWS Morning Brief for the week of Monday, August 19th with Mike Julian. Links:
  • How AWS powered Prime Day 2024 for record-breaking sales
  • New capabilities for Amazon EC2 On-Demand Capacity Reservations: Split, Move, and Modify additional attributes
  • Amazon QuickSight now includes nested filters
  • Announcing Amazon S3 Express One Zone storage class support on Amazon EMR
  • Amazon Verified Permissions improves support for OIDC identity providers
  • AWS announces support for Cost Allocation Tags on AWS Transit Gateway
  • Announcing Karpenter 1.0
  • Visualize enterprise IP address management and planning with CIDR map

Les Cast Codeurs Podcast
LCC 313 - 313 CCL

Jun 15, 2024 · 79:45


Katia, Guillaume, Emmanuel et Antonio discutent Kotlin, Micronaut, Spring Boot, Quarkus, Langchain4j, LLMs en Java, builds reproductible et la question AMA du jour, comment fait-on carrière de dev à 40 ans ? Enregistré le 14 juin 2024 Téléchargement de l'épisode LesCastCodeurs-Episode-313.mp3 News Langages Android avec Kotlin Multiplatform our Flutter avec Dart ? https://developers.googleblog.com/en/making-development-across-platforms-easier-for-developers/ Des licenciements ont continué chez Google et l'équipe Flutter/Dart comme plein d'autres ont été touchées, mais sur les réseaux sociaux les gens ont pensé que Google désinvestissait dans Flutter et Dart. Par ailleurs, côté Android, ils poussent plutôt du côté de Kotlin et KMP, mais naturellement aussi les gens se sont demandé si Google avait pris parti pour pousser plus Kotlin/KMP plutôt que Flutter/Dart. Pour essayer de mieux faire comprendre aux développeurs l'intérêt des deux plateformes, et leurs avantages et inconvénients, les directeurs des deux plateformes ont rédigé un article commun. Si l'on souhaite une expérience plus proche du hardware et des dernières nouveautés d'Android, et d'avoir aussi une UI/UX vraiment native Android, mieux vaut aller du côté de Kotlin/KMP. Si l'on souhaite par contre une expérience multiplateforme Web, mobile, desktop avec une UX commune cross-plateforme, avec également le partage de business logic à partir d'une même base de code, Flutter et Dart sont plus adaptés. Recap de KotlinConf https://x.com/gz_k/status/1793887581433971083?s=46&t=C18cckWlfukmsB_Fx0FfxQ RPC multiplatform la pres Grow with the flow montrant la reecriture en kotlin plus simple que des solutions complexes ailleurs power-assert pour ecrire des tests Kotlin 2.0 et les evolutions majeures Kotlin multiplatforme mainteant stable Kotlin Compose Multiplatform continue a amturer Retour d'experience de la migration d'android jetpack vers Kotlin Multiplatform use cases de coroutines et scope Librairies Quarkus veut aller dans une fondation https://quarkus.io/blog/quarkus-in-a-foundation/ ameliorer l'adoption (encore plus), ameliorer la transparence, et la collaboration, encourager la participatiopn multi vendeur Premiere etape : une gouvernance plus overte Deuxieme etape: bouger dans uen foundation Echange avec la communaute sur la proposition et les fondations cibles Des criteres pour al foudnation (notamment la rapidite de delivery Quarkus 3.11 https://quarkus.io/blog/quarkus-3-11-0-released/ Websocket.next en cours Dev services pour observabilite (grafana, jaegel, open telemetry extension infinispan cache #38448 - Observability extensions - Dev Services, Dev Resources, LGTM #39836 - Infinispan Cache Extension #40309 - WebSockets Next: client endpoints #40534 - WebSockets Next: initial version of security integration #40273 - Allow quarkus:run to launch Dev Services #40539 - Support for OIDC session expired page #40600 - Introduce OidcRedirectFilter LangChain4j 0.31 est sorti https://github.com/langchain4j/langchain4j/releases/tag/0.31.0 Recherche Web pour le RAG avec Google et Tavily RAG avec les bases de données SQL (expérimental) Récupération des resources remontées par le RAG lorsque AiServices retourne un Result Observabilité LLM pour OpenAI pour être notifié des requêtes, réponses et erreurs Intégration de Cohere (embedding), Jina (embedding et re-ranking scoring), Azuere CosmosDB comme embedding store Mise à jour de Gemini avec le parallel function calling et les instructions système Spring Boot 3.3.0 est sorti 
https://spring.io/blog/2024/05/23/spring-boot-3-3-0-available-now support Class Data Sharing Micrometer sipport de spantag etc Amelioration Spring Security comme JwtAuthenticationCovnerter support docker compose pour les images container bitnami Virtual thread pour les websockets Support sBOM via an actuator SNI for embedded web servers une nouvelle doc via antora Micronaut 4.5 est sortie https://github.com/micronaut-projects/micronaut-platform/releases/tag/v4.5.0 Le serveur basé sur Netty inclus la détection d'opération bloquante et les modules l'utilisant indiqueront à l'utilisateur quand certaines opérations peuvent être redirigée plutôt sur un virtual thread ou dans le thread pool IO Micronaut Data inclus le support de la multitenance avec partitionnement par discriminateur pour JDBC et R2DBC Micronaut Data rajoute le pagination par curseur pour JDBC et R2DBC (important aussi pour Jakarta Data) Support des annotations Jakarta Servlet pour configurer par exemple les servelet filters Support virtual thread et HTTP/2 Un nouveau module JSON Schema pour générer des JSON Schemas pour les records Java Un nouveau module Source Gen pour faire de la génération de source pour Java et Kotlin cross-language Un nouveau module Guice pour importer des modules Guice existants Web Angular 18 est sorti https://blog.angular.dev/angular-v18-is-now-available-e79d5ac0affe Support expérimental pour la détection de changement sans zone Angular.dev est désormais le nouveau site pour les développeurs Angular Material 3, les “deferrable views”, le “built-in control flow” sont maintenant stables et intègrent une série d'améliorations Améliorations du rendu côté serveur telles que le support de l'hydratation i18n, un meilleur débogage, le support de l'hydratation dans Angular Material, et la event replay qui utilise la même bibliothèque que Google Search. Data et Intelligence Artificielle Une version pure Java du LLM Llama3 de Meta https://github.com/mukel/llama3.java/tree/main utilise la future API Vector de Java JLama, un moteur d‘exécution de LLM en Java avec l'api vector https://www.infoq.com/news/2024/05/jlama-llm-inference-java/ basé sur llama.c qui est un moteur d'inference de LLM (l'execution des requetes) jlama implementé avec vector APIs et PamanaTensorOperations plusisures alternatives (native binding, iml0ementation pure en java, scala, kotlin) Target Speech Hearing https://www.infoq.com/news/2024/05/target-speech-hearing/ Nouveau algo Deep Learning de l'Université de Washington permet d'écouter une seule personne de ton choix et effacer tout le bruit autour le système nécessite que la personne portant les écouteurs appuie sur un bouton tout en regardant quelqu'un parler ou simplement en le fixant pendant trois à cinq secondes Permet à un modèle d'apprendre les schémas vocaux du locuteur et de s'y attacher pour pouvoir les restituer à l'auditeur, même s'il se déplace et cesse de regarder cette personne. Selon les chercheurs, cela constitue une avancée significative par rapport aux écouteurs à réduction de bruit existants, qui peuvent annuler efficacement tous les sons, mais ne peuvent pas sélectionner les locuteurs en fonction de leurs caractéristiques vocales. Actuellement, le système ne peut enregistrer qu'un seul locuteur à la fois. Une autre limitation est que l'enregistrement ne réussira que si aucune autre voix forte ne provient de la même direction. 
L'équipe a mis en open source leur code et leur jeu de données afin de faciliter les travaux de recherche futurs pour améliorer l'audition de la parole cible. Outillage Utiliser LLM pour migrer du framework de testing https://www.infoq.com/news/2024/06/slack-automatic-test-conversion/ Slack a migré 15.000 tests de Enzyme à React Testing Library avec un succès de 80% Migration nécessaire pour le manque de support de Enzyme pour React 18 L'équipe a essayé d'automatiser la conversion avec des transformations AST, mais n'a atteint que 45 % de succès à cause de la complexité des méthodes d'Enzyme et du manque d'accès aux informations contextuelles du DOM. L'équipe a utilisé Claude 2.1 pour la conversion, avec des taux de réussite variant de 40 % à 60 %, les résultats dépendant largement de la complexité des tâches. Suite aux résultats insatisfaisants, l'équipe a décidé d'observer comment les développeurs humains abordaient la conversion des tests unitaires. Les développeurs humains utilisaient leurs connaissances sur React, Enzyme et RTL, ainsi que le contexte du rendu et les conversions AST de l'outil initial pour mieux convertir les tests unitaires. Finalement les ingénieurs de Slack ont combiné transformations AST et LLM en intégrant des composants React rendus et des conversions AST dans les invites, atteignant un taux de réussite de 80 % démontrant ainsi la complémentarité de ces technologies. Claude 2.1 est un modèle de langage de grande taille (LLM) annoncé en novembre 2023 par Anthropic. Il inclut une fenêtre contextuelle de 200 000 tokens, des réductions significatives des taux d'hallucination du modèle, des invites système et permet l'utilisation d'outils. Depuis, Anthropic a introduit la famille de modèles Claude 3, composée de trois modèles distincts, avec des capacités multimodales et une compréhension contextuelle améliorée. Un arbre de syntaxe abstraite (AST) est une représentation arborescente de la structure syntaxique abstraite du code source écrit dans un langage de programmation. Chaque nœud de l'arbre représente une construction du code source. Un arbre de syntaxe se concentre sur la structure et le contenu nécessaires pour comprendre la fonctionnalité du code. Les AST sont couramment utilisés dans les compilateurs et les interpreters pour analyser et examiner le code, permettant diverses transformations, optimisations et traductions lors de la compilation. IDE de test de JetBrains https://blog.jetbrains.com/qa/2024/05/aqua-general-availability/ Aqua, le premier IDE conçu pour l'automatisation des tests, supporte plusieurs langages (Java, Python, JavaScript, TypeScript, Kotlin, SQL) et frameworks de tests (Selenium, Playwright, Cypress). Pourquoi ? Les tests d'applications nécessitent des compétences spécifiques. Aqua, un IDE adapté, est recommandé par les ingénieurs en automatisation des tests. Aqua propose deux plans de licence : un gratuit pour les usages non commerciaux et un payant pour les usages commerciaux. 
cam me parait un peu contre intuitif a l'heure du devops et du TDD de faire des outils dédiés et donc des equipes ou personnes dédiées Méthodologies Les 10 principes à suivre, selon le créateur de cURL, pour être un bon BDFL (Benevolent Dictator For Life) https://daniel.haxx.se/blog/2024/05/27/my-bdfl-guiding-principles/ Être ouvert et amical Livrer des produits solides comme le roc Être un leader de l'Open Source Privilégier la sécurité Fournir une documentation de premier ordre Rester indépendant Répondre rapidement Suivre l'actualité Rester à la pointe de la technologie Respecter les retours d'information Dans un vieil article de Artima, Guido Van Rossum, le créateur de Python et premier BDFL d'un projet, se remémore un échange de 1995 qui est à l'origine de ce concept https://www.artima.com/weblogs/viewpost.jsp?thread=235725 Guido Van Rossum a été le premier à endosser ce “rôle” Un site compréhensif sur les build reproductibles https://reproducible-builds.org longue doc de la definition aux méthodes pour resoudre des problèmes spécifiques Masterclass de Fabien Olicard: Le Palais Mental https://www.youtube.com/watch?v=u6wu_iY4xd8 Technique pour retenir de l'information plus longtemps que dans sa mémoire courte Les APIs web ne devraient pas rediriger HTTP vers HTTPS https://jviide.iki.fi/http-redirects grosso modo le risque majeur est d'envoyer des données confidentielles en clair sur le réseau le mieux serait de ne pas rediriger vers HTTPS, mais par contre de retourner une vraie erreur explicite notamment les clés d'API et c'est facile de ne pas le,voir vu les redirects. Sécurité Blog de GitHub sur la provenance et l'attestation https://github.blog/2024-04-30-where-does-your-software-really-come-from/ Discute les concepts de securisation de chainne d'approvisionnement de sogiciel et comment elles s'articulent entre elle. A haut niveau discute les hash pour garantir le meme fichier La signature asymetrique pour prouver que j'ai signé (e.g. le hash) et donc que je garantis. L'attenstation qui declare des faits sur un artifact attestation de provenance: source code et instructions de build (SLSA provenance) mais il faut garantir les signature avec une autorite de certification et avec des certificats a courte vide idealement, c'est sigstore MEtionne aussi The Update Framework pour s'appuyer sur cela et garantir des undates non compromis Keycloak 25 est sorti https://www.keycloak.org/2024/06/keycloak-2500-released.html Argon2 pour le hashing de mots de passe Depreciation des adaptateurs (Tomcat, servlet etc) Java 21 et depreciation de Java 17 session utilisatur persistente meme pour les instances online (pour survivre a une rotation de keycloak ameliorations autour des passkeys management et health endpoint sur un port different Et plus Demande aux cast codeurs A 40 ans, tu peux encore être codeur reconnu ? 
Conférences La liste des conférences provenant de Developers Conferences Agenda/List par Aurélie Vache et contributeurs : 12-14 juin 2024 : Rencontres R - Vannes (France) 13-14 juin 2024 : Agile Tour Toulouse - Toulouse (France) 14 juin 2024 : DevQuest - Niort (France) 18 juin 2024 : Mobilis In Mobile 2024 - Nantes (France) 18 juin 2024 : BSides Strasbourg 2024 - Strasbourg (France) 18 juin 2024 : Tech & Wine 2024 - Lyon (France) 19-20 juin 2024 : AI_dev: Open Source GenAI & ML Summit Europe - Paris (France) 19-21 juin 2024 : Devoxx Poland - Krakow (Poland) 26-28 juin 2024 : Breizhcamp 2024 - Rennes (France) 27 juin 2024 : DotJS - Paris (France) 27-28 juin 2024 : Agi Lille - Lille (France) 4-5 juillet 2024 : Sunny Tech - Montpellier (France) 8-10 juillet 2024 : Riviera DEV - Sophia Antipolis (France) 6 septembre 2024 : JUG Summer Camp - La Rochelle (France) 6-7 septembre 2024 : Agile Pays Basque - Bidart (France) 17 septembre 2024 : We Love Speed - Nantes (France) 17-18 septembre 2024 : Agile en Seine 2024 - Issy-les-Moulineaux (France) 19-20 septembre 2024 : API Platform Conference - Lille (France) & Online 25-26 septembre 2024 : PyData Paris - Paris (France) 26 septembre 2024 : Agile Tour Sophia-Antipolis 2024 - Biot (France) 2-4 octobre 2024 : Devoxx Morocco - Marrakech (Morocco) 7-11 octobre 2024 : Devoxx Belgium - Antwerp (Belgium) 8 octobre 2024 : Red Hat Summit: Connect 2024 - Paris (France) 10 octobre 2024 : Cloud Nord - Lille (France) 10-11 octobre 2024 : Volcamp - Clermont-Ferrand (France) 10-11 octobre 2024 : Forum PHP - Marne-la-Vallée (France) 11-12 octobre 2024 : SecSea2k24 - La Ciotat (France) 16 octobre 2024 : DotPy - Paris (France) 17-18 octobre 2024 : DevFest Nantes - Nantes (France) 17-18 octobre 2024 : DotAI - Paris (France) 30-31 octobre 2024 : Agile Tour Nantais 2024 - Nantes (France) 30-31 octobre 2024 : Agile Tour Bordeaux 2024 - Bordeaux (France) 31 octobre 2024-3 novembre 2024 : PyCon.FR - Strasbourg (France) 6 novembre 2024 : Master Dev De France - Paris (France) 7 novembre 2024 : DevFest Toulouse - Toulouse (France) 8 novembre 2024 : BDX I/O - Bordeaux (France) 13-14 novembre 2024 : Agile Tour Rennes 2024 - Rennes (France) 20-22 novembre 2024 : Agile Grenoble 2024 - Grenoble (France) 21 novembre 2024 : DevFest Strasbourg - Strasbourg (France) 27-28 novembre 2024 : Cloud Expo Europe - Paris (France) 28 novembre 2024 : Who Run The Tech ? - Rennes (France) 3-5 décembre 2024 : APIdays Paris - Paris (France) 4-5 décembre 2024 : DevOpsDays Paris - Paris (France) 4-5 décembre 2024 : Open Source Experience - Paris (France) 6 décembre 2024 : DevFest Dijon - Dijon (France) 22-25 janvier 2025 : SnowCamp 2025 - Grenoble (France) 16-18 avril 2025 : Devoxx France - Paris (France) Nous contacter Pour réagir à cet épisode, venez discuter sur le groupe Google https://groups.google.com/group/lescastcodeurs Contactez-nous via twitter https://twitter.com/lescastcodeurs Faire un crowdcast ou une crowdquestion Soutenez Les Cast Codeurs sur Patreon https://www.patreon.com/LesCastCodeurs Tous les épisodes et toutes les infos sur https://lescastcodeurs.com/

The Bike Shed
411: Celebrating and Recapping 2023!

Dec 19, 2023 · 38:40


Stephanie is hosting a holiday cookie swap. Joël talks about participating in thoughtbot's end-of-the-year hackathon, Ralphapalooza. We had a great year on the show! The hosts wrap up the year and discuss their favorite episodes, the articles, books, and blog posts they've read and loved, and other highlights of 2023 (projects, conferences, etc). Olive Oil Sugar Cookies With Pistachios & Lemon Glaze (https://food52.com/recipes/82228-olive-oil-sugar-cookies-recipe-with-pistachios-lemon) thoughtbot's Blog (https://thoughtbot.com/blog) Episode 398: Developing Heuristics For Writing Software (https://www.bikeshed.fm/398) Episode 374: Discrete Math (https://www.bikeshed.fm/374) Episode 405: Sandi Metz's Rules (https://www.bikeshed.fm/405) Episode 391: Learn with APPL (https://www.bikeshed.fm/391) Engineering Management for the Rest of Us (https://www.engmanagement.dev/) Confident Ruby (https://pragprog.com/titles/agcr/confident-ruby/) Working with Maybe from Elm Europe (https://www.youtube.com/watch?v=43eM4kNbb6c) Sustainable Rails Book (https://sustainable-rails.com/) Episode 368: Sustainable Web Development (https://www.bikeshed.fm/368) Domain Modeling Made Functional (https://pragprog.com/titles/swdddf/domain-modeling-made-functional/) Simplifying Tests by Extracting Side Effects (https://thoughtbot.com/blog/simplify-tests-by-extracting-side-effects) The Math Every Programmer Needs (https://www.youtube.com/watch?v=wzYYT40T8G8) Mermaid.js sequence diagrams (https://mermaid.js.org/syntax/sequenc) Sense of Belonging and Software Teams (https://www.drcathicks.com/post/sense-of-belonging-and-software-teams) Preemptive Pluralization is (Probably) Not Evil (https://www.swyx.io/preemptive-pluralization) Digging through the ashes (https://everythingchanges.us/blog/digging-through-the-ashes/) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: I am so excited to talk about this. I'm, like, literally smiling [chuckles] because I'm so pumped. Sometimes, you know, we get on to record, and I'm like, oh, I got to think of something that's new, like, my life is so boring. I have nothing to share. But today, I am excited to tell you about [chuckles] the holiday cookie swap that I'm hosting this Sunday [laughs] that I haven't been able to stop thinking about or just thinking about all the cookies that I'm going to get to eat. It's going to be my first time throwing this kind of shindig, and I'm so pleased with myself because it's such a great idea. You know, it's like, you get to share cookies, and you get to have all different types of cookies, and then people get to take them home. And I get to see all my friends. And I'm really [chuckles] looking forward to it. JOËL: I don't think I've ever been to a cookie swap event. How does that work? Everybody shows up with cookies, and then you leave with what you want? STEPHANIE: That's kind of the plan. I think it's not really a...there's no rules [laughs]. You can make it whatever you want it to be. But I'm asking everyone to bring, like, two dozen cookies. And, you know, I'm hoping for a lot of fun variety. Myself I'm planning on making these pistachio olive oil cookies with a lemon glaze and also, maybe, like, a chewy ginger cookie. 
I haven't decided if I'm going to go so extra to make two types, but we'll see. And yeah, we'll, you know, probably have some drinks and be playing Christmas music, and yeah, we'll just hang out. And I'm hoping that everyone can kind of, like, take home a little goodie bag of cookies as well because I don't think we'll be going through all of them. JOËL: Hearing you talk about this gave me an absolutely terrible idea. STEPHANIE: Terrible or terribly awesome? [laughs] JOËL: So, imagine you have the equivalent of, let's say, a LAN party. You all show up with your laptops. STEPHANIE: [laughs] JOËL: You're on a network, and then you swap browser cookies randomly. STEPHANIE: [laughs] Oh no. That would be really funny. That's a developer's take on a cookie party [laughs] if I've ever heard one. JOËL: Slightly terrifying. Now I'm just browsing, and all of a sudden, I guess I'm logged into your Facebook or something. Maybe you only swap the tracking cookies. So, I'm not actually logged into your Facebook, but I just get to see the different ad networks it would typically show you, and you would see my ads. That's maybe kind of fun or maybe terrifying, depending on what kind of ads you normally see. STEPHANIE: That's really funny. I'm thinking about how it would just be probably very misleading and confusing for those [laughs] analytics spenders, but that's totally fine, too. Might I suggest also having real cookies to munch on as well while you are enjoying [laughs] this browser cookie-swapping party? JOËL: I 100% agree. STEPHANIE: [laughs] JOËL: I'm curious: where do you stand on raisins in oatmeal cookies? STEPHANIE: Ooh. JOËL: This is a divisive question. STEPHANIE: They're fine. I'll let other people eat them. And occasionally, I will also eat an oatmeal cookie with raisins, but I much prefer if the raisins are chocolate chips [chuckles]. JOËL: That is the correct answer. STEPHANIE: [laughs] Thank you. You know, I understand that people like them. They're not for me [laughs]. JOËL: It's okay. Fans can send us hate mail about why we're wrong about oatmeal cookies. STEPHANIE: Yeah, honestly, that's something that I'm okay with being wrong about on the internet [laughs]. So, Joël, what's new in your world? JOËL: So, as of this recording, we've just recently done thoughtbot's end-of-the-year hackathon, what we call Ralphapalooza. And this is sort of a time where you kind of get to do pretty much any sort of company or programming-related activity that you want as long as...you have to pitch it and get at least two other colleagues to join you on the project, and then you've got two days to work on it. And then you can share back to the team what you've done. I was on a project where we were trying to write a lot of blog posts for the thoughtbot blog. And so, we're just kind of getting together and pitching ideas, reviewing each other's articles, writing things at a pretty intense rate for a couple of days, trying to flood the blog with articles for the next few weeks. So, if you're following the blog and as the time this episode gets released, you're like, "Wow, there's been a lot of articles from the thoughtbot blog recently," that's why. STEPHANIE: Yes, that's awesome. I love how much energy that the blog post-writing party garnered. 
Like, I was just kind of observing from afar, but it sounds like, you know, people who maybe had started posts, like, throughout the year had dedicated time and a good reason to revisit them, even if they had been, you know, kind of just, like, sitting in a draft for a while. And I think what also seemed really nice was people were just around to support, to review, and were able to make that a priority. And it was really cool to see all the blog posts that are queued up for December as a result. JOËL: People wrote some great stuff. So, I'm excited to see all of those come out. I think we've got pretty much a blog post every day coming out through almost the end of December. So, it's exciting to see that much content created. STEPHANIE: Yeah. If our listeners want more thoughtbot content, check out our blog. JOËL: So, as mentioned, we're recording this at the end of the year. And I thought it might be fun to do a bit of a retrospective on what this year has been like for you and I, Stephanie, both in terms of different work that we've done, the learnings we've had, but maybe also look back a little bit on 2023 for The Bike Shed and what that looked like. STEPHANIE: Yes. I really enjoyed thinking about my year and kind of just reveling and having been doing this podcast for over a year now. And yeah, I'm excited to look back a little bit on both things we have mentioned on the show before and things maybe we haven't. To start, I'm wondering if you want to talk a little bit about some of our favorite episodes. JOËL: Favorite episodes, yes. So, I've got a couple that are among my favorites. We did a lot of good episodes this year. I really liked them. But I really appreciated the episode we did on heuristics, that's Episode 398, where we got to talk a little bit about what goes into a good heuristic, how we tend to come up with them. A lot of those, like, guidelines and best practices that you hear people talk about in the software world and how to make your own but then also how to deal with the ones you hear from others in the software community. So, I think that was an episode that the idea, on the surface, seemed really basic, and then we went pretty deep with it. And that was really fun. I think a second one that I really enjoyed was also the one that I did with Sara Jackson as a guest, talking about discrete math and its relevance to the day-to-day work that we do. That's Episode 374. We just had a lot of fun with that. I think that's a topic that more developers, more web developers, would benefit from just getting a little bit more discrete math in their lives. And also, there's a clip in there where Sara reinterprets a classic marketing jingle with some discrete math terms in there instead. It was a lot of fun. So, we'd recommend people checking that one out. STEPHANIE: Nice. Yes. I also loved those episodes. The heuristics one was really great. I'm glad you mentioned it because one of my favorite episodes is kind of along a similar vein. It's one of the more recent ones that we did. It's Episode 405, where we did a bit of a retro on Sandi Metz' Rules For Developers. And those essentially are heuristics, right? And we got to kind of be like, hey, these are someone else's heuristics. How do we feel about them? Have we embodied them ourselves? Do we follow them? What parts do we take or leave? And I just remember having a really enjoyable conversation with you about that. You and I have kind of treated this podcast a little bit like our own two-person book club [laughs]. 
So, it felt a little bit like that, right? Where we were kind of responding to, you know, something that we both have read up on, or tried, or whatever. So, that was a good one. Another one of my favorite episodes was Episode 391: Learn with APPL [laughs], in which we basically developed our own learning framework, or actually, credit goes to former Bike Shed host, Steph Viccari, who came up with this fun, little acronym to talk about different things that we all kind of need in our work lives to be fulfilled. Our APPL stands for Adventure, Passion, Profit, and Low risk. And that one was really fun just because it was, like, the opposite of what I just described where we're not discussing someone else's work but discovered our own thing out of, you know, these conversations that we have on the show, conversations we have with our co-workers. And yeah, I'm trying to make it a thing, so I'm plugging it again [laughs]. JOËL: I did really like that episode. One, I think, you know, this APPL framework is a little bit playful, which makes it fun. But also, I think digging into it really gives some insight on the different aspects that are relevant when planning out further growth or where you want to invest your sort of professional development time. And so, breaking down those four elements led to some really insightful conversation around where do I want to invest time learning in the next year? STEPHANIE: Yeah, absolutely. JOËL: By the way, we're mentioning a bunch of our favorite things, some past episodes, and we'll be talking about a lot of other types of resources. We will be linking all of these in the show notes. So, for any of our listeners who are like, "Oh, I wonder what is that thing they mentioned," there's going to be a giant list that you can check out. STEPHANIE: Yeah. I love whenever we are able to put out an episode with a long list of things [laughs]. JOËL: It's one of the fun things that we get to do is like, oh yeah, we referenced all these things. And there is this sort of, like, further reading, more threads to pull on for people who might be interested. So, you'd mentioned, Stephanie, that, you know, sometimes we kind of treat this as our own little mini, like, two-person book club. I know that you're a voracious reader, and you've mentioned so many books over the course of the year. Do you have maybe one or two books that have been kind of your favorites or that have stood out to you over 2023? STEPHANIE: I do. I went back through my reading list in preparation for this episode and wanted to call out the couple of books that I finished. And I think I have, you know, I mentioned I was reading them along the way. But now I get to kind of see how having read them influenced my work life this past year, which is pretty cool. So, one of them is Engineering Management for the Rest of Us by Sarah Drasner. And that's actually one that really stuck with me, even though I'm not a manager; I don't have any plans to become a manager. But one thing that she talks about early on is this idea of having a shared value system. And you can have that at the company level, right? You have your kind of corporate values. You can have that at the team level with this smaller group of people that you get to know better and kind of form relationships with. And then also, part of that is, like, knowing your individual values. And having alignment in all three of those tiers is really important in being a functioning and fulfilled team, I think. 
And that is something that I don't think was really spelled out very explicitly for me before, but it was helpful in framing, like, past work experiences, where maybe I, like, didn't have that alignment and now identify why. And it has helped me this year as I think about my client work, too, and kind of where I sit from that perspective and helps me realize like, oh, like, this is why I'm feeling this way, and this is why it's not quite working. And, like, what do I do about it now? So, I really enjoyed that. JOËL: Would you recommend this book to others who are maybe not considering a management path? STEPHANIE: Yeah. JOËL: So, even if you're staying in the IC track, at least for now, you think that's a really powerful book for other people. STEPHANIE: Yeah, I would say so. You know, maybe not, like, all of it, but there's definitely parts that, you know, she's writing for the rest of us, like, all of us maybe not necessarily natural born leaders who knew that that's kind of what we wanted. And so, I can see how people, you know, who are uncertain or maybe even, like, really clearly, like, "I don't think that's for me," being able to get something out of, like, either those lessons in leadership or just to feel a bit, like, validated [laughs] about the type of work that they aren't interested in. Another book that I want to plug real quick is Confident Ruby by Avdi Grimm. That one was one I referenced a lot this year, working with newer developers especially. And it actually provided a good heuristic [laughs] for me to talk about areas that we could improve code during code review. I think that wasn't really vocabulary that I'd used, you know, saying, like, "Hey, how confident is this code? How confident is this method and what it will receive and what it's returning?" And I remember, like, several conversations that I ended up having on my teams about, like, return types as a result and them having learned, like, a new way to view their code, and I thought that was really cool. JOËL: I mean, learning to deal with uncertainty and nil in Ruby or maybe even, like, error states is just such a core part of writing software. I feel like this is something that I almost wish everyone was sort of assigned maybe, like, a year into their programming career because, you know, I think the first year there's just so many things you've got to learn, right? Like basic programming and, like, all these things. But, like, you're looking maybe I can start going a little bit deeper into some topic. I think that some topic, like, pretty high up, would be building a mental model for how to deal with uncertainty because it's such a source of bugs. And Avdi Grimm's book, Confident Ruby, is...I would put that, yeah, definitely on a recommended reading list for everybody. STEPHANIE: Yeah, I agree. And I think that's why I found myself, you know, then recommending it to other people on my team and kind of having something I can point to. And that was really helpful in the kind of mentorship that I wanted to offer. JOËL: I did a deep dive into uncertainty and edge cases in programs several years back when I was getting into Elm. And I was giving a talk at Elm Europe about how Elm handles uncertainty, which is a little bit different than how Ruby does it. But a lot of the underlying concepts are very similar in terms of quarantining uncertainty and pushing it to the edges and things like that. Trying to write code that is more confident that is definitely a term that I used. 
And so Confident Ruby ended up being a little bit of an inspiration for my own journey there, and then, eventually, the talk that I gave that summarized my learnings there. STEPHANIE: Nice. Do you have any reading recommendations or books that stood out to you this year? JOËL: So, I've been reading two technical books kind of in tandem this year. I have not finished either of them, but I have been enjoying them. One is Sustainable Rails by David Bryant Copeland. We had an episode at the beginning of this year where we talked a little bit about our initial impressions from, I think, the first chapter of the book. But I really love that vocabulary of writing Ruby and Rails code, in particular, in a way that is sustainable for a team. And that premise, I think, just gives a really powerful mindset to approach structuring Rails apps. And the other book that I've been reading is Domain Modeling Made Functional, so kind of looking at some domain-driven design ideas. But most of the literature is typically written to an object-oriented audience, so taking a look at it from more of a functional programming perspective has been really interesting. And then I've been, weirdly enough, taking some of those ideas and translating back into the object-oriented world to apply to code I'm writing in Ruby. I think that has been a very useful exercise. STEPHANIE: That's awesome. And it's weird and cool how all those things end up converging, right? And exploring different paradigms really just lets you develop more insight into wherever you're working. JOËL: Sometimes the sort of conversion step that you have to do, that translation, can be a good tool for kind of solidifying learnings or better understanding. So, I'm doing this sort of deep learning thing where I'm taking notes as I go along. And those notes are typically around, what other concepts can I connect ideas in the book? So, I'll be reading and say, okay, on page 150, he mentioned this concept. This reminds me of this idea from TDD. I could see this applying in a different way in an object-oriented world. And interestingly, if you apply this, it sort of converges on maybe single responsibility or whatever other OO principle. And that's a really interesting connection. I always love it when you do see sort of two or three different angles converging together on the same idea. STEPHANIE: Yeah, absolutely. JOËL: I've written a blog post, I think, two years ago around how some theory from functional programming sort of OO best practices and then TDD all kind of converge on sort of the same approach to designing software. So, you can sort of go from either direction, and you kind of end in the same place or sort of end up rediscovering principles from the other two. We'll link that in the show notes. But that's something that I found was really exciting. It didn't directly come from this book because, again, I wrote this a couple of years ago. But it is always fun when you're exploring two or three different paradigms, and you find a convergence. It really deepens your understanding of what's happening. STEPHANIE: Yeah, absolutely. I like what you said about how this book is different because it is making that connection between things that maybe seem less related on the surface. Like you're saying, there's other literature written about how domain modeling and object-oriented programming make more sense a little bit more together. 
But it is that, like, bringing in of different schools of thought that can lead to a lot of really interesting discovery about those foundational concepts. JOËL: I feel like dabbling in other paradigms and in other languages has made me a better Ruby developer and a better OO programmer, a lot of the work I've done in Elm. This book that I'm reading is written in F#. And all these things I can kind of bring back, and I think, have made me a better Ruby developer. Have you had any experiences like that? STEPHANIE: Yeah. I think I've talked a little bit about it on the show before, but I can't exactly recall. There were times when my exploration in static typing ended up giving me that different mindset in terms of the next time I was coding in Ruby after being in TypeScript for a while, I was, like, thinking in types a lot more, and I think maybe swung a little bit towards, like, not wanting to metaprogram as much [laughs]. But I think that it was a useful, like you said, exercise sometimes, too, and just, like, doing that conversion or translating in your head to see more options available to you, and then deciding where to go from there. So, we've talked a bit about technical books that we've read. And now I kind of want to get into some in-person highlights for the year because you and I are both on the conference circuit and had some fun trips this year. JOËL: Yeah. So, I spoke at RailsConf this spring. I gave a talk on discrete math and how it is relevant in day-to-day work for developers, actually inspired by that Bike Shed episode that I mentioned earlier. So, that was kind of fun, turning a Bike Shed episode into a conference talk. And then just recently, I was at RubyConf in San Diego, and I gave a talk there around time. We often talk about time as a single quantity, but there's some subtle distinctions, so the difference between a moment in time versus a duration and some of the math that happens around that. And I gave a few sort of visual mental models to help people keep track of that. As of this recording, the talk is not out yet, so we're not going to be able to link to it. But if you're listening to this later in 2024, you can probably just Google RubyConf "Which Time Is It?" That's the name of the talk. And you'll be able to find it. STEPHANIE: Awesome. So, as someone who is giving talks and attending conferences every year, I'm wondering, was this year particularly different in any way? Was there something that you've, like, experienced or felt differently community-wise in 2023? JOËL: Conferences still feel a little bit smaller than they were pre-COVID. I think they are still bouncing back. But there's definitely an energy that's there that's nice to have on the conference scene. I don't know, have you experienced something similar? STEPHANIE: I think I know what you're talking about where, you know, there was that time when we weren't really meeting in person. And so, now we're still kind of riding that wave of, like, getting together again and being able to celebrate and have fun in that way. I, this year, got to speak at Blue Ridge Ruby in June. And that was a first-time regional conference. And so, that was, I think, something I had noticed, too, is the emergence of regional conferences as being more viable options after not having conferences for a few years. And as a regional conference, it was even smaller than the bigger national Ruby Central conferences. I really enjoyed the intimacy of that, where it was just a single track. 
So, everyone was watching talks together and then was on breaks together, so you could mingle. There was no FOMO of like, oh, like, I can't make this talk because I want to watch this other one. And that was kind of nice because I could, like, ask anyone, "What did you think of, like, X talk or like the one that we just kind of came out of and had that shared experience?" That was really great. And I got to go tubing for the first time [laughs] in Asheville. That's a memory, but I am still thinking about that as we get into winter. I'm like, oh yeah, the glorious days of summer [laughs] when I was getting to float down a lazy river. JOËL: Nice. I wasn't sure if this was floating down a lazy river on an inner tube or if this was someone takes you out on a lake with a speed boat, and you're getting pulled. STEPHANIE: [laughs] That's true. As a person who likes to relax [laughs], I definitely prefer that kind of tubing over a speed boat [laughs]. JOËL: What was the topic of your talk? STEPHANIE: So, I got to give my talk about nonviolent communication in pair programming for a second time. And that was also my first time giving a talk for a second time [laughs]. That was cool, too, because I got to revisit something and go deeper and kind of integrate even more experiences I had. I just kind of realized that even if you produce content once, like, there's always ways to deepen it or shape it a little better, kind of, you know, just continually improving it and as you learn more and as you get more experience and change. JOËL: Yeah. I've never given a talk twice, and now you've got me wondering if that's something I should do. Because making a bespoke talk for every conference is a lot of work, and it might be nice to be able to use it more than once. Especially I think for some of the regional conferences, there might be some value there in people who might not be able to go to a big national conference but would still like to see your talk live. Having a mix of maybe original content and then content that is sort of being reshared is probably a great combo for a regional conference. STEPHANIE: Yeah, definitely. That's actually a really good idea, yeah, to just be able to have more people see that content and access it. I like that a lot. And I think it could be really cool for you because we were just talking about all the ways that our mental models evolve the more stuff that we read and consume. And I think there's a lot of value there. One other conference that I went to this year that I just want to highlight because it was really cool that I got to do this: I went to RubyKaigi in Japan [laughs] back in the spring. And I had never gone to an international conference before, and now I'm itching to do more of that. So, it would be remiss not to mention it [laughs]. I'm definitely inspired to maybe check out some of the conferences outside of the U.S. in 2024. I think I had always been a little intimidated. I was like, oh, like, it's so far [laughs]. Do I really have, like, that good of a reason to make a trip out there? But being able to meet Rubyists from different countries and seeing how it's being used in other parts of the world, I think, made me realize that like, oh yeah, like, beyond my little bubble, there's so many cool things happening and people out there who, again, like, have that shared love of Ruby. And connecting with them was, yeah, just so new and something that I would want to do more of. 
So, another thing that we haven't yet gotten into is our actual work-work or our client work [laughs] that we do at thoughtbot for this year. Joël, I'm wondering, was there anything especially fun or anything that really stood out to you in terms of client work that you had to do this year? JOËL: So, two things come to mind that were novel for me. One is I did a Rails integration against Snowflake, the data warehouse, using an ODBC connection. We're not going through an API; we're going through this DB connection. And I never had to do that before. I also got to work with the new-ish Rails multi-database support, which actually worked quite nice. That was, I think, a great learning experience. Definitely ran into some weird edge cases, or some days, I was really frustrated. Some days, I was actually, like, digging into the source code of the C bindings of the ODBC gem. Those were not the best days. But definitely, I think, that kind of integration and then Snowflake as a technology was really interesting to explore. The other one that's been really interesting, I think, has been going much deeper into the single sign-on world. I've been doing an integration against a kind of enterprise SAML server that wants to initiate sign-in requests from their portal. And this is a bit of an alphabet soup, but the term here is IdP-initiated SSO. And so, I've been working with...it's a combination of this third-party kind of corporate SAML system, our application, which is a Rails app, and then Auth0 kind of sitting in the middle and getting all of them to talk to each other. There's a ridiculous number of redirects because we're talking SAML on one side and OIDC on the other and getting everything to line up correctly. But that's been a really fun, new set of things to learn. STEPHANIE: Yeah, that does sound complicated [laughs] just based on what you shared with me, but very cool. And I was excited to hear that you had had a good experience with the Rails multi-database part because that was another thing that I remember being...it had piqued my interest when it first came out. I hope I get to, you know, utilize that feature on a project soon because that sounds really fun. JOËL: One thing I've had to do for this SSO project is lean a lot on sequence diagrams, which are those diagrams that sort of show you, like, being redirected from different places, and, like, okay, server one talks to server two talks, to the browser. And so, when I've got so many different actors and sort of controllers being passed around everywhere, it's been hard to keep track of it in my head. And so, I've been doing a lot of these diagrams, both for myself to help understand it during development, and then also as documentation to share back with the team. And I found that Mermaid.js supports sequence diagrams as a diagram type. Long-term listeners of the show will know that I am a sucker for a good diagram. I love using Mermaid for a lot of things because it's supported. You can embed it in a lot of places, including in GitHub comments, pull requests. You can use it in various note systems like Notion or Obsidian. And you can also just generate your own on mermaid.live. And so, that's been really helpful to communicate with the rest of the team, like, "Hey, we've got this whole process where we've got 14 redirects across four different servers. Here's what it looks like. And here, like, we're getting a bug on, you know, redirect number 8 of 14. I wonder why," and then you can start a conversation around debugging that. 
STEPHANIE: Cool. I was just about to ask what tool you're using to generate your sequence diagrams. I didn't know that Mermaid supported them. So, that's really neat. JOËL: So, last year, when we kind of looked back over 2022, one thing that was really interesting that we did is we talked about what are articles that you find yourself linking to a lot that are just kind of things that maybe were on your mind or that were a big part of conversations that happened over the year? So, maybe for you, Stephanie, in 2023, what are one or two articles that you find yourself sort of constantly linking to other people? STEPHANIE: Yes. I'm excited you asked about this. One of them is an article by a person named Cat Hicks, who has a PhD in experimental psychology. She's a data scientist and social scientist. And lately, she's been doing a lot of research into the sense of belonging on software teams. And I think that's a theme that I am personally really interested in, and I think has kind of been something more people are talking about in the last few years. And she is kind of taking that maybe more squishy idea and getting numbers for it and getting statistics, and I think that's really cool. She points out belonging as, like, a different experience from just, like, happiness and fulfillment, and that really having an impact on how well a team is functioning. I got to share this with a few people who were, you know, just in that same boat of, like, trying to figure out, what are the behaviors kind of on my team that make me feel supported or not supported? And there were a lot of interesting discussions that came out of sharing this article and kind of talking about, especially in software, where we can be a little bit dogmatic. And we've kind of actually joked about it on the podcast [chuckles] before about, like, we TDD or don't TDD, or, you know, we use X tool, and that's just like what we have to do here. She writes a little bit about how that can end up, you know, not encouraging people offering, like, differing opinions and being able to feel like they have a say in kind of, like, the team's direction. And yeah, I just really enjoyed a different way of thinking about it. Joël, what about you? What are some articles you got bookmarked? [chuckles] JOËL: This year, I started using a bookmark manager, Raindrop.io. That's been nice because, for this episode, I could just look back on, what are some of my bookmarks this year? And be like, oh yeah, this is the thing that I have been using a lot. So, an article that I've been linking is an article called Preemptive Pluralization is (Probably) Not Evil. And it kind of talks a little bit about how going from code that works over a collection of two items to a collection of, you know, 20 items is very easy. But sometimes, going from one to two can be really challenging. And when are the times where you might want to preemptively make something more than one item? So, maybe using it has many association rather than it has one or making an attribute a collection rather than a single item. Controversial is not the word for it, but I think challenges a little bit of the way people typically like to write code. But across this year, I've run into multiple projects where they have been transitioning from one to many. That's been an interesting article to surface as part of those conversations. 
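(To make the idea concrete, here is a small, hypothetical sketch of what preemptive pluralization can look like. The names are invented for illustration and are not taken from the article.)

```typescript
// Hypothetical sketch: model the relationship as a collection from day one,
// even while the product only ever shows one item, so going from one to many
// later is a display change rather than a data-model migration.
interface Author {
  id: string;
  name: string;
}

interface Document {
  id: string;
  title: string;
  authors: Author[]; // plural from the start, even if length is always 1 today
}

// Call sites that only care about "the" author read the first element...
function primaryAuthor(doc: Document): Author | undefined {
  return doc.authors[0];
}

// ...while code written against the collection keeps working unchanged when a
// second author eventually shows up.
function byline(doc: Document): string {
  return doc.authors.map((author) => author.name).join(', ');
}
```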
Whether your team wants to do this preemptively or whether they want to put it off and say in classic YAGNI (You Aren't Gonna Need It) form, "We'll make it single for now, and then we'll go plural," that's a conversation for your team. But I think this article is a great way to maybe frame the conversation. STEPHANIE: Cool. Yeah, I really like that almost, like, a counterpoint to YAGNI [laughs], which I don't think I've ever heard anyone say that out loud [laughs] before. But as soon as you said preemptive pluralization is not evil, I thought about all the times that I've had to, like, write code, text in which a thing, a variable could be either one or many [laughs] things. And I was like, ooh, maybe this will solve that problem for me [laughs]. JOËL: Speaking of pluralization, I'm sure you've been linking to more than just one article this year. Do you have another one that you find yourself coming up in conversations where you've always kind of like, "Hey, dropping this link," where it's almost like your thing? STEPHANIE: Yes. And that is basically everything written by Mandy Brown [laughs], who is a work coach that I actually started working with this year. And one of the articles that really inspired me or really has been a topic of conversation among my friends and co-workers is she has a blog post called Digging Through the Ashes. And it's kind of a meditation on, like, post burnout or, like, what's next, and how we have used this word as kind of a catch-all to describe, you know, this collective sense of being just really tired or demoralized or just, like, in need of a break. And what she offers in that post is kind of, like, some suggestions about, like, how can we be more specific here and really, you know, identify what it is that you're needing so that you can change how you engage with work? Because burnout can mean just that you are bored. It can mean that you are overworked. It can mean a lot of things for different people, right? And so, I definitely don't think I'm alone [laughs] in kind of having to realize that, like, oh, these are the ways that my work is or isn't changing and, like, where do I want to go next so that I might feel more sustainable? I know that's, like, a keyword that we talked about earlier, too. And that, on one hand, is both personal but also technical, right? It, like, informs the kinds of decisions that we make around our codebase and what we are optimizing for. And yeah, it is both technical and cultural. And it's been a big theme for me this year [laughs]. JOËL: Yeah. Would you say it's safe to say that sustainability would be, if you want to, like, put a single word on your theme for the year? Would that be a fair word to put there? STEPHANIE: Yeah, I think so. Definitely discovering what that means for me and helping other people discover what that means for them, too. JOËL: I feel like we kicked off the year 2023 by having that discussion of Sustainable Rails and how different technical practices can make the work there feel sustainable. So, I think that seems to have really carried through as a theme through the year for you. So, that's really cool to have seen that. And I'm sure listeners throughout the year have heard you mention these different books and articles. Maybe you've also been able to pick up a little bit on that. So, I'm glad that we do this show because you get a little bit of, like, all the bits and pieces in the day-to-day, and then we aggregate it over a year, and you can look back. 
You can be like, "Oh yeah, I definitely see that theme in your work." STEPHANIE: Yeah, I'm glad you pointed that out. It is actually really interesting to see how something that we had talked about early, early on just had that thread throughout the year. And speaking of sustainability, we are taking a little break from the show to enjoy the holidays. We'll be off for a few weeks, and we will be back with a new Bike Shed in January. JOËL: Cheers to a new year. STEPHANIE: Yeah, cheers to a new year. Wrapping up 2023. And we will see you all in 2024. JOËL: On that note, shall we wrap up the whole year? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at tbot.io/referral. Or you can email us at referrals@thoughtbot.com with any questions.

JUG Istanbul
Java Monthly 15

JUG Istanbul

Play Episode Listen Later Nov 19, 2023 60:04


We're back with episode 15 of the Java Monthly series. We had a lot of fun recording this podcast, and we hope you enjoy listening to it just as much. Our guests: Hüseyin Akdoğan, Altuğ Bilgin Altıntaş, Özlem Güncan, Nesrin Aşan. Topics in this episode:
Java memory management (announcement of a November 23 meetup with Altuğ Bilgin Altıntaş on the finer points of memory management in Java; event link: https://t.co/pUDbIkv5X3)
How do you become a Java Champion?
New Oracle open source project released: Quicksql
Java 21
Spring Boot 3.2.0-RC2 available now
Quarkus 3.5.0 released - Java 21, OIDC enhancements (official support for Java 21)
November 9: Quarkus 3.5.1 released - maintenance release
Helidon 4 released (details)
Micronaut Framework 4.1.6 released!
Simplifying Persistence Integration with Jakarta EE Data
Babylon is #OpenJDK's newest big project
Announcement for anyone who would like to speak at JavaDay (the Call for Papers for JavaDay, which will take place in İstanbul on May 11, 2024, is now open; the deadline for submitting a proposal is the 31st of December 2023! https://www.papercall.io/javaday-2024)

Screaming in the Cloud
Learnings From A Lifelong Career in Open-Source with Amir Szekely

Screaming in the Cloud

Play Episode Listen Later Nov 7, 2023 38:47


Amir Szekely, Owner at CloudSnorkel, joins Corey on Screaming in the Cloud to discuss how he got his start in the early days of cloud and his solo project, CloudSnorkel. Throughout this conversation, Corey and Amir discuss the importance of being pragmatic when moving to the cloud, and the different approaches they see in developers from the early days of cloud to now. Amir shares what motivates him to develop open-source projects, and why he finds fulfillment in fixing bugs and operating CloudSnorkel as a one-man show.

About Amir
Amir Szekely is a cloud consultant specializing in deployment automation, AWS CDK, CloudFormation, and CI/CD. His background includes security, virtualization, and Windows development. Amir enjoys creating open-source projects like cdk-github-runners, cdk-turbo-layers, and NSIS.

Links Referenced:
CloudSnorkel: https://cloudsnorkel.com/
lasttootinaws.com: https://lasttootinaws.com
camelcamelcamel.com: https://camelcamelcamel.com
github.com/cloudsnorkel: https://github.com/cloudsnorkel
Personal website: https://kichik.com

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and this is an episode that I have been angling for for longer than you might imagine. My guest today is Amir Szekely, who's the owner at CloudSnorkel. Amir, thank you for joining me. Amir: Thanks for having me, Corey. I love being here. Corey: So, I've been using one of your open-source projects for an embarrassingly long amount of time, and for the longest time, I make the critical mistake of referring to the project itself as CloudSnorkel because that's the word that shows up in the GitHub project that I can actually see that jumps out at me. The actual name of the project within your org is cdk-github-runners if I'm not mistaken. Amir: That's real original, right? Corey: Exactly. It's like, “Oh, good, I'll just mention that, and suddenly everyone will know what I'm talking about.” But ignoring the problems of naming things well, which is a pain that everyone at AWS or who uses it knows far too well, the product is basically magic. Before I wind up basically embarrassing myself by doing a poor job of explaining what it is, how do you think about it? Amir: Well, I mean, it's a pretty simple project, which I think what makes it great as well. It creates GitHub runners with CDK. That's about it. It's in the name, and it just does that. And I really tried to make it as simple as possible and kind of learn from other projects that I've seen that are similar, and basically learn from my pain points in them. I think the reason I started is because I actually deployed CDK runners—sorry, GitHub runners—for one company, and I ended up using the Kubernetes one, right? So, GitHub in themselves, they have two projects they recommend—and not to nudge GitHub, please recommend my project one day as well—they have the Kubernetes controller and they have the Terraform deployer. And the specific client that I worked for, they wanted to use Kubernetes. And I tried to deploy it, and, Corey, I swear, I worked three days; three days to deploy the thing, which was crazy to me.
And every single step of the way, I had to go and read some documentation, figure out what I did wrong, and apparently the order the documentation was was incorrect.And I had to—I even opened tickets, and they—you know, they were rightfully like, “It's open-source project. Please contribute and fix the documentation for us.” At that point, I said, “Nah.” [laugh]. Let me create something better with CDK and I decided just to have the simplest setup possible.So usually, right, what you end up doing in these projects, you have to set up either secrets or SSM parameters, and you have to prepare the ground and you have to get your GitHub token and all those things. And that's just annoying. So, I decided to create a—Corey: So much busy work.Amir: Yes, yeah, so much busy work and so much boilerplate and so much figuring out the right way and the right order, and just annoying. So, I decided to create a setup page. I thought, “What if you can actually install it just like you install any app on GitHub,” which is the way it's supposed to be right? So, when you install cdk-github-runners—CloudSnorkel—you get an HTML page and you just click a few buttons and you tell it where to install it and it just installs it for you. And it sets the secrets and everything. And if you want to change the secret, you don't have to redeploy. You can just change the secret, right? You have to roll the token over or whatever. So, it's much, much easier to install.Corey: And I feel like I discovered this project through one of the more surreal approaches—and I had cause to revisit it a few weeks ago when I was redoing my talk for the CDK Community Day, which has since happened and people liked the talk—and I mentioned what CloudSnorkel had been doing and how I was using the runners accordingly. So, that was what I accidentally caused me to pop back up with, “Hey, I've got some issues here.” But we'll get to that. Because once upon a time, I built a Twitter client for creating threads because shitposting is my love language, I would sit and create Twitter threads in the middle of live keynote talks. Threading in the native client was always terrible, and I wanted to build something that would help me do that. So, I did.And it was up for a while. It's not anymore because I'm not paying $42,000 a month in API costs to some jackass, but it still exists in the form of lasttootinaws.com if you want to create threads on Mastodon. But after I put this out, some people complained that it was slow.To which my response was, “What do you mean? It's super fast for me in San Francisco talking to it hosted in Oregon.” But on every round trip from halfway around the world, it became a problem. So, I got it into my head that since this thing was fully stateless, other than a Lambda function being fronted via an API Gateway, that I should deploy it to every region. It didn't quite fit into a Cloudflare Worker or into one of the Edge Lambda functions that AWS has given up on, but okay, how do I deploy something to every region?And the answer is, with great difficulty because it's clear that no one was ever imagining with all those regions that anyone would use all of them. It's imagined that most customers use two or three, but customers are different, so which two or three is going to be widely varied. So, anything halfway sensible about doing deployments like this didn't work out. 
Again, because this thing was also a Lambda function and an API Gateway, it was dirt cheap, so I didn't really want to start spending stupid amounts of money doing deployment infrastructure and the rest.So okay, how do I do this? Well, GitHub Actions is awesome. It is basically what all of AWS's code offerings wish that they were. CodeBuild is sad and this was kind of great. The problem is, once you're out of the free tier, and if you're a bad developer where you do a deploy on every iteration, suddenly it starts costing for what I was doing in every region, something like a quarter of per deploy, which adds up when you're really, really bad at programming.Amir: [laugh].Corey: So, their matrix jobs are awesome, but I wanted to do some self-hosted runners. How do I do that? And I want to keep it cheap, so how do I do a self-hosted runner inside of a Lambda function? Which led me directly to you. And it was nothing short of astonishing. This was a few years ago. I seem to recall that it used to be a bit less well-architected in terms of its elegance. Did it always use step functions, for example, to wind up orchestrating these things?Amir: Yeah, so I do remember that day. We met pretty much… basically as a joke because the Lambda Runner was a joke that I did, and I posted on Twitter, and I was half-proud of my joke that starts in ten seconds, right? But yeah, no, the—I think it always used functions. I've been kind of in love with the functions for the past two years. They just—they're nice.Corey: Oh, they're magic, and AWS is so bad at telling their story. Both of those things are true.Amir: Yeah. And the API is not amazing. But like, when you get it working—and you know, you have to spend some time to get it working—it's really nice because then you have nothing to manage, ever. And they can call APIs directly now, so you don't have to even create Lambdas. It's pretty cool.Corey: And what I loved is you wind up deploying this thing to whatever account you want it to live within. What is it, the OIDC? I always get those letters in the wrong direction. OIDC, I think, is correct.Amir: I think it's OIDC, yeah.Corey: Yeah, and it winds up doing this through a secure method as opposed to just okay, now anyone with access to the project can deploy into your account, which is not ideal. And it just works. It spins up a whole bunch of these Lambda functions that are using a Docker image as the deployment environment. And yeah, all right, if effectively my CDK deploy—which is what it's doing inside of this thing—doesn't complete within 15 minutes, then it's not going to and the thing is going to break out. We've solved the halting problem. After 15 minutes, the loop will terminate. The end.But that's never been a problem, even with getting ACM certificates spun up. It completes well within that time limit. And its cost to me is effectively nothing. With one key exception: that you made the choice to use Secrets Manager to wind up storing a lot of the things it cares about instead of Parameter Store, so I think you wind up costing me—I think there's two of those different secrets, so that's 80 cents a month. Which I will be demanding in blood one of these days if I ever catch you at re:Invent.Amir: I'll buy you beer [laugh].Corey: There we go. That'll count. That'll buy, like, several months of that. That works—at re:Invent, no. The beers there are, like, $18, so that'll cover me for years. We're set.Amir: We'll split it [laugh].Corey: Exactly. Problem solved. 
But I like the elegance of it, I like how clever it is, and I want to be very clear, though, it's not just for shitposting. Because it's very configurable where, yes, you can use Lambda functions, you can use Spot Instances, you can use CodeBuild containers, you can use Fargate containers, you can use EC2 instances, and it just automatically orchestrates and adds these self-hosted runners to your account, and every build gets a pristine environment as a result. That is no small thing.Amir: Oh, and I love making things configurable. People really appreciate it I feel, you know, and gives people kind of a sense of power. But as long as you make that configuration simple enough, right, or at least the defaults good defaults, right, then, even with that power, people still don't shoot themselves in the foot and it still works really well. By the way, we just added ECS recently, which people really were asking for because it gives you the, kind of, easy option to have the runner—well, not the runner but at least the runner infrastructure staying up, right? So, you can have auto-scaling group backing ECS and then the runner can start up a lot faster. It was actually very important to other people because Lambda, as fast that it is, it's limited, and Fargate, for whatever reason, still to this day, takes a minute to start up.Corey: Yeah. What's wild to me about this is, start to finish, I hit a deploy to the main branch and it sparks the thing up, runs the deploy. Deploy itself takes a little over two minutes. And every time I do this, within three minutes of me pushing to commit, the deploy is done globally. It is lightning fast.And I know it's easy to lose yourself in the idea of this being a giant shitpost, where, oh, who's going to do deployment jobs in Lambda functions? Well, kind of a lot of us for a variety of reasons, some of which might be better than others. In my case, it was just because I was cheap, but the massive parallelization ability to do 20 simultaneous deploys in a matrix configuration that doesn't wind up smacking into rate limits everywhere, that was kind of great.Amir: Yeah, we have seen people use Lambda a lot. It's mostly for, yeah, like you said, small jobs. And the environment that they give you, it's kind of limited, so you can't actually install packages, right? There is no sudo, and you can't actually install anything unless it's in your temp directory. But still, like, just being able to run a lot of little jobs, it's really great. Yeah.Corey: And you can also make sure that there's a Docker image ready to go with the stuff that you need, just by configuring how the build works in the CDK. I will admit, I did have a couple of bug reports for you. One was kind of useful, where it was not at all clear how to do this on top of a Graviton-based Lambda function—because yeah, that was back when not everything really supported ARM architectures super well—and a couple of other times when the documentation was fairly ambiguous from my perspective, where it wasn't at all clear, what was I doing? I spent four hours trying to beat my way through it, I give up, filed an issue, went to get a cup of coffee, came back, and the answer was sitting there waiting for me because I'm not convinced you sleep.Amir: Well, I am a vampire. My last name is from the Transylvania area [laugh]. So—Corey: Excellent. Excellent.Amir: By the way, not the first time people tell me that. 
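(For readers who want to picture what that configuration looks like, here is a minimal sketch of wiring up a couple of those runner providers in a CDK app. It is a sketch only: the construct and provider names below are assumptions based on how the project is described in this conversation, not a verified copy of the library's current API. After a `cdk deploy`, the setup page Amir mentions is where the stack gets pointed at your GitHub organization, and individual workflows opt in with the matching labels, for example `runs-on: [self-hosted, lambda]`.)

```typescript
// Minimal sketch (assumed API): one GitHubRunners construct fans jobs out to
// whichever provider matches the job's runner labels.
import * as cdk from 'aws-cdk-lib';
import {
  GitHubRunners,
  LambdaRunnerProvider,
  FargateRunnerProvider,
} from '@cloudsnorkel/cdk-github-runners';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'GitHubRunnersStack');

new GitHubRunners(stack, 'Runners', {
  providers: [
    // Small, fast jobs: cheap Lambda-based runners.
    new LambdaRunnerProvider(stack, 'LambdaRunner', { labels: ['lambda'] }),
    // Jobs that need a fuller environment: Fargate-based runners.
    new FargateRunnerProvider(stack, 'FargateRunner', { labels: ['fargate'] }),
  ],
});

app.synth();
```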
But anyway [laugh].Corey: There's something to be said for getting immediate responsiveness because one of the reasons I'm always so loath to go and do a support ticket anywhere is this is going to take weeks. And then someone's going to come back with a, “I don't get it.” And try and, like, read the support portfolio to you. No, you went right into yeah, it's this. Fix it and your problem goes away. And sure enough, it did.Amir: The escalation process that some companies put you through is very frustrating. I mean, lucky for you, CloudSnorkel is a one-man show and this man loves solving bugs. So [laugh].Corey: Yeah. Do you know of anyone using it for anything that isn't ridiculous and trivial like what I'm using it for?Amir: Yeah, I have to think whether or not I can… I mean, so—okay. We have a bunch of dedicated users, right, the GitHub repo, that keep posting bugs and keep posting even patches, right, so you can tell that they're using it. I even have one sponsor, one recurring sponsor on GitHub that uses it.Corey: It's always nice when people thank you via money.Amir: Yeah. Yeah, it is very validating. I think [BLEEP] is using it, but I also don't think I can actually say it because I got it from the GitHub.Corey: It's always fun. That's the beautiful part about open-source. You don't know who's using this. You see what other things people are working on, and you never know, is one of their—is this someone's side project, is it a skunkworks thing, or God forbid, is this inside of every car going forward and no one bothered to tell me about that. That is the magic and mystery of open-source. And you've been doing open-source for longer than I have and I thought I was old. You were originally named in some of the WinAMP credits, for God's sake, that media player that really whipped the llama's ass.Amir: Oh, yeah, I started real early. I started about when I was 15, I think. I started off with Pascal or something or even Perl, and then I decided I have to learn C and I have to learn Windows API. I don't know what possessed me to do that. Win32 API is… unique [laugh].But once I created those applications for myself, right, I think there was—oh my God, do you know the—what is it called, Sherlock in macOS, right? And these days, for PowerToys, there is the equivalent of it called, I don't know, whatever that—PowerBar? That's exactly—that was that. That's a project I created as a kid. I wanted something where I can go to the Run menu of Windows when you hit Winkey R, and you can just type something and it will start it up, right?I didn't want to go to the Start menu and browse and click things. I wanted to do everything with the keyboard. So, I created something called Blazerun [laugh], which [laugh] helped you really easily create shortcuts that went into your path, right, the Windows path, so you can really easily start them from Winkey R. I don't think that anyone besides me used it, but anyway, that thing needed an installer, right? Because Windows, you got to install things. So, I ended up—Corey: Yeah, these days on Mac OS, I use Alfred for that which is kind of long in the tooth, but there's a launch bar and a bunch of other stuff for it. What I love is that if I—I can double-tap the command key and that just pops up whatever I need it to and tell the computer what to do. It feels like there's an AI play in there somewhere if people can figure out how to spend ten minutes on building AI that does something other than lets them fire their customer service staff.Amir: Oh, my God. 
Please don't fire customer service staff. AI is so bad.Corey: Yeah, when I reach out to talk to a human, I really needed a human.Amir: Yes. Like, I'm not calling you because I want to talk to a robot. I know there's a website. Leave me alone, just give me a person.Corey: Yeah. Like, you already failed to solve my problem on your website. It's person time.Amir: Exactly. Oh, my God. Anyway [laugh]. So, I had to create an installer, right, and I found it was called NSIS. So, it was a Nullsoft “SuperPiMP” installation system. Or in the future, when Justin, the guy who created Winamp and NSIS, tried to tone down a little bit, Nullsoft Scriptable Installation System. And SuperPiMP is—this is such useless history for you, right, but SuperPiMP is the next generation of PiMP which is Plug-in Mini Packager [laugh].Corey: I remember so many of the—like, these days, no one would ever name any project like that, just because it's so off-putting to people with sensibilities, but back then that was half the stuff that came out. “Oh, you don't like how this thing I built for free in the wee hours when I wasn't working at my fast food job wound up—you know, like, how I chose to name it, well, that's okay. Don't use it. Go build your own. Oh, what you're using it anyway. That's what I thought.”Amir: Yeah. The source code was filled with profanity, too. And like, I didn't care, I really did not care, but some people would complain and open bug reports and patches. And my policy was kind of like, okay if you're complaining, I'm just going to ignore you. If you're opening a patch, fine, I'm going to accept that you're—you guys want to create something that's sensible for everybody, sure.I mean, it's just source code, you know? Whatever. So yeah, I started working on that NSIS. I used it for myself and I joined the forums—and this kind of answers to your question of why I respond to things so fast, just because of the fun—I did the same when I was 15, right? I started going on the forums, you remember forums? You remember that [laugh]?Corey: Oh, yeah, back before they all became terrible and monetized.Amir: Oh, yeah. So, you know, people were using NSIS, too, and they had requests, right? They wanted. Back in the day—what was it—there was only support for 16-bit colors for the icon, so they want 32-bit colors and big colors—32—big icon, sorry, 32 pixels by 32 pixels. Remember, 32 pixels?Corey: Oh, yes. Not well, and not happily, but I remember it.Amir: Yeah. So, I started just, you know, giving people—working on that open-source and creating up a fork. It wasn't even called ‘fork' back then, but yeah, I created, like, a little fork of myself and I started adding all these features. And people were really happy, and kind of created, like, this happy cycle for myself: when people were happy, I was happy coding. And then people were happy by what I was coding. And then they were asking for more and they were getting happier, the more I responded.So, it was kind of like a serotonin cycle that made me happy and made everybody happy. So, it's like a win, win, win, win, win. And that's how I started with open-source. And eventually… NSIS—again, that installation system—got so big, like, my fork got so big, and Justin, the guy who works on WinAMP and NSIS, he had other things to deal with. You know, there's a whole history there with AOL. I'm sure you've heard all the funny stories.Corey: Oh, yes. 
In fact, one thing that—you want to talk about weird collisions of things crossing, one of the things I picked up from your bio when you finally got tired of telling me no and agreed to be on the show was that you're also one of the team who works on camelcamelcamel.com. And I keep forgetting that's one of those things that most people have no idea exists. But it's very simple: all it does is it tracks Amazon products that you tell it to and alerts you when there's a price drop on the thing that you're looking at.It's something that is useful. I try and use it for things of substance or hobbies because I feel really pathetic when I'm like, get excited emails about a price drop in toilet paper. But you know, it's very handy just to keep an idea for price history, where okay, am I actually being ripped off? Oh, they claim it's their big Amazon Deals day and this is 40% off. Let's see what camelcamelcamel has to say.Oh, surprise. They just jacked the price right beforehand and now knocked 40% off. Genius. I love that. It always felt like something that was going to be blown off the radar by Amazon being displeased, but I discovered you folks in 2010 and here you are now, 13 years later, still here. I will say the website looks a lot better now.Amir: [laugh]. That's a recent change. I actually joined camel, maybe two or three years ago. I wasn't there from the beginning. But I knew the guy who created it—again, as you were saying—from the Winamp days, right? So, we were both working in the free—well, it wasn't freenode. It was not freenode. It was a separate IRC server that, again, Justin created for himself. It was called landoleet.Corey: Mmm. I never encountered that one.Amir: Yeah, no, it was pretty private. The only people that cared about WinAMP and NSIS ended up joining there. But it was a lot of fun. I met a lot of friends there. And yeah, I met Daniel Green there as well, and he's the guy that created, along with some other people in there that I think want to remain anonymous so I'm not going to mention, but they also were on the camel project.And yeah, I was kind of doing my poor version of shitposting on Twitter about AWS, kind of starting to get some traction and maybe some clients and talk about AWS so people can approach me, and Daniel approached me out of the blue and he was like, “Do you just post about AWS on Twitter or do you also do some AWS work?” I was like, “I do some AWS work.”Corey: Yes, as do all of us. It's one of those, well crap, we're getting called out now. “Do you actually know how any of this stuff works?” Like, “Much to my everlasting shame, yes. Why are you asking?”Amir: Oh, my God, no, I cannot fix your printer. Leave me alone.Corey: Mm-hm.Amir: I don't want to fix your Lambdas. No, but I do actually want to fix your Lambdas. And so, [laugh] he approached me and he asked if I can help them move camelcamelcamel from their data center to AWS. So, that was a nice big project. So, we moved, actually, all of camelcamelcamel into AWS. And this is how I found myself not only in the Winamp credits, but also in the camelcamelcamel credits page, which has a great picture of me riding a camel.Corey: Excellent. But one of the things I've always found has been that when you take an application that has been pre-existing for a while in a data center and then move it into the cloud, you suddenly have to care about things that no one sensible pays any attention to in the land of the data center. 
Because it's like, “What do I care about how much data passes between my application server and the database? Wait, what do you mean that in this configuration, that's a chargeable data transfer? Oh, dear Lord.” And things that you've never had to think about optimizing are suddenly things are very much optimizing.Because let's face it, when it comes to putting things in racks and then running servers, you aren't auto-scaling those things, so everything tends to be running over-provisioned, for very good reasons. It's an interesting education. Anything you picked out from that process that you think it'd be useful for folks to bear in mind if they're staring down the barrel of the same thing?Amir: Yeah, for sure. I think… in general, right, not just here. But in general, you always want to be pragmatic, right? You don't want to take steps are huge, right? So, the thing we did was not necessarily rewrite everything and change everything to AWS and move everything to Lambda and move everything to Docker.Basically, we did a mini lift-and-shift, but not exactly lift-and-shift, right? We didn't take it as is. We moved to RDS, we moved to ElastiCache, right, we obviously made use of security groups and session connect and we dropped SSH Sage and we improved the security a lot and we locked everything down, all the permissions and all that kind of stuff, right? But like you said, there's stuff that you start having to pay attention to. In our case, it was less the data transfer because we have a pretty good CDN. There was more of IOPS. So—and IOPS, specifically for a database.We had a huge database with about one terabyte of data and a lot of it is that price history that you see, right? So, all those nice little graphs that we create in—what do you call them, charts—that we create in camelcamelcamel off the price history. There's a lot of data behind that. And what we always want to do is actually remove that from MySQL, which has been kind of struggling with it even before the move to AWS, but after the move to AWS, where everything was no longer over-provisioned and we couldn't just buy a few more NVMes on Amazon for 100 bucks when they were on sale—back when we had to pay Amazon—Corey: And you know, when they're on sale. That's the best part.Amir: And we know [laugh]. We get good prices on NVMe. But yeah, on Amazon—on AWS, sorry—you have to pay for io1 or something, and that adds up real quick, as you were saying. So, part of that move was also to move to something that was a little better for that data structure. And we actually removed just that data, the price history, the price points from MySQL to DynamoDB, which was a pretty nice little project.Actually, I wrote about it in my blog. There is, kind of, lessons learned from moving one terabyte from MySQL to DynamoDB, and I think the biggest lesson was about hidden price of storage in DynamoDB. But before that, I want to talk about what you asked, which was the way that other people should make that move, right? So again, be pragmatic, right? If you Google, “How do I move stuff from DynamoDB to MySQL,” everybody's always talking about their cool project using Lambda and how you throttle Lambda and how you get throttled from DynamoDB and how you set it up with an SQS, and this and that. You don't need all that.Just fire up an EC2 instance, write some quick code to do it. I used, I think it was Go with some limiter code from Uber, and that was it. And you don't need all those Lambdas and SQS and the complication. 
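(As a rough illustration of that "fire up an EC2 instance and write some quick code" approach, here is a sketch of the shape such a one-off copy script could take. It is written in TypeScript rather than the Go-plus-rate-limiter setup he actually describes, and the table and column names are invented for the example.)

```typescript
// One-off migration sketch: page through price-history rows in MySQL and
// batch-write them into DynamoDB with a crude client-side throttle.
// Table and column names are invented; a real script should also retry
// any UnprocessedItems returned by BatchWriteItem.
import mysql from 'mysql2/promise';
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, BatchWriteCommand } from '@aws-sdk/lib-dynamodb';

const BATCH = 25;              // BatchWriteItem accepts at most 25 items per call
const WRITES_PER_SECOND = 500; // stay under the table's provisioned capacity

interface PriceRow {
  id: number;
  product_id: number;
  observed_at: Date;
  price: number;
}

async function main(): Promise<void> {
  const db = await mysql.createConnection({ host: 'localhost', user: 'camel', database: 'prices' });
  const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

  let lastId = 0;
  for (;;) {
    // Keyset pagination keeps each MySQL read cheap even on a huge table.
    const [rows] = await db.query(
      'SELECT id, product_id, observed_at, price FROM price_history WHERE id > ? ORDER BY id LIMIT ?',
      [lastId, BATCH],
    );
    const page = rows as PriceRow[];
    if (page.length === 0) break;

    await ddb.send(new BatchWriteCommand({
      RequestItems: {
        PriceHistory: page.map((row) => ({
          PutRequest: {
            Item: {
              pk: `product#${row.product_id}`,
              sk: row.observed_at.toISOString(),
              price: row.price,
            },
          },
        })),
      },
    }));

    lastId = page[page.length - 1].id;
    // Crude throttle: sleep long enough to keep writes under the target rate.
    await new Promise((resolve) => setTimeout(resolve, (BATCH / WRITES_PER_SECOND) * 1000));
  }

  await db.end();
}

main().catch((err) => { console.error(err); process.exit(1); });
```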
That thing was a one-time thing anyway, so it doesn't need to be super… super-duper serverless, you know?Corey: That is almost always the way that it tends to play out. You encounter these weird little things along the way. And you see so many things that are tied to this is how architecture absolutely must be done. And oh you're not a real serverless person if you don't have everything running in Lambda and the rest. There are times where yeah, spin up an EC2 box, write some relatively inefficient code in ten minutes and just do the thing, and then turn it off when you're done. Problem solved. But there's such an aversion to that. It's nice to encounter people who are pragmatists more than they are zealots.Amir: I mostly learned that lesson. And both Daniel Green and me learned that lesson from the Winamp days. Because we both have written plugins for Winamp and we've been around that area and you can… if you took one of those non-pragmatist people, right, and you had them review the Winamp code right now—or even before—they would have a million things to say. That code was—and NSIS, too, by the way—and it was so optimized. It was so not necessarily readable, right? But it worked and it worked amazing. And Justin would—if you think I respond quickly, right, Justin Frankel, the guy who wrote Winamp, he would release versions of NSIS and of Winamp, like, four versions a day, right? That was before [laugh] you had CI/CD systems and GitHub and stuff. That was just CVS. You remember CVS [laugh]?Corey: Oh, I've done multiple CVS migrations. One to Git and a couple to Subversion.Amir: Oh yeah, Subversion. Yep. Done ‘em all. CVS to Subversion to Git. Yep. Yep. That was fun.Corey: And these days, everyone's using Git because it—we're beginning to have a monoculture.Amir: Yeah, yeah. I mean, but Git is nicer than Subversion, for me, at least. I've had more fun with it.Corey: Talk about damning with faint praise.Amir: Faint?Corey: Yeah, anything's better than Subversion, let's be honest here.Amir: Oh [laugh].Corey: I mean, realistically, copying a bunch of files and directories to a.bak folder is better than Subversion.Amir: Well—Corey: At least these days. But back then it was great.Amir: Yeah, I mean, the only thing you had, right [laugh]?Corey: [laugh].Amir: Anyway, achieving great things with not necessarily the right tools, but just sheer power of will, that's what I took from the Winamp days. Just the entire world used Winamp. And by the way, the NSIS project that I was working on, right, I always used to joke that every computer in the world ran my code, every Windows computer in the world when my code, just because—Corey: Yes.Amir: So, many different companies use NSIS. And none of them cared that the code was not very readable, to put it mildly.Corey: So, many companies founder on those shores where they lose sight of the fact that I can point to basically no companies that died because their code was terrible, yeah, had an awful lot that died with great-looking code, but they didn't nail the business problem.Amir: Yeah. I would be lying if I said that I nailed exactly the business problem at NSIS because the most of the time I would spend there and actually shrinking the stub, right, there was appended to your installer data, right? So, there's a little stub that came—the executable, basically, that came before your data that was extracted. I spent, I want to say, years of my life [laugh] just shrinking it down by bytes—by literal bytes—just so it stays under 34, 35 kilobytes. 
It was kind of a—it was a challenge and something that people appreciated, but not necessarily the thing that people appreciate the most. I think the features—Corey: Well, no I have to do the same thing to make sure something fits into a Lambda deployment package. The scale changes, the problem changes, but somehow everything sort of rhymes with history.Amir: Oh, yeah. I hope you don't have to disassemble code to do that, though because that's uh… I mean, it was fun. It was just a lot.Corey: I have to ask, how much work went into building your cdk-github-runners as far as getting it to a point of just working out the door? Because I look at that and it feels like there's—like, the early versions, yeah, there wasn't a whole bunch of code tied to it, but geez, the iterative, “How exactly does this ridiculous step functions API work or whatnot,” feels like I'm looking at weeks of frustration. At least it would have been for me.Amir: Yeah, yeah. I mean, it wasn't, like, a day or two. It was definitely not—but it was not years, either. I've been working on it I think about a year now. Don't quote me on that. But I've put a lot of time into it. So, you know, like you said, the skeleton code is pretty simple: it's a step function, which as we said, takes a long time to get right. The functions, they are really nice, but their definition language is not very straightforward. But beyond that, right, once that part worked, it worked. Then came all the bug reports and all the little corner cases, right? We—Corey: Hell is other people's use cases. Always is. But that's honestly better than a lot of folks wind up experiencing where they'll put an open-source project up and no one ever knows. So, getting users is often one of the biggest barriers to a lot of this stuff. I've found countless hidden gems lurking around on GitHub with a very particular search for something that no one had ever looked at before, as best I can tell.Amir: Yeah.Corey: Open-source is a tricky thing. There needs to be marketing brought into it, there needs to be storytelling around it, and has to actually—dare I say—solve a problem someone has.Amir: I mean, I have many open-source projects like that, that I find super useful, I created for myself, but no one knows. I think cdk-github-runners, I'm pretty sure people know about it only because you talked about it on Screaming in the Cloud or your newsletter. And by the way, thank you for telling me that you talked about it last week in the conference because now we know why there was a spike [laugh] all of a sudden. People Googled it.Corey: Yeah. I put links to it as well, but it's the, yeah, I use this a lot and it's great. I gave a crappy explanation on how it works, but that's the trick I've found between conference talks and, dare I say, podcast episodes, you gives people a glimpse and a hook and tell them where to go to learn more. Otherwise, you're trying to explain every nuance and every intricacy in 45 minutes. And you can't do that effectively in almost every case. All you're going to do is drive people away. Make it sound exciting, get them to see the value in it, and then let them go.Amir: You have to explain the market for it, right? That's it.Corey: Precisely.Amir: And I got to say, I somewhat disagree with your—or I have a different view when you say that, you know, open-source projects needs marketing and all those things. It depends on what open-source is for you, right? I don't create open-source projects so they are successful, right? 
It's obviously always nicer when they're successful, but—and I do get that cycle of happiness that, like I was saying, people create bugs and I have to fix them and stuff, right? But not every open-source project needs to be a success. Sometimes it's just fun.Corey: No. When I talk about marketing, I'm talking about exactly what we're doing here. I'm not talking take out an AdWords campaign or something horrifying like that. It's you build something that solved the problem for someone. The big problem that worries me about these things is how do you not lose sleep at night about the fact that solve someone's problem and they don't know that it exists?Because that drives me nuts. I've lost count of the number of times I've been beating my head against a wall and asked someone like, “How would you handle this?” Like, “Oh, well, what's wrong with this project?” “What do you mean?” “Well, this project seems to do exactly what you want it to do.” And no one has it all stuffed in their head. But yeah, then it seems like open-source becomes a little more corporatized and it becomes a lead gen tool for people to wind up selling their SaaS services or managed offerings or the rest.Amir: Yeah.Corey: And that feels like the increasing corporatization of open-source that I'm not a huge fan of.Amir: Yeah. I mean, I'm not going to lie, right? Like, part of why I created this—or I don't know if it was part of it, but like, I had a dream that, you know, I'm going to get, oh, tons of GitHub sponsors, and everybody's going to use it and I can retire on an island and just make money out of this, right? Like, that's always a dream, right? But it's a dream, you know?And I think bottom line open-source is… just a tool, and some people use it for, like you were saying, driving sales into their SaaS, some people, like, may use it just for fun, and some people use it for other things. Or some people use it for politics, even, right? There's a lot of politics around open-source.I got to tell you a story. Back in the NSIS days, right—talking about politics—so this is not even about politics of open-source. People made NSIS a battleground for their politics. We would have translations, right? People could upload their translations. And I, you know, or other people that worked on NSIS, right, we don't speak every language of the world, so there's only so much we can do about figuring out if it's a real translation, if it's good or not.Back in the day, Google Translate didn't exist. Like, these days, we check Google Translate, we kind of ask a few questions to make sure they make sense. But back in the day, we did the best that we could. At some point, we got a patch for Catalan language, I'm probably mispronouncing it—but the separatist people in Spain, I think, and I didn't know anything about that. I was a young kid and… I just didn't know.And I just included it, you know? Someone submitted a patch, they worked hard, they wanted to be part of the open-source project. Why not? Sure I included it. And then a few weeks later, someone from Spain wanted to change Catalan into Spanish to make sure that doesn't exist for whatever reason.And then they just started fighting with each other and started making demands of me. Like, you have to do this, you have to do that, you have to delete that, you have to change the name. And I was just so baffled by why would someone fight so much over a translation of an open-source project. 
Like, these days, I kind of get what they were getting at, right?Corey: But they were so bad at telling that story that it was just like, so basically, screw, “You for helping,” is how it comes across.Amir: Yeah, screw you for helping. You're a pawn now. Just—you're a pawn unwittingly. Just do what I say and help me in my political cause. I ended up just telling both of them if you guys can agree on anything, I'm just going to remove both translations. And that's what I ended up doing. I just removed both translations. And then a few months later—because we had a release every month basically, I just added both of them back and I've never heard from them again. So sort of problem solved. Peace the Middle East? I don't know.Corey: It's kind of wild just to see how often that sort of thing tends to happen. It's a, I don't necessarily understand why folks are so opposed to other people trying to help. I think they feel like there's this loss of control as things are slipping through their fingers, but it's a really unwelcoming approach. One of the things that got me deep into the open-source ecosystem surprisingly late in my development was when I started pitching in on the SaltStack project right after it was founded, where suddenly everything I threw their way was merged, and then Tom Hatch, the guy who founded the project, would immediately fix all the bugs and stuff I put in and then push something else immediately thereafter. But it was such a welcoming thing.Instead of nitpicking me to death in the pull request, it just got merged in and then silently fixed. And I thought that was a classy way to do it. Of course, it doesn't scale and of course, it causes other problems, but I envy the simplicity of those days and just the ethos behind that.Amir: That's something I've learned the last few years, I would say. Back in the NSIS day, I was not like that. I nitpicked. I nitpicked a lot. And I can guess why, but it just—you create a patch—in my mind, right, like you create a patch, you fix it, right?But these days I get, I've been on the other side as well, right? Like I created patches for open-source projects and I've seen them just wither away and die, and then five years later, someone's like, “Oh, can you fix this line to have one instead of two, and then I'll merge it.” I'm like, “I don't care anymore. It was five years ago. I don't work there anymore. I don't need it. If you want it, do it.”So, I get it these days. And these days, if someone creates a patch—just yesterday, someone created a patch to format cdk-github-runners in VS Code. And they did it just, like, a little bit wrong. So, I just fixed it for them and I approved it and pushed it. You know, it's much better. You don't need to bug people for most of it.Corey: You didn't yell at them for having the temerity to contribute?Amir: My voice is so raw because I've been yelling for five days at them, yeah.Corey: Exactly, exactly. I really want to thank you for taking the time to chat with me about how all this stuff came to be and your own path. If people want to learn more, where's the best place for them to find you?Amir: So, I really appreciate you having me and driving all this traffic to my projects. If people want to learn more, they can always go to cloudsnorkel.com; it has all the projects. github.com/cloudsnorkel has a few more. And then my private blog is kichik.com. So, K-I-C-H-I-K dot com. 
I don't post there as much as I should, but it has some interesting AWS projects from the past few years that I've done.Corey: And we will, of course, put links to all of that in the show notes. Thank you so much for taking the time. I really appreciate it.Amir: Thank you, Corey. It was really nice meeting you.Corey: Amir Szekely, owner of CloudSnorkel. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment. Heck, put it on all of the podcast platforms with a step function state machine that you somehow can't quite figure out how the API works.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

airhacks.fm podcast with adam bien
JAX-RS, OAuth, OpenID Connect (OIDC), Authentication, Authorization and Quarkus

airhacks.fm podcast with adam bien

Play Episode Listen Later Sep 24, 2023 59:42


An airhacks.fm conversation with Sergey Beryozkin (@sberyozkin) about: RPC vs. REST, Paul Sandoz was driving the JAX-RS specification, the scalability of REST, the Tolerant Reader pattern, HATEOAS, Jersey was the reference implementation of JAX-RS, JAX-RS without servlets, the problems with OAuth 1, OAuth 2 fixed OAuth 1 problems, the session fixation problem, OIDC builds on OAuth 2, in OAuth 2 there are no sessions, Confidential OIDC client, OIDC extension, Elytron Security OAuth 2.0, ID tokens vs. access tokens, opaque access tokens vs. JWT access tokens, the implicit flow, SmallRye JWT extension vs. OIDC extension, the importance of standards, the value of standards, passkeys as the next big thing, verifiable credentials, JSON Web Proof, mutual TLS support in Quarkus, automatic certificate renewal. Sergey Beryozkin on Twitter: @sberyozkin
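For readers who want to poke at the ID-token-versus-access-token distinction the episode covers, below is a minimal Python sketch of inspecting and then verifying a JWT. It assumes the PyJWT library; the token value, JWKS URL, issuer, and client ID are placeholders, not values from the conversation.

import jwt
from jwt import PyJWKClient

raw_token = "eyJhbGciOiJSUzI1NiIs..."  # placeholder JWT issued by your OIDC provider

# Peek at the claims without verifying the signature (debugging only, never for authorization).
claims = jwt.decode(raw_token, options={"verify_signature": False})
print(claims.get("iss"), claims.get("aud"), claims.get("sub"))

# Verify properly: fetch the signing key from the provider's JWKS endpoint.
jwks_client = PyJWKClient("https://id.example.com/.well-known/jwks.json")  # placeholder URL
signing_key = jwks_client.get_signing_key_from_jwt(raw_token)
verified = jwt.decode(
    raw_token,
    signing_key.key,
    algorithms=["RS256"],
    audience="my-client-id",          # for an ID token, the audience is your client ID
    issuer="https://id.example.com",  # placeholder issuer
)
print(verified)

The unverified decode is only useful for seeing whether you were handed an opaque token or a JWT and what claims it carries; any real authorization decision should go through the verified path.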

Identity At The Center
#222 - Identity Standards with Justin Richer of Bespoke Engineering

Identity At The Center

Play Episode Listen Later Jul 17, 2023 89:56


On this episode of the Identity at the Center podcast, Jim and Jeff are joined by Justin Richer, Security & Standards Architect and Founder of Bespoke Engineering. Justin shares how he got into IAM and his book, "OAuth2 in Action". He also introduces "Cards Against Identity" and discusses how OIDC would be different if it were written anew today. The conversation turns to GNAP (Grant Negotiation and Authorization Protocol) and closes with a question from listener Markus about “trust in HR” and how implementing automation is more of a political issue than a technical one. Tune in to hear this fascinating conversation!
Connect with Justin: https://www.linkedin.com/in/justinricher/
Learn more about Bespoke Engineering: https://bspk.io/
Why CSCW Applications Fail: Problems in the Design and Evaluation of Organizational Interfaces - https://www.cs.uml.edu/~holly/teaching/91550/spring2012/p85-grudin.pdf
Book - OAuth2 in Action: https://www.manning.com/books/oauth-2-in-action
GNAP: https://oauth.net/gnap/
Cards Against Identity: http://www.cardsagainstidentity.com
Gridlock Boston: https://bspk.io/games/gridlock/
Check out Psycliq: https://psycliq.com/
The Precious Cinnamon Roll: https://www.theonion.com/beautiful-cinnamon-roll-too-good-for-this-world-too-pu-1819576048
Authenticate Conference: Use code IDAC15PODCAST for 15% off your registration fees. Learn more about the Authenticate conference: https://authenticatecon.com/event/authenticate-2023/
Connect with us on LinkedIn:
Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/
Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/
Visit the show on the web at idacpodcast.com and follow @IDACPodcast on Twitter.

AWS Bites
86. How do you integrate AWS with other clouds?

AWS Bites

Play Episode Listen Later Jun 22, 2023 20:50


Are you struggling with securely integrating workloads running on-premises, in Azure, or in any other cloud with a workload running in AWS? In this exciting episode of the AWS Bites podcast, we dive into 6 different options for securely and efficiently integrating workloads between clouds. From providing a public API in AWS with an authorization method, to using IAM Roles Anywhere, to using OIDC federated identities, we explore the advantages and disadvantages of each option. We even cover the use of SSM hybrid activations and creating the interface on the Azure/Data Centre side and polling from AWS. Don't miss out on this informative discussion about the best practices for integrating workloads between clouds. Tune in now and let's have some cloud fun together!

AWS Morning Brief
Screwing Up the Messaging and Also the RSA Dates

AWS Morning Brief

Play Episode Listen Later Apr 20, 2023 4:36


Last week in security news: Creating an AWS Backup Account, Azure had another cross-tenant access vulnerability, Security Hub Hurts My Self-Esteem, and more!
Links:
Corey hosted a partner panel at AWS Container Day at KubeCon
This post on using OIDC to secure your CI/CD pipelines mirrors what I did with GitHub Actions a year or so ago.
Teri Radichel has a piece on Creating an AWS Backup Account
Slack is conducting an absolute masterclass in how to screw up messaging to your target audience.
Azure had another cross-tenant access vulnerability
Security Hub Hurts My Self-Esteem
AWS Security Profile: Matt Luttrell, Principal Solutions Architect for AWS Identity
Tool of the Week: iamlive

Matrix Live
Matrix Tutorials #5 — Calling Synapse's admin API with Python (and OIDC fun)

Matrix Live

Play Episode Listen Later Dec 6, 2022 14:12


This week we're learning how to perform API calls to the Synapse Admin API! Why for? To do a reconciliation of accounts and finally close our OIDC tutorial! Part one of the OIDC tutorial: https://www.youtube.com/watch?v=MwOh4NvPdtQ
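As a rough companion to the tutorial (which walks through its own scripts), here is a minimal Python sketch of the kind of Synapse Admin API call involved: paging through accounts so they can be reconciled against an OIDC provider. It assumes the requests library and the access token of a Synapse admin user; the homeserver URL and token below are placeholders.

import requests

HOMESERVER = "https://matrix.example.com"  # placeholder homeserver
ADMIN_TOKEN = "syt_..."                    # placeholder admin access token

def list_users(limit=10):
    """Page through /_synapse/admin/v2/users and yield each account."""
    next_from = 0
    while next_from is not None:
        resp = requests.get(
            f"{HOMESERVER}/_synapse/admin/v2/users",
            headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
            params={"from": next_from, "limit": limit},
            timeout=10,
        )
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("users", [])
        next_from = body.get("next_token")  # absent on the last page, which ends the loop

for user in list_users():
    print(user["name"], user.get("deactivated"))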

Matrix Live
Matrix Tutorials #4 — An OIDC playground with Synapse and Authentik

Matrix Live

Play Episode Listen Later Oct 7, 2022 21:18


This week we're learning how to set up an OIDC playground: we're deploying a Synapse and an Authentik (OpenID provider) instance, populating them each with similar or different users, and configuring them to play with each other. Next week we're going to see how to perform an actual reconciliation of accounts.
00:00 Hello
01:21 Cloning the project
02:55 Updating variables
08:40 Installing Synapse, Authentik and Traefik
11:41 Populating with users
13:30 Connecting Synapse to Authentik
Playground repo: https://github.com/matrix-org/tutorial-oidc-playground
(This Week in) Matrix
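To make the "configuring them to play with each other" step more concrete, here is a small, hedged Python sketch: it fetches an OIDC provider's discovery document and builds an authorization URL, which is roughly the redirect Synapse constructs when it sends users to Authentik. It assumes the requests library; the issuer URL, client ID, and redirect URI are illustrative placeholders rather than the tutorial's actual values.

import secrets
import urllib.parse
import requests

ISSUER = "https://authentik.example.com/application/o/synapse/"            # placeholder issuer
CLIENT_ID = "synapse"                                                       # placeholder client ID
REDIRECT_URI = "https://matrix.example.com/_synapse/client/oidc/callback"  # placeholder redirect

# Discovery: every OIDC provider publishes its endpoints at this well-known path.
config = requests.get(
    ISSUER.rstrip("/") + "/.well-known/openid-configuration", timeout=10
).json()

# Build the authorization-code request a relying party would redirect the browser to.
params = {
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "openid profile email",
    "state": secrets.token_urlsafe(16),  # CSRF protection, checked again on the callback
}
print(config["authorization_endpoint"] + "?" + urllib.parse.urlencode(params))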

Cloud Posse DevOps
Cloud Posse DevOps "Office Hours" (2022-08-24)

Cloud Posse DevOps "Office Hours" Podcast

Play Episode Listen Later Aug 24, 2022 55:59


Cloud Posse holds public "Office Hours" every Wednesday at 11:30am PST to answer questions on all things related to DevOps, Terraform, Kubernetes, CICD. Basically, it's like an interactive "Lunch & Learn" session where we get together for about an hour and talk shop. These are totally free and just an opportunity to ask us (or our community of experts) any questions you may have.
You can register here: https://cloudposse.com/office-hours
Join the conversation: https://slack.cloudposse.com/
Find out how we can help your company:
https://cloudposse.com/quiz
https://cloudposse.com/accelerate/
Learn more about Cloud Posse:
https://cloudposse.com
https://github.com/cloudposse
https://sweetops.com/
https://newsletter.cloudposse.com
https://podcast.cloudposse.com/
[00:00:00] Intro
[00:01:19] New Cloud Posse Terraform Module for DMS Modules (with complete example) https://github.com/cloudposse/terraform-aws-dms/tree/main/examples/complete
[00:03:24] GitHub Actions: OpenID Connect supports advanced OIDC policies https://github.blog/changelog/2022-08-23-github-actions-enhancements-to-openid-connect-support-to-enable-secure-cloud-deployments-at-scale
[00:05:34] Fascinating Approach to Multi Cloud with a Custom Terraform Provider https://github.com/multycloud/terraform-provider-multy
[00:17:58] Finally a Terraform CURL Provider (NOT the same as rest provider) via weekly.tf https://registry.terraform.io/providers/devops-rob/terracurl/latest/docs/resources/request
[00:19:24] Combine CURL provider with OIDC provider to automate more APIs https://github.com/SvenHamers/terraform-provider-oauth
[00:24:08] Door Dash Failure Story: Lessons Learned with Readiness Probes https://doordash.engineering/2022/08/09/how-to-handle-kubernetes-health-checks/
[00:26:57] AWS Support launches support for managing cases in Slack https://aws.amazon.com/about-aws/whats-new/2022/08/aws-support-launches-managing-cases-slack/
[00:32:28] Is there a deep-dive video covering the Cloud Posse Way?
[00:35:14] I'm building out TF for Aurora and would appreciate any input on the process.
[00:51:26] How to store the CA root certificate?
[00:54:55] Outro
#officehours,#cloudposse,#sweetops,#devops,#sre,#terraform,#kubernetes,#aws
Support the show

AWS Bites
45. What's the magic of OIDC identity providers?

AWS Bites

Play Episode Listen Later Jul 14, 2022 27:28


If you are thinking of using an external CICD tool to deploy to AWS you are probably wondering how to securely connect your pipelines to your AWS account. You could create a user for your CICD tool of choice and copy some hard coded credentials into it, but, let's face it: this doesn't feel like the right - or at least the most secure - approach! In the previous episode we discussed how AWS and GitHub solved this problem by using OIDC identity providers and this seems to be a good solution to the problem. In this episode of AWS Bites we will try to demystify the secrets of OIDC identity providers and explain how they work and what's the trust model between AWS and an OIDC provider like GitHub actions. We will also explain all the steps required to integrate AWS with GitHub, how JWT works in this particular scenario and other use cases where you could use OIDC providers. In this episode, we mentioned the following resources: - GitHub docs explaining how to integrate with AWS as an OIDC provider: https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect - Article “What's in a JWT” https://loige.co/whats-in-a-jwt - jwtinfo, CLI tool to inspect JWT: https://github.com/lmammino/jwtinfo - AWS action to assume a role from a GitHub Pipeline: https://github.com/aws-actions/configure-aws-credentials#assuming-a-role - Great post by Elias Brange detailing how to setup GitHub OIDC integration for AWS: https://www.eliasbrange.dev/posts/secure-aws-deploys-from-github-actions-with-oidc/ - Previous episode on why you should consider GitHub Actions rather than AWS CodePipeline: https://awsbites.com/44-do-you-use-codepipeline-or-github-actions/ This episode is also available on YouTube: https://www.youtube.com/AWSBites You can listen to AWS Bites wherever you get your podcasts. See https://awsbites.com for all the links. Do you have any AWS questions you would like us to address? Connect with us on Twitter: - https://twitter.com/eoins - https://twitter.com/loige
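To get a feel for the token exchange described here, below is a minimal Python sketch of what happens inside a GitHub Actions job: request the job's OIDC token from the Actions runtime, then trade it for temporary AWS credentials via STS. It assumes boto3 and requests, a role ARN that is purely a placeholder, and a workflow that grants the id-token: write permission; in practice the aws-actions/configure-aws-credentials action linked above does this for you.

import os
import boto3
import requests

# 1. Ask the Actions runtime for an OIDC token scoped to the STS audience.
#    These environment variables are only populated when the workflow grants
#    the id-token: write permission.
resp = requests.get(
    os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"],
    params={"audience": "sts.amazonaws.com"},
    headers={"Authorization": f"bearer {os.environ['ACTIONS_ID_TOKEN_REQUEST_TOKEN']}"},
    timeout=10,
)
resp.raise_for_status()
web_identity_token = resp.json()["value"]

# 2. Exchange the JWT for short-lived credentials. AWS verifies the token against the
#    GitHub OIDC identity provider registered in IAM and the role's trust policy
#    (which typically pins the repository and branch via the token's sub claim).
sts = boto3.client("sts")
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/github-deploy",  # placeholder role ARN
    RoleSessionName="github-actions",
    WebIdentityToken=web_identity_token,
)["Credentials"]
print("Temporary key:", creds["AccessKeyId"], "expires", creds["Expiration"])

The point of the pattern is visible in the last call: no long-lived secret is stored in the pipeline, only a role that trusts tokens issued by GitHub's OIDC provider.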

AWS Bites
44. Do you use CodePipeline or GitHub Actions?

AWS Bites

Play Episode Listen Later Jul 7, 2022 29:02


Automated, Continuous Build and Continuous Delivery are must-haves when building modern applications on AWS. To achieve this, you have numerous options, including third party providers like GitHub Actions and Circle CI, and the AWS services, CodePipeline and CodeBuild. In this episode we focus on GitHub Actions and we compare it with the native AWS features offered by services like CodePipeline and Code Build. In particular we discuss what CodePipeline offers and how to set it up, what the tradeoffs are and when to choose one over the other. We also discuss when you should look outside AWS to a third-party provider and highlight when GitHub Actions can be a great fit for your AWS CI/CD needs! In this episode, we mentioned the following resources: - Example pipeline for a serverless mono repo using CDK is available in SLIC Starter: https://github.com/fourTheorem/slic-starter/tree/main/packages/cicd - 50+ official actions provided by GitHub themselves: https://github.com/actions - How to configure OIDC integrations with AWS and other services like GitHub: https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services - GitHub Actions billing details: https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions - Workshop illustrating how to create CodeBuild and CodePipeline resources using CDK: https://cdkworkshop.com/20-typescript/70-advanced-topics/200-pipelines/3000-new-pipeline.html - Paul Swail's article “Why I switched from AWS CodePipeline to GitHub Actions”: https://serverlessfirst.com/switch-codepipeline-to-github-actions/ - A tutorial article by AWS showing how to authenticate and use GitHub actions to build & deploy a web app to an EC2 instance https://aws.amazon.com/blogs/devops/integrating-with-github-actions-ci-cd-pipeline-to-deploy-a-web-app-to-amazon-ec2/ - Other examples of when it is OK to ditch AWS services for third party (previous podcast episode): https://awsbites.com/43-when-is-it-ok-to-cheat-on-aws/ This episode is also available on YouTube: https://www.youtube.com/AWSBites You can listen to AWS Bites wherever you get your podcasts. See https://awsbites.com Leave a comment here or connect with us on Twitter: - https://twitter.com/eoins - https://twitter.com/loige

Break Things On Purpose
Developer Advocacy and Innersource with Aaron Clark

Break Things On Purpose

Play Episode Listen Later Jun 14, 2022 40:55


In this episode, we cover: Aaron talks about starting out as a developer and the early stages of cloud development at RBC (1:05) Aaron discusses transitioning to developer advocacy (12:25) Aaron identifies successes he had in his early days of developer advocacy (20:35) Jason asks what it looks like to assist developers in achieving completion with long term maintenance projects, or “sustainable development” (25:40)  Jason and Aaron discuss what “innersource” is and why it's valuable in an organization (29:29) Aaron answers the question “how do you keep skills and knowledge up to date?” (33:55) Aaron talks about job opportunities at RBC (38:55) Links Referenced: Royal Bank of Canada: https://www.rbcroyalbank.com Opportunities at RBC: https://jobs.rbc.com/ca/en TranscriptAaron: And I guess some PM asked my boss, “So, Aaron doesn't come to our platform status meetings, he doesn't really take tickets, and he doesn't take support rotation. What does Aaron do for the Cloud Platform Team?”Jason: [laugh].Jason: Welcome to Break Things on Purpose, a podcast about reliability, learning, and building better systems. In this episode, we talk with Aaron Clark, Director of Developer Advocacy at the Royal Bank of Canada. We chat with him about his journey from developer to advocate, the power of applying open-source principles within organizations—known as innersource—and his advice to keep learning.Jason: Welcome to the show, Aaron.Aaron: Thanks for having me, Jason. My name is Aaron Clark. I'm a developer advocate for cloud at RBC. That is the Royal Bank of Canada. And I've been at the bank for… well, since February 2010.Jason: So, when you first joined the bank, you were not a developer advocate, though?Aaron: Right. So, I have been in my current role since 2019. I've been part of the cloud program since 2017. Way back in 2010, I joined as a Java developer. So, my background in terms of being a developer is pretty much heavy on Java. Java and Spring Boot, now.I joined working on a bunch of Java applications within one of the many functions areas within the Royal Bank. The bank is gigantic. That's kind of one of the things people sometimes struggle to grasp. It's such a large organization. We're something like 100,000… yeah, 100,000 employees, around 10,000 of that is in technology, so developers, developer adjacent roles like business analysts, and QE, and operations and support, and all of those roles.It's a big organization. And that's one of the interesting things to kind of grapple with when you join the organization. So, I joined in a group called Risk IT. We built solely internal-facing applications. I worked on a bunch of stuff in there.I'm kind of a generalist, where I have interest in all the DevOps things. I set up one of the very first Hudson servers in Risk—well, in the bank, but specifically in Risk—and I admin'ed it on the side because nobody else was doing it and it needed doing. After a few years of doing that and working on a bunch of different projects, I was occasionally just, “We need this project to succeed, to have a good foundation at the start, so Aaron, you're on this project for six months and then you're doing something different.” Which was really interesting. At the same time, I always worry about the problem where if you don't stay on something for very long, you never learn the consequences of the poor decisions you may have made because you don't have to deal with it.Jason: [laugh].Aaron: And that was like the flip side of, I hope I'm making good decisions here. 
It seemed to be pretty good, people seemed happy with it, but I always worry about that. Like, being in a role for a few years where you build something, and then it's in production, and you're running it and you're dealing with, “Oh, I made this decision that seems like a good idea at the time. Turns out that's a bad idea. Don't do that next time.” You never learned that if you don't stay in a role.When I was overall in Risk IT for four, almost five years, so I would work with a bunch of the teams who maybe stayed on this project, they'd come ask me questions. It's like, I'm not gone gone. I'm just not working on that project for the next few months or whatever. And then I moved into another part of the organization, like, a sister group called Finance IT that runs kind of the—builds and runs the general ledger for the bank. Or at least for a part of capital markets.It gets fuzzy as the organization moves around. And groups combine and disperse and things like that. That group, I actually had some interesting stuff that was when I started working on more things like cloud, looking at cloud, the bank was starting to bring in cloud. So, I was still on the application development side, but I was interested in it. I had been to some conferences like OSCON, and started to hear about and learn about things like Docker, things like Kubernetes, things like Spring Boot, and I was like this is some really neat stuff.I was working on a Spark-based ETL system, on one of the early Hadoop clusters at the bank. So, I've been I'm like, super, super lucky that I got to do a lot of this stuff, work on all of these new things when they were really nascent within the organization. I've also had really supportive leadership. So, like, I was doing—that continuous integration server, that was totally on the side; I got involved in a bunch of reuse ideas of, we have this larger group; we're doing a lot of similar things; let's share some of the libraries and things like that. That was before being any, like, developer advocate or anything like that I was working on these.And I was actually funded for a year to promote and work on reuse activities, basically. And that was—I learned a lot, I made a lot of mistakes that I now, like, inform some of the decisions I make in my current role, but I was doing all of this, and I almost described it as I kind of taxed my existing project because I'm working on this team, but I have this side thing that I have to do. And I might need to take a morning and not work on your project because I have to, like, maintain this build machine for somebody. And I had really supportive leadership. They were great.They recognize the value of these activities, and didn't really argue about the fact that I was taking time away from whatever the budget said I was supposed to be doing, which was really good. So, I started doing that, and I was working in finance as the Cloud Team was starting to go through a revamp—the initial nascent Cloud Team at the bank—and I was doing cloud things from the app dev side, but at the same time within my group, anytime something surprising became broken, somebody had some emergency that they needed somebody to drop in and be clever and solve things, that person became me. And I was running into a lot of distractions in that sense. And it's nice to be the person who gets to work on, “Oh, this thing needs rescuing. 
Help us, Aaron.”That's fantastic; it feels really good, right, up until you're spending a lot of your time doing it and you can't do the things that you're really interested in. So, I actually decided to move over to the Cloud Team and work on kind of defining how we build applications for the cloud, which was really—it was a really good time. It was a really early time in the bank, so nobody really knew how we were going to build applications, how we were going to put them on the cloud, what does that structure look like? I got to do a lot of reading and research and learning from other people. One of the key things about, like, a really large organization that's a little slow-moving like the bank and is a little bit risk-averse in terms of technology choices, people always act like that's always a bad thing.And sometimes it is because we're sometimes not adopting things that we would really get a lot of benefit out of, but the other side of it is, by the time we get to a lot of these technologies and platforms, a bunch of the sharp edges have kind of been sanded off. Like, the Facebooks and the Twitters of the world, they've adopted it and they've discovered all of these problems and been, like, duct-taping them together. And they've kind of found, “Oh, we need to have actual, like, security built into this system,” or things like that, and they've dealt with it. So, by the time we get to it, some of those issues are just not there anymore. We don't have to deal with them.Which is an underrated positive of being in a more conservative organization around that. So, we were figuring there's a lot of things we could learn from. When we were looking at microservices and, kind of, Spring Boot Spring Cloud, the initial cloud parts that had been brought into the organization were mainly around Cloud Foundry. And we were helping some initial app teams build their applications, which we probably over-engineered some of those applications, in the sense that we were proving out patterns that you didn't desperately need for building those applications. Like, you could have probably just done it with a web app and relational database and it would have been fine.But we were proving out some of the patterns of how do you build something for broader scale with microservices and things like that. We learned a bunch about the complexities of doing that too early, but we also learned a bunch about how to do this so we could teach other application teams. And that's kind of the group that I became part of, where I wasn't a platform operator on the cloud, but I was working with dev teams, building things with dev teams to help them learn how to build stuff for cloud. And this was my first real exposure to that scope and scale of the bank. I'd been in the smaller groups and one of the things that you start to encounter when you start to interact with the larger parts of the bank is just, kind of, how many silos there are, how diverse the tech stacks are in an organization of that size.Like, we have areas that do things with Java, we have areas doing things with .NET Framework, we have areas doing lots of Python, we have areas doing lots of Node, especially as the organization started building more web applications. While you're building things with Angular and using npm for the front-end, so you're building stuff on the back-end with Node as well. Whether that is a good technology choice, a lot of the time you're building with what you have. 
Even within Java, we'd have teams building with Spring Boot, and lots of groups doing that, but someone else is interested in Google Guice, so they're building—instead of Spring, they're using Google Guice as their dependency injection framework.Or they have a… like, there's the mainframe, right? You have this huge technology stack where lots of people are building Java EE applications still and trying to evolve that from the old grungy days of Java EE to the much nicer modern ways of it. And some of the technology conversations are things like, “Well, you can use this other technology; that's fine, but if you're using that, and we're using something else over here, we can't help each other. When I solve a problem, I can't really help solve it for you as well. You have to solve it for yourself with your framework.”I talked to a team once using Vertex in Java, and I asked them, “Why are you using Vertex?” And they said, “Well, that's what our team knew.” I was like, “That's a good technology choice in the sense that we have to deliver. This is what we know, so this is the thing we know we can succeed with rather than actually learning something new on the job while trying to deliver something.” That's often a recipe for challenges if not outright failure.Jason: Yeah. So, it sounds like that's kind of where you come in; if all these teams are doing very disparate things, right—Aaron: Mm-hm.Jason: That's both good and bad, right? That's the whole point of microservices is independent teams, everyone's decoupled, more velocity. But also, there's huge advantages—especially in an org the size of RBC—to leverage some of the learnings from one team to another, and really, like, start to share these best practices. I'm guessing that's where you come into play now in your current role.Aaron: Yeah. And that's the part where how do we have the flexibility for people to make their own choices while standardizing so we don't have this enormous sprawl, so we can build on things? And this is starting to kind of where I started really getting involved in community stuff and doing developer advocacy. And part of how this actually happened—and this is another one of those cases where I've been very fortunate and I've had great leaders—I was working as part of the Cloud Platform Team, the Special Projects group that I was, a couple of people left; I was the last one left. It's like, “Well, you can't be your own department, so you're part of Cloud Platform.” But I'm not an operator. I don't take a support rotation.And I'm ostensibly building tooling, but I'm mostly doing innersource. This is where the innersource community started to spin up at RBC. I was one of the, kind of, founding members of the innersource community and getting that going. We had built a bunch of libraries for cloud, so those were some of the first projects into innersource where I was maintaining the library for Java and Spring using OIDC. And this is kind of predating Spring Security's native support for OIDC—so Open ID Connect—And I was doing a lot of that, I was supporting app teams who were trying to adopt that library, I was involved in some of the other early developer experience things around, you complain this thing is bad as the developer; why do we have to do this? You get invited to one of the VP's regular weekly meetings to discuss, and now you're busy trying to fix, kind of, parts of the developer experience. 
I was doing this, and I guess some PM asked my boss, “So, Aaron doesn't come to our platform status meetings, he doesn't really take tickets, and he doesn't take support rotation. What does Aaron do for the Cloud Platform Team?”Jason: [laugh].Aaron: And my boss was like, “Well, Aaron's got a lot of these other things that he's involved with that are really valuable.” One of the other things I was doing at this point was I was hosting the Tech Talk speaking series, which is kind of an internal conference-style talks where we get an expert from within the organization and we try to cross those silos where we find someone who's a machine-learning expert; come and explain how TensorFlow works. Come and explain how Spark works, why it's awesome. And we get those experts to come and do presentations internally for RBC-ers. And I was doing that and doing all of the support work for running that event series with the co-organizers that we had.And at the end of the year, when they were starting up a new initiative to really focus on how do we start promoting cloud adoption rather than just people arrive at the platform and start using it and figure it out for themselves—you can only get so far with that—my boss sits me down. He says. “So, we really like all the things that you've been doing, all of these community things and things like that, so we're going to make that your job now.” And this is how I arrived at there. It's not like I applied to be a developer advocate. I was doing all of these things on the side and all of a sudden, 75% of my time was all of these side projects, and that became my job.So, it's not really the most replicable, like, career path, but it is one of those things where, like, getting involved in stuff is a great way to find a niche that is the things that you're passionate about. So, I changed my title. You can do that in some of our systems as long as your manager approves it, so I changed my title from the very generic ‘Senior Technical Systems Analyst—which, who knows what I actually do when that's my title—and I changed that to ‘Developer Advocate.' And that was when I started doing more research learning about what do actual developer advocates do because I want to be a developer advocate. I want to say I'm a developer advocate.For the longest time in the organization, I'm the only person in the company with that title, which is interesting because then nobody knows what to do with me because I'm not like—am I, like—I'm not a director, I'm not a VP. Like… but I'm not just a regular developer, either. Where—I don't fit in the hierarchy. Which is good because then people stop getting worried about what what are titles and things like that, and they just listen to what I say. So, I do, like, design consultations with dev teams, making sure that they knew what they were doing, or were aware of a bunch of the pitfalls when they started to get onto the cloud.I would build a lot of samples, a lot of docs, do a lot of the community engagement, so going to events internally that we'd have, doing a lot of those kinds of things. A lot of the innersource stuff I was already doing—the speaking series—but now it was my job formally, and it helped me cross a lot of those silos and work very horizontally. That's one of the different parts about my job versus a regular developer, is it's my job to cover anything to do with cloud—that at least, that I find interesting, or that my boss tells me I need to work at—and anything anywhere in the organization that touches. 
So, a dev team doing something with Kubernetes, I can go and talk to them. If they're building something in capital markets that might be useful, I can say, “Hey, can you share this into innersource so that other people can build on this work as well?”And that was really great because I develop all of these relationships with all of these other groups. And that was, to a degree, what the cloud program needed from me as well at that beginning. I explained that this was now my job to one of my friends. And they're like, “That sounds like the perfect job for you because you are technical, but you're really good with people.” I was like, “Am I? I guess I am now that I've been doing it for this amount of time.”And the other part of it as we've gone on more and more is because I talk to all of these development teams, I am not siloed in, I'm not as tunneled on the specific thing I'm working with, and now I can talk to the platform teams and really represent the application developer perspective. Because I'm not building the platform. And they have their priorities, and they have things that they have to worry about; I don't have to deal with that. My job is to bring the perspective of an application developer. That's my background.I'm not an operator; I don't care about the support rotation, I don't care about a bunch of the niggly things and toil of the platform. It's my job, sometimes, to say, hey, this documentation is well-intentioned. I understand how you arrived at this documentation from the perspective of being the platform team and the things that you prioritize and want to explain to people, but as an application developer, none of the information that I need to build something to run on your platform is presented in a manner that I am able to consume. So, I do, like, that side as well of providing customer feedback to the platform saying, “This thing is hard,” or, “This thing that you are asking the application teams to work on, they don't want to care about that. They shouldn't have to care about this thing.” And that sort of stuff.So, I ended up being this human router are sometimes where platform teams will say, “Do you know anybody who's doing this, who's using this thing?” Or finding one app team and say, “You should talk to that group over there because they are also doing the same thing, or they're struggling with the same thing, and you should collaborate.” Or, “They have solved this problem.” Because I don't know every single programming language we use, I don't know all of the frameworks, but I know who I asked for Python questions, and I will send teams to that person. And part of that, then, as I started doing this community work was actually building community.One of the great successes was, we have a Slack channel called ‘Cloud Adoption.' And that was the place where everybody goes to ask their questions about how do I do this thing to put something on Cloud Foundry, put it on Kubernetes? How do I do this? I don't understand. And that was sometimes my whole day was just going onto that Slack channel, answering questions, and being very helpful and trying to document things, trying to get a feel for what people were doing.It was my whole day, sometimes. It took a while to get used to that was actually, like, a successful day coming from a developer background. I'm used to building things, so I feel like success because I built something I can show you, that I did this today. 
And then I'd have days where I talked to a bunch of people and I don't have anything I can show you. That was, like, the hard part of taking on this role.But one of the big successes was we built this community where it wasn't just me. Other people who wanted to help people, who were just developers on different dev teams, they'd see me ask questions or answer questions, and they would then know the answers and they'd chime in. And as I started being tasked with more and more other activities, I would then get to go—I'd come back to Slack and see oh, there's a bunch of questions. Oh, it turns out, people are able to help themselves. And that was—like that's success from that standpoint of building community.And now that I've done that a couple times with Tech Talks, with some of the developer experience work, some of the cloud adoption work, I get asked internally how do you build community when we're starting up new communities around things like Site Reliability Engineering. How are we going to do that? So, I get—and that feels weird, but that's one of the things that I have been doing now. And as—like, this is a gigantic role because of all of the scope. I can touch anything with anyone in cloud.One of the scope things with the role, but also with the bank is not only do we have all these tech stacks, but we also have this really, really diverse set of technical acumen, where you have people who are experts already on Kubernetes. They will succeed no matter what I do. They'll figure it out because they're that type of personality, they're going to find all the information. If anything, some of the restrictions that we put in place to manage our environments and secure them because of the risk requirements and compliance requirements of being a regulated bank, those will get in the way. Sometimes I'm explaining why those things are there. Sometimes I'm agreeing with people. “Yeah, it sucks. I don't want to have to do this.”But at the same time, you'll have people who they just want to come in, write their code, go home. They don't want to think about technology other than that. They're not going to go and learn things on their own necessarily. And that's not the end of the world. As strange as that sounds to people who are the personality to be constantly learning and constantly getting into everything and tinkering, like, that's me too, but you still need people to keep the lights on, to do all of the other work as well. And people who are happy just doing that, that's also valuable.Because if I was in that role, I would not be happy. And someone who is happy, like, this is good for the overall organization. But the things that they need to learn, the things they need explained to them, the help they need for success is different. So, that's one of the challenges is figuring out how do you address all of those customers? And sometimes even the answer for those customers is—and this is one of the things about my role—it's like the definition is customer success.If the application you're trying to put on cloud should not go on cloud, it is my job to tell you not to put it on cloud. It is not my job to put you on cloud. I want you to succeed, not just to get there. I can get your thing on the cloud in an afternoon, probably, but if I then walk away and it breaks, like, you don't know what to do. 
So, a lot of the things around how do we teach people to self-serve, how do we make our internal systems more self-serve, those are kind of the things that I look at now.How do I manage my own time because the scope is so big? It's like, I need to figure out where I'm not moving a thousand things forward an inch, but I'm moving things to their completion. And I am learning to, while not managing people, still delegate and work with the community, work with the broader cloud platform group around how do I let go and help other people do things?Jason: So, you mentioned something in there that I think is really interesting, right, the goal of helping people get to completion, right? And I think that's such an interesting thing because I think as—in that advocacy role, there's often a notion of just, like, I'm going to help you get unstuck and then you can keep going, without a clear idea of where they're ultimately heading. And that kind of ties back into something that you said earlier about starting out as a developer where you build things and you kind of just, like, set it free, [laugh] and you don't think about, you know, that day two, sort of, operations, the maintenance, the ongoing kind of stuff. So, I'm curious, as you've progressed in your career, as you've gotten more wisdom from helping people out, what does that look like when you're helping people get to completion, also with the mindset of this is an application that's going to be running for quite some time. Even in the short term, you know, if it's a short-term thing, but I feel like with the bank, most things probably are somewhat long-lived. How do you balance that out? How do you approach that, helping people get to done but also keeping in mind that they have to—this app has to keep living and it has to be maintained?Aaron: Yeah, a lot of it is—like, the term we use is sustainable development. And part of that is kind of removing friction, trying to get the developers to a point where they can focus on, I guess, the term that's often used in the industry is their inner loop. And it should come as no surprise, the bank often has a lot of processes that are high in friction. There's a lot of open a ticket, wait for things. This is the part that I take my conversations with dev teams, and I ask them, “What are the things that are hard? What are the things you don't like? What are the things you wish you didn't have to do or care about?”And some of this is reading between the lines when you talk to them; it's not so much interviewing them. Like, any kind of requirements gathering, usually, it's not what they say, it's what they talk about that then you look at, oh, this is the problem; how do we unstuck that problem so that people can get to where they need to be going? And this kind of informs some of my feedback to the systems we put in place, the processes we put in place around the platform, some of the tooling we look at. I really, really love the philosophy from Docker and Solomon Hykes around, “Batteries included but removable.” I want developers to have a high baseline as a starting point.And this comes partly from my experience with Cloud Foundry. Cloud Foundry has a really great out-of-the-box dev experience for lots of things where, “I just have a web app. Just run it. It's Nginx; it's some HTML pages; I don't need to know all the details. Just make it go and give me the URL.”And I want more of that for app teams where they have a high baseline of things to work with as a starting point. 
And kind of every organization ends up building this, where they have—like, Netflix: Netflix OSS or Twitter with Finagle—where they have, “Here's the surrounding pieces that I want to plug in that everybody gets as a starting point. And how do we provide security? How do we provide all of these pieces that are major concerns for an app team, that they have to do, we know they have to do?” Some of these are things that only start coming up when they're on the cloud and trying to provide a lot more of that for app teams so they can focus on the business stuff and only get into the weeds when they need to.Jason: As you're talking about these frameworks that, you know, having this high quality or this high baseline of tools that people can just have, right, equipping them with a nice toolbox, I'm guessing that the innersource stuff that you're working on also helps contribute to that.Aaron: Oh, immensely. And as we've gone on and as we've matured, our innersource organization, a huge part of that is other groups as well, where they're finding things that—we need this. And they'll put—it originally it was, “We built this. We'll put it into innersource.” But what you get with that is something that is very targeted and specific to their group and maybe someone else can use it, but they can't use it without bending it a little bit.And I hate bending software to fit it. That's one of the things—it's a very common thing in the corporate environment where we have our existing processes and rather than adopting the standard approach that some tool uses, we need to take it and then bend it until it fits our existing process because we don't want to change our processes. And that gets hard because you run into weird edge cases where this is doing something strange because we bent it. And it's like, well, that's not its fault at that point. As we've started doing more innersource, a lot more things have really become innersource first, where groups realize we need to solve this together.Let's start working on it together and let's design the API as a group. And API design is really, really hard. And how do we do things with shared libraries or services. And working through that as a group, we're seeing more of that, and more commonly things where, “Well, this is a thing we're going to need. We're going to start it in innersource, we'll get some people to use it and they'll be our beta customers. And we'll inform it without really specifically targeting an application and an app team's needs.”Because they're all going to have specific needs. And that's where the, like, ‘included but removable' part comes in. How do we build things extensibly where we have the general solution and you can plug in your specifics? And we're still—like, this is not an easy problem. We're still solving it, we're still working through it, we're getting better at it.A lot of it's just how can we improve day-over-day, year-over-year, to make some of these things better? Even our, like, continuous integration and delivery pipelines to our to clouds, all of these things are in constant flux and constant evolution. We're supporting multiple languages; we're supporting multiple versions of different languages; we're talking about, hey, we need to get started adopting Java 17. None of our libraries or pipelines do that yet, but we should probably get on that since it's been out for—what—almost a year? 
And really working on kind of decomposing some of these things where we built it for what we needed at the time, but now it feels a bit rigid. How do we pull out the pieces?One of the big pushes in the organization after the log4j CVE and things like that broad impact on the industry is we need to do a much more thorough job around software supply chain, around knowing what we have, making sure we have scans happening and everything. And that's where, like, the pipeline work comes in. I'm consulting on the pipeline stuff where I provide a lot of customer feedback; we have a team that is working on that all full time. But doing a lot of those things and trying to build for what we need, but not cut ourselves off from the broader industry, as well. Like, my nightmare situation, from a tooling standpoint, is that we restrict things, we make decisions around security, or policy or something like that, and we cut ourselves off from the broader CNCF tooling ecosystem, we can't use any of those tools. It's like, well, now we have to build something ourselves, or—which we're never going to do it as well as the external community. Or we're going to just kind of have bad processes and no one's going to be happy so figuring out all of that.Jason: Yeah. One of the things that you mentioned about staying up to speed and having those standards reminds me of, you know, similar to that previous experience that I had was, basically, I was at an org where we said that we'd like to open-source and we used open-source and that basically meant that we forked things and then made our own weird modifications to it. And that meant, like, now, it wasn't really open-source; it was like this weird, hacked thing that you had to keep maintaining and trying to keep it up to date with the latest stuff. Sounds like you're in a better spot, but I am curious, in terms of keeping up with the latest stuff, how do you do that, right? Because you mentioned that the bank, obviously a bit slower, adopting more established software, but then there's you, right, where you're out there at the forefront and you're trying to gather best practices and new technologies that you can use at the bank, how do you do that as someone that's not building with the latest, greatest stuff? How do you keep that skills and that knowledge up to date?Aaron: I try to do reading, I try to set time aside to read things like The New Stack, listen to podcasts about technologies. It's a really broad industry; there's only so much I can keep up with. This was always one of the conversations going way back where I would have the conversation with my boss around the business proposition for me going to conferences, and explaining, like, what's the cost to acquire knowledge in an organization? And while we can bring in consultants, or we can hire people in, like, when you hire new people in, they bring in their pre-existing experiences. So, if someone comes in and they know Hadoop, they can provide information and ideas around is this a good problem to solve with Hadoop? Maybe, maybe not.I don't want to bet a project on that if I don't know anything about Hadoop or Kubernetes or… like, using something like Tilt or Skaffold with my tooling. That's one of the things I got from going to conferences, and I actually need to set more time aside to watch the videos now that everything's virtual. Like, not having that dedicated week is a problem where I'm just disconnected and I'm not dealing with anything. 
When you're at work, even if KubeCon's going on or Microsoft Build, I'm still doing my day-to-day, I'm getting Slack messages, and I'm not feeling like I can just ignore people. I should probably block out more time, but part of how I stay up to date with it.It's really doing a lot of that reading and research, doing conversations like this, like, the DX Buzz that we invited you to where… I explained that event—it's adjacent to internal speakers—I explained that as I was had a backlog of videos from conferences I was not watching, and secretly if I make everybody else come to lunch with me to watch these videos, I have to watch the video because I'm hosting the session to discuss it, and now I will at least watch one a month. And that's turned out to be a really successful thing internally within the organization to spread knowledge, to have conversations with people. And the other part I do, especially on the tooling side, is I still build stuff. As much as, like, I don't code nearly as much as I used to, I bring an application developer perspective, but I'm not writing code every day anymore.Which I always said was going to be the thing that would make me miserable. It's not. I still think about it, and when I do get to write code, I'm always looking for how can I improve this setup? How can I use this tool? Can I try it out? Is this better? Is this smoother for me so I'm not worrying about this thing?And then spreading that information more broadly within the developer experience group, our DevOps teams, our platform teams, talking to those teams about the things that they use. Like, we use Argo CD within one group and I haven't touched it much, but I know they've got lots of expertise, so talking to them. “How do you use this? How is this good for me? How do I make this work? How can I use it, too?”Jason: I think it's been an incredible, [laugh] as you've been chatting, there are so many different tools and technologies that you've mentioned having used or being used at the bank. Which is both—it's interesting as a, like, there's so much going on in the bank; how do you manage it all? But it's also super interesting, I think, because it shows that there's a lot of interest in just finding the right solutions and finding the right tools, and not really being super-strongly married to one particular tool or one set way to do things, which I think is pretty cool. We're coming up towards the end of our time here, so I did want to ask you, before we sign off, Aaron, do you have anything that you'd like to plug, anything you want to promote?Aaron: Yeah, the Cloud Program is hiring a ton. There's lots of job openings on all of our platform teams. There's probably job openings on my Cloud Adoption Team. So, if you think the bank sounds interesting—the bank is very stable; that's always one of the nice things—but the bank… the thing about the bank, I originally joined the bank saying, “Oh, I'll be here two years, and I'll get bored and I'll leave,” and now it's been 12 years and I'm still at the bank. Because I mentioned, like, that scope and scale of the organization, there's always something interesting happening somewhere.So, if you're interested in cloud platform stuff, we've got a huge cloud platform. If you're in—like, you want to do machine-learning, we've got an entire organization. 
It should come as no surprise, we have lots of data at a bank, and there's a whole organization for all sorts of different things with machine-learning, deep learning, data analytics, big data, stuff like that. Like, if you think that's interesting, and even if you're not specifically in Toronto, Canada, you can probably find an interesting role within the organization if that's something that turns your crank.

Jason: Awesome. We'll post links to everything that we've mentioned, which is a ton. But go check us out, gremlin.com/podcast is where you can find the show notes for this episode, and we'll have links to everything. Aaron, thank you so much for joining us. It's been a pleasure to have you.

Aaron: Thanks so much for having me, Jason. I'm so happy that we got to do this.

Jason: For links to all the information mentioned, visit our website at gremlin.com/podcast. If you liked this episode, subscribe to the Break Things on Purpose podcast on Spotify, Apple Podcasts, or your favorite podcast platform. Our theme song is called “Battle of Pogs” by Komiku, and it's available on loyaltyfreakmusic.com.

Cloud Posse DevOps
Cloud Posse DevOps "Office Hours" (2022-04-20)

Cloud Posse DevOps "Office Hours" Podcast

Play Episode Listen Later Apr 20, 2022 56:02


Cloud Posse holds public "Office Hours" every Wednesday at 11:30am PST to answer questions on all things related to DevOps, Terraform, Kubernetes, CICD. Basically, it's like an interactive "Lunch & Learn" session where we get together for about an hour and talk shop. These are totally free and just an opportunity to ask us (or our community of experts) any questions you may have.

You can register here: https://cloudposse.com/office-hours
Join the conversation: https://slack.cloudposse.com/
Find out how we can help your company: https://cloudposse.com/quiz https://cloudposse.com/accelerate/
Learn more about Cloud Posse: https://cloudposse.com https://github.com/cloudposse https://sweetops.com/ https://newsletter.cloudposse.com https://podcast.cloudposse.com/

[00:00:00] Intro
[00:01:22] Terraform Experiment Update: Optional arguments in object variable type definition https://github.com/hashicorp/terraform/issues/19898#issuecomment-1101853833
[00:02:22] GitHub Says Hackers Breached Dozens of Organizations Using Stolen OAuth Access Tokens (from Heroku & TravisCI) https://thehackernews.com/2022/04/github-says-hackers-breach-dozens-of.html
[00:05:53] Terraform Data Source for AWS Pricing Data https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/pricing_product
[00:06:26] How to Make 100K/year on GitHub Sponsors https://calebporzio.com/i-just-hit-dollar-100000yr-on-github-sponsors-heres-how-i-did-it
[00:13:20] AWS Security Hub adds cross-Region security scores and compliance statuses https://aws.amazon.com/about-aws/whats-new/2022/04/aws-security-hub-cross-region-security-scores-compliance-statuses/
[00:15:58] FYI, AWS Single Sign-On is now HIPAA eligible https://aws.amazon.com/about-aws/whats-new/2022/04/aws-single-sign-on-hipaa-eligible/
[00:17:00] AWS Shield adds automatic application-layer DDoS mitigation for ALBs with WAF https://aws.amazon.com/about-aws/whats-new/2022/04/aws-shield-application-balancer-automatic-ddos-mitigation/
[00:23:01] Terraform + GitHub Actions & OIDC (via weekly.tf) https://blog.symops.com/2022/04/14/terraform-pipeline-with-github-actions-and-github-oidc-for-aws/
[00:24:03] Hierarchical YAML Configurations in Terraform https://github.com/lyraproj/hiera
[00:28:08] Rare Leakage of an S3 Stack Trace
[00:30:21] Cloud Posse “Activation Days”? Who is interested…
[00:32:27] What kind of a git repo structure do you recommend if I want to separate my terraform modules in repository?
[00:39:48] Are there any examples on the use of helmfile that showcase how one might use it in a "bigger" situation?
[00:54:53] Outro

#officehours, #cloudposse, #sweetops, #devops, #sre, #terraform, #kubernetes, #aws
Support the show (https://cloudposse.com/office-hours/)

airhacks.fm podcast with adam bien
Piranha: Headless Applets Loaded with Maven

airhacks.fm podcast with adam bien

Play Episode Listen Later Apr 3, 2022 65:59


An airhacks.fm conversation with Arjan Tijms (@arjan_tijms) about: Payara vs. GlassFish GitHub contributions, refactoring introduces technical debt, GlassFish relies on JDK dependencies, piranha.cloud contributes to GlassFish, the Payara and GlassFish communities are working together, contributing to open source to save time, piranha is MicroProfile 5.0 compatible for JWT, piranha passes the majority of TCK servlet tests, the various piranha editions, the "You don't need an application server to run Jakarta EE applications" article, AWS Serverless Java Container with Jersey integration, piranha nano is suitable for embedding, the Jakarta EE steering committee, Jakarta EE 10 is about new features, CDI-lite and back to code generation like in early EJB days, removing deprecated APIs from Jakarta EE, the SingleThreadModel in Servlets, using Java as a templating language in JSF, Wicket has a concept for programmatic view creation, JSF will add Swing-like view construction features, the OIDC authentication mechanism was contributed by Payara, piranha micro uses isolated classloaders, Maven dependencies as classpath, Arjan Tijms on Twitter: @arjan_tijms, Arjan's blog omnifaces and piranha.cloud

AWS Bites
12. How do you manage your AWS credentials?

AWS Bites

Play Episode Listen Later Nov 26, 2021 8:32


In this episode, Eoin and Luciano talk about how to manage AWS credentials and the different ways to do it, from the more traditional (and not recommended) IAM credentials to SSO.

In this episode we mentioned the following resources:
- GitHub integration with OIDC: https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services
- MFA access for assumed roles: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html
- AWS vault: https://github.com/99designs/aws-vault
- AWS SSO utils: https://github.com/benkehoe/aws-sso-util
- AWS SSO export credentials: https://github.com/benkehoe/aws-export-credentials

This episode is also available on YouTube: https://www.youtube.com/AWSBites

You can listen to AWS Bites wherever you get your podcasts:
- Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-bites/id1585489017
- Spotify: https://open.spotify.com/show/3Lh7PzqBFV6yt5WsTAmO5q
- Google: https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy82YTMzMTJhMC9wb2RjYXN0L3Jzcw==
- Breaker: https://www.breaker.audio/aws-bites
- RSS: https://anchor.fm/s/6a3312a0/podcast/rss

Do you have any AWS questions you would like us to address? Leave a comment here or connect with us on Twitter:
- https://twitter.com/eoins
- https://twitter.com/loige
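The "MFA access for assumed roles" resource above describes requiring an MFA code when a role is assumed. As a rough sketch of what that looks like from code, here is a minimal Python example using boto3; the role ARN, MFA device ARN, and token code are placeholder values, not anything from the episode:

```python
import boto3

sts = boto3.client("sts")

# Placeholder identifiers: substitute your own role and MFA device ARNs.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/example-admin",
    RoleSessionName="cli-session",
    SerialNumber="arn:aws:iam::123456789012:mfa/example-user",
    TokenCode="123456",          # the current code from the MFA device
    DurationSeconds=3600,
)

creds = response["Credentials"]

# Build a session from the temporary credentials and use it like any other.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("sts").get_caller_identity()["Arn"])
```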

Futurice Tech Weeklies
You might not want OpenID

Futurice Tech Weeklies

Play Episode Listen Later Oct 27, 2021 29:07


The modern de facto solution to identity management is OpenID Connect. OIDC and OAuth2 come with their own problems, though. The intention of this session is to look at some of the problems these frameworks bring, at some alternatives to OpenID for identity in your applications, and at the kinds of cases in which they might be applicable. Presenter: Mikael Viitaniemi

Futurice Tech Weeklies
You might not want OpenID (Audio Only)

Futurice Tech Weeklies

Play Episode Listen Later Oct 27, 2021 29:07


The modern de facto solution to identity management is OpenID Connect. OIDC and OAuth2 come with their own problems, though. The intention of this session is to look at some of the problems these frameworks bring, at some alternatives to OpenID for identity in your applications, and at the kinds of cases in which they might be applicable. Presenter: Mikael Viitaniemi

Kubernetes Podcast from Google
KEDA, with Tom Kerkhove

Kubernetes Podcast from Google

Play Episode Listen Later Aug 26, 2021 34:21


KEDA, the Kubernetes Event-Driven Autoscaler, is a project that adds superpowers to the Kubernetes horizontal pod autoscaler, including zero-to-one scaling. Celebrate KEDA reaching Incubation in the CNCF by listening to an interview with maintainer Tom Kerkhove from Codit. But first, learn about Craig's worst concert experience.

Do you have something cool to share? Some questions? Let us know:
web: kubernetespodcast.com
mail: kubernetespodcast@google.com
twitter: @kubernetespod

Chatter of the week:
- Correction to Episode 158: Mike Richards is no longer host of Jeopardy!
- Troy meets LeVar Burton
- The Chase (USA)
- The Chase (UK)
- The Judds
- Charlie Watts: Rolling Stones drummer dies at 80
- The Rolling Stones: A Bigger Bang tour
- Moving stage

News of the week:
- KEDA moves to CNCF Incubation
- Kubescape from ARMO Security
- GKE adds OIDC identity provider and gVNIC support
- Gloo Mesh 1.1
- Istio security announcement
- Envoy security announcement
- Cron jobs and timezones in Kubernetes

Links from the interview:
- KEDA: Kubernetes Event-Driven Autoscaling
- Bruges
- Codit
- Azure Service Fabric
- Azure Cloud Services
- Horizontal pod autoscaler
- Custom metrics in HPA (added in Kubernetes 1.6)
- Promitor: bridge between Azure Monitor and Prometheus
- KEDA announcement from Microsoft
- Scaling a deployment
- Scalers
- Microsoft moves KEDA to the CNCF Sandbox
- External scalers
- KEP for adding scale-to-zero to HPA
- Knative scale to zero
- CNCF Sandbox announcement
- Versions 1.0 and 2.0
- Users
- KEDA on GitHub
- Tom Kerkhove on Twitter and his blog

.NET in pillole
ASP.NET Core 6 and Authentication Servers

.NET in pillole

Play Episode Listen Later May 10, 2021 9:24


Here is the official answer regarding IdentityServer in the ASP.NET Core templates.
For those who want to read it:
https://devblogs.microsoft.com/aspnet/asp-net-core-6-and-authentication-servers/
https://github.com/dotnet/aspnetcore/issues/32494
Some alternatives:
https://azure.microsoft.com/en-us/services/active-directory/
https://github.com/openiddict/openiddict-core
https://www.keycloak.org/
https://wso2.com/
https://auth0.com/

Pensieri in codice
Ep.57 - Condividere in sicurezza informazioni e identità: Oauth 2.0 e OpenID Connect

Pensieri in codice

Play Episode Listen Later May 6, 2021 21:15


There are tools and features we use every day, sometimes without even realizing it, yet we know very little about them. OAuth and OpenID Connect are two of these protocols: they let us share information and identity across different sites, saving us time and effort. In this episode we try to understand how they work.

Links for today's episode:
OAuth 2.0 - https://oauth.net/2/
An Illustrated Guide to OAuth and OpenID Connect - https://tumblr.giaguaroblu.it/post/188612484387/an-illustrated-guide-to-oauth-and-openid-connect

------------------------------------------
Official Pensieri in codice site - https://pensieriincodice.it
To support the project:
Buy on Amazon* - https://amzn.to/2MGITWk
Wish list - https://pensieriincodice.it/360s8Kx
Equipment:
Shure MV7 USB Podcast Microphone* - https://amzn.to/3862ZRf
* Affiliate link: the cost of any purchase will not be higher for you, but Amazon will pass a small share of the proceeds on to me.
My social projects:
Pensieri in codice - https://pensieriincodice.it
Twitch channel - https://valeriogalano.it/twitch
Daredevel blog - https://valeriogalano.it/daredevel
Newsletter - https://valeriogalano.it/newsletter
To stay up to date on news:
Telegram channel - https://pensieriincodice.it/canaletelegram
Instagram profile - https://valeriogalano.it/instagram
Twitter profile - https://valeriogalano.it/twitter
To join the discussion:
Telegram group - http://bit.ly/joinPicTelegram
Professional services:
Private lessons on Docety - https://valeriogalano.it/docety
Professional consulting - https://valeriogalano.it
Credits:
Intro voice - Costanza Martina Vitale
Music - Kubbi - Up In My Jam
Music - Light-foot - Moldy Lotion
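Since the episode walks through how OAuth 2.0 and OpenID Connect let sites share identity, here is a minimal, hedged Python sketch of the authorization code flow; the endpoints, client ID, client secret, and redirect URI are made-up placeholders rather than values from the episode:

```python
import secrets
import urllib.parse

import requests  # third-party: pip install requests

# Hypothetical provider endpoints and client registration (placeholders).
AUTHORIZE_URL = "https://auth.example.com/authorize"
TOKEN_URL = "https://auth.example.com/token"
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"
REDIRECT_URI = "https://app.example.com/callback"


def build_authorization_url() -> tuple[str, str]:
    """Step 1: redirect the user's browser to the provider's authorize endpoint."""
    state = secrets.token_urlsafe(16)  # CSRF protection, verified on the callback
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile email",  # 'openid' also requests an OIDC ID token
        "state": state,
    }
    return f"{AUTHORIZE_URL}?{urllib.parse.urlencode(params)}", state


def exchange_code_for_tokens(code: str) -> dict:
    """Step 2: the backend swaps the one-time code for access (and ID) tokens."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # typically contains access_token, id_token, refresh_token
```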

Pensieri in codice
Condividere in Sicurezza Informazioni E Identità: Oauth 2.0 E OpenID Connect

Pensieri in codice

Play Episode Listen Later May 6, 2021 21:15


There are tools and features we use every day, sometimes without even realizing it, yet we know very little about them. OAuth and OpenID Connect are two of these protocols: they let us share information and identity across different sites, saving us time and effort. In this episode we try to understand how they work.

Links for today's episode:
OAuth 2.0 - https://oauth.net/2/
An Illustrated Guide to OAuth and OpenID Connect - https://tumblr.giaguaroblu.it/post/188612484387/an-illustrated-guide-to-oauth-and-openid-connect

——————————————
Official Pensieri in codice site - https://pensieriincodice.it
Equipment:
Shure MV7 USB Podcast Microphone* - https://amzn.to/3862ZRf
* Affiliate link: the cost of any purchase will not be higher for you, but Amazon will pass a small share of the proceeds on to me.
My social projects:
Pensieri in codice - https://pensieriincodice.it
Twitch channel - https://valeriogalano.it/twitch
Daredevel blog - https://valeriogalano.it/daredevel
Newsletter - https://valeriogalano.it/newsletter
To stay up to date on news:
Telegram channel - https://pensieriincodice.it/canaletelegram
Instagram profile - https://valeriogalano.it/instagram
Twitter profile - https://valeriogalano.it/twitter
To join the discussion:
Telegram group - http://bit.ly/joinPicTelegram
Professional services:
Private lessons on Docety - https://valeriogalano.it/docety
Professional consulting - https://valeriogalano.it
Support the project:
Support via Satispay
Support via Revolut
Support via PayPal
Support using Pensieri in codice's affiliate links: Amazon, Todoist, ProtonMail, ProtonVPN, Satispay
Partners: GrUSP (discount code for all events: community_PIC), Schrödinger Hat
Credits:
Editing - Daniele Galano - https://www.instagram.com/daniele_galano/
Intro voice - Costanza Martina Vitale
Music - Kubbi - Up In My Jam
Music - Light-foot - Moldy Lotion
Cover art and transcription - Francesco Zubani

Melbourne AWS User Group
What's New in February 2021

Melbourne AWS User Group

Play Episode Listen Later Mar 22, 2021 52:27


February was a slow month for AWS releases, but Arjen, JM, and Guy did their best to make it fun anyway. Except for those times where Arjen tried to convince everyone to fall asleep to his new podcast.

The News

The shameless plug:
- Arjen Without Sleep

Finally in Sydney:
- AWS Graviton2 M6g, C6g, and R6g instances now available in Asia Pacific (Seoul, Hong Kong) regions, and M6gd, C6gd, and R6gd instances now available in EU (Frankfurt), and Asia Pacific (Singapore, Sydney) regions
- Amazon EC2 M5zn instances, with high frequency processors and 100 Gbps networking, are now available in Asia Pacific (Singapore and Sydney)
- AWS Direct Connect Announces Native 100 Gbps Dedicated Connections at Select Locations

Serverless:
- AWS Lambda now supports Node.js 14
- Announcing General Availability of Amplify Flutter, with new data and authentication support

Containers:
- AWS App Mesh now supports mutual TLS authentication
- Amazon EKS clusters now support user authentication with OIDC compatible identity providers
- Introducing OIDC identity provider authentication for Amazon EKS | Containers
- Amazon EKS and EKS Distro now supports Kubernetes version 1.19
- AWS Fargate increases default resource count service quotas to 1000
- AWS Config now supports Amazon container services

EC2 & VPC:
- Introducing Amazon EC2 M5n, M5dn, R5n, and R5dn Bare Metal Instances
- Amazon Elastic File System triples read throughput
- AWS Elemental MediaLive adds support for VPC outputs
- AWS Backup Events and Metrics now available in Amazon CloudWatch
- Amazon Virtual Private Cloud (VPC) customers can now customize reverse DNS for their Elastic IP addresses
- Application Load Balancer now supports Application Cookie Stickiness
- Amazon VPC Traffic Mirroring is now supported on select non-Nitro instance types
- Scheduled Actions of Application Auto Scaling now support Local Time Zone
- Access Amazon EFS file systems from EC2 Mac instances running macOS Big Sur
- Amazon EC2 Auto Scaling now shows scaling history for deleted groups

Dev & Ops:
- Amazon CloudWatch Synthetics supports Amazon API Gateway in API blueprint
- Insights is now generally available for AWS X-Ray
- AWS Cloud9 launches visual source control integration for Git
- Assign a Delegated Administrator to manage AWS CloudFormation StackSets across your AWS Organization
- AWS CodeBuild supports Arm-based workloads using AWS Graviton2

Security:
- Amazon GuardDuty introduces machine learning domain reputation model to expand threat detection and improve accuracy
- Amazon Macie announces a slew of new capabilities including support for cross-account sensitive data discovery, scanning by Amazon S3 object prefix, improved pre-scan cost estimation, and added location detail in findings
- Introducing Amazon VPC Endpoints for AWS CloudHSM
- AWS WAF adds support for JSON parsing and inspection
- Support for KMS encryption on S3 buckets used by AWS Config

Data Storage & Processing:
- Amazon S3 now supports AWS PrivateLink
- Introducing Amazon EBS Local Snapshots on Outposts
- You now can use PartiQL with DynamoDB local to query, insert, update, and delete table data in Amazon DynamoDB
- Amazon Timestream now offers cross table queries, query execution statistics, and more
- Amazon Aurora Global Database supports managed planned failover
- Managed planned failovers with Amazon Aurora Global Database | AWS Database Blog
- Amazon RDS for MySQL and MariaDB support replication filtering
- Amazon S3 on Outposts adds a smaller storage tier
- Amazon Redshift Query Editor now supports clusters with enhanced VPC routing, longer query run times, and all node types
- Amazon RDS Publishes New Events for Multi-AZ Deployments
- Amazon RDS for SQL Server now supports Always On Availability Groups for Standard Edition
- Amazon Elasticsearch Service now supports rollups, reducing storage costs for extended retention
- Amazon RDS now supports PostgreSQL 13

AI & ML:
- Now create Amazon SageMaker Studio presigned URL with custom expiration time
- AWS DeepComposer launches Transformer Notebook on GitHub
- Automate quality inspection with Amazon Lookout for Vision — now generally available

Other cool stuff:
- Introducing Amazon CloudFront Security Savings Bundle
- Amazon SNS now supports 1-minute CloudWatch metrics
- Update content of inbound and outbound emails using AWS Lambda in Amazon WorkMail
- AWS Control Tower now provides region selection

The Nanos:
- AWS Console Mobile Application adds support for new regions
- Amazon Pinpoint now supports 10DLC and toll-free numbers
- Amazon SNS now supports 1-minute CloudWatch metrics

Sponsors
Gold Sponsor: Innablr
Silver Sponsors: AC3, CMD Solutions, DoIT International

The 6 Figure Developer Podcast
Episode 178 – Identity with Christos Matskas

The 6 Figure Developer Podcast

Play Episode Listen Later Jan 11, 2021 43:41


Microsoft Identity for developers and Security in the Cloud! Christos is a developer, speaker, writer, and Microsoft Program Manager for Microsoft Identity, doing advocacy at scale. What is OIDC compliance? Why is security important? Why not write our own? What are the options out there? What about Legacy migration to OIDC? Devil's advocate, do I need this for my son's soccer team website?   Links https://cmatskas.com/ https://twitter.com/christosmatskas https://www.linkedin.com/in/christosmatskas/   Resources https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Adal-to-Msal https://dev.to/425show/secure-asp-net-blazor-wasm-apps-and-apis-with-azure-ad-b2c-474g   "Tempting Time" by Animals As Leaders used with permissions - All Rights Reserved
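The Resources above point at the ADAL-to-MSAL migration guidance. For a feel of what calling Microsoft Identity from code looks like, here is a small, hedged sketch using the MSAL library for Python (the tenant ID, client ID, and secret are placeholders; the episode itself is .NET-focused):

```python
import msal  # pip install msal

# Placeholder app registration values: substitute your own tenant and client.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
CLIENT_SECRET = "app-secret-loaded-from-a-vault"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# Client-credentials flow: the app authenticates as itself (no user sign-in).
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    print("got a token, expires in", result["expires_in"], "seconds")
else:
    print("failed:", result.get("error"), result.get("error_description"))
```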

The Angular Show
E016 - Authentication and Authorization in Angular

The Angular Show

Play Episode Listen Later Jul 9, 2020 51:11


In this episode, Sam Julien, Google Developer Expert in Angular, Auth0 developer advocate, and expert (blue belt

CastOverflow
Ep. 03 - OpenID Connect

CastOverflow

Play Episode Listen Later Apr 10, 2020 2:39


In the previous episode we talked about OAuth, and in this episode we'll talk about OIDC, OpenID Connect! =)

The Podlets - A Cloud Native Podcast
The Dichotomy of Security (Ep 10)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Dec 30, 2019 44:20


Security is inherently dichotomous because it involves hardening an application to protect it from external threats, while at the same time ensuring agility and the ability to iterate as fast as possible. This in-built tension is the major focal point of today's show, where we talk about all things security. From our discussion, we discover that there are several reasons for this tension. The overarching problem with security is that the starting point is often rules and parameters, rather than understanding what the system is used for. This results in security being heavily constraining. For this to change, a culture shift is necessary, where security people and developers come around the same table and define what optimizing means to each of them. This, however, is much easier said than done, as security is usually only brought in at the later stages of development. We also discuss why the problem of security needs to be reframed, the importance of defining what normal functionality is, and issues around response and detection, along with many other security insights. The intersection of cloud native and security is an interesting one, so tune in today!

Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues

Hosts: Carlisia Campos, Duffie Cooley, Bryan Liles, Nicholas Lane

Key Points From This Episode:
- Often application and program security constrain optimum functionality.
- Generally, when security is talked about, it relates to the symptoms, not the root problem.
- Developers have not adapted internal interfaces to security.
- Look at what a framework or tool might be used for and then make constraints from there.
- The three frameworks people point to when talking about security: FISMA, NIST, and CIS.
- Trying to abide by all of the parameters is impossible.
- It is important to define what normal access is to understand what constraints look like.
- Why it is useful to use auditing logs in pre-production (see the sketch after this list).
- There needs to be a discussion between developers and security people.
- How security with Kubernetes and other cloud native programs works.
- There has been some growth in securing secrets in Kubernetes over the past year.
- Blast radius – why understanding the extent of a security malfunction's effect is important.
- Chaos engineering is a useful framework for understanding vulnerability.
- Reaching across the table – why open conversations are the best solution to the dichotomy.
- Security and developers need to have the same goals and jargon from the outset.
- The current model only brings security in at the end stages of development.
- There needs to be a place to learn what normal functionality looks like outside of production.
- How Google manages to run everything in production.
- It is difficult to come up with security solutions for differing contexts.
- Why people want service meshes.
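One of the key points above is learning what normal looks like from audit logs in a permissive pre-production environment and only then locking production down. Here is a minimal Python sketch of that idea; the event format is a simplified, made-up stand-in rather than any particular cloud provider's audit schema:

```python
import json
from collections import defaultdict


def build_allowlist(audit_log_lines):
    """Collapse observed (principal, action, resource) events into an allow-list."""
    allow = defaultdict(set)
    for line in audit_log_lines:
        event = json.loads(line)
        allow[event["principal"]].add((event["action"], event["resource"]))
    # Sort for a stable, reviewable policy document.
    return {principal: sorted(pairs) for principal, pairs in allow.items()}


# Two example events captured during a permissive pre-production run.
observed = [
    '{"principal": "svc-orders", "action": "sqs:SendMessage", "resource": "orders-queue"}',
    '{"principal": "svc-orders", "action": "dynamodb:GetItem", "resource": "orders-table"}',
]

# Anything outside this derived profile would be denied (or at least flagged) in production.
print(build_allowlist(observed))
```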
Quotes:
“You're not able to actually make use of the platform as it was designed to be made use of, when those constraints are too tight.” — @mauilion [0:02:21]
“The reason that people are scared of security is because security is opaque and security is opaque because a lot of people like to keep it opaque but it doesn't have to be that way.” — @bryanl [0:04:15]
“Defining what that normal access looks like is critical to us to our ability to constrain it.” — @mauilion [0:08:21]
“Understanding all the avenues that you could be impacted is a daunting task.” — @apinick [0:18:44]
“There has to be a place where you can go play and learn what normal is and then you can move into a world in which you can actually enforce what that normal looks like with reasonable constraints.” — @mauilion [0:33:04]
“You don't learn to ride a motorcycle on the street. You'd learn to ride a motorcycle on the dirt.” — @apinick [0:33:57]

Links Mentioned in Today's Episode:
AWS — https://aws.amazon.com/
Kubernetes — https://kubernetes.io/
IAM — https://aws.amazon.com/iam/
Securing a Cluster — https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/
TGI Kubernetes 065 — https://www.youtube.com/watch?v=0uy2V2kYl4U&list=PL7bmigfV0EqQzxcNpmcdTJ9eFRPBe-iZa&index=33&t=0s
TGI Kubernetes 066 — https://www.youtube.com/watch?v=C-vRlW7VYio&list=PL7bmigfV0EqQzxcNpmcdTJ9eFRPBe-iZa&index=32&t=0s
Bitnami — https://bitnami.com/
Target — https://www.target.com/
Netflix — https://www.netflix.com/
HashiCorp — https://www.hashicorp.com/
Aqua Sec — https://www.aquasec.com/
CyberArk — https://www.cyberark.com/
Jeff Bezos — https://www.forbes.com/profile/jeff-bezos/#4c3104291b23
Istio — https://istio.io/
Linkerd — https://linkerd.io/

Transcript: EPISODE 10 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores cloud native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn't reinvent the wheel. If you're an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.2] NL: Hello and welcome back to The Kubelets Podcast. My name is Nicholas Lane and this time, we're going to be talking about the dichotomy of security. And to talk about such an interesting topic, joining me are Duffie Coolie. [0:00:54.3] DC: Hey, everybody. [0:00:55.6] NL: Bryan Liles. [0:00:57.0] BM: Hello [0:00:57.5] NL: And Carlisia Campos. [0:00:59.4] CC: Glad to be here. [0:01:00.8] NL: So, how's it going everybody? [0:01:01.8] DC: Great. [0:01:03.2] NL: Yeah, this I think is an interesting topic. Duffie, you introduced us to this topic. And basically, what I understand, what you wanted to talk about, we're calling it the dichotomy of security because it's the relationship between security, like hardening your application to protect it from attack and influence from outside actors and agility to be able to create something that's useful, the ability to iterate as fast as possible. [0:01:30.2] DC: Exactly. I mean, the idea from this came from putting together a talks for the security conference coming up here in a couple of weeks. And I was noticing that obviously, if you look at the job of somebody who is trying to provide some security for applications on their particular platform, whether that be AWS or GCE or OpenStack or Kubernetes or anything of these things.
It’s frequently in their domain to kind of define constraints for all of the applications that would be deployed there, right? Such that you can provide rational defaults for things, right? Maybe you want to make sure that things can’t do a particular action because you don’t want to allow that for any application within your platform or you want to provide some constraint around quota or all of these things. And some of those constraints make total sense and some of them I think actually do impact your ability to design the systems or to consume that platform directly, right? You’re not able to actually make use of the platform as it was designed to be made use of, when those constraints are too tight. [0:02:27.1] DC: Yeah. I totally agree. There’s kind of a joke that we have in certain tech fields which is the primary responsibility of security is to halt productivity. It isn’t actually true, right? But there are tradeoffs, right? If security is too tight, you can’t move forward, right? Example of this that kind of mind are like, if you’re too tight on your firewall rules where you can’t actually use anything of value. That’s a quick example of like security gone haywire. That’s too controlling, I think. [0:02:58.2] BM: Actually. This is an interesting topic just in general but I think that before we fall prey to what everyone does when they talk about security, let’s take a step back and understand why things are the way they are. Because all we’re talking about are the symptoms of what’s going on and I’ll give you one quick example of why I say this. Things are the way they are because we haven’t made them any better. In developer land, whenever we consume external resources, what we were supposed to do and what we should be doing but what we don’t do is we should create our internal interfaces. Only program to those interfaces and then let that interface of that adapt or talk to the external service and in security world, we should be doing the same thing and we don’t do this. My canonical example for this is IAM on AWS. It’s hard to create a secure IM configuration and it’s even harder to keep it over time and it’s even harder to do it whenever you have 150, 100, 5,000 people dealing with this. What companies do is they actually create interfaces where they could describe the part of IAM they want to use and then they translate that over. The reason I bring this up is because the reason that people are scared of security is because security is opaque and security is opaque because a lot of people like to keep it opaque. But it doesn’t have to be that way. [0:04:24.3] NL: That’s a good point, that’s a reasonable design and wherever I see that devoted actually is very helpful, right? Because you highlight a critical point in that these constraints have to be understood by the people who are constrained by them, right? It will just continue to kind of like drive that wedge between the people who are responsible for them top finding t hem and the people who are being affected by them, right? That transparency, I think it’s definitely key. [0:04:48.0] BM: Right, this is our cloud native discussion, any idea of where we should start thinking about this in cloud native land? [0:04:56.0] DC: For my part, I think it’s important to understand if you can like what the consumer of a particular framework or tool might need, right? And then, just take it from there and figure out what rational constraints are. 
Rather than the opposite which is frequently where people go and evaluate a set of rules as defined by some particular, some third-part company. Like you look at CIS packs and you look at like a lot of these other tooling. I feel like a lot of people look at those as like, these are the hard rules, we must comply to all of these things. Legally, in some cases, that’s the case. But frequently, I think they’re just kind of like casting about for some semblance of a way to start defining constraint and they go too far, they’re no longer taking into account what the consumers of that particular platform might meet, right? Kubernetes is a great example of this. If you look at the CIS spec for Kubernetes or if you look at a lot of the talks that I’ve seen kind of around how to secure Kubernetes, we defined like best practices for security and a lot of them are incredibly restrictive, right? I think of the problem there is that restriction comes at a cost of agility. You’re no longer able to use Kubernetes as a platform for developing microservices because you provided so much constraints that it breaks the model, you know? [0:06:12.4] NL: Okay. Let’s break this down again. I can think of a top of my head, three types of things people point to when I’m thinking about security. And spoiler alert, I am going to do some acronyms but don’t worry about the acronyms are, just understand they are security things. The first one I’ll bring up is FISMA and then I’ll think about NIST and the next one is CIS like you brought up. Really, the reason they’re so prevalent is because depending on where you are, whether you’re in a highly regulated place like a bank or you’re working for the government or you have some kind of automate concern to say a PIPA or something like that. These are the words that the auditors will use with you. There is good in those because people don’t like the CIS benchmarks because sometimes, we don’t understand why they’re there. But, from someone who is starting from nothing, those are actually great, there’s at least a great set of suggestions. But the problem is you have to understand that they’re only suggestions and they are trying to get you to a better place than you might need. But, the other side of this is that, we should never start with NIST or CIS or FISMA. What we really should do is our CISO or our Chief Security Officer or the person in charge of security. Or even just our – people who are in charge, making sure our stack, they should be defining, they should be taking what they know, whether it’s the standards and they should be building up this security posture in this security document and these rules that are built to protect whatever we’re trying to do. And then, the developers of whoever else can operate within that rather than everything literally. [0:07:46.4] DC: Yeah, agreed. Another thing I’ve spent some time talking to people about like when they start rationalizing how to implement these things or even just think about the secure surface or develop a threat model or any of those things, right? One of the things that I think it’s important is the ability to define kind of like what normal looks like, right? What normal access between applications or normal access of resources looks like. I think that your point earlier, maybe provides some abstraction in front of a secure resource such that you can actually just share that same fraction across all the things that might try to consume that external resource is a great example of the thing. 
Defining what that normal access looks like is critical to us to our ability to constrain it, right? I think that frequently people don’t start there, they start with the other side, they’re saying, here are all the constraints, you need to tell me which ones are too tight. You need to tell me which ones to loosen up so that you can do your job. You need to tell me which application needs access to whichever application so that I can open the firewall for you. I’m like, we need to turn that on its head. We need the environments that are perhaps less secure so that we can actually define what normal looks like and then take that definition and move it into a more secured state, perhaps by defining these across different environments, right? [0:08:58.1] BM: A good example of that would be in larger organizations, at every part of the organization does this but there is environments running your application where there are really no rules applied. What we do with that is we turn on auditing in those environments so you have two applications or a single application that talks to something and you let that application run and then after the application run, you go take a look at the audit logs and then you determine at that point what a good profile of this application is. Whenever it’s in production, you set up the security parameters, whether it be identity access or network, based on what you saw in auditing in your preproduction environment. That’s all you could run because we tested it fully in our preproduction environment, it should not do any more than that. And that’s actually something – I’ve seen tools that will do it for AWS IM. I’m sure you can do for anything else that creates auditing law. That’s a good way to get started. [0:09:54.5] NL: It sounds like what we’re coming to is that the breakdown of security or the way that security has impacted agility is when people don’t take a rational look at their own use case. instead, rely too much on the guidance of other people essentially. Instead of using things like the CIS benchmarking or NIST or FISMA, that’s one that I knew the other two and I’m like, I don’t know this other one. If they follow them less as guidelines and more as like hard set rules, that’s when we get impacts or agility. Instead of like, “Hey. This is what my application needs like you’re saying, let’s go from there.” What does this one look like? Duffie is for saying. I’m kind of curious, let’s flip that on its head a little bit, are there examples of times when agility impacts security? [0:10:39.7] BM: You want to move fast and moving fast is counter to being secure? [0:10:44.5] NL: Yes. [0:10:46.0] DC: Yeah, literally every single time we run software. When it comes down to is developers are going to want to develop and then security people are going to want to secure. And generally, I’m looking at it from a developer who has written security software that a lot of people have used, you guys had know that. Really, there needs to be a conversation, it’s the same thing as we had this dev ops conversation for a year – and then over the last couple of years, this whole dev set ops conversation has been happening. We need to have this conversation because from a security person’s point of view, you know, no access is great access. No data, you can’t get owned if you don’t have any data going across the wire. You know what? Can’t get into that server if there’s no ports opened. 
But practically, that doesn’t work and we find is that there is actually a failing on both sides to understand what the other person was optimizing for. [0:11:41.2] BM: That’s actually where a lot of this comes from. I will offer up that the only default secure posture is no access to anything and you should be working from that direction to where you want to be rather than working from, what should we close down? You should close down everything and then you work with allowing this list for other than block list. [0:12:00.9] NL: Yeah, I agree with that model but I think that there’s an important step that has to happen before that and that’s you know, the tooling or thee wireless phone to define what the application looks like when it’s in a normal state or the running state and if we can accomplish that, then I feel like we’re in a better position to find what that LOI list looks like and I think that one of the other challenges there of course, let’s backup for a second. I have actually worked on a platform that supported many services, hundreds of services, right? Clearly, if I needed to define what normal looked like for a hundred services or a thousand services or 2,000 services, that’s going to be difficult in a way that people approach the problem, right? How do you define for each individual service? I need to have some decoration of intent. I need the developer to engage here and tell me, what they’re expecting, to set some assumptions about the application like what it’s going to connect to, those dependences are – That sort of stuff. And I also need tooling to verify that. I need to be able to kind of like build up the whole thing so that I have some way of automatically, you know, maybe with oversight, defining what that security context looks like for this particular service on this particular platform. Trying to do it holistically is actually I think where we get into trouble, right? Obviously, we can’t scale the number of people that it takes to actually understand all of these individual services. We need to actually scale this stuff as software problem instead. [0:13:22.4] CC: With the cloud native architecture and infrastructure, I wonder if it makes it more restrictive because let’s say, these are running on Kubernetes, everything is running at Kubernetes. Things are more connected because it’s a Kubernetes, right? It’s this one huge thing that you’re running on and Kubernetes makes it easier to have access to different notes and when the nodes took those apart, of course, you have to find this connection. Still, it’s supposed to make it easy. I wonder if security from a perspective of somebody, needing to put a restriction and add miff or example, makes it harder or if it makes it easier to just delegate, you have this entire area here for you and because your app is constrained to this space or name space or this part, this node, then you can have as much access as you need, is there any difference? Do you know what I mean? Does it make sense what I said? [0:14:23.9] BM: There was actually, it’s exactly the same thing as we had before. We need to make sure that applications have access to what they need and don’t have access to what they don’t need. Now, Kubernetes does make it easier because you can have network policies and you can apply those and they’re easier to manage than who knows what networking management is holding you have. 
Kubernetes also has pod security policies which again, actually confederates this knowledge around my pod should be able to do this or should not be able to run its root, it shouldn’t be able to do this and be able to do that. It’s still the same practice Carlisia, but the way that we can control it is now with a standard set off tools. We still have not cracked the whole nut because the whole thing of turning auditing on to understand and then having great tool that can read audit locks from Kubernetes, just still aren’t there. Just to add one more last thing that before we add VMWare and we were Heptio, we had a coworker who wrote basically dynamic audit and that was probably one of the first steps that we would need to be able to employ this at scale. We are early, early, super early in our journey and getting this right, we just don’t have all the necessary tools yet. That’s why it’s hard and that’s why people don’t do it. [0:15:39.6] NL: I do think it is nice to have t hose and primitives are available to people who are making use of that platform though, right? Because again, kind of opens up that conversation, right? Around transparency. The goal being, if you understood the tools that we’re defining that constraint, perhaps you’d have access to view what the constraints are and understand if they’re actually rational or not with your applications. When you’re trying to resolve like I have deployed my application in dev and it’s the wild west, there’s no constraints anywhere. I can do anything within dev, right? When I’m trying to actually promote my application to staging, it gives you some platform around which you can actually sa, “If you want to get to staging, I do have to enforce these things and I have a way and again, all still part of that same API, I still have that same user experience that I had when just deploying or designing the application to getting them deployed.” I could still look at again and understand what the constraints are being applied and make sure that they’re reasonable for my application. Does my application run, does it have access to the network resources that it needs to? If not, can I see where the gaps are, you know? [0:16:38.6] DC: For anyone listening to this. Kubernetes doesn’t have all the documentation we need and no one has actually written this book yet. But on Kubernetes.io, there are a couple of documents about security and if we have shownotes, I will make sure those get included in our shownotes because I think there are things that you should at least understand what’s in a pod security policy. You should at least understand what’s in a network security policy. You should at least understand how roles and role bindings work. You should understand what you’re going to do for certificate management. How do you manage this certificate authority in Kubernetes? How do you actually work these things out? This is where you should start before you do anything else really fancy. At least, understand your landscape. [0:17:22.7] CC: Jeffrey did a TGI K talk on secrets. I think was that a series? There were a couple of them, Duffie? [0:17:29.7] DC: Yeah, there were. I need to get back and do a little more but yeah. [0:17:33.4] BM: We should then add those to our shownotes too. Hopefully they actually exist or I’m willing to see to it because in assistance. [0:17:40.3] CC: We are going to have shownotes, yes. [0:17:44.0] NL: That is interesting point, bringing up secrets and secret management and also, like secured Inexhibit. 
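To make the namespace and RBAC pieces mentioned here a bit more concrete, this is a small, hedged sketch using the official Kubernetes Python client to create a deliberately narrow, read-only Role; the namespace, role name, and resource choices are placeholders:

```python
from kubernetes import client, config

# Assumes a working kubeconfig; inside a pod you would use load_incluster_config().
config.load_kube_config()

rbac = client.RbacAuthorizationV1Api()

# A deliberately narrow Role: read-only access to ConfigMaps in one namespace.
role = client.V1Role(
    metadata=client.V1ObjectMeta(name="configmap-reader", namespace="team-a"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],          # "" is the core API group
            resources=["configmaps"],
            verbs=["get", "list"],    # no create/update/delete
        )
    ],
)

rbac.create_namespaced_role(namespace="team-a", body=role)
# A RoleBinding (not shown) would then grant this Role to a specific ServiceAccount,
# keeping the allowed surface as small as the application actually needs.
```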
There are some tools that exist that we can use now in a cloud native world, at least in the container world. Things like vault exist, things like well, now, KBDM you can roll certificate which is really nice. We are getting to a place where we have more tooling available and I’m really happy about it. Because I remember using Kubernetes a year ago and everyone’s like, “Well. How do you secure a secret in Kubernetes?” And I’m like, “Well, it sure is basics for you to encode it. That’s on an all secure.” [0:18:15.5] BM: I would do credit Bitnami has been doing sealed secrets, that’s been out for quite a while but the problem is that how do you suppose to know about that and how are you supposed to know if it’s a good standard? And then also, how are you supposed to benchmark against that? How do you know if your secrets are okay? We haven’t talked about the other side which is response or detection of issues. We’re just talking about starting out, what do you do? [0:18:42.3] DC: That’s right. [0:18:42.6] NL: It is tricky. We’re just saying like, understanding all the avenues that you could be impacted is kind of a daunting task. Let’s talk about like the Target breach that occurred a few years ago? If anybody doesn’t remember this, basically, Target had a huge credit card breach from their database and basically, what happened is that t heir – If I recalled properly, their OIDC token had a – not expired but the audience for it was so broad that someone had hacked into one computer essentially like a register or something and they were able to get the OIDC token form the local machine. The authentication audience for that whole token was so broad that they were able to access the database that had all of the credit card information into it. These are one of these things that you don’t think about when you’re setting up security, when you’re just maybe getting started or something like that. What are the avenues of attack, right? You’d say like, “OIDC is just pure authentication mechanism, why would we need to concern ourselves with this?” And then but not understanding kind of what we were talking about last because the networking and the broadcasting, what is the blast radius of something like this and so, I feel like this is a good example of sometimes security can be really hard and getting started can be really daunting. [0:19:54.6] DC: Yeah, I agree. To Bryan’s point, it’s like, how do you test against this? How do you know that what you’ve defined is enough, right? We can define all of these constraints and we can even think that they’re pretty reasonable or rational and the application may come up and operate but how do you know? How can you verify that? What you’ve done is enough? And then also, remember. With OIDC has its own foundations and loft. You realize that it’s a very strong door but it’s only a strong door, it also do things that you can’t walk around a wall and that it’s protecting or climb over the wall that it’s protecting. There’s a bit of trust and when you get into things like the target breach, you really have to understand blast radius for anything that you’re going to do. A good example would be if you’re using shared key kind of things or like public share key. You have certificate authorities and you’re generating certificates. You should probably have multiple certificate authorities and you can have a basically, a hierarchy of these so you could have basically the root one controlled by just a few people in security. 
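The Target example above comes down to a token whose audience was far broader than the one service that should have accepted it. As a hedged illustration of the countermeasure, here is how a service might reject tokens not minted for it using the PyJWT library; the issuer, audience, and key handling are simplified placeholders:

```python
import jwt  # PyJWT: pip install pyjwt
from jwt import InvalidAudienceError, InvalidTokenError

EXPECTED_AUDIENCE = "https://payments.internal.example.com"  # this service only
EXPECTED_ISSUER = "https://login.example.com"                # the trusted OIDC issuer


def verify_token(token: str, signing_key) -> dict:
    """Accept only tokens that were scoped to this service's audience."""
    try:
        claims = jwt.decode(
            token,
            signing_key,
            algorithms=["RS256"],
            audience=EXPECTED_AUDIENCE,  # PyJWT rejects any other 'aud' value
            issuer=EXPECTED_ISSUER,
        )
    except InvalidAudienceError:
        raise PermissionError("token was issued for a different service")
    except InvalidTokenError as err:
        raise PermissionError(f"token rejected: {err}")
    return claims
```

Keeping the audience narrow means a token lifted from a compromised point-of-sale machine is useless against the database tier, which is exactly the blast-radius thinking discussed here.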
And then, each department has their own certificate authority and then you should also have things like revocation, you should be able to say that, “Hey, all this is bad and it should all go away and it probably should have every revocation list,” which a lot of us don’t have believe it or not, internally. Where if I actually kill our own certificate, a certificate was generated and I put it in my revocation list, it should not be served and in our clients that are accepting that our service is to see that, if we’re using client side certificates, we should reject these instantly. Really, what we need to do is stop looking at security as this one big thing and we need to figure out what are our blast radius. Firecracker, blowing up in my hand, it’s going to hurt me. But Nick, it’s not going to hurt you, you know? If someone drops in a huge nuclear bomb on the United States or the west coast United States, I’m talking to myself right now. You got to think about it like that. What’s the worst that can happen if this thing gets busted or get shared or someone finds that this should not happen? Every piece off data that you have that you consider secure or sensitive, you should be able to figure out what that means and that is how whenever you are defining a security posture that’s butchered to me. Because that is why you’ll notice that a lot of companies some of them do run open within a contained zone. So, within this contained zone you could talk to whomever you want. We don’t actually have to be secure here because if we lose one, we lost them all so who cares? So, we need to think about that and how do we do that in Kubernetes? Well, we use things like name spaces first of all and then we use things like this network policies and then we use things like pod security policies. We can lock some access down to just name spaces if need be. You can only talk to pods and your name space. And I am not telling you how to do this but you need to figure out talking with your developer, talking to the security people. But if you are in security you need to talk to your product management staff and your software engineering staff to figure out really how does this need to work? So, you realize that security is fun and we have all sorts of neat tools depending on what side you’re on. You know if you are on red team, you’re half knee in, you’re blue team you are saving things. We need to figure out these conversations and tooling comes from these conversations but we need to have these conversation first. [0:23:11.0] DC: I feel like a little bit of a broken record on this one but I am going to go back to chaos engineering again because I feel like it is critical to stuff like this because it enables a culture in which you can explore both the behavior of applications itself but why not also use this model to explore different ways of accessing that information? Or coming up with theories about the way the system might be vulnerable based on a particular attack or a type of attack, right? I think that this is actually one of the movements within our space that I think provides because then most hope in this particular scenario because a reasonable chaos engineering practice within an organization enables that ability to explore all of the things. You don’t have to be red team or blue team. 
You can just be somebody who understands this application well and the question for the day is, “How can we attack this application?” Let’s come up with theories about the way that perhaps this application could be attacked. Think about the problem differently instead of thinking about it as an access problem, think about it as the way that you extend trust to the other components within your particular distributed system like do they have access that they don’t need. Come up with a theory around being able to use some proxy component of another system to attack yet a third system. You know start playing with those ideas and prove them out within your application. A culture that embraces that I think is going to be by far a more secure culture because it lets developers and engineers explore these systems in ways that we don’t generally explore them. [0:24:36.0] BM: Right. But also, if I could operate on myself I would never need a doctor. And the reason I bring that up is because we use terms like chaos engineering and this is no disrespect to you Duffie, so don’t take it as this is panacea or this idea that we make things better and true. That is fine, it will make us better but the little secret behind chaos engineering is that it is hard. It is hard to build these experiments first of all, it is hard to collect results from these experiments. And then it is hard to extrapolate what you got out of the experiments to apply to whatever you are working on to repeat and what I would like to see is what people in our space is talking about how we can apply such techniques. But whether it is giving us more words or giving us more software that we can employ because I hate to say it, it is pretty chaotic in chaos engineering right now for Kubernetes. Because if you look at all the people out there who have done it well. And so, you look at what Netflix has done with pioneering this and then you listen to what, a company such us like Gremlin is talking about it is all fine and dandy. You need to realize that it is another piece of complexity that you have to own and just like any other things in the security world, you need to rationalize how much time you are going to spend on it first is the bottom line because if I have a “Hello, World!” app, I don’t really care about network access to that. Unless it is a “Hello, World!” app running on the same subnet as some doing some PCI data then you know it is a different conversation. [0:26:05.5] DC: Yeah. I agree and I am certainly not trying to version as a panacea but what I am trying to describe is that I feel like I am having a culture that embraces that sort of thinking is going to enable us to be in a better position to secure these applications or to handle a breach or to deal with very hard to understand or resolve problems at scale, you know? Whether that is a number of connections per second or whether that is a number of applications that we have horizontally scaled. You know like being able to embrace that sort of a culture where we asked why where we say “well, what if…” or if we actually come up you know embracing the idea of that curiosity that got you into this field, you know what I mean like the thing that is so frequently our cultures are opposite of that, right? It becomes a race to the finish and in that race to the finish, lots of pieces fall off that we are not even aware of, you know? That is what I am highlighting here when I talk about it. 
[0:26:56.5] NL: And so, it seems maybe the best solution to the dichotomy between security and agility is really just open conversation, in a way. People actually reaching across the aisle to talk to each other. So, if you are embracing this culture as you are saying Duffie the security team should be having constant communication with the application team instead of just like the team doing something wrong and the security team coming down and smacking their hand. And being like, “Oh you can’t do it this way because of our draconian rules” right? These people are working together and almost playing together a little bit inside of their own environment to create also a better environment. And I am sorry.I didn’t mean to cut you off there, Bryan. [0:27:34.9] BM: Oh man, I thought it was fleeting like all my thoughts. But more about what you are saying is, is that you know it is not just more conversations because we can still have conversations and I am talking about sider and subnets and attack vectors and buffer overflows and things like that. But my developer isn’t talking, “Well, I just need to be able to serve this data so accounting can do this.” And that’s what happens a lot in security conversations. You have two groups of individuals who have wholly different goals and part of that conversation needs to be aligning or jargon and then aligning on those goals but what happens with pretty much everything in the development world, we always bring our networking, our security and our operations people in right at the end, right when we are ready to ship, “Hey make this thing work.” And really it is where a lot of our problems come out. Now security either could or wanted to be involved at the beginning of a software project what we actually are talking about what we are trying to do. We are trying to open up this service to talk to this, share this kind of data. Security can be in there early saying, “Oh no you know, we are using this resource in our cloud provider. It doesn’t really matter what cloud provider and we need to protect this. This data is sitting here at rest.” If we get those conversations earlier, it would be easier to engineer solutions that to be hopefully reused so we don’t have to have that conversation in the future. [0:29:02.5] CC: But then it goes back to the issue of agility, right? Like Duffie was saying, wow you can develop, I guess a development cluster which has much less restrictive restrictions and they move to a production environment where the proper restrictions are then – then you find out or maybe station environment let’s say. And then you find out, “Oh whoops. There are a bunch of restrictions I didn’t deal with but I didn’t move a lot faster because I didn’t have them but now, I have to deal with them.” [0:29:29.5] DC: Yeah, do you think it is important to have a promotion model in which you are able to move toward a more secure deployment right? Because I guess a parallel to this is like I have heard it said that you should develop your monolith first and then when you actually have the working prototype of what you’re trying to create then consider carefully whether it is time to break this thing up into a set of distinct services, right? And consider carefully also what the value of that might be? And I think that the reason that that’s said is because it is easier. It is going to be a lower cognitive load with everything all right there in the same codebase. 
You understand how all of these pieces interconnect and you can quickly develop or prototype what you are working on. Whereas if you are trying to develop these things as individual microservices first, it is harder to figure out where the line is. Like, where to divide all of the business logic. I think this is also important when you are thinking about the security aspects of this, right? Being able to do a thing in which you are not constrained, to define all of these services in your application and the model for how they communicate without constraint, is important. And once you have that, when you actually understand what normal looks like for that set of applications, then enforce it, right? If you are able to declare that intent, you are going to say, these are the ports these things listen on, these are the things that they are going to access, this is the way that they are going to go about accessing them. You know, if you can declare that intent, then that is a reasonable body of knowledge for which the security people can come along and say, “Okay well, you have told us. You informed us. You have worked with us to tell us what your intent is. We are going to enforce that intent and see what falls out, and we can iterate there.” [0:31:01.9] CC: Yeah, everything you said makes sense to me. Starting with build the monolith first. I mean, when you start out, why would you abstract things that you don’t really – I mean, you might think you know, but you only really know in practice what you are going to need to abstract. So, don’t abstract things too early. I am a big fan of that idea. So yeah, start with the monolith and then you figure out how to break it down based on what you need. With security, I would imagine the same idea resonates with me. Don’t secure things that you don’t yet know need securing, except the deal-breaker things. Like, there are some things we know, like we don’t want certain types of production data being accessed; some things we know we need to secure from the beginning. [0:31:51.9] BM: Right. But I will still reiterate that it is always deny by default, just remember that. Security is actually the opposite way. We want to make sure that we allow the least amount, and even if it is harder for us, you always want to start with “only allow TCP communication on port 443,” or UDP as well. That is what I would allow, rather than starting open and then saying shut everything else off. I would rather have it the way where we only allow that, and that also goes with the declarative nature of cloud native things we like anyway. We just say what we want and everything else doesn’t exist. [0:32:27.6] DC: I do want to clarify though, because I think you and I are the representatives of the dichotomy right at this moment, right? I feel like what you are saying is the constraint should be the normal: being able to drop all traffic, do not allow anything, is normal, and then you have to declare intent to open anything up. And what I am saying is that frequently developers don’t know what normal looks like yet. They need to be able to explore what normal looks like by developing these patterns and then enforce them, right, which is turning the model on its head.
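To make the “declare your intent, deny by default” idea in this exchange concrete, here is a minimal sketch of what such a declaration can look like in Kubernetes, written with the Go client libraries. The namespace, labels, and port are hypothetical, and it assumes a cluster whose CNI plugin actually enforces NetworkPolicy; treat it as an illustration of the workflow rather than a recommended policy.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	tcp := corev1.ProtocolTCP
	port := intstr.FromInt(443)

	// Declared intent: pods labeled app=payments accept TCP/443 from pods
	// labeled app=web, and nothing else. Selecting these pods with an empty
	// ingress rule list would be the "deny everything" starting point.
	policy := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-web-to-payments", Namespace: "demo"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"app": "payments"}},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
				}},
				Ports: []networkingv1.NetworkPolicyPort{{Protocol: &tcp, Port: &port}},
			}},
		},
	}

	if _, err := client.NetworkingV1().NetworkPolicies("demo").Create(context.TODO(), policy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

The value is less in these specific types than in the shape of the exercise: the application team writes down who talks to whom and on which ports, and the security team gets an artifact it can review, enforce, and iterate on.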
And this is actually I think the kernel that I am trying to get to in this conversation is that there has to be a place where you can go play and learn what normal is and then you can move into a world in which you can actually enforce what that normal looks like with reasonable constraint. But until you know what that is, until you have that opportunity to learn it, all we are doing here is restricting your ability to learn. We are adding friction to the process. [0:33:25.1] BM: Right, well I think what I am trying to say here layer on top of this is that yes, I agree but then I understand what a breach can do and what bad security can do. So I will say, “Yeah, go learn. Go play all you want but not on software that will ever make it to production. Go learn these practices but you are going to have to do it outside of” – you are going to have a sandbox and that sandbox is going to be unconnected from the world I mean from our obelisk and you are going to have to learn but you are not going to practice here. This is not where you learn how to do this. [0:33:56.8] NL: Exactly right, yeah. You don’t learn to ride a motorcycle on the street you know? You’d learn to ride a motorcycle on the dirt and then you could take those skills later you know? But yeah I think we are in agreement like production is a place where we do have to enforce all of those things and having some promotion level in which you can come from a place where you learned it to a place where you are beginning to enforce it to a place where it is enforced I think is also important. And I frequently describe this as like development, staging and production, right? Staging is where you are going to hit the edges from because this is where you’re actually defining that constraint and it has to be right before it can be promoted to production, right? And I feel like the middle ground is also important. [0:34:33.6] BM: And remember that production is any environment production can reach. Any environment that can reach production is production and that is including that we do data backup dumps and we clean them up from production and we use it as data in our staging environment. If production can directly reach staging or vice versa, it is all production. That is your attack vector. That is also what is going to get in and steal your production data. [0:34:59.1] NL: That is absolutely right. Google actually makes an interesting not of caveat to that but like side point to that where like if I understand the way that Google runs, they run everything in production, right? Like dev, staging and production are all the same environment. I am more positing this is a question because I don’t know if anybody of us have the answer but I wonder how they secure their infrastructure, their environment well enough to allow people to play to learn these things? And also, to deploy production level code all in the same area? That seems really interesting to be and then if I understood that I probably would be making a lot more money. [0:35:32.6] BM: Well it is simple really. There were huge people process at Google that access gatekeeper for a lot of these stuff. So, I have never worked in Google. I have no intrinsic knowledge of Google or have talked to anyone who has given me this insight, this is all speculation disclaimer over. But you can actually run a big cluster that if you can actually prove that you have network and memory and CPU isolation between containers, which they can in certain cases and certain things that can do this. 
What you can do is you can use your people process and your approvals to make sure that software gets to where it needs to be. So, you can still play on the same clusters but we have great handles on network that you can’t talk to these networks or you can’t use this much network data. We have great things on CPU that this CPU would be a PCI data. We will not allow it unless it’s tied to CPU or it is PCI. Once you have that in place, you do have a lot more flexibility. But to do that, you will have to have some pretty complex approval structures and then software to back that up. So, the burden on it is not on the normal developer and that is actually what Google has done. They have so many tools and they have so many processes where if you use this tool it actually does the process for you. You don’t have to think about it. And that is what we want our developers to be. We want them to be able to use either our networking libraries or whenever they are building their containers or their Kubernetes manifest, use our tools and we will make sure based on either inspection or just explicit settings that we will build something that is as secure as we can given the inputs. And what I am saying is hard and it is capital H hard and I am actually just pitting where we want to be and where a lot of us are not. You know most people are not there. [0:37:21.9] NL: Yeah, it would be nice if we had like we said earlier like more tooling around security and the processes and all of these things. One thing I think that people seem to balk on or at least I feel is developing it for their own use case, right? It seems like people want an overarching tool to solve all the use cases in the world. And I think with the rise of cloud native applications and things like container orchestration, I would like to see people more developing for themselves around their own processes, around Kubernetes and things like that. I want to see more perspective into how people are solving their security problems, instead of just like relying on let’s say like HashiCorp or like Aqua Sec to provide all the answers like I want to see more answers of what people are doing. [0:38:06.5] BM: Oh, it is because tools like Vault are hard to write and hard to maintain and hard to keep correct because you think about other large competitors to vault and they are out there like tools like CyberArk. I have a secret and I want to make sure only certain will keep it. That is a very difficult tool but the HashiCorp advantage here is that they have made tools to speak to people who write software or people who understand ops not just as a checkbox. It is not hard to get. If you are using vault it is not hard to get a secret out if you have the right credentials. Other tools is super hard to get the secret out if you even have the right credential because they have a weird API or they just make it very hard for you or they expect you to go click on some gooey somewhere. And that is what we need to do. We need to have better programming interfaces and better operator interfaces, which extends to better security people are basis for you to use these tools. You know I don’t know how well this works in practice. 
But the Jeff Bezos idea, how teams at AWS or Amazon, you know, communicate through APIs: I am not saying that you shouldn’t talk, but we should definitely make sure that our APIs between teams, the team who owns the security stuff and the teams who are writing developer stuff, let us talk at the same level of fidelity that we can have in an in-person conversation. We should be able to do that through our software as well, whether that be asking for ports, or asking for resources, or just talking about the problem that we have. That is my thought-leadering answer to this. This is “Bryan wants to be a VP of something one day,” and that is the answer I am giving. I’m going to be the CIO; that is my CIO answer. [0:39:43.8] DC: I like it. So cool. [0:39:45.5] BM: Is there anything else on this subject that we wanted to hit? [0:39:48.5] NL: No, I think we have actually touched on pretty much everything. We got a lot out of this and I am always impressed with the direction that we go. I did not expect us to go down this route and I was very pleased with the discussion we have had so far. [0:39:59.6] DC: Me too. I think if we are going to explore anything else that we talked about, it is, you know, getting more into that state where we are talking about how we need more feedback loops. We need developers to talk to security people. We need security people to talk to developers. We need to have some way of actually pushing that feedback loop, much like some of the other cultural changes that we have seen in our industry trying to allow for better feedback loops in other spaces. And you’ve brought up DevSecOps, which is another move to try and open up that feedback loop, but the problem I think is still going to be that even if we improved that feedback loop, we are at an age where – especially if you ended up in some of the larger organizations, there are too many applications to solve this problem for, and I don’t know yet how to address this problem in that context, right? If you are in a state where you are a 20-person, 30-person security team and your responsibility is to secure a platform that is running a number of Kubernetes clusters, a number of vSphere clusters, a number of cloud provider implementations, whether that would be AWS or GC, I mean, that is a set of problems that is very difficult. It is like, I am not sure that improving the feedback loop really solves it. I know that it helps, but I definitely, you know, I have empathy for those folks for sure. [0:41:13.0] CC: Security is not my forte at all, because whenever I am developing, I have a narrow need. You know, I have to access a cluster, I have to access a machine, or I have to be able to access the database. And it is usually a no-brainer, but I get a lot of the issues that were brought up. As a builder of software, I have empathy for people who use software, consume software, mine and others, and how little visibility they have as far as security goes. For example, in the world of cloud native, let’s say you are using Kubernetes, I sort of start thinking, “Well, shouldn’t there be a scanner that just lets me declare?” I think I am starting an episode right now – should there be a scanner that lets me declare, for example, this node can only access this set of nodes, like a graph. You just declare, and then you run it periodically and you make sure it still holds. Of course, this goes down to part of an app only being able to access part of the database. It can get very granular, but maybe at a very high level, I mean, how hard can this be?
For example, this pod can only access those pods, but this pod cannot access this namespace, and just keep checking: what if the namespace changes, the permissions change? Or, for example, only allow these users to do a backup, because they are the same users who will have access to the restore, so they have access to all the data, you know what I mean? Just keep checking that is in place, and that it only changes when you want it to. [0:42:48.9] BM: So, I mean, I know we are at the end of this call and I want to start a whole new conversation, but this is actually why there are applications out there like Istio and Linkerd. This is why people want service meshes, because they can turn off all network access and then just use the service mesh to do the communication, and then they can make sure that it is encrypted on both sides and that it is authenticated on both sides. That is why these exist. [0:43:15.1] CC: We’re definitely going to have an episode, or multiple, on service mesh, but we are at the top of the hour. Nick, do your thing. [0:43:23.8] NL: All right, well, thank you so much for joining us on another interesting discussion at The Kubelets Podcast. I am Nicholas Lane. Duffie, any final thoughts? [0:43:32.9] DC: There is a whole lot to discuss, I really enjoyed our conversations today. Thank you everybody. [0:43:36.5] NL: And Bryan? [0:43:37.4] BM: Oh, it was good being here. Now it is lunch time. [0:43:41.1] NL: And Carlisia? [0:43:42.9] CC: I love learning from you all, thank you. Glad to be here. [0:43:46.2] NL: Totally agree. Thank you again for joining us and we’ll see you next time. Bye. [0:43:51.0] CC: Bye. [0:43:52.1] DC: Bye. [0:43:52.6] BM: Bye. [END OF EPISODE] [0:43:54.7] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END]

The Podlets - A Cloud Native Podcast

Welcome to the first episode of The Podlets Podcast! On the show today we’re kicking it off with some introductions to who we all are, how we got involved in VMware and a bit about our career histories up to this point. We share our vision for this podcast and explain the unique angle from which we will approach our conversations, a way that will hopefully illuminate some of the concepts we discuss in a much greater way. We also dive into our various experiences with open source, share what some of our favorite projects have been and then we define what the term “cloud native” means to each of us individually. The contribution that the Cloud Native Computing Foundation (CNCF) is making in the industry is amazing, and we talk about how they advocate the programs they adopt and just generally impact the community. We are so excited to be on this podcast and to get feedback from you, so do follow us on Twitter and be sure to tune in for the next episode! Note: our show changed name to The Podlets. Follow us: https://twitter.com/thepodlets Hosts: Carlisia Campos Kris Nóva Josh Rosso Duffie Cooley Nicholas Lane Key Points from This Episode: An introduction to us, our career histories and how we got into the cloud native realm. Contributing to open source and everyone’s favorite project they have worked on. What the purpose of this podcast is and the unique angle we will approach topics from. The importance of understanding the “why” behind tools and concepts. How we are going to be interacting with our audience and create a feedback loop. Unpacking the term “cloud native” and what it means to each of us. Differentiating between the cloud native apps and cloud native infrastructure. The ability to interact with APIs as the heart of cloud natives. More about the Cloud Native Computing Foundation (CNCF) and their role in the industry. Some of the great things that happen when a project is donated to the CNCF. The code of conduct that you need to adopt to be part of the CNCF. And much more! 
Quotes:
“If you tell me the how before I understand what that even is, I'm going to forget.” — @carlisia [0:12:54]
“I firmly believe that you can't – that you don't understand a thing if you can't teach it.” — @mauilion [0:13:51]
“When you're designing software and you start your main function to be built around the cloud, or to be built around what the cloud enables us to do in the services a cloud to offer you, that is when you start to look at cloud native engineering.” — @krisnova [0:16:57]
Links Mentioned in Today’s Episode:
Kubernetes — https://kubernetes.io/
The Podlets on Twitter — https://twitter.com/thepodlets
VMware — https://www.vmware.com/
Nicholas Lane on LinkedIn — https://www.linkedin.com/in/nicholas-ross-lane
Red Hat — https://www.redhat.com/
CoreOS — https://coreos.com/
Duffie Cooley on LinkedIn — https://www.linkedin.com/in/mauilion
Apache Mesos — http://mesos.apache.org/
Kris Nova on LinkedIn — https://www.linkedin.com/in/kris-nova
SolidFire — https://www.solidfire.com/
NetApp — https://www.netapp.com/us/index.aspx
Microsoft Azure — https://azure.microsoft.com/
Carlisia Campos on LinkedIn — https://www.linkedin.com/in/carlisia
Fastly — https://www.fastly.com/
FreeBSD — https://www.freebsd.org/
OpenStack — https://www.openstack.org/
Open vSwitch — https://www.openvswitch.org/
Istio — https://istio.io/
The Kublets on GitHub — https://github.com/heptio/thekubelets
Cloud Native Infrastructure on Amazon — https://www.amazon.com/Cloud-Native-Infrastructure-Applications-Environment/dp/1491984309
Cloud Native Computing Foundation — https://www.cncf.io/
Terraform — https://www.terraform.io/
KubeCon — https://www.cncf.io/community/kubecon-cloudnativecon-events/
The Linux Foundation — https://www.linuxfoundation.org/
Sysdig — https://sysdig.com/opensource/falco/
OpenEBS — https://openebs.io/
Aaron Crickenberger — https://twitter.com/spiffxp
Transcript: [INTRODUCTION] [0:00:08.1] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concept, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.3] KN: Welcome to the podcast. [0:00:42.5] NL: Hi. I’m Nicholas Lane. I’m a cloud native Architect. [0:00:45.0] CC: Who do you work for, Nicholas? [0:00:47.3] NL: I've worked for VMware, formerly of Heptio. [0:00:50.5] KN: I think we’re all VMware, formerly Heptio, aren’t we? [0:00:52.5] NL: Yes. [0:00:54.0] CC: That is correct. It just happened that way. Now Nick, why don’t you tell us how you got into this space? [0:01:02.4] NL: Okay. I originally got into the cloud native realm working for Red Hat as a consultant. At the time, I was doing OpenShift consultancy. Then my boss, Paul, Paul London, left Red Hat and I decided to follow him to CoreOS, where I met Duffie and Josh. We were on the field engineering team there and the sales engineering team. Then from there, I found myself at Heptio and now with VMware. Duffie, how about you? [0:01:30.3] DC: My name is Duffie Cooley. I'm also a cloud native architect at VMware, also recently Heptio and CoreOS. I've been working in technologies like cloud native for quite a few years now. I started my journey moving from virtual machines into containers with Mesos.
I spent some time working on Mesos and actually worked with a team of really smart individuals to try and develop an API in front of that crazy Mesos thing. Then we realized, “Well, why are we doing this? There is one that's called Kubernetes. We should jump on that.” That's the direction in my time with containerization and cloud native stuff has taken. How about you Josh? [0:02:07.2] JR: Hey, I’m Josh. I similar to Duffie and Nicholas came from CoreOS and then to Heptio and then eventually VMware. Actually got my start in the middleware business oddly enough, where we worked on the Egregious Spaghetti Box, or the ESB as it’s formally known. I got to see over time how folks were doing a lot of these, I guess, more legacy monolithic applications and that sparked my interest into learning a bit about some of the cloud native things that were going on. At the time, CoreOS was at the forefront of that. It was a natural progression based on the interests and had a really good time working at Heptio with a lot of the folks that are on this call right now. Kris, you want to give us an intro? [0:02:48.4] KN: Sure. Hi, everyone. Kris Nova. I've been SRE DevOps infrastructure for about a decade now. I used to live in Boulder, Colorado. I came out of a couple startups there. I worked at SolidFire, we went to NetApp. I used to work on the Linux kernel there some. Then I was at Deis for a while when I first started contributing to Kubernetes. We got bought by Microsoft, left Microsoft, the Azure team. I was working on the original managed Kubernetes there. Left that team, joined up with Heptio, met all of these fabulous folks. I think, I wrote a book and I've been doing a lot of public speaking and some other junk along the way. Yeah. Hi. What about you, Carlisia? [0:03:28.2] CC: All right. I think it's really interesting that all the guys are lined up on one call and all the girls on another call. [0:03:34.1] NL: We should have probably broken it up more. [0:03:36.4] CC: I am a developer and have always been a developer. Before joining Heptio, I was working for Fastly, which is a CDN company. They’re doing – helping them build the latest generation of their TLS management system. At some point during my stay there, Kevin Stuart was posting on Twitter, joined Heptio. At this point, Heptio was about, I don't know, between six months in a year-old. I saw those tweets go by I’m like, “Yeah, that sounds interesting, but I'm happy where I am.” I have a very good friend, Kennedy actually. He saw those tweets and here he kept saying to me, “You should apply. You should apply, because they are great people. They did great things. Kubernetes is so hot.” I’m like, “I'm happy where I am.” Eventually, I contacted Kevin and he also said, “Yeah, that it would be a perfect match.” two months later decided to apply. The people are amazing. I did think that Kubernetes was really hard, but my decision-making went towards two things. The people are amazing and some people who were working there I already knew from previous opportunities. Some of the people that I knew – I mean, I love everyone. The only thing was that it was an opportunity for me to work with open source. I definitely could not pass that up. I could not be happier to have made that decision now with VMware acquiring Heptio, like everybody here I’m at VMware. Still happy. [0:05:19.7] KN: Has everybody here contributed to open source before? [0:05:22.9] NL: Yup, I have. [0:05:24.0] KN: What's everybody's favorite project they've worked on? 
[0:05:26.4] NL: That's an interesting question. From a business aspect, I really like Dex. Dex is an identity provider, or middleware for identity providers. It provides an OIDC endpoint for multiple different identity providers. You can absorb them into Kubernetes. Since Kubernetes only has an OIDC – only accepts OIDC JWT tokens for authentication, that functionality that Dex provides is probably my favorite thing. Although, if I'm going to be truly honest, I think right now the thing that I'm the most excited about working on is my own project, which is starting to tie into my interest in doing chaos engineering. What about you guys? What’s your favorite? [0:06:06.3] KN: I understood some of those words. NL: Those are things we'll touch on in different episodes. [0:06:12.0] KN: Yeah. I worked on FreeBSD for a while. That was my first welcome to open source. I mean, that was back in the olden days of IRC clients and writing C. I had a lot of fun, and still I'm really close with a lot of folks in the FreeBSD community, so that always has a special place in my heart, I think, just that was my first experience of like, “Oh, this is how you work on a team and you work collaboratively together and it's okay to fail and be open.” [0:06:39.5] NL: Nice. [0:06:40.2] KN: What about you, Josh? [0:06:41.2] JR: I worked on a project at CoreOS. Well, a project that's still out there called ALB Ingress controller. It was a way to bring the AWS ALBs, which are just layer 7 load balancers, and take the Kubernetes Ingress API, attach those two together so that the ALB could serve ingress. The reason that it was the most interesting, technology aside, is just it went from something that we started just myself and a colleague, and eventually gained community adoption. We had to go through the process of just being us two worrying about our concerns, to having to bring on a large community that had their own business requirements and needs, and having to say no at times and having to encourage individuals to contribute when they had ideas and issues, because we didn't have the bandwidth to solve all those problems. It was interesting not necessarily from a technical standpoint, but just to see what it actually means when something starts to gain traction. That was really cool. Yeah, how about you Duffie? [0:07:39.7] DC: I've worked on a number of projects, but I find that generally where I fit into the ecosystem is basically helping other people adopt open source technologies. I spent quite a bit of my time working on OpenStack and I spent some time working on Open vSwitch and recently in Kubernetes. Generally speaking, I haven't found myself to be much of a contributor of code to those projects per se, but more like my work is just enabling people to adopt those technologies, because I understand the breadth of the project more than the detail of some particular aspect. Lately, I've been spending some time working more on the SIG Network and SIG-cluster-lifecycle stuff. Some of the projects that have really caught my interest are things like Kind, which is Kubernetes in Docker, and working on KubeADM itself, just making sure that we don't miss anything obvious in the way that KubeADM is being used to manage the infrastructure again. [0:08:34.2] KN: What about you, Carlisia? [0:08:36.0] CC: I realize it's a mission what I'm working on at VMware. That is coincidentally the project – the open source project that is my favorite.
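For anyone who has not bumped into the OIDC mechanics Nick refers to: an issuer such as Dex signs an ID token (a JWT), and anything that trusts that issuer verifies the token's signature and claims. Below is a small sketch of that verification step using the go-oidc library; the issuer URL and client ID are made-up values, and this is an illustration of the flow rather than what the Kubernetes API server literally runs.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/coreos/go-oidc/v3/oidc"
)

func main() {
	ctx := context.Background()

	// Discover the issuer's signing keys and endpoints. Dex would be the
	// issuer here, fronting whatever upstream identity providers it federates.
	provider, err := oidc.NewProvider(ctx, "https://dex.example.com")
	if err != nil {
		log.Fatal(err)
	}

	// Verify an ID token (a signed JWT) that a client presented to us.
	verifier := provider.Verifier(&oidc.Config{ClientID: "kubernetes"})
	idToken, err := verifier.Verify(ctx, os.Getenv("RAW_ID_TOKEN"))
	if err != nil {
		log.Fatal(err)
	}

	// Pull out the claims an API server would map to a user and groups.
	var claims struct {
		Email  string   `json:"email"`
		Groups []string `json:"groups"`
	}
	if err := idToken.Claims(&claims); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("user=%s groups=%v\n", claims.Email, claims.Groups)
}
```

The API server does the equivalent of this when it is started with its --oidc-issuer-url and --oidc-client-id flags pointed at a provider like Dex.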
I didn't have a lot of experience with open source, just minor contributions here and there before this project. I'm working with Valero. It's a disaster recovery tool for Kubernetes. Like I said, it's open source. We’re coming up to version 1 pretty soon. The other maintainers are amazing, super knowledgeable and very experienced, mature. I have such a joy to work with them. My favorites. [0:09:13.4] NL: That's awesome. [0:09:14.7] DC: Should we get into the concept of cloud native and start talking about what we each think of this thing? Seems like a pretty loaded topic. There are a lot of people who would think of cloud native as just a generic term, we should probably try and nail it down here. [0:09:27.9] KN: I'm excited for this one. [0:09:30.1] CC: Maybe we should talk about what this podcast show is going to be? [0:09:34.9] NL: Sure. Yeah. Totally. [0:09:37.9] CC: Since this is our first episode. [0:09:37.8] NL: Carlisia, why don't you tell us a little bit about the podcast? [0:09:40.4] CC: I will be glad to. The idea that we had was to have a show where we can discuss cloud native concepts. As opposed to talking about particular tools or particular project, we are going to aim to talk about the concepts themselves and approach it from the perspective of a distributed system idea, or issue, or concept, or a cloud native concept. From there, we can talk about what really is this problem, what people or companies have this problem? What usually are the solutions? What are the alternative ways to solve this problem? Then we can talk about tools that are out there that people can use. I don't think there is a show that approaches things from this angle. I'm really excited about bringing this to the community. [0:10:38.9] KN: It's almost like TGIK, but turned inside out, or flipped around where TGIK, we do tools first and we talk about what exactly is this tool and how do you use it, but I think this one, we're spinning that around and we're saying, “No, let's pick a broader idea and then let's explore all the different possibilities with this broader idea.” [0:10:59.2] CC: Yeah, I would say so. [0:11:01.0] JR: From the field standpoint, I think this is something we often times run into with people who are just getting started with larger projects, like Kubernetes perhaps, or anything really, where a lot of times they hear something like the word Istio come out, or some technology. Often times, the why behind it isn't really considered upfront, it's just this tool exists, it's being talked about, clearly we need to start looking at it. Really diving into the concepts and the why behind it, hopefully will bring some light to a lot of these things that we're all talking about day-to-day. [0:11:31.6] CC: Yeah. Really focusing on the what and the why. The how is secondary. That's what my vision of this show is. [0:11:41.7] KN: I like it. [0:11:43.0] NL: That's something that really excites me, because there are a lot of these concepts that I talk about in my day-to-day life, but some of them, I don't actually think that I understand pretty well. It's those words that you've heard a million times, so you know how to use them, but you don't actually know the definition of them. [0:11:57.1] CC: I'm super glad to hear you say that mister, because as a developer in many not a system – not having a sysadmin background. Of course, I did sysadmin things as a developer, but not it wasn't my day-to-day thing ever. 
When I started working with Kubernetes, a lot of things I didn't quite grasp and that's a super understatement. I noticed that I mean, I can ask questions. No problem. I will dig through and find out and learn. The problem is that in talking to experts, a lot of the time when people, I think, but let me talk about myself. A lot of time when I ask a question, the experts jump right to the how. What is this? “Oh, this is how you do it.” I don't know what this is. Back off a little bit, right? Back up. I don't know what this is. Why is this doing this? I don't know. If you tell me the how before I understand what that even is, I'm going to forget. That's what's going to happen. I mean, it’s great you're trying to make an effort and show me the how to do something. This is personal, the way I learn. I need to understand the how first. This is why I'm so excited about this show. It's going to be awesome. This is what we’re going to talk about. [0:13:19.2] DC: Yeah, I agree. This is definitely one of the things that excites me about this topic as well, is that I find my secret super power is troubleshooting. That means that I can actually understand what the expected relationships between things should do, right? Rather than trying to figure out. Without really digging into the actual problem of stuff and what and the how people were going, or the people who were developing the code were trying to actually solve it, or thought about it. It's hard to get to the point where you fully understand that that distributed system. I think this is a great place to start. The other thing I'll say is that I firmly believe that you can't – that you don't understand a thing if you can't teach it. This podcast for me is about that. Let's bring up all the questions and we should enable our audience to actually ask us questions somehow, and get to a place where we can get as many perspectives on a problem as we can, such that we can really dig into the detail of what the problem is before we ever talk about how to solve it. Good stuff. [0:14:18.4] CC: Yeah, absolutely. [0:14:19.8] KN: Speaking of a feedback loop from our audience and taking the problem first and then solution in second, how do we plan on interacting with our audience? Do we want to maybe start a GitHub repo, or what are we thinking? [0:14:34.2] NL: I think a GitHub repo makes a lot of sense. I also wouldn't mind doing some social media malarkey, maybe having a Twitter account that we run or something like that, where people can ask questions too. [0:14:46.5] CC: Yes. Yes to all of that. Yeah. Having an issue list that in a repo that people can just add comments, praises, thank you, questions, suggestions for concepts to talk about and say like, “Hey, I have no clue what this means. Can you all talk about it?” Yeah, we'll talk about it. Twitter. Yes. Interact with those on Twitter. I believe our Twitter handle is TheKubelets. [0:15:12.1] KN: Oh, we already have one. Nice. [0:15:12.4] NL: Yes. See, I'm learning something new already. [0:15:15.3] CC: We already have. I thought you were all were joking. We have the Kubernetes repo. We have a github repo called – [0:15:22.8] NL: Oh, perfect. [0:15:23.4] CC: heptio/thekubelets. [0:15:27.5] DC: The other thing I like that we do in TGIK is this HackMD thing. Although, I'm trying to figure out how we could really make that work for us in a show that's recorded every week like this one. 
I think, maybe what we could do is have it so that when people can listen to the recording, they could go to the HackMD document, put questions in or comments around things if they would like to hear more about, or maybe share their perspectives about these topics. Maybe in the following week, we could just go back and review what came in during that period of time, or during the next session. [0:15:57.7] KN: Yeah. Maybe we're merging the HackMD on the next recording. [0:16:01.8] DC: Yeah. [0:16:03.3] KN: Okay. I like it. [0:16:03.6] DC: Josh, you have any thoughts? Friendster, MySpace, anything like that? [0:16:07.2] JR: No. I think we could pass on MySpace for now, but everything else sounds great. [0:16:13.4] DC: Do we want to get into the meat of the episode? [0:16:15.3] KN: Yeah. [0:16:17.2] DC: Our true topic, what does cloud native mean to all of us? Kris, I'm interested to hear your thoughts on this. You might have written a book about this? [0:16:28.3] KN: I co-authored a book called Cloud Native Infrastructure, which it means a lot of things to a lot of people. It's one of those umbrella terms, like DevOps. It's up to you to interpret it. I think in the past couple of years of working in the cloud native space and working directly at the CNCF as a CNCF ambassador, Cloud Native Computing Foundation, they're the open source nonprofit folks behind this term cloud native. I think the best definition I've been able to come up with is when you're designing software and you start your main function to be built around the cloud, or to be built around what the cloud enables us to do in the services a cloud to offer you, that is when you start to look at cloud native engineering. I think all cloud native infrastructure is, it's designing software that manages and mutates infrastructure in that same way. I think the underlying theme here is we're no longer caddying configurations disk and doing system D restarts. Now we're just sending HTTPS API requests and getting messages back. Hopefully, if the cloud has done what we expect it to do, that broadcast some broader change. As software engineers, we can count on those guarantees to design our software around. I really think that you need to understand that it's starting with the main function first and completely engineering your app around these new ideas and these new paradigms and not necessarily a migration of a non-cloud native app. I mean, you technically could go through and do it. Sure, we've seen a lot of people do it, but I don't think that's technically cloud native. That's cloud alien. Yeah. I don't know. That's just my thought. [0:18:10.0] DC: Are you saying that cloud native approach is a greenfield approach generally? To be a cloud native application, you're going to take that into account in the DNA of your application? [0:18:20.8] KN: Right. That's exactly what I'm saying. [0:18:23.1] CC: It's interesting that never said – mentioned cloud alien, because that plays into the way I would describe the meaning of cloud native. I mean, what it is, I think Nova described it beautifully and it's a lot of – it really shows her know-how. For me, if I have to describe it, I will just parrot things that I have read, including her book. What it means to me, what it means really is I'm going to use a metaphor to explain what it means to me. Given my accent, I’m obviously not an American born, and so I'm a foreigner. Although, I do speak English pretty well, but I'm not native. English is not my native tongue. 
I speak English really well, but there are certain hiccups that I'm going to have every once in a while. There are things that I'm not going to know what to say, or it's going to take me a bit long to remember. I rarely run into not understanding it, something in English, but it happens sometimes. That's the same with the cloud native application. If it hasn't been built to run on cloud native platforms and systems, you can migrate an application to cognitive environment, but it's not going to fully utilize the environments, like a native app would. That's my take. [0:19:56.3] KN: Cloud immigrant. [0:19:57.9] CC: Cloud immigrant. Is Nick a cloud alien? [0:20:01.1] KN: Yeah. [0:20:02.8] CC: Are they cloud native alien, or cloud native aliens. Yeah. [0:20:07.1] JR: On that point, I'd be curious if you all feel there is a need to discern the notion of cloud native infrastructure, or platforms, then the notion of cloud native apps themselves. Where I'm going with this, it's funny hearing the Greenfield thing and what you said, Carlisia, with the immigration, if you will, notion. Oftentimes, you see these very cloud native platforms, things, the amount of Kubernetes, or even Mesos or whatever it might be. Then you see the applications themselves. Some people are using these platforms that are cloud native to be a forcing function, to make a lot of their legacy stuff adopt more cloud native principles, right? There’s this push and pull. It's like, “Do I make my app more cloud native? Do I make my infrastructure more cloud native? Do I do them both at the same time?” Be curious what your thoughts are on that, or if that resonates with you at all. [0:21:00.4] KN: I've got a response here, if I can jump in. Of course, Nova with opinions. Who would have thought? I think what I'm hearing here, Josh is as we're using these cloud native platforms, we're forcing the hand of our engineers. In a world where we may be used to just send this blind DNS request out so whatever, and we would be ignorant of where that was going, now in the cloud native world, we know there's the specific DNS implementation that we can count on. It has this feature set that we can guarantee our software around. I think it's a little bit of both and I think that there is definitely an art to understanding, yes, this is a good idea to do both applications and infrastructure. I think that's where you get into this what it needs to be a cloud native engineer. Just in the same traditional legacy infrastructure stack, there's going to be good engineering choices you can make and there's going to be bad ones and there's many different schools of thought over do I go minimalist? Do I go all in at once? What does that mean? I think we're seeing a lot of folks try a lot of different patterns here. I think there's pros and cons though. [0:22:03.9] CC: Do you want to talk about this pros and cons? Do you see patterns that are more successful for some kinds of company versus others? [0:22:11.1] KN: I mean, I think going back to the greenfield thing that we were talking about earlier, I think if you are lucky enough to build out a greenfield application, you're able to bake in greenfield infrastructure management instead as well. That's where you get these really interesting hybrid applications, just like Kubernetes, that span the course of infrastructure and application. 
If we were to go into Kubernetes and say, “I wanted to define a service of type load balancer,” it’s actually going to go and create a load balancer for you and actually mutate that underlying infrastructure. The only way we were able to get that power and get that paradigm is because on day one, we said we're going to do that as software engineers; taking the infrastructure where you were hidden behind the firewall, or hidden behind the load balancer in the past. The software would have no way to reason about it. They’re blind and greenfield really is going to make or break your ability to even you take the infrastructure layers. [0:23:04.3] NL: I think that's a good distinction to make, because something that I've been seeing in the field a lot is that the users will do cloud native practices, but they’ll use a tool to do the cloud native for them, right? They'll use something along the lines of HashiCorp’s Terraform to create the VMs and the load balancers for them. It's something I think that people forget about is that the application themselves can ask for these resources as well. Terraform is just using an API and your code can use an API to the same API, in fact. I think that's an important distinction. It forces the developer to think a little bit like a sysadmin sometimes. I think that's a good melding of the dev and operations into this new word. Regrettably, that word doesn't exist right now. [0:23:51.2] KN: That word can be cloud native. [0:23:53.3] DC: Cloud here to me breaks down into a different set of topics as well. I remember seeing a talk by Brandon Phillips a few years ago. In his talk, he was describing – he had some numbers up on the screen and he was talking about the fact that we were going to quickly become overwhelmed by the desire to continue to develop and put out more applications for our users. His point was that every day, there's another 10,000 new users of the Internet, new consumers that are showing up on the Internet, right? Globally, I think it's something to the tune of about 350,000 of the people in this room, right? People who understand infrastructure, people who understand how to interact with applications, or to build them, those sorts of things. There really aren't a lot of people who are in that space today, right? We're surrounded by them all the time, but they really just globally aren't that many. His point is that if we don't radically change the way that we think about the development as the deployment and the management of all of these applications that we're looking at today, we're going to quickly be overrun, right? There aren't going to be enough people on the planet to solve that problem without thinking about the problem in a fundamentally different way. For me, that's where the cloud native piece comes in. With that, comes a set of primitives, right? You need some way to automate, or to write software that will manage other software. You need the ability to manage the lifecycle of that software in a resilient way that can be managed. There are lots of platforms out there that thought about this problem, right? There are things like Mesos, there are things like Kubernetes. There's a number of different shots on goal here. There are lots of things that I've really tried to think about that problem in a fundamentally different way. 
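Kris's Service-of-type-LoadBalancer example, and Nick's point that your own code can call the same API that Terraform calls, fit in a few lines of Go. This is a minimal sketch against a hypothetical cluster and label; it assumes client-go is available and that the cluster sits on a cloud provider whose controllers actually provision load balancers.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Declaring a Service of type LoadBalancer asks the cloud provider's
	// controller to create real infrastructure on our behalf.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "web", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "web"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
		},
	}

	created, err := client.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// The external address shows up later in status.loadBalancer.ingress,
	// once the provider has finished provisioning.
	fmt.Println("created service:", created.Name)
}
```

kubectl talks to the same endpoint; here the application itself is just another API client asking the platform to mutate infrastructure on its behalf.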
I think of those primitives that being able to actually manage the lifecycle of software, being able to think about packaging that software in such a way that it can be truly portable, the idea that you have some API abstraction that brings again, that portability, such that you can make use of resources that may not be hosted on your infrastructure on your own personal infrastructure, but also in the cloud, like how do we actually make that API contract so complete that you can just take that application anywhere? These are all part of that cloud native definition in my opinion. [0:26:08.2] KN: This is so fascinating, because the human race totally already learned this lesson with the Linux kernel in the 90s, right? We had all these hardware manufacturers coming out and building all these different hardware components with different interfaces. Somebody said, “Hey, you know what? There's a lot of noise going on here. We should standardize these and build a contract.” That contract then implemented control loops, just like in Kubernetes and then Mesos. Poof, we have the Linux kernel now. We're just distributed Linux kernel version 2.0. The human race is here repeating itself all over again. [0:26:41.7] NL: Yeah. It seems like the blast radius of Linux kernel 2.0 is significantly higher than the Linux kernel itself. That made it sound like I was like, pooh-poohing what you're saying. It’s more like, we're learning the same lesson, but at a grander scale now. [0:27:00.5] KN: Yeah. I think that's a really elegant way of putting it. [0:27:03.6] DC: You do raise a good point. If you are embracing on a cloud native infrastructure, remember that little changes are big changes, right? Because you're thinking about managing the lifecycle of a thousand applications now, right? If you're going full-on cloud native, you're thinking about operating at scale, it's a byproduct of that. Little changes that you might be able to make to your laptop are now big changes that are going to affect a fleet of thousand machines, right? [0:27:30.0] KN: We see this in Kubernetes all the time, where a new version of Kubernetes comes out and something totally unexpected happens when it is ran at scale. Maybe it worked on 10 nodes, but when we need to fire up a thousand nodes, what happens then? [0:27:42.0] NL: Yeah, absolutely. That actually brings up something that to me, defines cloud native as well. A lot of my definition of cloud native follows in suit with Kris Nova's book, or Kris Nova, because your book was what introduced me to the phrase cloud native. It makes sense that your opinion informs my opinion, but something that I think that we were just starting to talk about a little bit is also the concept of stability. Cloud native applications and infrastructure means coding with instability in mind. It's not being guaranteed that your VM will live forever, because it's on somebody else's hardware, right? Their hardware could go down, and so what do you do? It has to move over really quickly, has to figure out, have the guarantees of its API and its endpoints are all going to be the same no matter what. All of these things have to exist for the code, or for your application to live in the cloud. That's something that I find to be very fascinating and that's something that really excites me, is not trying to make a barge, but rather trying to make a schooner when you're making an app. Something that can, instead of taking over the waves, can be buffeted by the waves and still continue. [0:28:55.6] KN: Yeah. 
It's a little more reactive. I think we see this in Kubernetes a lot. When I interviewed Joe a couple years ago, Joe Beda for the book to get a quote from him, he said, this magic phrase that has stuck with me over the past few years, which is “goal-seeking behavior.” If you look at a Kubernetes object, they all use this concept in Go called embedding. Every Kubernetes object has a status in the spec. All it is is it’s what's actually going on, versus what did I tell it, what do I want to go on. Then all we're doing is just like you said with your analogy, is we're just trying to be reactive to that and build to that. [0:29:31.1] JR: That's something I wonder if people don't think about a lot. They don't they think about the spec, but not the status part. I think the status part is as important, or more important maybe than the spec. [0:29:41.3] KN: It totally is. Because I mean, a status like, if you have one potentiality for status, your control loop is going to be relatively trivial. As you start understanding more of the problems that you could see and your code starts to mature and harden, those statuses get more complex and you get more edge cases and your code matures and your code hardens. Then we can take that and globally in these larger cloud native patterns. It's really cool. [0:30:06.6] NL: Yeah. Carlisia, you’re a developer who's now just getting into the cloud native ecosystem. What are your thoughts on developing with cloud native practices in mind? [0:30:17.7] CC: I’m not sure I can answer that. When I started developing for Kubernetes, I was like, “What is a pod?” What comes first? How does this all fit together? I joined the project [inaudible 00:30:24]. I don't have to think about that. It's basically moving the project along. I don't have to think what I have to do differently from the way I did things before. [0:30:45.1] DC: One thing that I think you probably ran into in working with the application is the management of state and how that relates to – where you actually end up coupling that state. Before in development, you might just assume that there is a database somewhere that you would have to interact with. That database is a way of actually pushing that state off of the code that you're actually going to work with. In this way, that you might think of being able to write multiple consumers of state, or multiple things that are going to mutate state and all share that same database. This is one of the patterns that comes up all the time when we start talking about cloud native architectures, is because we have to really be very careful about how we manage that state and mainly, because one of the other big benefits of it is the ability to horizontally scale things that are going to mutate, or consume state. [0:31:37.5] CC: My brain is in its infancy as it relates to Kubernetes. All that I see is APIs all the way down. It's just APIs all the way down. It’s not very different than as a developer for me, is not very much more complex than developing against the database that sits behind. Ask me again a year from now and I will have a more interesting answer. [0:32:08.7] KN: This is so fascinating, right? I remember a couple years ago when Kubernetes was first coming out and listening to some of the original “Elders of Kubernetes,” and even some of the stuff that we were working on at this time. One of the things that they said was we hope one day, somebody doesn't have to care about what's passed these APIs and gets to look at Kubernetes as APIs only. 
Then they hear that come from you authentically, it's like, “Hey, that's our success statement there. We nailed it.” It's really cool. [0:32:37.9] CC: Yeah. I don’t understood their patterns and I probably should be more cognizant about these patterns are, even if it's just to articulate them. To me, my day-to-day challenge is understanding the API, understanding what library call do I make to make this happen and how – which is just programming 101 almost. Not different from any other regular project. [0:33:10.1] JR: Yeah. That is something that's nice about programming with Kubernetes in mind, because a lot of times you can use the source code as documentation. I hate to say that particularly is a non-developer. I'm a sysadmin first getting into development and documentation is key in my mind. There's been more than a few times where I'm like, “How do I do this?” You can look in the source code for pretty much any application that you're using that's in Kubernetes, or around the Kubernetes ecosystem. The API for that application is there and it'll tell you what you need to do, right? It’s like, “Oh, this is how you format your config file. Got it.” [0:33:47.7] CC: At the same time, I don't want to minimize that knowing what the patterns are is very useful. I haven't had to do any design for Valero for our projects. Maybe if I had, I would have to be forced to look into that. I'm still getting to know the codebase and developing features, but no major design that I had to lead at least. I think with time, I will recognize those patterns and it will make it easier for me to understand what is happening. What I was saying is that not understanding the patterns that are behind the design of those APIs doesn't preclude me at all so call against it, against them. [0:34:30.0] KN: I feel this is the heart of cloud native. I think we totally nailed it. The heart of cloud native is in the APIs and your ability to interact with the APIs. That's what makes it programmable and that's what makes – gives you the interface for you and your software to interact with that. [0:34:45.1] DC: Yeah, I agree with that. API first. On the topic of cloud native, what about the Cloud Native Computing Foundation? What are our thoughts on the CNCF and what is the CNCF? Josh, you have any thoughts on that? [0:35:00.5] JR: Yeah. I haven't really been as close to the CNCF as I probably should, to be honest with you. One of the great things that the CNCF has put together are programs around getting projects into this, I don't know if you would call it vendor neutral type program. Maybe somebody can correct me on that. Effectively, there's a lot of different categories, like networking and storage and runtimes for containers and things of that nature. There's a really cool landscape that can show off a lot of these different technologies. A lot of the categories, I'm guessing we'll be talking about on this podcast too, right? Things like, what does it mean to do cloud native networking and so on and so forth? That's my purview of the CNCF. Of course, they put on KubeCon, which is the most important thing to me. I'm sure someone else on this call can talk deeper at an organization level what they do. [0:35:50.5] KN: I'm happy to jump in here. I've been working with them for I think three years now. I think first, it's important to know that they are a subsidiary of the Linux Foundation. 
The Linux Foundation is the original open source, nonprofit here, and then the CNCF is one of many, like Apache is another one that is underneath the broader Linux Foundation umbrella. I think the whole point of – or the CNCF is to be this neutral party that can help us as we start to grow and mature the ecosystem. Obviously, money is going to be involved here. Obviously, companies are going to be looking out for their best interest. It makes sense to have somebody managing the software that is outside, or external of these revenue-driven companies. That's where I think the CNCF comes into play. I think that's its main responsibility is. What happens when somebody from company A and somebody from Company B disagree with the direction that the software should go? The CNCF can come in and say, “Hey, you know what? Let's find a happy medium in here and let's find a solution that works for both folks and let's try to do this the best we can.” I think a lot of this came from lessons we learned the hard way with Linux. In a weird way, we did – we are in version 2.0, but we were able to take advantage of some of the priority here. [0:37:05.4] NL: Do you have any examples of a time in the CNCF jumped in and mediated between two companies? [0:37:11.6] KN: Yeah. I think the steering committee, the Kubernetes steering committee is a great example of this. It's a relatively new thing. It hasn't been around for a very long time. You look at the history of Kubernetes and we used to have this incubation process that has since been retired. We've tried a lot of solutions and the CNCF has been pretty instrumental and guiding the shape of how we're going to manage, solve governance for such a monolithic project. As Kubernetes grows, the problem space grows and more people get involved. We're having to come up with new ways of managing that. I think that's not necessarily a concrete example of two specific companies, but I think that's more of as people get involved, the things that used to work for us in the past are no longer working. The CNCF is able to recognize that and guide us out of that. [0:37:57.2] DC: Cool. That’s such a very good perspective on the CNCF that I didn't have before. Because like Josh, my perspective with CNCF was well, they put on that really cool party three times a year. [0:38:07.8] KN: I mean, they definitely are great at throwing parties. [0:38:12.6] NL: They are that. [0:38:14.1] CC: My perspective of the CNCF is from participating in the Kubernetes meetup here in San Diego. I’m trying to revive our meetup, which is really hard to do, but different topic. I know that they try to make it easier for people to find meetups, because they have on meetup.com, they have an organization. I don't know what the proper name is, but if you go there and you put your zip code, you'll find any meetup that's associated with them. My meetup here in San Diego is associated, can be easily found. They try to give a little bit of money for swags. We also give out ads for meetup. They offer help for finding speakers and they also have a speaker catalog on their website. They try to help in those ways, which I think is very helpful, very valuable. [0:39:14.9] DC: Yeah, I agree. I know about CNCF, mostly just from interacting with folks who are working on its behalf. Working at meeting a bunch of the people who are working on the Kubernetes project, on behalf of the CNCF, folks like Ihor and people like that, which are constantly amazingly with the amount of work that they do on behalf of the CNCF. 
I think it's been really good seeing what it means to provide governance over a project. I think that really highlights – that's really highlighted by the way that Kubernetes itself is managed. I think a lot of us on the call have probably worked with OpenStack and remember some of the crazy battles that went on between vendors around particular components in that stack. I've yet to actually really see that level of noise creep into the Kubernetes situation. I think that falls squarely on the CNCF around managing governance, and also on the community for just making it accessible enough that people can plug into it, without actually having to get into a battle about taking ownership of CNI, for example. Nobody should own CNI. That should be its own project under its own governance. How you satisfy the needs for something like container networking should be a project that you develop as a company, and you can make the very best one that you could make and attract as many customers to it as you want. Fundamentally, the way that you interface to that major project should be something that is abstracted in such a way that it isn't owned by any one company. There should be a contract and an API, that sort of thing. [0:40:58.1] KN: Yeah. I think the best analogy I ever heard was like, "We're just building USB plugs." [0:41:02.8] DC: That's actually really great. [0:41:05.7] JR: To that point, Duffie, I think what's interesting is more and more companies are looking to the CNCF to determine what they're going to place their bets on from a technology perspective, right? Because they've been so burned historically by some project owned by one vendor and they don't really know where it's going to end up and so on and so forth. It's really become a very serious thing when people consider the technologies they're going to bet their business on. [0:41:32.0] DC: Yeah. When a project is absorbed into the CNCF, or donated to the CNCF, I guess. There are a number of projects that this has happened to. Obviously, if you see that eye chart that is the CNCF landscape, there's just tons of things happening inside of there. It's a really interesting process, but I think that from my part, I remember recently seeing Sysdig Falco show up in that list and seeing them donate – seeing Sysdig donate Falco to the CNCF was probably one of the first times that I've actually really tried to see what happens when that happens. I think that some of the neat stuff here that happens is that now this is an open source project. It's under the governance of the CNCF. It feels to me like a more approachable project, right? I don't feel like I have to deal with Sysdig directly to interact with Falco, or to contribute to it. It opens that ecosystem up around this idea, or the genesis of the idea that they built around Falco, which I think is really powerful. What do you all think of that? [0:42:29.8] KN: I think, to look at it from a different perspective, that's one example of when the CNCF helps a project liberate itself. There's plenty of other examples out there where the CNCF is an opt-in feature, that is only there if we need it. I think cluster API, which I'm sure we're going to talk about in a later episode, is one. I mean, just as a quick overview, it's a lot of different vendors implementing the same API and making that composable and modular. I mean, nowhere along the way in the history of that project has the CNCF had to come and step in. We've been able to operate independently of that.
I think because the CNCF is even there, we all are under this working agreement of we're going to take everybody's concerns into consideration and we're going to take everybody's use case into consideration, and work together as an ecosystem. I think it's just about even having that in place; whether you use it or not is a different story. [0:43:23.4] CC: Do you all know any projects under the CNCF? [0:43:26.1] KN: I have one. [0:43:27.7] JR: Well, I've heard of this one. It's called Kubernetes. [0:43:30.1] CC: Is it called Kubernetes or Kubernetes? [0:43:32.8] JR: It's called Kubernetes. [0:43:36.2] CC: Wow. That's not what Duffie thinks. [0:43:38.3] DC: I don't say it that way. No, it's been pretty fascinating seeing just the breadth of projects that are under there. In fact, I was just recently noticing that OpenEBS is up for joining the CNCF. There seems to be – it's fascinating that the things that are being generated through the CNCF and going through that life cycle as a project sometimes overlap with one another and it's very – it seems like it's a delicate balance that the CNCF would have to play to keep from playing favorites. Because part of the charter of the CNCF is to promote the projects, right? I'm always curious to see and I'm fascinated to see how this plays out as we see projects that are normally competitive with one another under the auspices of the same organization, like the CNCF. How do they play this in such a way that they remain neutral, even though it would – it seems like it would take a lot of intention. [0:44:39.9] KN: Yeah. Well, there's a difference between just being a CNCF project and being an official project, or a graduated project. There are different tiers. For instance, Kubicorn, a tool that I wrote, we just adopted the CNCF code of conduct, I think, and there was another file I had to include in the repo and poof, we're magically CNCF now. It's easy to get on board. Once you're on board, there are legal implications that come with that. There totally is this tiered ladder structure that I'm not even super familiar with, of how officially CNCF you can be as your product grows and matures. [0:45:14.7] NL: What are some of the code of conduct requirements that you have to meet to be part of the CNCF? [0:45:20.8] KN: There's a repo on it. I can maybe find it and add it to the notes after this, but there's this whole tutorial that you can go through and it tells you everything you need to add and what the expectations are and what the implications are for everything. [0:45:33.5] NL: Awesome. [0:45:34.1] CC: Well, Velero is a CNCF project. We follow the – what is it? The Covenant? [0:45:41.2] KN: Yeah, I think that's what it is. [0:45:43.0] CC: Yes. Which is the same one that Kubernetes follows. I am not sure if there can be others that can be adopted, but this is definitely one. [0:45:53.9] NL: Yeah. According to Aaron Crickenberger, who was the Release Lead for Kubernetes 1.14, the CNCF code of conduct can be summarized as "don't be a jerk." [0:46:06.6] KN: Yeah. I mean, there's more to it than that, but – [0:46:10.7] NL: That was him. [0:46:12.0] KN: Yeah. This is something that I remember seeing in open source my entire career; open source comes with this implication that you need to be well-rounded and polite and listen and be able to take others' thoughts and concerns into consideration. I think we just are getting used to working like that as an engineering industry. [0:46:32.6] NL: Agreed. Yeah. Which is a great point. It's something that I hadn't really thought of.
The idea of development back in the day, it seems like before there was such a thing as the CNCF or cloud native, it seemed that things were combative, or people were just trying to push their agenda as much as possible. Bully their way through. That doesn't seem to happen as much anymore. Do you guys have any thoughts on that? [0:46:58.9] DC: I think what you're highlighting is more the open source piece than the cloud native piece, which I – because I think that when you're working – open source, I think, has been described a few times as a force multiplier for software development and software adoption. I think both of these things are very true. If you look at a lot of the big successful closed source projects, they have – the way that people in this room and maybe people listening to this podcast might perceive them, it's definitely just fundamentally different than some open source project. Mainly because it feels like it's more of a community-driven thing and it also feels like you're not in a place where you're beholden to a set of developers that you don't know, who are not interested in your best interest, in what's best for you or your organization to achieve whatever you set out to do. With open source, you can be a part of the voice of that project, right? You can jump in and say, "You know, it would really be great if this thing had this feature, or I really like how you would do this thing." It really feels a lot more interactive and inclusive. [0:48:03.6] KN: I think that that is a natural segue to this idea of, we build everything behind the scenes and then, hey, it's this new open source project, but everything is already done. I don't really think that's open source. We see some of these open source projects out there. If you go look at the git commit history, it's all everybody from the same company, or the same organization. To me, that's saying that while granted the source code might be technically open source, the actual act of engineering and architecting the software is not done as a group with multiple parties buying into it. [0:48:37.5] NL: Yeah, that's a great point. [0:48:39.5] DC: Yeah. One of the things I really appreciate about Heptio actually is that all of the projects that we developed there were – that the developer chat for that was all kept in some neutral space, like the Kubernetes Slack, which I thought was really powerful. Because it means that not only is it open source and you can contribute code to a project, but if you want to talk to people who are also being paid to develop that project, you can just go to the channel and talk to them, right? It's more than open source. It's open community. I thought that was really great. [0:49:08.1] KN: Yeah. That's a really great way of putting it. [0:49:10.1] CC: With that said though, I hate to be a party pooper, but I think we need to say goodbye. [0:49:16.9] KN: Yeah. I think we should wrap it up. [0:49:18.5] JR: Yeah. [0:49:19.0] CC: I would like to re-emphasize that you can go to the issues list and add requests for what you want us to talk about. [0:49:29.1] DC: We should also probably link our HackMD from there, so that if you want to comment on something that we talked about during this episode, feel free to leave comments in it and we'll try to revisit those comments maybe in our next episode. [0:49:38.9] CC: Exactly. That's a good point. We will drop a link to the HackMD page on the corresponding issue. There is going to be an issue for each episode, so just look for that. [0:49:51.8] KN: Awesome. 
Well, thanks for joining, everyone. [0:49:54.1] NL: All right. Thank you. [0:49:54.6] CC: Thank you. I'm really glad to be here. [0:49:56.7] DC: Hope you enjoyed the episode and I look forward to a bunch more. [END OF EPISODE] [0:50:00.3] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and at https://thepodlets.io, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END] See omnystudio.com/listener for privacy information.
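The episode above keeps coming back to the idea that the heart of cloud native is the APIs and your ability to program against them. As a minimal sketch of what that looks like in practice (an illustration, not something shown in the episode: it assumes the fabric8 kubernetes-client library on the classpath and a kubeconfig pointing at a reachable cluster, and the namespace and class names are placeholder choices), listing pods is just a typed call against the same API that kubectl uses:

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class ListPods {
    public static void main(String[] args) {
        // Build a client from the local kubeconfig (or in-cluster config when running inside a pod).
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Every object kubectl can show is reachable through the same typed, declarative API.
            for (Pod pod : client.pods().inNamespace("default").list().getItems()) {
                System.out.println(pod.getMetadata().getName());
            }
        }
    }
}

The same generated, typed models are what make the "source code as documentation" point work: the upstream API types spell out exactly which fields a manifest or config can contain.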

airhacks.fm podcast with adam bien

An airhacks.fm conversation with Sebastien Blanc (@sebi2706) about: Thomson MO5, every school in France needs to have a computer, printing the name with BASIC, the REM sadness, making yellow boxes, programming Logo in French, writing "root" and "house" procedures, no procedures in BASIC, the ACSLogo for Mac OS X, Berkeley Logo (UCBLogo), the Amstrad PC1512, using the AMOS programming language for writing games, writing invoicing software at 14 with AMOS, Zak McKracken and the Alien Mindbenders, Siemens Nixdorf PC, QuickBasic on Siemens Nixdorf DX2-66, the Persistence of Vision Raytracer, average calculation for school grades with QuickBasic, writing ballistic games for TI BASIC (TI 99/4A), playing Nirvana on e-guitar, starting with Java in 2002, the Rational Rose Logo Edition, learning Java EE on JOnAS, Apache Tapestry, consulting with Apache Jetspeed, writing Java EE code for 7 years, hard times with WebSphere, Xerces and ClassLoading, refactorings to Maven, mobile web / Grails involvements, starting at RedHat's mobile team - AeroGear, Matthias Wessendorf, Matthias loves Java Server Faces (JSF), the unified push server, starting keycloak involvement, the security challenge, the keycloak religion, keycloak ships as a WildFly distribution, keycloak is a WildFly subsystem, keycloak uses hibernate for persistence, keycloak manages users with credentials, keycloak ships with a ready-to-use UI to manage users, keycloak functionality is exposed as REST services, there is a Java client available - a REST wrapper, keycloak is a "remote" proxy realm, keycloak ships with adapters for major application servers out-of-the-box, keycloak comes with SSO - different application servers can share the same session, the security realm is a "territory", in keycloak a session is optional -- a microservice can use a JWT token, using OIDC tokens, keycloak comes with servlet filters for servers without adapter support, the new keycloak approach is the Keycloak Gatekeeper, Keycloak Gatekeeper is a sidecar service, apache mod_auth_openidc, keycloak is oidc compliant -- any generic OIDC library should work, the JWT creation tool JWTenizr, the "Securing JAX-RS Endpoints with JWT" screencast, the oauth flows, oauth authorization flow, implicit flow and the hybrid flow, access tokens have to have a short lifetime, using service accounts for schedulers, keycloak has a logout backchannel - available from the servlet filter, pushing a timestamp also causes logout, HttpServletRequest#logout also logs out, the killer feature: keycloak stores the private keys in one place and makes public keys available via URI, Sebastien Blanc on twitter: @sebi2706
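The second half of that conversation (OIDC tokens, the "Securing JAX-RS Endpoints with JWT" screencast, and Keycloak publishing its public keys via a URI) maps onto a small amount of code. Below is a minimal sketch, not the episode's own code: a JAX-RS resource protected with MicroProfile JWT, where the runtime validates the bearer token against the keys Keycloak exposes. The realm name "demo", the localhost URLs, and the class names are illustrative assumptions only; the URL layout shown matches Keycloak's older /auth prefix from that era.

import javax.annotation.security.RolesAllowed;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.microprofile.jwt.JsonWebToken;

@Path("orders")
public class OrdersResource {

    @Inject
    JsonWebToken jwt; // verified by the runtime against the issuer's public key before injection

    @GET
    @RolesAllowed("user") // the token's groups claim must contain "user"
    @Produces(MediaType.TEXT_PLAIN)
    public String list() {
        // Subject and claims come straight from the OIDC access token issued by Keycloak.
        return "orders for " + jwt.getName();
    }
}

The runtime is told where to fetch Keycloak's public keys through MicroProfile Config, for example in microprofile-config.properties (again, realm and host are placeholders; depending on the runtime, the JAX-RS Application class may also need @LoginConfig(authMethod = "MP-JWT") to switch the mechanism on):

mp.jwt.verify.publickey.location=http://localhost:8080/auth/realms/demo/protocol/openid-connect/certs
mp.jwt.verify.issuer=http://localhost:8080/auth/realms/demo

Because only the public keys are published, this is exactly the "killer feature" mentioned above: the private keys stay in one place on the Keycloak server, and any service that can reach the certs URI can verify tokens without sharing secrets.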