An airhacks.fm conversation with Georgios Andrianakis (@geoand86) about: early experiences with computers and programming, transition from Pascal and C to Java in university, early career working with WebLogic and EJB, move to Spring development, joining Red Hat and discovering Quarkus, developing the Spring compatibility layer for Quarkus, Vodafone Greece case study showing benefits of migrating from Spring to Quarkus, current work on RESTEasy Reactive and LangChain4j, exploration of future AI integration in Java with projects like Llama3.java, comparison of Spring, Quarkus, and Micronaut, discussion on the evolution of Spring and its perceived bloat, potential for Quarkus and LangChain4j to revolutionize enterprise AI integration, importance of pure Java solutions for AI inference and integration with existing enterprise applications Georgios Andrianakis on twitter: @geoand86
Visual Builder Studio requires its data sources to connect to the webpage it produces using REST calls. Therefore, the data source has to provide a REST interface. A simple, easy, secure, and free way to do that is with Oracle REST Data Services (ORDS). In this episode, hosts Lois Houston and Nikita Abraham chat with Senior Principal OCI Instructor Joe Greenwald about what ORDS can do, how to easily set it up, how to work with it, and how to use it within Visual Builder Studio. Develop Fusion Applications Using Visual Builder Studio: https://mylearn.oracle.com/ou/course/develop-fusion-applications-using-visual-builder-studio/122614/ Build Visual Applications Using Visual Builder Studio: https://mylearn.oracle.com/ou/course/build-visual-applications-using-oracle-visual-builder-studio/110035/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. 00:26 Nikita: Hello and welcome to the Oracle University Podcast! I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! In our last episode, we took a look at model-based development tools, their start as CASE tools, what they morphed into, and how they're currently used in Oracle software development. We're wrapping up the season with this episode today, which will be about how to access Oracle database data through a REST interface created and managed by Oracle REST Data Services, or ORDS, and how to access this data in Visual Builder Studio. 01:03 Nikita: Being able to access Oracle database data through a REST interface over the web is highly useful, but sometimes it can be complicated to create that interface in a programming language. Joe Greenwald, our Senior OCI Learning Solutions Architect and Principal Instructor, is back with us one last time this season to tell us more about ORDS, and how it makes it much simpler and easier for us to REST-enable our database for use in tools like Visual Builder Studio. Hi Joe! Tell us a little about what Visual Builder Studio is and why we must REST-enable our data for VBS to be able to use it. 01:40 Joe: Hi Niki, hi Lois! Ok, so, Visual Builder Studio is Oracle's low-code software development and project asset management product for creating graphical webpage front-ends for web applications. It's the tool of choice for designing, building, and implementing all of Oracle Fusion Cloud Applications and is being used by literally tens of thousands of engineers at Oracle now to bring the next generation of Fusion Applications to our customers and the market. It's based on standards like HTML5, CSS3, and JavaScript. It's highly performant and, combined with the Redwood graphical design system and components that we talked about previously, delivers a world-class experience for users. One thing about Visual Builder Studio though: it only works with data sources that have a REST interface. This is unusual. 
I like to think I've worked with every software development tool that Oracle's created since I joined Oracle in 1992, including some unreleased ones, and all of them allowed you to talk to the database directly. This is the first time that we've released a tool that I know of where we don't do that. Now at first, I was a little put off and wondered how it was going to do this and how much work I would have to do to create a REST interface for some simple tables in the Oracle database. Like, here's one more thing I must do just to create a page that displays data from the database. As it turns out, it's a wise design decision on the part of the designers. It simplifies the data access parts of Visual Builder Studio and makes the data access model common across the different data sources. And, thanks to ORDS, REST-enabling data in Oracle database couldn't be easier! 03:13 Lois: That's cool. We don't want to focus too much on Visual Builder Studio today. We have free courses that teach you how to create service connections to REST services to access the data and all of that. What we actually want to talk with you about is working with Oracle REST Data Service. How easy is it to work with Oracle REST Data Service to add REST support, what we call REST-enable your Oracle Database, and why it is important? Nikita: Yeah, I could use a bit of a refresher on REST myself. Could you describe what REST is, how it works for both the client and server, and what ORDS is doing for us? 03:50 Joe: Sure. So, REST is a way to make a request to a server for a resource using the HTTP web protocol from a client, like your browser, to a web server, which hands off the request to code that handles the request and sends the response back to your client/browser, which then uses it, displays or whatever. So, you can see we have two parts. We have the client, which makes the request, and the server, which handles the request and figures out what the response should be (static, dynamic, or a combination of both) and sends that back to the client. For example, a visual application built with Visual Builder Studio acts as a client making the request, just as your browser makes a request. It's really just a web app built with HTML5, CSS3, and JavaScript, and the JavaScript makes a request to the server on your behalf. Let's say you wish to access your student record within Oracle University. And now, this is a contrived example and it won't actually work, but it's good for illustrative purposes. Oracle University, let's say, publishes the URL for your student data as something like https://oracle.com/oracleuniversity/student/{studentnumber} you put in some kind of number, like the number 23, and if you enter that into your browser address bar and press Enter, then your browser, on your behalf, sends a GET request—what we call an HTTP GET operation—to the web server. When the web server receives the request, it will somehow read the record for student 23, format a response, and send the response back to the client. 05:16 Joe: That's a GET or a READ request. Now, what if you are creating a new student? Well, you fill out a form on the webpage and you click the Submit button. And it sends a POST request, which tells the server to create a new record in its storage mechanism, most likely a database of some form. If you do an update, you change certain fields on the webpage and click the Submit button and this time, an update request is made. 
If you wanted to delete the record, you'd find the record you want to delete and press the Submit button, and this time a delete request is made. This is the general idea, though there are different ways to do creates and updates that are really irrelevant here. Those requests to the server I mentioned are called HTTP operations and there are several of them. But the four most popular are GET to retrieve data, POST to create a new record on the server, PUT to update a record, and DELETE to remove a record. On the client side, we just need to specify where the record is that we want to retrieve—that's the oracle.com/oracleuniversity/student part of the URL and an identifying value, which makes it unique. So, when I do a GET request on customer or student 23, I'm going to get back a representation of the student data that exists in the student database for a student with ID 23. There should not be more than one of these or that would indicate an error. The response typically comes back in a key:value pair format called JavaScript Object Notation (JSON), but it could also be in a text format, HTML, Excel, PDF, or whatever the server implements and is requested. 06:42 Nikita: OK great! That's on the client side making the request. But what's happening on the server side? Do I need to worry about that if I'm the client? Joe: No, that's the great part. As a client, I don't know, and frankly I'd rather not know what the server's doing or how it does it. I don't want to be dependent on the server implementation at all. I simply want to make a request and the server handles the request and sends a response. Now, just a word about what's on the server. Some data on the server is static like a PDF file or an image or an audio file, for example, and sometimes you'll see that in the URL the file type as an extension, like .pdf, and you get back a PDF file that your browser displays or that you can download to your machine. But with dynamic data, like student data coming out of a database based on the student number, a query is made against a database. The database responds with the data, and that's formatted into some type of data format—typically JSON—and sent back to the client, which then does something with it, like displaying it on a webpage. So, as we can see, the client is fairly simple in the sense that it makes a request, receives the data response, and displays it or does something with it. And that's one of the reasons why the choice to use REST and only REST in Visual Builder Studio is such a wise one. 07:54 Joe: Regardless of the different data sources or the different server implementations or how the data is stored on the server, or any of that, Visual Builder Studio doesn't know and doesn't care. What it sees is the REST request it sends and the response it gets back and then it deals with the response data regardless of how it's implemented on the server. I mentioned the server sends back a representation of the resource, in this case, for example, the student record. That's really where the abbreviation REST comes from: REpresentational State Transfer, which is a long way of saying, bring me back a representation of the resource—the thing—that I'm requesting. Now, of course, the server is a little more complex. On the server side, we would need software that is going to take the request from the web server using some programming language like Java, C#, C++, Python, or maybe even JavaScript in a Node.js application. 
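To make the four operations concrete, here is a minimal client-side sketch in Python with the requests library, using the episode's contrived student URL. As Joe notes, that URL won't actually resolve; the shape of the calls is the point.

```python
import requests

# The episode's contrived base URL; it won't actually resolve.
BASE = "https://oracle.com/oracleuniversity/student"

# GET: retrieve the representation of student 23 (typically JSON).
response = requests.get(f"{BASE}/23")
student = response.json()

# POST: create a new record, as a form submission would.
requests.post(BASE, json={"name": "Ada Lovelace", "course": "SQL 101"})

# PUT: update the record with changed fields.
requests.put(f"{BASE}/23", json={"course": "PL/SQL 201"})

# DELETE: remove the record for student 23.
requests.delete(f"{BASE}/23")
```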
You have a program that receives a request from the web server, executes the request (typically by connecting to the database if it's a database call), makes the request, receives a data response from the database, formats that into some form, and passes it back to the web server, which then sends it back to the client that requested it. 09:01 Lois: Ok… I think I see. I'm guessing that ORDS gets involved somehow between the client and the server. Joe: Yes, exactly. We can see that the implementation on the server side is where the complexity is. For example, if I implement a student management service in Java, I have to write a bunch of Java code, a lot of which is boilerplate, housekeeping, boring code. For simple database access, it's tedious to have to do this over and over, and if the database changes, it can be even more tedious to maintain that code to handle simple to moderately complex requests. Writing and maintaining software code to just read and write data from the database to pass to a client for a web request is cool the very first time you do it and then gets boring very quickly and it's prone to errors because it's so manual. So, it would be nice if we had a piece of software that could handle the tedious, boring, manual bits of this service. It would receive the request that our client, the browser or Visual Builder Studio for example, is sending, take that request, execute the request against the database for us, receive the response from the database, and then format it for us and send it back to us, without a developer having to write custom code on the server side. And that is what Oracle REST Data Services (ORDS) does. 10:13 Joe: ORDS contains a lightweight web server based on the Jetty web server that receives the request from the client, like a browser or Visual Builder Studio or whatever, in the form of a URL, parses the request, generates a query or an update, or an insert or delete, depending on the nature of the HTTP operation sent or requested, and sends it to the database on our behalf. The database executes the request from ORDS, sends back a response to ORDS, and ORDS formats the response for us in JSON and sends it back to our client. In a nutshell, that's it. 10:45 Lois: So ORDS does all that? And it's free? How does it work? Uhm, remember I'm not as technical as you are. Joe: Of course. ORDS is free. It's a lightweight, highly performant Java app that can run in many different modes, from stand-alone on a server to embedded in an application server like WebLogic, to running in the Oracle Cloud with the Oracle Autonomous Database (ADB). When you REST-enable your tables, your web requests are intercepted by ORDS running in ADB. It's optimized for the purpose of handling web requests, connecting to the Oracle database, and sending back formatted responses as JSON. It can handle more complex requests as well in the form of queries with special parameters. So, you can see what ORDS does for us. It handles the request coming from the client, which could be a browser or Visual Builder Studio or APEX or whatever client—pretty much any client today can make an HTTP call—it handles the call, parses the request, makes the request to the server on our behalf, and of course security is built-in and all of that, and so we don't get at data we're not supposed to see. It receives a response from the database, formats it into the JSON key:value pair format, and sends it back to our client. 12:00 Are you planning to become an Oracle Certified Professional this year? 
Whether you're a seasoned IT pro or just starting your career, getting certified can give you a significant boost. And don't worry, we've got your back. Join us at one of our cert prep live events in the Oracle University Learning Community. You'll get insider tips from seasoned experts and learn from other professionals' experiences. Plus, once you've earned your certification, you'll become part of our exclusive forum for Oracle-certified users. So, what are you waiting for? Head over to mylearn.oracle.com and create an account to jump-start your journey towards certification today! 12:43 Nikita: Welcome back. So, Joe, then the next question is, what do we do to REST-enable our database? Does that only work for ADB? Joe: This can be done in a couple of different ways. It can be done implicitly, called AutoREST, or explicitly. AutoREST is very convenient. In the case of an ADB database, you log in as the user who owns the structures, select your tables, views, packages, procedures, or functions that you want to REST-enable. Choose REST and then Enable from the menu for the table, view, stored package, procedure, or function and a URL is generated using your POST, GET, PUT, and DELETE for the standard database create, retrieve, update, delete operations. And it's not just for ADB. You can do this in SQL Developer Desktop as well. Then, when you invoke the URL for the service, if you include just the name of the resource, like students, you get the entire collection back. If you add an ID at the end of the URL, like student/23, you get back the data for that specific student, or whatever the structure is. You can add more complex filter parameters as well. And that's it! Very easy. And, of course, you can apply appropriate security and ORDS enforces it. But you can also create custom code to handle more complex requests. 13:53 Lois: Joe, what if there's custom logic or processing that you want to do when the REST call comes in and you need to write custom code to handle it? Joe: Remember, I said on the server side, we use custom code to retrieve data as well as apply business rules, validations, edits, whatever needs to be done to appropriately handle the REST call. So it's a great question, Lois. When using ORDS, you can write a REST service handler in PL/SQL and SQL, just like if you were writing a stored procedure or a function or a package in the database, which is exactly what you're doing. ORDS exposes your PL/SQL code wrapped in a REST interface with, of course, the necessary security. And since it's PL/SQL, it runs in the database, so it's highly performant, fast, and uses code you're likely already familiar with or maybe already have. Your REST service handler can call existing PL/SQL packages, procedures, and functions. For example, if you created packages with stored procedures and functions that wrap access to your database tables and views, you can REST-enable those stored procedures, functions, and packages, and call them over the web. And maintain the package access you already created. I do want to point out that the recommended way to access your tables and views is through packages, stored procedures, and functions. While you can expose your tables and views directly to REST, should you really do that? In practice, it's generally not a recommended way to do it. Do you want to expose your data in tables and views directly through a REST interface? 
Ideally, no, access should be through a PL/SQL wrapper, same as it's—hopefully—done today for your client-server applications. 15:26 Nikita: I understand it's easy to generate a simple REST interface for tables and so on to do basic create, retrieve, update, and delete operations. But what's required to create custom code to handle more complex business operations? Joe: The process to create your own custom handlers is a little bit more involved as you would expect. It uses your skills as a PL/SQL programmer, while hiding the details of the REST implementation to let you focus on the logic and processing. Mechanically, you'd begin by creating a module that has a URL associated with it. So, for example, you would create a URL like https://oracle.com/oracleuniversity/studentregistry. Then, within that module, you create a template that names the specific resource—or thing—that you want to work with. For example, student, or course, or registration. 16:15 Joe: Then you create the handler for it. You have a handler to do the read, another handler for the insert, another handler for an update, another handler for a delete, and even possibly multiple handlers for more complex APIs based on your needs and the parameters being passed in. You can create complex URLs with multiple parameters for passing needed information into the PL/SQL procedure, which is going to do the actual programming work for you. There are predefined implicit variables about the message itself that you can use, as well as all the parameters from the URL itself. Now, this is all done in a nice developer interface on the web if you're using SQL Developer Web with ADB or in SQL Developer for the desktop. Either one can do this because under the covers, ORDS is generating and executing the PL/SQL calls necessary to create and expose your web services. It's very easy to work with and test immediately. 17:06 Lois: Joe, how much REST knowledge do I need to use ORDS properly to create REST services? Joe: Well, you should have some basic knowledge of REST, HTTP operations, request and response messages, and JSON, since this is the data format ORDS produces. The developer interface is really not designed for somebody who knows nothing about REST at all; it's not designed to take them step-by-step through everything that needs to be done. It's not wizard-based. Rather, it's an efficient, minimal interface that can be used quickly and easily by someone who has at least some experience building REST services. But, if you have a little knowledge and you understand how REST works and how a REST interface is used and you understand PL/SQL and SQL, you could do quite a lot with only minimal knowledge. It's easy to get started and it's fun to see your data start appearing in webpages formatted for you, with very little or even no code at all as in the case of AutoREST enabling. And ORDS is free and comes as part of the database in ADB, as do SQL Developer Web and SQL Developer Desktop, both of which are free as well. And SQL Developer Web and SQL Developer Desktop both have a data modeler built into them so you can model your database tables, columns, and keys, and generate and execute the code necessary to create the structures immediately, and they can create graphical models of your database to aid in understanding and communication. Now, while this is not required, modeling your database structures before you build them is most definitely a best practice. 
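As a rough sketch of what Joe describes, the ORDS PL/SQL API covers both paths: AutoREST enablement and the custom module/template/handler definition. Here it is driven from Python with the python-oracledb driver; the connection details, schema, table, and module names are hypothetical, and the exact ORDS package parameters can vary by version, so treat this as an outline rather than a definitive recipe.

```python
import oracledb

# Hypothetical connection details for a schema that owns a STUDENT table.
connection = oracledb.connect(user="STUDENTS", password="...", dsn="localhost/orclpdb1")
cursor = connection.cursor()

# AutoREST: expose the schema and the STUDENT table, generating the
# standard GET/POST/PUT/DELETE endpoints automatically.
cursor.execute("""
    begin
        ords.enable_schema(p_enabled => true, p_schema => 'STUDENTS');
        ords.enable_object(p_enabled     => true,
                           p_schema      => 'STUDENTS',
                           p_object      => 'STUDENT',
                           p_object_type => 'TABLE');
        commit;
    end;""")

# Custom handler: a module with a base path, a template naming the
# resource, and a GET handler whose SQL does the actual work. The :id
# bind variable is filled in from the URL.
cursor.execute("""
    begin
        ords.define_module(p_module_name => 'studentregistry',
                           p_base_path   => '/studentregistry/');
        ords.define_template(p_module_name => 'studentregistry',
                             p_pattern     => 'student/:id');
        ords.define_handler(p_module_name => 'studentregistry',
                            p_pattern     => 'student/:id',
                            p_method      => 'GET',
                            p_source_type => ords.source_type_collection_feed,
                            p_source      => 'select * from student where id = :id');
        commit;
    end;""")
```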
18:29 Nikita: Ok, so now that I have my REST-enabled database tables and all, how do I use them in VBS Designer? Joe: In Visual Builder Studio Designer, you define a service connection by its endpoint and paste the URL for the REST-enabled resource into the wizard, and it generates everything for you by introspecting the REST service. You can test it, see the data shape of the response, and see data returned. You access your REST-enabled data from your database from Visual Builder Studio Designer and use it to populate lists, tables, and forms using the quick start wizards built in. I'll also mention that ORDS provides other capabilities in addition to handling REST calls for the database tables and views. It also exposes over 500 different endpoints for managing your Oracle Database, things like Pluggable Database Management (PDBs), Data Pump, Data Dictionary, Performance, and Monitoring. It's very easy to use and get started with. A great place to start is to create a free, autonomous database in Oracle Cloud, start it up, and then access the database actions. You can start creating tables, columns, and keys, and loading data, or you can load your own scripts, if you've got them, to produce the tables and columns and load them. You can upload the script and run it and it will create your tables and other needed structures. You can then REST-enable them by selecting simple menu options. It's a lot of fun and easy to get started with. 19:47 Lois: So much good stuff today. Thank you, Joe, for being with us today and in the past few weeks and sharing your knowledge with us. Nikita: Yeah, it's been so nice to have you around. Joe: Thank you both! It's been great being here with you. 19:59 Lois: And remember, our Visual Builder courses, Develop Visual Applications with VBS and Develop Fusion Apps with VBS, both show you how to work with a third-party REST service. And our data modeling and design course teaches the fundamentals of data modeling. You can access all of these courses, for free, on mylearn.oracle.com. Join us next week for another episode of the Oracle University Podcast. Until then, I'm Lois Houston… Nikita: And Nikita Abraham signing off! 20:30 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
An airhacks.fm conversation with Anton Arhipov (@antonarhipov) about: playing sports games on a Pentium 233 MHz, the 2014 JavaOne Rockstar awards about NetBeans, Eclipse, and IntelliJ, enjoying sports games and destroying joysticks, practicing competitive swimming, swim training, starting to program in Turbo Pascal at Maelardalen University, ship simulation with Java for Vasa Museum, joining a company which maintains RefactorIT, working with Java EE and WebLogic and JRockit, joining ZeroTurnaround and working on JRebel, Rebel and LiveRebel, working on a profiler, JetBrains' MPS, DevRel for TeamCity, AppCode features are appearing in Fleet, Fleet is built on common UI principles, the rendering engine Skia, Kotlin and Jetpack Compose, Circles by Anton Anton Arhipov on twitter: @antonarhipov
An airhacks.fm conversation with Brian Benz (@bbenz) about: the autumn conferences: Oracle Cloud World, IBM Tech Exchange, the Oracle operator for WebLogic, Jakarta EE, and MicroProfile on Azure, Oracle Cloud World vs. JavaOne, Java EE, Jakarta EE, and MicroProfile on Azure, WebLogic on Azure, JavaOne and Oracle Cloud World, the beginnings of open source at Microsoft, Microsoft Open Tech, the first JUG meeting in Seattle by Microsoft in 2013, the program manager for Java …and Node, program managers vs. evangelists, GitHub Copilot and GitHub Copilot Chat, the slash commands and Copilot Chat, the effectiveness of AI in software development, Semantic Kernel and data indexing, Ampere CPU and 30% power reduction, the Hoover Dam and solar power Brian Benz on twitter: @bbenz
In this episode, Lois Houston and Nikita Abraham are joined by Cloud Engineer Nick Commisso to talk about managing Oracle Database with REST APIs. They also look at Autonomous Database built-in tools, which are pre-assembled, pre-configured, and pre-deployed, delivering a consistent user experience. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Twitter: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Deepak Modi, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00;00;00;00 - 00;00;39;06 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Product Innovation and Go to Market Programs with Oracle University. And with me is Nikita Abraham, Principal Technical Editor. 00;00;39;12 - 00;01;04;12 Hello again! Last week, we discussed Oracle Cloud Infrastructure's Maximum Availability Architecture. And in today's episode, we'll talk about managing Oracle Database with REST APIs and also look at Autonomous Database built-in tools with our Cloud Engineer Nick Commisso. Hi Nick, thanks for being back on the podcast. What is Oracle REST Data Services? What do you use it for? 00;01;04;14 - 00;01;31;07 Oracle is not just a relational database anymore. And the REST APIs can be deployed with Oracle REST Data Services or ORDS to handle all of these data format models. And you can use ORDS for application development and accessing the data, and it can be used as a powerful tool for automating management, lifecycle, provisioning, and data-dictionary-type use. 00;01;31;09 - 00;02;02;02 Oracle Cloud offers full REST APIs for DBAs and developers who would prefer to interact with Oracle Autonomous Database Cloud services programmatically over REST rather than log in to the cloud console and click through screens. This provides a mechanism for developing customized deployment and management scripts that can be saved and reused for deployments, setting gold standards, and storing entire application infrastructure stacks as version-controlled code. 00;02;02;08 - 00;02;35;15 I think before we move on, it's important to clarify. For anyone who doesn't already know, what is REST? How do Oracle Cloud Infrastructure APIs use REST and HTTPS? REST is combined with HTTPS, but is not a protocol. REST is an acronym for Representational State Transfer. The Oracle Cloud Infrastructure APIs are typical REST APIs that use HTTPS requests and responses and support HTTPS with the TLS 1.2 protocol, the most secure industry standard. 00;02;35;15 - 00;03;18;17 Calls to the Oracle Cloud Infrastructure using REST APIs can be written in popular scripting languages such as Node.js, Python, Ruby, Perl, Java, C#, Bash, or Curl. The way you interact with your data is through API calls via HTTP: GET to access your data and stored procedures, PUT to update your data, POST to insert your data and execute PL/SQL, and DELETE to remove your data. When making an HTTP request with Oracle REST Data Services, how does the process flow from the request to accessing data in the database? 
00;03;18;17 - 00;03;46;21 A person, process, or computer gets ready to make an HTTP request. You need to tell the request where the thing or data is, and the request will get into the web tier where ORDS is running. ORDS then translates the REST request to a SQL statement and accesses the table to get the information requested. 00;03;46;24 - 00;04;13;11 The result normally comes back as JSON, but can also be returned as HTML, binary, or CSV. With all of these requests, a collection of connections to the database, or connection pool, is used, and all of the data might not be returned, depending on the device asking. The results contain links to get more data, but each time, these links trigger another request through the connection pool. 00;04;13;13 - 00;04;41;06 The default size of the connection pool is 10, and what you need depends on how fast the database code that's tied to the APIs is. But 10 probably isn't enough. Because of the results and connection pooling, it shouldn't be long-running code when using APIs. What is the architecture of Oracle REST Data Services? Can you tell us about the integration with components like Java servlets, Tomcat, WebLogic, and Apache? 00;04;41;08 - 00;05;13;24 Also, how does ORDS enable authentication and access to data in the Oracle database through REST calls? ORDS runs in a Java servlet, or it can be run within Tomcat or WebLogic for E-Business or Fusion. The request comes into the web server and ORDS handles the request. ORDS is included in your Oracle database license. This is a simplified view of your architecture, but there's normally a load balancer in front of the Apache server to handle the requests coming in. 00;05;13;27 - 00;05;50;09 The REST service is already hooked up into the database. Authentication happens with the web server, and the hooks are there for accessing the data. The code and the data are already in the database, in the APEX apps. And the REST calls allow for you to access the data. It harnesses the Oracle database. In order to manage your database with automation, along with minimal human interaction, you need to use ORDS and the REST APIs that are enabled for database management to provision, control, and monitor the Oracle database. 00;05;50;11 - 00;06;20;22 You need an Oracle database for ORDS to work. ORDS can run anywhere that Oracle can run and is easily plugged into the Oracle Database Management pipeline. What are some key features and functionalities offered by the Oracle Database REST APIs? There are over 600 REST endpoints provided to manage and monitor your Oracle database. These are supported starting from 11gR2 up to the current version of the database. 00;06;20;25 - 00;06;51;14 The REST APIs have general information, data dictionary, monitoring, performance, and lifecycle management. Can you give us some examples of specific details that are accessible through the REST APIs? For performance, there's Top SQL, ASH, and AWR reports. For monitoring, you can look at sessions, locks, waits, and alert logs. Lifecycle will allow you to manage multitenant for provisioning PDBs. 00;06;51;16 - 00;07;16;08 And let's not forget about the data dictionary tables where you can report on objects and database operations. And how do you get started with ORDS? To get started using ORDS, you need to install ORDS. You run the installer and there are configuration files that are also created that can be adjusted later. You need the connection information for the database where you want ORDS installed. 
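The paged results Nick describes are easy to picture in code. A small sketch (the AutoREST endpoint here is hypothetical; adjust host, path, and authentication to your setup) that follows the "next" links ORDS includes in each JSON response, where each page is one more request through the connection pool:

```python
import requests

# Hypothetical AutoREST collection endpoint.
url = "https://example.com/ords/hr/employees/"

while url:
    payload = requests.get(url).json()
    for item in payload.get("items", []):
        print(item)
    # ORDS pages its results: each response carries a "links" array, and
    # the link with rel == "next" points at the next page. Following it
    # issues another request through the connection pool.
    url = next((link["href"] for link in payload.get("links", [])
                if link["rel"] == "next"), None)
```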
00;07;16;10 - 00;07;47;08 What goes into the database is the schema, ORDS_METADATA, and a user, ORDS_PUBLIC_USER. Are you attending Oracle CloudWorld 2023? Learn from experts, network with peers, and find out about the latest innovations when Oracle CloudWorld returns to Las Vegas from September 18 through 21. CloudWorld is the best place to learn about Oracle solutions from the people who build and use them. 00;07;47;08 - 00;08;15;02 In addition to your attendance at CloudWorld, your ticket gives you access to Oracle MyLearn and all of the cloud learning subscription content as well as three free certification exam credits. This is valid from the week you register through 60 days after the conference. So, what are you waiting for? Register today. Learn more about Oracle CloudWorld at www.oracle.com/cloudworld. 00;08;15;04 - 00;08;48;14 Welcome back. Let's move on to Oracle's data toolset. Nick, what are the key tools offered by Oracle for data analysis and integration? Oracle Data Integrator or ODI is an enterprise class data integration tool with extract, load, and transform, or ELT, architecture. Enterprise Data Quality or EDQ is a sophisticated, powerful tool for profiling, cleaning, and preparing your data. 00;08;48;17 - 00;09;24;14 Analytic views built into Oracle database provide a common framework for defining universally accessible semantic models. Oracle Analytics Cloud, or OAC, is the perfect complement, providing beautiful and insightful analysis of this data. So, how do these tools come together? For our traditional market, this is a comprehensive and compelling suite of tools. Enterprise class tools for an enterprise class market. With autonomous database, we deliver an integrated platform. 00;09;24;17 - 00;09;47;24 It's not a single tool with the customer left to buy the other tools they need, nor is it a solution delivered in kit form with the customer left to cobble it all together. It's pre-assembled, pre-configured, and pre-deployed. There is a consistent user experience with built-in best practices. It's like having an expert in a box there to guide you. 00;09;47;26 - 00;10;12;15 Components are defined in the common database layer so that they can be shared by all users in all tools. And the metadata? All of it is brought together in the catalog. So, it's not just the tools that are integrated, it's the data too, a business model spanning data sources that can be federated when appropriate and defined in a common data catalog, which eliminates silos. 00;10;12;17 - 00;10;49;24 The result is renewed confidence in data lineage and impact analysis. In other words, we have collaboration by design. This built-in collaboration between specialists eliminates silos. For example, hierarchies recognized automatically in the data preparation phase and defined in the database itself are immediately accessible to the data analysts for aggregation purposes. Additional semantic modeling by the analysts, perhaps defining sophisticated calculations, such as percentage change since last year, and again, defined in the database itself, can be accessed by the data scientist. 00;10;49;27 - 00;11;15;19 This provides a great headstart in developing predictive models that, in turn, can be used by the CRM developer who might want to augment a customer view with the most suitable campaign to discuss during the next meeting. So, autonomous database comes with a sophisticated suite of tools pre-installed. 
00;11;15;20 - 00;11;40;10 So, it's basically an open platform with open standards. If you want to speak SQL, speak SQL, so do we. We speak Python too, if that's your preference. Whether your data is in a CSV file or a JSON format, it's going to be comfortably at home in autonomous database. Using the language of your choice, analyze your data using whatever tool you're most comfortable with. 00;11;40;17 - 00;12;00;18 The whole idea is that there should be nothing new to learn. Thanks, Nick, for joining us today. To learn more about ADB built-in tools, head over to mylearn.oracle.com and get started on our free Oracle Cloud Data Management Foundations Workshop. Next week will be our last episode of the season where we'll look at Oracle Data Lakehouse. 00;12;00;21 - 00;14;47;18 Until then, this is Nikita Abraham and Lois Houston signing off. That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
In today's podcast we cover four crucial cyber and technology topics, including: 1. Scattered Swine abusing Azure in latest observation 2. Six-year-old Oracle flaw under attack: patch 3. ScanSource suffers ransomware attack impacting services 4. New malware found impersonating AI services ChatGPT and MidJourney I'd love feedback; feel free to send your comments and feedback to cyberandtechwithmike@gmail.com
AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create a truly free open-source software, and how his partnership with Amazon has been beneficial. About AB: AB Periasamy is the co-founder and CEO of MinIO, an open source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (ARM) and Fastor (SMART). AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to the scaling of commodity cluster computing to supercomputing class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” supercomputer, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India. AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation. Links Referenced: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy LinkedIn: https://www.linkedin.com/in/abperiasamy/ Email: mailto:ab@min.io TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those. 
To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back. AB: Yes, it's wonderful to be here again, Corey. Corey: So, one thing that I want to start with is defining terms. Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use. And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find, oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not. One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do? AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath, the underlying technology is called object store. MinIO is software and it's also open-source and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS, we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world. Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet. When you say open-source, it is actually open-source; you're AGPL, not source available, or, “We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money,” as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition. The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that? AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right? 
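AB's "drop-in replacement" point can be shown in a few lines: the standard AWS SDK talks to a MinIO server just by changing the endpoint. A sketch with boto3, where the endpoint and credentials are placeholders for your own deployment:

```python
import boto3

# Point the standard AWS SDK at a MinIO server instead of Amazon S3.
# Endpoint and credentials are placeholders for your own deployment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.com:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The basic CRUD operations Corey lists work unchanged.
s3.create_bucket(Bucket="demo")
s3.put_object(Bucket="demo", Key="hello.txt", Body=b"hello")
print(s3.get_object(Bucket="demo", Key="hello.txt")["Body"].read())
s3.delete_object(Bucket="demo", Key="hello.txt")
```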
Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard. And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application. And if one thing breaks—today, if I commit a code and it introduced a regression, I will immediately hear from a whole bunch of community what I broke. There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, it works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model. And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the regions and the S3 API broke, they will actually go fix it. They will never argue, “Why are you using MinIO SDK?” Their engineers, they do everything by reason. That's the reason why they gained credibility. Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing. AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right? Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs that will send us a certain type of casing and they expect the case to be—the response to be same way. And that's not HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of community to go fix that application. And Amazon's problems are our problems too. We have to carry that baggage. But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control list, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove. So, we have been pedantic about, like, how, like, certain things that if it's a good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that S3 API is no longer simple, but at least it's not like POSIX, right? 
POSIX is a rich set of API, but doesn't do useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system. So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, than trying to create an API Bible. Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas. I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity. There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance and control standpoint, to make sure this doesn't run away from me, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website. AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—it is good for their business; they want all the customers' data, like unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to because Amazon S3 API doesn't talk about quota, but the enterprise community wanted this so badly. And eventually we [unintelligible 00:09:54] it and we gave. But there is one issue to be aware of, right? The problem with quota is that you as an object storage administrator, you set a quota, let's say this bucket, this application, I don't see more than 20TB; I'm going to set 100TB quota. And then you forget it. And then you think in six months, they will reach 20TB. The reality is, in six months they reach 100TB. And then when nobody expected—everybody has forgotten that there was a quota in a certain place—suddenly applications start failing. And when it fails, it doesn't—even though the S3 API responds back saying there's insufficient space, the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually the object storage that ran out of space, time is lost and it's a downtime. So, as long as they have proper observability—because, I mean, I would also ask for observability, so that it can alert you that you are going to run out of space soon. If you have those systems in place, then go for quota. 
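What "those systems in place" might look like: a small monitoring sketch that sums a bucket's usage with boto3 and flags it against a soft threshold, anticipating the 70% rule Corey describes in the exchange that follows. The bucket name and limits here are hypothetical.

```python
import boto3

s3 = boto3.client("s3")  # or a MinIO endpoint, as in the sketch above

SOFT_LIMIT = 20 * 1024**4  # hypothetical 20TB soft quota
WARN_AT = 0.7              # start alerting at 70% usage

def bucket_bytes(bucket: str) -> int:
    """Sum object sizes by paging through the bucket listing."""
    total = 0
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        total += sum(obj["Size"] for obj in page.get("Contents", []))
    return total

used = bucket_bytes("demo")
if used >= SOFT_LIMIT:
    print("soft quota exceeded: time for cleanup or chargeback")
elif used >= WARN_AT * SOFT_LIMIT:
    print(f"warning: {used / SOFT_LIMIT:.0%} of soft quota used")
```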
If not, I would agree with the S3 API standard that it is not about cost. It's about operational, unexpected accidents. Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, you wind up with a runaway log or whatnot, you have a chance to catch it and whatnot, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well it got full so oops-a-doozy. On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it. But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard limit aspect. AB: Actually, that is the right way to do it. That's what I would recommend customers to do. Even though there is hard quota, I will tell, don't use it, but use soft quota. And instead of even a soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually it shows up in the month-end bills. On MinIO, when it's deployed in these large data centers, there's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually [unintelligible 00:12:27] have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? That IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team. And you measure: instead of setting a hard limit, you actually charge them—based on the usage of your bucket, you're going to pay for it. And this is an observability problem. And you can call it soft quotas, but it has to be done to trigger an alert in observability. It's an observability problem. But it actually is interesting to hear that as soft quotas, which makes a lot of sense. Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who, “Oh, yeah, you're going to run into a quota storage problem.” Yeah, we all find that out the first time we smack into something and live to regret it. Now, we can talk a lot about the nuances and implementation and low level detail of this stuff, but let's zoom out of it. What are you folks up to these days? What is the bigger picture that you're seeing of object storage and the ecosystem? AB: Yeah. So, when we started, right, our idea was that the world is going to produce an incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that today and it will be true. Every year, you say ten years from now and it will still be valid, right? That was the reason for us to play this game. And we saw that every one of these cloud players was incompatible with each other. It's like early Unix days, right? Like a bunch of operating systems, everything was incompatible and applications were beginning to adopt this new standard, but they were stuck.
And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud players' game was to bring all the world's data into the cloud. And that actually requires an enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right and that's the right way to build large-scale infrastructure. If we stick to Amazon S3 API instead of introducing another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in 2014 November—it's really 2015, we started, it was laughable. People thought that there won't be a need for MinIO because the whole world will basically go to AWS S3 and they will be the world's data store. Amazon is capable of doing that; the race is not over, right? Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, usurious data egress charges. The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload that people should not consider the big three cloud providers as the place where that data should live because you're never getting it back. AB: Spot on, right? Even if network is free, right, Amazon makes, like, okay, zero egress-ingress charge, the data we're talking about, like, most of MinIO deployments, they start at petabytes. Like, one to ten petabyte, feels like 100 terabyte. Even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it? Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was there we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like AWS's S3 compatible object store. We took a very different path. But now, when I tell the same story that we started with on day one, it is no longer laughable, right? People believe that yes, MinIO is there because our market footprint is now larger than Amazon S3. And as it goes to production, customers are now realizing it's basically growing inside shadow IT and eventually businesses realize the bulk of their business-critical data is sitting on MinIO and that's how it's surfacing up. So now, what we are seeing, this year particularly, all of these customers are hugely concerned about cost optimization. And as part of the journey, there are also multi-cloud and hybrid-cloud initiatives. They want to make sure that their application can run on any cloud, or the same software can run on their colos like Equinix, or, like, a bunch of, like, Digital Realty, anywhere. And MinIO's software, this is what we set out to do. MinIO can run anywhere inside the cloud, all the way to the edge, even on Raspberry Pi. 
Whatever we started with has now become reality; the timing is perfect for us.

Corey: One of the challenges I've always had with the idea of building an application to run anywhere is that you can make explicit technology choices around that—and object store is a great example, because most places you go now will, or can, have an object store available for your use. But there seem to be implementation details that get lost. For example, even load balancers wind up being implemented in different ways, with different scaling times and whatnot, in various environments. And past a certain point, it's, okay, we're just going to have to run it ourselves on top of HAProxy or Nginx, or something like it, running in containers; you're reinventing the wheel. Where is that boundary between "we're going to build this in a way that we can run anywhere" and the reality that I keep running into, which is "we tried to do that, but we implicitly, without realizing it, built in a lot of assumptions that everything would look just like this environment that we started off in"?

AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally an HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser JavaScript engine—anywhere across the world—it doesn't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so the API actually allows you to build a globally unified data infrastructure: some buckets here, some buckets there. That's actually not the problem. The problem comes when you have multiple clouds. Different teams—like, part of it is M&A, and even if you don't do M&A—no two data engineers will agree on the same software stack. They will all end up with different cloud players, and some are still running on old legacy environments. When you combine them, the problem is—like, let's take just the cloud, right? How do I even apply a policy, that access control policy? How do I establish unified identity? Because I want to know this application is the only one allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like, if that employee, that project, or that admin—if he or she leaves the job, how do I make sure that all of that is protected? You want unified identity, you want unified access control policies. Where is the encryption key store? And then the load balancer itself—the load balancer is not the problem. But unless you adopt the S3 API as your standard, the definition of what a bucket is differs from Microsoft to Google to Amazon.
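AB's point that every S3 request is self-contained is easy to see in code: the same PutObject call runs against AWS or a MinIO deployment, and only the endpoint override changes. A minimal sketch with the AWS SDK for Java v2; the endpoint, credentials, and bucket name are illustrative placeholders, not anything from the conversation.

```java
import java.net.URI;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class PortableS3 {
    public static void main(String[] args) {
        // Point the same code at AWS, MinIO, or any other S3-compatible store
        // by swapping the endpoint; bucket, key, and payload stay identical.
        S3Client s3 = S3Client.builder()
                .endpointOverride(URI.create("https://play.min.io")) // omit for AWS itself
                .region(Region.US_EAST_1) // required by the SDK, informational off-AWS
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("ACCESS_KEY", "SECRET_KEY")))
                .forcePathStyle(true) // path-style URLs are the safe default off-AWS
                .build();

        // The request itself carries everything: bucket, key, and body.
        s3.putObject(PutObjectRequest.builder()
                        .bucket("my-bucket")
                        .key("hello.txt")
                        .build(),
                RequestBody.fromString("hello, object storage"));
    }
}
```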
Corey: Yeah, the PUTs and retrieving of actual data is one thing, but then you have the control plane layer of the object store: how do you manage it, and how do you rationalize that? What are the naming conventions? How do you address it? I even ran into something similar somewhat recently when I was doing an experiment with one of the Amazon Snowball Edge devices to move some data into S3 on a lark. And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying. I would give a lot just to be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not. Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in?

AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data. But what I find customers doing is they actually use the tools that we built for MinIO, because the Snowball appliance also looks like an S3 API-compatible object store. And in fact, like, I've been told that when you want to ship multiple Snowball appliances, they actually put MinIO on them to make it look like one unit, because MinIO can erasure-code objects across multiple Snowball appliances. And the MC tool, unlike the AWS CLI, which is really meant for developers—like, low-level calls—MC gives you unique [scoring 00:21:08] tools, like ls, cp, rsync-like tools; it's easy to move and copy and migrate data. Actually, that's how people deal with it.

Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball Edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data: a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. You're right, there is no good way to wind up doing that natively.

AB: Yeah. In fact, Western Digital and a few other players, too—Western Digital created a Snowball-like appliance and they put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. Snowball-like functionality is important, and more and more customers need it.

Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty! They're fed up with relying on two or three different "monitoring tools" that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else, because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments, so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud.

Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented itself to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again.
So, it started to increasingly feel, in a lot of ways, like cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things. I'm wondering when the generation of prosumer networking equipment, for example, is going to say, "Oh, and send these logs over to what object store?" Because right now, it's still write a file and SFTP it somewhere else—at least the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up with the cloud abstraction instead of "where do you save this file to?" Hopefully you'll never have to deal with an S3-style endpoint, but that abstraction can underpin an awful lot of things. It feels like it's coming back, and that cloud is the de facto way of thinking about things. Is that what you're seeing? Does that align with your belief on this?

AB: I actually fundamentally believe that in the long run, right, applications will go SaaS, right? Like, if you remember the days that you used to install QuickBooks and ACT and stuff, like, in your data center, you used to run your own Exchange servers—those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for these SaaS, whether they are cloud or their own colo—I think that in the long run, it will be multi-cloud and colo all combined, and all of them will look alike. But what I find from the customers' journey is that the Old World and the New World are incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their applications. But this time, it is a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud—bad idea, right? It's going to cost you more, and I would rather not do it. Even though cloud players are trying to make, like, the file and block—like, file system services [unintelligible 00:24:01] and stuff—they make it available at ten times more expensive than object, but it's just to [integrate 00:24:07] some legacy applications. It's still a bad idea to just move legacy applications there. But what I'm finding is that, on cost, if you still run your infrastructure with an enterprise IT mindset, you're out of luck. It's going to be super expensive, and you're going to be left out. Modern infrastructure, because of the scale, has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen. And that's why cloud—in the long run, everyone will look like AWS. We always said that, and it's now becoming true. Like, Kubernetes and MinIO are basically leveling the ground everywhere. They're giving ECS- and S3-like infrastructure inside AWS or outside AWS, everywhere. But what I find to be the challenging part is the cultural mindset. If they still have the old cultural mindset and they want to adopt cloud, it's not going to work. You have to change the DNA, the culture, the mindset, everything. The best way to do it is to go cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask the economics question, the unit economics. Then you will find the answers yourself.

Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective.
And "well, we should go and refactor this because, I don't know, a couple of folks on a podcast said we should" isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people have been talking about getting off of mainframes since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way that they do business that they'd have to rethink a lot of the architectural things that have sprung up around it. I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists.

AB: What I am finding is that if you are running it the enterprise IT style—you are the one telling the application developers, here you go, you have this many VMs, and then you have, like, a VMware license and, like, JBoss, like, WebLogic, and, like, a SQL Server license; now you go build your application—you won't be able to do it. Because application developers talk about Kafka and Redis and, like, Kubernetes; they don't speak the same language. And that's when these developers go to the cloud, finish their application, and take it live, from zero lines of code, before IT can procure infrastructure and provision it for them. The change that has to happen is: how can you give the developers what they want? Now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is, if you're running enterprise IT infrastructure, traditional infrastructure, they are ashamed of talking about it. But then you go to the cloud, and then at scale, some parts of it you want to move out—now you really know why you want to move. For economic reasons—like, particularly the data-intensive workloads become very expensive. And for that part, they go to a colo, but leave the applications on the cloud. So, the multi-cloud model, I think, is inevitable. The expensive pieces—if you are looking at yourself as a hyperscaler, if your data is growing, if your business focus is a data-centric business—parts of the data, the data analytics and ML workloads, will actually go out, if you're looking at unit economics. If all you are focused on is productivity, stick to the cloud and you're still better off.

Corey: I think that's a divide that gets lost sometimes. When people say, "Oh, we're going to move to the cloud to save money," it's, "No, you're not." At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go, therefore, is for a capability story, when it's right for you. That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be "it's cloud or it's trash." No, I'm a big fan of doing things that are sensible, and cloud is not the right answer for every workload under the sun. Conversely, when someone says, "Oh, I'm building a new e-commerce store," or whatnot, "and I've decided cloud is not for me," it's, "Ehh, you sure about that?" That sounds like you are smack-dab in the middle of the cloud use case.
But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be a panacea the way that their sales teams say they will.

AB: Yeah. And I find, like, organizations that have SREs, DevOps, and software engineers running the infrastructure—they actually are ready to go multi-cloud or go to a colo, because they exactly know; they have the containers and Kubernetes, microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture, go to the cloud, rewrite your application.

Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist, because there are basically no companies out there worth mentioning that are, "Yep, we've decided the cloud is terrible, we're taking everything out, and we are going to data centers. The end." In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out; other times, during proofs of concept; and other times, as things hit a certain point of scale, where an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud, and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real.

AB: Absolutely. Actually, what we are finding on the application side—like, parts of their overall ecosystem, right, within the company, they run on the cloud, but on the data side, some of the examples, like, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes, and their plan is to go to exascale. And they are actually doing repatriation because, for them, it's consumer-facing and extremely price-sensitive, and when you're consumer-facing, every dollar you spend counts. And if you don't do it right at scale, it matters a lot, right? It will kill the business. Particularly in the last two years, the cost part became an important element; in their infrastructure, they know exactly what they want. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network—and put it in a colo, or even lease these boxes; they know what their demand is. Even at ten petabytes, the economics start impacting. If you're processing it, on the data side—we have several customers now moving to colo from cloud, and this is the range we are talking about. They don't talk about it publicly because, sometimes, like, you don't want to be anti-cloud; and I think, for them, they're also not anti-cloud. They don't want to leave the cloud. Completely leaving the cloud is a different story. That's not the case. Applications stay there. Data lakes, data infrastructure, the object store particularly—that goes to a colo. Now, your applications from all the clouds can access this centralized—centralized meaning that one object store you run in a colo, and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud—some of them, surprisingly, have a global customer base. And not all of them are cloud. Sometimes, like, for some applications, if you ask what type of edge devices or edge data centers they are running, they say it's a mix of everything.
What really matters is not the infrastructure. Infrastructure in the end is CPU, network, and drives. It's a commodity. It's really the software stack: you want to make sure that it's containerized and easy to deploy and to roll out updates. You have to learn the Facebook-Google style of running a SaaS business. That change is coming.

Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reason more broadly permeating.

AB: Right. Like, competition is always great for customers. They get to benefit from it. So, the decentralization is a path to commoditizing the infrastructure. The bigger picture for me—what I'm particularly happy about is that, for a long time, we carried industry baggage in the infrastructure space. If no one wants to change, no one wants to rewrite applications; as part of the equation, we carried the, like, POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a Service, NFS as a Service. It's too much of a baggage. All of that is getting thrown out. Like, the cloud players helped the customers start with a clean slate. To me, that's the biggest advantage. And now that we have a clean slate, we can go on a whole new evolution of the stack, keeping it simpler, and everyone can benefit from this change.

Corey: Before we wind up calling this an episode, I do have one last question for you. As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess, make peace with the fact that the majority of your user base is not paying you? And I guess, how do you get people to decide, "You know what? We like the cut of his jib. Let's give him some money."

AB: Mm-hm. Yeah, if I looked at it that way, right—I have both the [unintelligible 00:33:38], right, on the open-source side as well as the business. But I don't see them as conflicting. If I ran it as a charity, right—like, I take donations; if you love the product, here is the donation box—that doesn't work at all, right? Then I shouldn't take investor money, and I shouldn't have a team, because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor—if I'm a customer and the software is equal in functionality, I would actually prefer the open-source one and pay even more. But why, really, are customers paying me now, and what's our view on open-source? I'm actually the free software guy. Free software and open-source are actually not exactly equal, right? We are the purest of the open-source community, and we have strong views on what open-source means, right? That's why we call it free software. And free here means freedom, right? Free does not mean gratis, free of cost. It's actually about freedom, and I deeply care about it. For me, it's a philosophy and it's a way of life.
That's why I don't believe in open core and other models that hold back—giving crippleware is not open-source, right? If I give you some freedom, but not all, it breaks the spirit. So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then add commercial support on top. We built the product, we believed in open-source, we still believe, and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications—like, the AGPL license applies to the derivative works; they have to be compatible with AGPL, because we are the creator. If you cannot open-source your application and derivative works, you can buy a commercial license from us. We are the creator; we can give you a dual license. That's how the business model works. That way, the open-source community completely benefits. And it's about the software freedom. There are customers for whom open-source is a good thing, and they want to pay because it's open-source. There are some customers who want to pay because they can't open-source their application and derivative works, so they pay. It's a happy medium; that way, I actually find open-source to be incredibly beneficial. Open-source gave us that trust—like, more than adoption rate. It's not just free to download and use. More than that: the customers that matter, the community that matters, can see the code and everything we did. It's not "because I said so"—marketing and sales, and you believe whatever they say. You download the product, experience it, and fall in love with it, and then, when it becomes an important part of your business, that's when they engage with us, because they talk about license compatibility and data loss or a data breach; all of that becomes important. I don't see open-source as conflicting for the business. It actually is incredibly helpful. And customers see that value in the end.

Corey: I really want to thank you for being so generous with your time. If people want to learn more, where should they go?

AB: I was on Twitter, and now I think I'm spending more time on, maybe, LinkedIn. They can send me a request, and then we can chat. And I'm always, like, spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io; I'm always interested in talking to our user base.

Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.

AB: It's wonderful to be here.

Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice; whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS.
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
This news episode discusses improvements in the JDK, Hibernate 6, Service Weaver, the end of free options on DockerHub for some open-source projects, Gradle, cURL, and plenty of other things. Recorded March 17, 2023. Episode download: LesCastCodeurs-Episode-292.mp3

News. Languages. Which JDK version to use, depending on the features you want but also on long-term support https://whichjdk.com/

JetBrains offers Rust training integrated into its IDEs https://blog.jetbrains.com/rust/2023/02/21/learn-rust-with-jetbrains-ides/ Learning directly inside the IDE, with a dedicated "Academy" plugin that adds a third panel with the instructions and explanations while you do the exercises in the IDE itself. A nice way to learn, built right into your IDE. Anyone should be able to create their own learning resources, and this could be applied to frameworks, tools, or why not your own software project!

Rework of the JDK's Bits / ByteArray classes toward VarHandle-based byte swapping in Java 21 https://minborgsjavapot.blogspot.com/2023/01/java-21-performance-improvements.html A small change, but one used by many classes such as ObjectInputStream, RandomAccessFile, etc.; it improves serialization in Java.

Addition of the notion of "sequenced collection" to the collections hierarchy, planned for JDK 21 https://www.infoq.com/news/2023/03/collections-framework-makeover/ It will codify collections that have a defined order (not necessarily sorted), and also add methods to traverse sequential collections in reverse, or to get or add an element at the beginning or the end of an ordered collection. Today these methods are scattered across the implementations and had no common contract.

The ultimate guide to virtual threads https://blog.rockthejvm.com/ultimate-guide-to-java-virtual-threads/ A very long article covering the new virtual threads: how to create them, how they work, the scheduler and cooperative scheduling, "pinned" virtual threads (when a virtual thread is stuck on a real thread, for example inside a synchronized block or during calls to native methods), thread locals and thread pools.
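As a quick taste of the API that guide covers, here is a minimal, self-contained sketch of the two common ways to start virtual threads on JDK 21; the task count and sleep duration are made up for illustration.

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        // Start a single virtual thread directly.
        Thread vt = Thread.ofVirtual().name("worker-0").start(
                () -> System.out.println("Hello from " + Thread.currentThread()));
        vt.join();

        // The more common pattern: one cheap virtual thread per task.
        // Blocking calls park the virtual thread and free its carrier thread,
        // except when pinned (e.g., inside synchronized blocks or native calls).
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i -> executor.submit(() -> {
                Thread.sleep(Duration.ofMillis(100));
                return i;
            }));
        } // close() waits for all submitted tasks to finish
    }
}
```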
Libraries. Quarkus 3 alpha 5 with Hibernate ORM 6 and a new DevUI https://quarkus.io/blog/quarkus-3-0-0-alpha5-released/ The move from Hibernate 5 to 6 (so test!), with a compatibility switch to help the transition https://github.com/quarkusio/quarkus/wiki/Migration-Guide-3.0:-Hibernate-ORM-5-to-6-migration#database-orm-compatibility (DB interaction, especially schema). StatelessSession is injectable. Gradle 8. A new DevUI (new look and feel, more extensible for extensions and easier to use, going beyond extension integrations such as config, etc.). quarkus deploy in the CLI, Gradle, and Maven: deploys to Kube, Knative, OpenShift.

The road to Quarkus 3, an article on InfoQ https://www.infoq.com/news/2023/03/road-quarkus-3/ Jakarta EE, ORM 6, MicroProfile 6, virtual threads, io_uring, ReactiveStreams => Flow. io_uring reduces buffer copies between userspace and kernel space. No JPMS support in sight, but Red Hat contributes to Project Leyden. For Camel extensions, wait for Camel 4 (the move to Jakarta EE).

Interview with Geert Bevin, the author of the RIFE2 Java framework https://devm.io/java/rife2-java-framework

Google announces Service Weaver https://opensource.googleblog.com/2023/03/introducing-service-weaver-framework-for-writing-distributed-applications.html EJB is back (Enterprise Go Beans :D). You write your system as a modular monolith and let the deployment decide what gets distributed, based on their experience of the maintenance burden of microservices (contracts that are harder to break, the need to coordinate rollouts, etc.). The community includes enthusiasts, and also people concerned about the fallacies of distributed computing and about hiding remote calls; EJB, and CORBA before it, were failures from that point of view. They don't explain how the binding of new contracts and deployments happens transparently. Pluggable deployers (Go and GKE initially).

An opinion survey of certain Jakarta EE users (the OmniFaces community) https://omnifish.ee/2023/03/10/jakarta-ee-survey-2022-2023-results/ Biased, so be careful. Java EE 8, followed by Jakarta EE 8, and behind that Jakarta EE 10, etc. WildFly, then Payara, then GlassFish, then TomEE and JBoss EAP; people are happy with their app servers, except for WebLogic and WebSphere. The most-used APIs: JPA, CDI, REST, Faces, Servlet, Bean Validation, JTA, EJB, EL, etc. MicroProfile products: Quarkus, then WildFly, then Open Liberty, then Payara and Helidon. Within MicroProfile: Config, REST Client, OpenAPI, Health, and Metrics are the most used.

How to use records with Hibernate https://thorben-janssen.com/java-records-embeddables-hibernate/ Not as entities yet (records are final and have no no-arg constructor), but as @Embeddable. Records are immutable; in Hibernate 6.2 this is supported by default (annotate the record with @Embeddable). It uses the EmbeddableInstantiator contract. (A minimal sketch follows after this block of items.)

Five amazingly comfortable Java libraries https://tomaszs2.medium.com/5-amazingly-comfortable-java-libraries-887802e240de MapStruct: map entities to DTOs. jOOQ: typed database queries. WireMock: mock APIs, or sit between the client and the API to mock only certain requests. Eclipse Collections: to make code simpler and easier to understand (mind the attack surface, though). HikariCP: a fast connection pool; Agroal is in the same vein but supports JTA, and it's what ships in Quarkus.
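Here is the minimal sketch promised in the records-with-Hibernate item above, assuming Hibernate 6.2 or later; the Address and Customer types are hypothetical.

```java
import jakarta.persistence.Embeddable;
import jakarta.persistence.Embedded;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;

// A record cannot be an entity (it is final and has no no-arg constructor),
// but Hibernate 6.2 maps it as an embeddable out of the box.
@Embeddable
record Address(String street, String city, String zipCode) {}

@Entity
class Customer {

    @Id
    @GeneratedValue
    Long id;

    // Persisted as flattened columns (street, city, zipCode) on the customer table.
    @Embedded
    Address address;
}
```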
Feedback on Hibernate 6 https://www.jpa-buddy.com/blog/hibernate6-whats-new-and-why-its-important/ Changes on both the API side and the engine side: Jakarta Persistence 3; Java 11. Hibernate's type annotations are typesafe. JSON types are supported out of the box. Better date support with @TimeZoneStorage, either native to the database or with a separate column. A change in ID generation (a breaking change), but the historical naming strategies can be enabled. Options around UUIDs (time-based and IP-based). Composite IDs no longer need to be Serializable. Long text types are supported via @JdbcTypeCode. Multitenancy (shared schema, pluggable tenant resolver). Read-by-position (shorter SQL without aliases, faster deserialization, fewer joins in some cases). A common underlying model between HQL and the Criteria API, hence a single engine. Better SQL generation and more modern SQL functions, reducing the gap between HQL and SQL; analytic and window functions when the database supports them. Graphs are traversed breadth-first rather than depth-first (potentially more joins, so do mark your associations lazy).
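Two of those Hibernate 6 features fit in a few lines. A minimal sketch, assuming Hibernate 6.2+; the Event entity and its fields are hypothetical.

```java
import java.time.OffsetDateTime;
import java.util.Map;

import org.hibernate.annotations.JdbcTypeCode;
import org.hibernate.annotations.TimeZoneStorage;
import org.hibernate.annotations.TimeZoneStorageType;
import org.hibernate.type.SqlTypes;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;

@Entity
class Event {

    @Id
    @GeneratedValue
    Long id;

    // Mapped to the database's native JSON type, supported out of the box.
    @JdbcTypeCode(SqlTypes.JSON)
    Map<String, String> attributes;

    // Keeps the offset natively if the database supports it,
    // instead of normalizing everything to UTC.
    @TimeZoneStorage(TimeZoneStorageType.NATIVE)
    OffsetDateTime happenedAt;
}
```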
Cloud. Docker is removing open-source organizations on DockerHub https://blog.alexellis.io/docker-is-deleting-open-source-images/ Open-source projects risk having to go from $0 to $420 per year to host their images. Docker backpedals https://www.docker.com/blog/we-apologize-we-did-a-terrible-job-announcing-the-end-of-docker-free-teams/

Web. A knowledge base on how webhooks work and the good practices around them https://nordicapis.com/exploring-webooks-fyi-the-webhooks-knowledge-center/ Guillaume rebuilt his blog https://glaforge.dev/ This time it's a static website, generated with Hugo, with articles in Markdown, hosted on GitHub Pages, and built / published automatically by GitHub Actions.

Tooling. Gradle 8.0 is out https://docs.gradle.org/8.0/release-notes.html A CLI connected to OpenAI's Davinci model to generate your command lines https://github.com/TheR1D/shell_gpt sgpt -se "start nginx using docker, forward 443 and 80 port, mount current folder with index.html" -> docker run -d -p 443:443 -p 80:80 -v $(pwd):/usr/share/nginx/html nginx -> Execute shell command? [y/N]: y

A small online tool based on the GPT-3 model that explains a piece of code https://whatdoesthiscodedo.com/g/db97d13 Paste a snippet of fewer than 1,000 characters, and GPT-3's code model explains what those few lines of code do. Quite impressive when you consider that it's a probabilistic model predicting the next logical characters. Some answers really give the impression that the tool genuinely understands the developer's intent behind the code.

Git: how to rebase stacked branches https://adamj.eu/tech/2022/10/15/how-to-rebase-stacked-git-branches/ native-image will be included in the next GraalVM JDK release; no more need for gu install native-image https://github.com/oracle/graal/pull/5995 If you use the Mermaid tool to draw architecture or interaction diagrams, there's a nice little cheat sheet showing how to build certain kinds of diagrams https://jojozhuang.github.io/tutorial/mermaid-cheat-sheet/ A site full of tips and tricks for psql, PostgreSQL's interactive terminal https://psql-tips.org/

cURL is 25 years old! https://daniel.haxx.se/blog/2023/03/10/curl-25-years-online-celebration/ Its creator, Daniel Stenberg, still heads the project. cURL is used in countless projects and ships by default in plenty of operating systems.

Cédric Champeau explains the concept of Gradle's version catalogs and how they improve productivity https://melix.github.io/blog//2023/03-12-micronaut-catalogs.html They reduce the time and effort needed to manage dependency versions, and also bring more safety and flexibility, helping ensure you have the right, most recent versions of your dependencies and that they work well together.

Architecture. The code quality pyramid of needs https://www.fabianzeindl.com/posts/the-codequality-pyramid The bottom of the pyramid supports the top: build performance, test performance, testability, component code quality, features, code performance. For each block he explains the reasons, his definitions, and tips to improve it. For example, features change, so build performance, testability, and code quality allow lightweight changes when the features change; performance comes afterwards ("premature optimization is the root of all evil"), looking at the global needs.

Methodologies. DevSusOps is born https://www.infoq.com/news/2023/02/sustainability-develop-operation/?utm_campaign=i[…]nt&utm_source=twitter&utm_medium=feed&utm_term=culture-methods Seriously, how do we cover a name like that without going off the rails :man-facepalming: ah, too bad. Microsoft joins the FinOps Foundation https://www.infoq.com/news/2023/02/microsoft-joins-finops-org/?utm_campaign=infoq_content&utm_source=twitter&utm_medium=feed&utm_term=Cloud Imagine if they had joined the DevSusOps foundation.

Security. Plenty of things you can do with YubiKeys https://debugging.works/blog/yubikey-cheatsheet/ To generate time-based one-time passwords, for SSH access, to secure a KeePass database, as 2FA for disk encryption, for personal identity verification, to manage private keys…

Law, society, and organizations. Dutch chip-lithography maker ASML is barred from exporting its technologies to China https://www-lemagit-fr.cdn.ampproject.org/c/s/www.lemagit.fr/actualites/365532284/Processeurs[…]le-escalade-dans-les-sanctions-contre-la-Chine?amp=1 In any case, the lithography technologies of the last two generations; from commercial pressure, this moves to an exclusion list by military decision. ASML was spied on recently. Canon and Sony are also covered by the restriction. Meta cuts another 10,000 jobs, 25% in total since the end of last year https://www.lesechos.fr/tech-medias/hightech/meta-va-supprimer-10000-postes-de-plus-1915528

Beginners' corner. Moving the elements of a list https://www.baeldung.com/java-arraylist-move-items It discusses the array-list concept underneath, and hence the cost of inserting in the middle; a discovery of Collections.swap (to exchange two elements) and Collections.rotate (to "move" the zero index of the list).
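A small runnable sketch of those two methods; the list contents are arbitrary.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class MoveListItems {
    public static void main(String[] args) {
        List<String> items = new ArrayList<>(List.of("a", "b", "c", "d", "e"));

        Collections.swap(items, 0, 4);   // exchange two elements: [e, b, c, d, a]

        // Shift every element right by one; the last element wraps to the front:
        // [a, e, b, c, d]
        Collections.rotate(items, 1);

        // Rotating a subList is the classic trick to "move" a single element
        // within a range in one pass instead of a remove/add pair.
        Collections.rotate(items.subList(0, 3), -1); // [e, b, a, c, d]

        System.out.println(items);       // [e, b, a, c, d]
    }
}
```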
Conferences. The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors: March 15-18, 2023: JChateau - Cheverny, in the Châteaux of the Loire Valley (France); March 23-24, 2023: SymfonyLive Paris - Paris (France); March 23-24, 2023: Agile Niort - Niort (France); March 30, 2023: Archilocus - Online (France); March 31-April 1, 2023: Agile Games France - Grenoble (France); April 1-2, 2023: JdLL - Lyon 3e (France); April 4, 2023: AWS Summit Paris - Paris (France); April 4, 2023: Lyon Craft - Lyon (France); April 5-7, 2023: FIC - Lille Grand Palais (France); April 12-14, 2023: Devoxx France - Paris (France); April 20, 2023: WordPress Contributor Day - Paris (France); April 20-21, 2023: Toulouse Hacking Convention 2023 - Toulouse (France); April 21, 2023: WordCamp Paris - Paris (France); April 27-28, 2023: AndroidMakers by droidcon - Montrouge (France); May 4-6, 2023: Devoxx Greece - Athens (Greece); May 10-12, 2023: Devoxx UK - London (UK); May 11, 2023: A11yParis - Paris (France); May 12, 2023: AFUP Day - Lille & Lyon (France); May 12, 2023: SoCraTes Rennes - Rennes (France); May 25-26, 2023: Newcrafts Paris - Paris (France); May 26, 2023: Devfest Lille - Lille (France); May 27, 2023: Polycloud - Montpellier (France); May 31-June 2, 2023: Devoxx Poland - Krakow (Poland); May 31-June 2, 2023: Web2Day - Nantes (France); June 1, 2023: Javaday - Paris (France); June 1, 2023: WAX - Aix-en-Provence (France); June 2-3, 2023: Sud Web - Toulouse (France); June 7, 2023: Serverless Days Paris - Paris (France); June 15-16, 2023: Le Camping des Speakers - Baden (France); June 20, 2023: Mobilis in Mobile - Nantes (France); June 20, 2023: Cloud Est - Villeurbanne (France); June 21-23, 2023: Rencontres R - Avignon (France); June 28-30, 2023: Breizh Camp - Rennes (France); June 29-30, 2023: Sunny Tech - Montpellier (France); June 29-30, 2023: Agi'Lille - Lille (France); September 8, 2023: JUG Summer Camp - La Rochelle (France); September 19, 2023: Salon de la Data Nantes - Nantes (France) & Online; September 21-22, 2023: API Platform Conference - Lille (France) & Online; September 25-26, 2023: BIG DATA & AI PARIS 2023 - Paris (France); September 28-30, 2023: Paris Web - Paris (France); October 2-6, 2023: Devoxx Belgium - Antwerp (Belgium); October 10-12, 2023: Devoxx Morocco - Agadir (Morocco); October 12, 2023: Cloud Nord - Lille (France); October 12-13, 2023: Volcamp 2023 - Clermont-Ferrand (France); October 12-13, 2023: Forum PHP 2023 - Marne-la-Vallée (France); October 19-20, 2023: DevFest Nantes - Nantes (France); November 10, 2023: BDX I/O - Bordeaux (France); December 6-7, 2023: Open Source Experience - Paris (France); January 31-February 3, 2024: SnowCamp - Grenoble (France).

Contact us. To react to this episode, come and discuss on the Google group https://groups.google.com/group/lescastcodeurs Contact us via Twitter https://twitter.com/lescastcodeurs Record a crowdcast or a crowdquestion. Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All the episodes and all the info at https://lescastcodeurs.com/
A daily look at the relevant information security news from overnight - 30 June, 2022. Episode 255 - 30 June 2022.

OpenSea Makes Waves - https://techcrunch.com/2022/06/30/nft-opensea-data-breach/ XFiles XPands - https://www.bleepingcomputer.com/news/security/xfiles-info-stealing-malware-adds-support-for-follina-delivery/ 8220 Miner Upgrade - https://www.zdnet.com/article/microsoft-warning-this-malware-that-targets-linux-just-got-a-big-update/ Brocade Broken - https://www.securityweek.com/brocade-vulnerabilities-could-impact-storage-solutions-several-major-companies AstraLocker Attack - https://www.bleepingcomputer.com/news/security/astralocker-20-infects-users-directly-from-word-attachments/ Dangling Chromium - https://portswigger.net/daily-swig/chromium-browsers-vulnerable-to-dangling-markup-injection

Hi, I'm Paul Torgersen. It's Thursday June 30th 2022, happy birthday Jayden, and this is a look at the information security news from overnight.

From TechCrunch.com: NFT marketplace OpenSea has suffered a massive data breach. It seems a staffer at their vendor Customer.io shared the entire email database with a third party. If you have shared your email with OpenSea at any time in the past, you should assume you were impacted. Be on the lookout for targeted phishing emails coming your way.

From BleepingComputer.com: These next two are quick hits on malware strains upgrading their exploits. The XFiles info-stealer has added a delivery module that exploits the Windows Follina vulnerability. On a side note, XFiles has also recruited new members recently and is launching new products. Details in the article.

From ZDNet.com: Along those same lines, Microsoft is warning about notable updates to malware targeting Linux servers to install cryptominers and IRC bots. The 8220 gang has added new functionality to exploit the recent Confluence vulnerability, as well as an old 2019 WebLogic bug. Details in the article.

From SecurityWeek.com: Broadcom revealed that the Brocade SANnav storage area network is affected by nine vulnerabilities, some of which could impact the products of their partner companies, such as HPE, NetApp, Oracle, Dell, Fujitsu, IBM, Lenovo and others. There is no evidence as of yet that these have been exploited in the wild, but get your patch on, kids.

From BleepingComputer.com: The ransomware strain called AstraLocker has recently released its second major version, which drops its payload directly from email attachments, specifically Word docs. Obviously this smash-and-grab type of attack is looking for quick payouts and not trying for persistence or lateral movement. Full write-up in the article.

And last today, from PortSwigger.net: A recently patched security hole in Chromium browsers allowed attackers to bypass safeguards against dangling markup injection and extract sensitive information from webpages. While dangling markup injection is well known and addressed in Chrome, the new attack took advantage of an unaddressed case in how the browser upgrades unsafe HTTP connections. You know where to find the details.

That's all for me today. Have a great rest of your day. Like and subscribe, and until tomorrow, be safe out there.
Kent talks about Smart Home products transitioning to aid businesses, particularly in home rentals. He analyzes the current tech and how it impacted the short-term rental experience. Ryan and Kent then discuss the current landscape of the Smart Space industry, with Kent sharing insights on trends and challenges he's witnessed. The podcast is wrapped up with what to look out for from Yonomi and the Smart Space industry.Kent Dickson is an experienced technology leader who enjoys building great teams and disruptive products. He is VP and GM of IoT Platforms & Services for Allegion, a global pioneer in seamless access, with leading brands like CISA®, Interflex®, LCN®, Schlage®, SimonsVoss®, and Von Duprin®. Before Allegion, Kent was co-founder and CEO of Yonomi, the simple connected home integration platform, which joined the Allegion family of brands in January 2021. Kent's background includes serving as General Manager of GridMachine, a massive scale Grid Computing-as-a-Service operated as a business unit of Sentient AI. Kent spent nine years at BEA Systems, leading the teams for several market-defining products in the WebLogic and AquaLogic lines. Kent has spent ten years working on the Smart Home frontier, partnering with leading device makers, voice assistants, AI innovators, and service providers. Kent holds a BS in Aerospace Engineering and an MBA from the University of Colorado, Boulder.
An airhacks.fm conversation with Stuart Marks (@stuartmarks) about: the Wang Laboratories 2200 computer at age 10, David Ahl's 101 Basic Computer Games, Basic without "else", GOTO and GOSUB, Pascal records and Java, conditional evaluation in Pascal, the criticism of Pascal, Bill Joy added the socket interface to BSD 4.2, replacing VMS with BSD, the Bill Joy long weekend, starting at Sun Microsystems, working with James Gosling on the NeWS window system, a PostScript-based window system, NeWS ran on SunOS, SunOS 5 became Solaris, the unpleasant UNIX wars with AT&T, HP, and IBM, X Window vs. NeWS, shared state and NeWS, Display PostScript became the NeXT system, the merged X-NeWS OS, Open Look and Motif, OSF - the Open Software Foundation, Motif became the dominant toolkit, creating an eCommerce system with Java at Sun, working with James Gosling on NeWS, project Oak and Project Green, Star Seven, licensing WebLogic and Tengah, Personal Java and the Java Ring, Java on the Sharp Zaurus and on Palm, working on J2ME, working with JavaFX, Chris Oliver started JavaFX, F3 and Forms Follow Function, JavaFX Script was its own language, Richard Bair was the JavaFX architect, Jasper Potts was the JavaFX UI designer, JavaFX is based on final classes, the fragile base class / brittle base class problem, the general subclassing problem, implementing a 2D traversal algorithm for JavaFX, Sun was shrinking, JavaFX was growing, Brian Goetz worked to improve the JavaFX internals, RIAs - Rich Internet Applications, Silverlight, Flash, Flex and JavaFX, JavaFX supported CSS, the compiler bug war story, binding propagates side effects, Robert Field is working on jshell, Stuart Marks on twitter: @stuartmarks, Stuart Marks blog: stuartmarks.wordpress.com
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Scott Dietzen is Vice Chairman of the Board of Pure Storage and served as the company's CEO from 2010 to 2017. Under his leadership, Pure grew to thousands of employees and completed an IPO in 2015. Dietzen is a four-time successful entrepreneur, with WebLogic, Zimbra, and Transarc before Pure. Before Pure, he was President and CTO of Zimbra (now part of VMware, but originally acquired by Yahoo!), where Dietzen served as interim SVP of Communications and Communities. Prior to Zimbra, Dietzen was CTO of BEA Systems, where he helped craft the technology and business strategy for WebLogic that drove BEA from $61m in annual revenues prior to the WebLogic acquisition to over $1B.

In Today's Episode with Scott Dietzen You Will Learn:

1.) The Journey to Pure Storage CEO: How did Scott make his way into the world of tech and startups? What was the hardest element of making the transition from CTO to CEO? What advice would Scott give to more technical leaders looking to make the move to CEO? Where do so many make mistakes?

2.) How To Build Trust in a Team: What are the most important ways that leaders can build trust with their teams? How can leaders be honest and share the hard truths without damaging morale? What is the right tone to communicate both the big wins and the big losses? Why does Scott always believe the losses teach more? How does Scott approach post-mortems?

3.) The Biggest Mistakes Founders Make: What are the single biggest hiring mistakes that founders make? What are the single biggest firing mistakes executives make? Why should founders sometimes say no to customers? How should founders approach investor selection and valuation for rounds?

4.) How to Optimise a Board: What specifically can founders do to optimise their board? What are the biggest errors founders make when communicating with their board? What is the value-per-word framework? How does it tell which board member is the best?

Items Mentioned In Today's Episode with Scott Dietzen: Scott's Favourite Book: The 22 Immutable Laws Of Marketing
2022-01-11 Weekly News - Episode 130. Watch the video version on YouTube at https://youtu.be/BkIKAlDLFkQ

Hosts: Gavin Pickin - Senior Software Developer for Ortus Solutions; Eric Peterson - Senior Software Developer for Ortus Solutions.

Thanks to our Sponsor - Ortus Solutions, the makers of ColdBox, CommandBox, ForgeBox, TestBox and almost every other Box out there. A few ways to say thanks back to Ortus Solutions: Like and subscribe to our videos on YouTube. Subscribe to our Podcast on your Podcast Apps and leave us a review. Sign up for a free or paid account on CFCasts, which is releasing new content every week. Buy Ortus's Book - 102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips).

Patreon Support: We have 37 patreons providing 97% of the funding for our Modernize or Die Podcasts via our Patreon site: https://www.patreon.com/ortussolutions.

News and Events

Upcoming Ortus Webinar - cbwire + Alpine.js with Grant Copley. January 28, 2022 - 11:00 AM CT - Central Time (US and Canada). In this webinar, Grant, lead developer for cbwire, will showcase how to build modern, reactive CFML apps easily using very little JavaScript. Register today: https://www.ortussolutions.com/events/webinars

Log4j Updates: The Log4j 2.17.1 patch was released. CommandBox images were updated with the latest patched Log4j jars. Adobe has an updated technote: https://helpx.adobe.com/coldfusion/kb/log4j-2-17-0-vulnerability-coldfusion.html Other libraries like Spreadsheet-CFML have updated as well. Note: Log4j2 support in Lucee 5.3 is coming along for 5.3.9.

'Elephant Beetle' Lurks for Months in Networks: The group blends into an environment before loading up trivial, thickly stacked, fraudulent financial transactions too tiny to be noticed but adding up to millions of dollars. This beetle adores Java. The group is "highly proficient" with Java-based attacks and often targets legacy Java apps running on Linux machines - primarily the Java-based web servers WebSphere and WebLogic - as a means of initial entry to a target environment, the researchers explained. Beyond that, Elephant Beetle even deploys its own complete Java web application to do the gang's bidding on compromised machines that are, meanwhile, chugging along, running legitimate apps. https://threatpost.com/elephant-beetle-months-networks-financial/177393/?fbclid=IwAR0ytUYx0IOxiNXIUE1jHvqDV0ltP_hBf7XCdEyLEYHfSaKadwf01xPkHLI

Adobe Workshops: More Adobe #ColdFusion Workshops announced, led by Damien Bruyndonckx. Two dates announced: February 2, 2022, 9.00 AM - 4.30 PM CET / 1.30 PM - 9.00 PM IST; and March 09, 2022, 9.00 AM - 4.30 PM CET / 1.30 PM - 9.00 PM IST. https://cf-workshop.meetus.adobeevents.com/

AngularJS EOL'ed 12/31/2021: As AngularJS is faced with an uncertain future, many teams are searching for answers to the current hot topic: if you are using AngularJS, do you continue to maintain your AngularJS applications, or do you migrate your applications to another framework?
This is not an easy (or cheap) question to answer. In this article, we'll go over some of the reasons why you should consider migrating your AngularJS applications, and some ideas on how to plan and budget for a successful migration. https://www.thisdot.co/blog/why-you-should-consider-migrating-from-angularjs-to-vue

CFCasts Content Updates: https://www.cfcasts.com Just released: the Into the Box 2021 sessions are now all FREE - https://cfcasts.com/series/into-the-box-2021 Coming soon: Into the Box LATAM. Send your suggestions at https://cfcasts.com/support

Conferences and Training

VueJS Nation Conference - Online Live Event, January 26th & 27th 2022. Register for free: https://vuejsnation.com/

More conferences: Need more conferences? This site has a huge list of conferences for almost any language/community. https://confs.tech/

Blogs, Tweets and Videos of the Week

Tweet - Adam Cameron - TIL something new about CFOUTPUT: I cannot go into details of why this is a good find, but I was unaware that one can pass an encoding algorithm name like `` (and a bunch of others) which will automatically escape the values in `#expression#`. Didn't know that. https://cfdocs.org/cfoutput https://twitter.com/adam_cameron/status/1480624980668915716 https://twitter.com/adam_cameron

Tweet - James Moberg - Microsoft taking Log4j stuff seriously: While performing some #coldfusion unit testing to identify #log4j exploit attempts (that my WAF may miss), I had to obfuscate the test strings or @msftsecurity would instantly quarantine & report the script. It's good to see that Microsoft is taking this seriously. #cfml https://twitter.com/gamesover/status/1476347523245694984 https://twitter.com/gamesover

Blog - James Moberg - Log4j Exploit Pattern Detection Using ColdFusion/CFML: Here are my initial attempts at trying to detect Log4j exploit attempts that may make it past our WAF/service provider protections. While our WAF stopped requests from Trend Micro's Log4j Tester, obfuscated requests made it through. At the time of testing, Azure wasn't blocking requests. I had to be a little careful with the script, as Windows kept instantly quarantining the CFM files and prevented ColdFusion from executing the template. 2021-12-29: Updated rules based on a Google Cloud article to additionally block rmi, ldaps & dns (in addition to stripping whitespace). https://dev.to/gamesover/log4j-exploit-pattern-detection-using-coldfusioncfml-4l17
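The detection idea in that post is language-neutral even though the post itself is CFML: normalize the input, then look for a JNDI lookup prefix. Here is a rough, hypothetical sketch of the same idea in Java; it is not James Moberg's actual rule set, and nowhere near a complete WAF rule.

```java
import java.util.Locale;
import java.util.regex.Pattern;

public class Log4jProbeDetector {
    // Matches ${jndi:ldap...}, ${jndi:ldaps...}, ${jndi:rmi...}, ${jndi:dns...}
    // after the normalization below. A heuristic only.
    private static final Pattern JNDI_LOOKUP =
            Pattern.compile("\\$\\{jndi:(ldap|ldaps|rmi|dns)");

    static boolean looksLikeProbe(String input) {
        String normalized = input
                .toLowerCase(Locale.ROOT)
                .replaceAll("\\s+", "")                          // strip whitespace tricks
                .replaceAll("\\$\\{(lower|upper):(.)\\}", "$2"); // unwrap ${lower:j} etc.
        return JNDI_LOOKUP.matcher(normalized).find();
    }

    public static void main(String[] args) {
        System.out.println(looksLikeProbe("${jndi:ldap://evil.example/a}"));        // true
        System.out.println(looksLikeProbe("${${lower:j}ndi:${lower:l}dap://x/a}")); // true
        System.out.println(looksLikeProbe("GET /index.html HTTP/1.1"));             // false
    }
}
```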
Tweet - Zac Spitzer - Show some love for the VS Code CFML Extension: Awesome to see some activity on the vscode-cfml extension, a new minor release coming soon. If you use it, please show some love and star the repo https://github.com/KamasamaK/vscode-cfml #lucee #coldfusion #cfml https://twitter.com/zackster/status/1476206001384828929 https://twitter.com/zackster

Blog - Ben Nadel - Building An API Client With The fetch() API In JavaScript: In my continued effort to modernize this blog, I'm thinking about trying to replace the jQuery library with more modern techniques. I don't personally have anything against jQuery; but, by replacing it, I'll have an opportunity to learn newer - and hawter - JavaScript APIs (at the expense of robust browser support). Case in point, I want to replace the jQuery.ajax() method with a fetch()-based API client. I've never used the fetch() method before; so, this will be an exciting exploration! When consuming an API, you should always create an API client… https://www.bennadel.com/blog/4179-building-an-api-client-with-the-fetch-api-in-javascript.htm

Blog - Ben Nadel - Showing A Comment Preview As You Type On This Blog: Since comments on this blog are authored using Markdown (and ColdFusion), there is a delta between what you write in the intake form and what is eventually rendered in the HTML. Much of the time, this delta is expected; however, if you have small errors in your Markdown syntax, you can end up with HTML that does not reflect what you had intended to publish. To help narrow the gap between input and output, I've added a comment preview functionality to this blog. https://www.bennadel.com/blog/4178-showing-a-comment-preview-as-you-type-on-this-blog.htm

Blog - Ben Nadel - Mitigating Cross-Site Scripting (XSS) Attacks With A Strict Content Security Policy (CSP) In ColdFusion 2021: As I continue to evolve my blogging platform, bringing it into the modern ColdFusion era, I'm trying to catch up on best practices. Of course, I've always used SQL query parameterization to block SQL injection attacks. And, I use encodeForHtml() and encodeForHtmlAttribute() in as many places as is feasible. And when converting user-provided Markdown into HTML, I use the OWASP AntiSamy project to sanitize the HTML output. But, one thing I've never had is a Content Security Policy (CSP). A CSP is yet another line of defense in the war against Cross-Site Scripting (XSS) attacks. CAUTION: I Am Not A Security Expert. https://www.bennadel.com/blog/4176-mitigating-cross-site-scripting-xss-attacks-with-a-strict-content-security-policy-csp-in-coldfusion-2021.htm
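For comparison with the ColdFusion approach in that post, here is what a similar header looks like as a minimal, hypothetical Jakarta Servlet filter; the policy string is a simplified example, and a genuinely strict CSP would add per-request nonces.

```java
import java.io.IOException;

import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletResponse;

// Sets a Content-Security-Policy header on every response.
public class CspFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Static example policy: same-origin resources only, no plugins,
        // and no inline <script> blocks (which is what defeats most XSS).
        ((HttpServletResponse) res).setHeader("Content-Security-Policy",
                "default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'self'");
        chain.doFilter(req, res);
    }
}
```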
Which means, I can safely start playing with reply-based comment posting using Postmark's Inbound stream! https://www.bennadel.com/blog/4174-posting-comments-using-reply-emails-and-postmarks-inbound-streams-in-coldfusion-2021.htm
Blog - Ben Nadel - Centralizing The Error Response Handling For My ColdFusion Blog. If you've noticed that my blog has been quite quiet over the last few weeks, it's because I've dedicated December to modernizing and upgrading my blogging infrastructure. The refactoring has been extensive, to say the least; and, on the list of things that I've wanted to do for a long time is centralizing my error response handling in my ColdFusion code. It took me several days to find, factor out, and normalize my errors; but, I think I have it at a point that I can easily refine and evolve going forward. https://www.bennadel.com/blog/4173-centralizing-the-error-response-handling-for-my-coldfusion-blog.htm

CFML Jobs
Several positions available on https://www.getcfmljobs.com/
Listing over 256 ColdFusion positions from 111 companies across 131 locations in 5 countries. 7 new jobs listed:
Contract - CFML Developer at Remote - United States, Jan 11 - https://www.getcfmljobs.com/viewjob.cfm?jobid=11407
Full-Time - Software Developer - ColdFusion at Overland Park, KS - United States, Jan 11 - https://www.getcfmljobs.com/jobs/index.cfm/united-states/Software-Developer-ColdFusion-at-Overland-Park-KS/11406
Full-Time - IT Engineer Applications (Coldfusion developer/admin) : 19-0.. - United States, Jan 11 - https://www.getcfmljobs.com/jobs/index.cfm/united-states/IT-Engineer-Applications-Coldfusion-developeradmin-1905340-at-Portland-OR/11405
Full-Time - Senior Coldfusion Developer |LATAM| at Colon, PA - United States, Jan 11 - https://www.getcfmljobs.com/jobs/index.cfm/united-states/Senior-Coldfusion-Developer-LATAM-at-Colon-PA/11404
Full-Time - ColdFusion Developer at Virtual, US - United States, Jan 10 - https://www.getcfmljobs.com/jobs/index.cfm/united-states/ColdFusionDev-US/11403
Full-Time - Remote Software Developer (Cold Fusion) at Mississauga, ON - Canada, Dec 31 - https://www.getcfmljobs.com/jobs/index.cfm/canada/Remote-CFDev-at-ON-CA/11401
Full-Time - Fresh Software Engineer (For ColdFusion Only) at Ahmedabad,.. - India, Dec 30 - https://www.getcfmljobs.com/jobs/index.cfm/india/Fresh-Software-Engineer-For-ColdFusion-Only-at-Ahmedabad-Gujarat/11402

ForgeBox Module of the Week
JSON-Diff by Scott Steinbeck. A ColdFusion utility for checking whether two JSON objects have differences. Call JSONDiff.diff to get a detailed list of changes made between the JSON objects, or call JSONDiff.isSame to get a simple boolean true or false. https://www.forgebox.io/view/jsondiff

VS Code Hint Tips and Tricks of the Week
Excel Viewer: If you're working with data, there's a high chance that you'll also encounter an Excel spreadsheet in some form. Excel Viewer makes it easy to deal with Excel data in your VS Code editor by formatting long and comma-separated strings into a tabled format.
This can work wonders for your .csv, .tsv, and .tab extensions. https://marketplace.visualstudio.com/items?itemName=GrapeCity.gc-excelviewer

Funny link: https://twitter.com/dawntraoz/status/1479490317766336518

Thank you to all of our Patreon Supporters
These individuals are personally supporting our open source initiatives to ensure that great tooling like CommandBox, ForgeBox, ColdBox, ContentBox, TestBox and all the other boxes keep getting the continuous development they need, and to fund the cloud infrastructure that our community relies on, like ForgeBox for our package management with CommandBox. You can support us on Patreon here: https://www.patreon.com/ortussolutions
Now offering Annual Memberships: pay for the year and save 10% - great for businesses. Bronze Packages and up now get ForgeBox Pro and CFCasts subscriptions as a perk of their Patreon subscription. All Patreon supporters have a profile badge on the community website, and their own private forum access, at https://community.ortussolutions.com/
Patreons: John Wilson - Synaptrix, Eric Hoffman, Gary Knight, Mario Rodrigues, Giancarlo Gomez, David Belanger, Jonathan Perret, Jeffry McGee - Sunstar Media, Dean Maunder, Joseph Lamoree, Don Bellamy, Jan Jannek, Laksma Tirtohadi, Carl Von Stetten, Dan Card, Jeremy Adams, Jordan Clark, Matthew Clemente, Daniel Garcia, Scott Steinbeck - Agri Tracking Systems, Ben Nadel, Mingo Hagen, Brett DeLine, Kai Koenig, Charlie Arehart, Jonas Eriksson, Jason Daiger, Jeff McClain, Shawn Oden, Matthew Darby, Ross Phillips, Edgardo Cabezas, Patrick Flynn, Stephany Monge, Kevin Wright, Steven Klotz
You can see an up-to-date list of all sponsors on Ortus Solutions' website: https://ortussolutions.com/about-us/sponsors
★ Support this podcast on Patreon ★
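As a companion to Ben Nadel's CSP write-up above, here is a minimal sketch of what a strict, nonce-based Content-Security-Policy header looks like, expressed in Java servlet terms rather than CFML; the filter name and request-attribute key are made up for illustration, and Ben's post implements the same idea in ColdFusion.

```java
import jakarta.servlet.*;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.security.SecureRandom;
import java.util.Base64;

// A minimal sketch: emit a strict, per-request-nonce CSP header.
public class CspFilter implements Filter {
    private static final SecureRandom RANDOM = new SecureRandom();

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes);
        String nonce = Base64.getEncoder().encodeToString(bytes);

        // Only scripts carrying this per-request nonce may execute;
        // inline script, eval, plugins, and foreign base URIs are blocked.
        ((HttpServletResponse) res).setHeader("Content-Security-Policy",
                "default-src 'self'; script-src 'nonce-" + nonce + "'; object-src 'none'; base-uri 'none'");
        req.setAttribute("cspNonce", nonce); // hypothetical key; templates render it into <script nonce="...">
        chain.doFilter(req, res);
    }
}
```

The header itself is stack-agnostic; the important part is generating a fresh nonce per request and rendering it into every legitimate script tag.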
An airhacks.fm conversation with Ed Burns (@edburns) about: the episode about Ed's first computer: "#161 SGI, NCSA Mosaic, Sun, Java, JSF, Java EE, Jakarta EE and Clouds", enabling Jakarta EE servers to run well on Azure, working with IBM and Oracle to support OpenLiberty on Azure and WebLogic on Azure, working with payara cloud, Azure Container Instances as the cloud way of "docker run", JBoss EAP on Azure App Service, MicroProfile, Jakarta EE and Java EE application servers on Azure, Lift and Shift with kubernetes and Azure Kubernetes Service, Azure Container Apps - the sweet spot of ACI and ACR, cloud portability with Kubernetes, IaC with ARM Templates, WebLogic on Kubernetes was using Bicep, "the complexity tax", Microsoft joins the Java Community Process (JCP), Microsoft Build of OpenJDK, Azure Event Bus and Azure Service Bus, "#111 Java / Jakarta Messaging Service (JMS) on ...Microsoft Azure", Payara Cloud on Azure - the serverless server, OpenLiberty on AKS, JBoss EAP on Azure App Service, the Azure Service Connector, Azure Services as a Service -- the anti-corruption layer, Azure ExpressRoute and Azure Virtual Network, Event Driven Architectures and Azure Logic Apps, Ed Burns on twitter: @edburns
An airhacks.fm conversation with Mark Sailes (@MarkSailes3) about: the BBC Micro computer with a cassette, the PRINT 10, 386, 486 and a Pentium with an internet connection, learning Apache, using Mandrake Linux at university, a first web page - a huge experience, PHP, MySQL and "we don't need transactions", the fantastic phpMyAdmin, using Java, C++ and Python at the university, the great JavaDoc, Eclipse and NetBeans, the great Java collection JavaDoc, migrating from java.util.Vector to java.util.List, working as a junior backend Java developer, from junior via senior to team lead, 3% improvement with 97% rewrite, working for AWS, the "Essentialism: The Disciplined Pursuit of Less" book, the WebLogic build engineer, pre-pooling EJBs, Hey Enterprise EJB Developers Now Is The Time To Go Serverless, Lambda with API Gateway is a transition to Event Driven Architectures, Using AWS Lambda with an Application Load Balancer, cloud native, event driven architectures with AWS Lambda and Java, testable, asynchronous AWS Lambda, the serverless Kafka on AWS, archive and replay with Amazon EventBridge, fast cold starts with AWS Lambda, millisecond invocations with AWS Lambda, testing asynchronous AWS Lambda with JUnit, the limitations of mocking, AWS Cloud Development Kit (CDK) and AWS SAM CLI, swapping out Lambdas with SAM, describing AWS infrastructure with CDK, no YAML deployments with CDK, shareable infrastructure with compilable Java code, AWS CDK constructs - reusable cloud pieces. Mark Sailes on twitter: @MarkSailes3, Mark's blog: mark-sailes.medium.com
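The "infrastructure as compilable Java, no YAML" point is easy to make concrete. Below is a minimal, hypothetical sketch in the AWS CDK v2 Java style: the stack and handler names, jar path, and memory size are invented, but the shape follows the CDK Java API, and `cdk synth` emits the CloudFormation for you.

```java
import software.amazon.awscdk.App;
import software.amazon.awscdk.Stack;
import software.amazon.awscdk.services.lambda.Code;
import software.amazon.awscdk.services.lambda.Function;
import software.amazon.awscdk.services.lambda.Runtime;

// A sketch of a shareable, compilable infrastructure definition.
public class ServerlessStack extends Stack {
    public ServerlessStack(final App scope, final String id) {
        super(scope, id);
        // The Lambda function is declared in plain Java -- no hand-written YAML.
        Function.Builder.create(this, "GreetingsFn")
                .runtime(Runtime.JAVA_11)
                .handler("com.example.GreetingsHandler::handleRequest") // hypothetical handler
                .code(Code.fromAsset("target/greetings-lambda.jar"))    // assumed build output
                .memorySize(512)
                .build();
    }

    public static void main(String[] args) {
        App app = new App();
        new ServerlessStack(app, "greetings-stack");
        app.synth(); // produces the CloudFormation template
    }
}
```

Because the definition is ordinary Java, it can be unit-tested, refactored, and shared as a library, which is the "reusable cloud pieces" idea behind CDK constructs.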
An airhacks.fm conversation with David Blevins (@dblevins) about: Code Generation with bash, bash is your best friend, scripting as documentation, learn first, then automate, an opportunity to work on an EJB container, working on EJBOSS, working with the great Richard Monson-Haefel, co-founding openEJB with Richard, Bluestone and Gemstone servers, exolab was an incubator, openJMS, openEJB and castor, working with Apple to integrate openEJB with Apple's WebObjects, openEJB on Apple's WebObjects box, from experience to cash, the concept of isolated containers in openEJB, Dain Sundstrom wrote CMP for JBoss, Rickard Öberg started at openEJB for two weeks, creating Geronimo in 2003 as a competitor to JBoss, announcing Geronimo at theserverside.com, Geronimo was over-engineered, a good idea at a bad time is a bad idea, Convention over Configuration vs. explicit configuration, openEJB's Java Serialization was faster than WebLogic's T3, Geronimo's configuration was not portable, joining gluecode, gluecode was sold to IBM, Jason van Zyl was the creator of Maven, Jason van Zyl created Sonatype, jelly - the executable XML, the Maven 2 rollout was tested with openEJB, switching from Codehaus to Apache, 600 people were working on WebSphere, Dan Allen was working on arquillian, Arquillian used openEJB internally, JBoss 7 became Wildfly, creating TomEE after JavaOne 2010, TomEE stopped consulting, tomitribe provides support for TomEE, Tomcat, ActiveMQ, TomEE 9 starts in 2 seconds, TomEE passes the TCK with 64MB RAM, TomEE lost access to the TCK in 2013 before Java EE 7, TomEE got access back in December 2019, TomEE is working on MicroProfile 4.0, TomEE uses Apache Johnzon for JSON-P, TomEE uses Apache projects to implement the Jakarta EE and MicroProfile specifications, TomEE uses BeanValidation for JWT validation, using BeanValidation for authorization with custom data in JWT, Tribestream - the API Gateway, David Blevins on twitter: @dblevins and David's company: tomitribe
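To make the JWT points concrete, here is a small sketch of the standard MicroProfile JWT side of that idea: a JAX-RS resource where the token's group claim drives authorization and a custom claim is injected. The resource, claim name, and role are hypothetical; TomEE's BeanValidation-based checks layer on top of this standard machinery, and the package namespace (javax vs. jakarta) varies by TomEE version.

```java
import javax.annotation.security.RolesAllowed;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.jwt.Claim;
import org.eclipse.microprofile.jwt.ClaimValue;
import org.eclipse.microprofile.jwt.JsonWebToken;

@Path("/orders")
public class OrderResource {

    @Inject
    private JsonWebToken jwt; // the parsed, signature-verified token

    @Inject
    @Claim("tenant") // hypothetical custom claim carried in the JWT
    private ClaimValue<String> tenant;

    @GET
    @RolesAllowed("buyer") // authorization driven by the token's "groups" claim
    public String list() {
        return "orders for " + jwt.getName() + " in tenant " + tenant.getValue();
    }
}
```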
This week on the podcast, Kyle explains the new SPB and SPBAT tools for WebLogic patching, Dan discusses the new External Data Integration for Elasticsearch, and then they talk about using Git Patches with the DPK, SQRs, and COBOL. Show Notes SPBAT Patch Process for WebLogic @ 2:00 https://blogs.oracle.com/weblogicserver/update-on-the-weblogic-server-july-2021-patch-set-update-psu https://blogs.oracle.com/blogbypuneeth/different-ways-to-apply-the-quarterly-released-weblogic-critical-patch-updates Webinar External Data Integration @ 10:15 PeopleBooks Entry Debugging the ACM with the DPK @ 14:30 DPK Git Patches @ 21:00 Andy Dorfman's DPK Patches Dan Roque's DPK Patches Git for COBOL and SQR @ 30:00
An airhacks.fm conversation with Lawrence R. Peterson about: a Tandy TRS-80 at age 35, practicing law in 1974, terrible IBM typewriters, handling 400 cases per month, increasing the productivity of a law practice with computers, changing the law, soldering computers in leisure time, learning Pascal, buying a 12k AT&T computer and learning C, writing pleading-management software with Unix and dumb terminals, writing a file-based database on UNIX, buying a SUN workstation, retooling to C++, network programming with Sun workstations and C++, "write once, run everywhere", Java was solving a lot of problems, transferring to the Oracle Application Development Framework (ADF), WebLogic and Java, primefaces, RichFaces, icefaces, MyFaces, woodstock and Netbeans, overloading the court with too many perfect cases, practicing Agile without knowing it, migration from WebLogic to quarkus, programming is like a murder mystery, a U.S. missionary in Bavaria, airhacks.live workshops, merging the microservices back into a monolith, From Redux to Redux Toolkit coupon code: redux4free, the bce.design template, Lawrence's software and website: juristec.com
An airhacks.fm conversation with Rudy De Busscher (@rdebusscher) about: plants and genetics, strawberry cross-pollination experiments, playing plant-related games, statistics calculations and classification algorithms, tomato quality-check automation, Fourier transforms on tomatoes, learning Pascal, learning Oracle Forms, switching to Java Server Faces on WebLogic Server, from WebLogic to Glassfish, wasting time by creating a "unique snowflake", working as a Java EE consultant, blood-sample analysis with device integration, Java Connector Architecture and Java EE, starting at Payara, Payara implements MicroProfile 4.0, Payara implements MicroProfile "from scratch", Payara comes with deep MicroProfile integration, the Payara InSight monitoring dashboard, the "happy case" focus, letsencrypt Payara integration, Payara Grid is the successor of Glassfish Shoal, persistent EJB timers can be synchronized with Hazelcast, Payara Cloud comes with a "serverless" experience, Payara Cloud is a kubernetes operator, the WAR as the cloud deployment unit, a Payara Micro for each WAR in a Pod, Payara Server is the orchestrator, Payara Cloud is currently running on Microsoft Azure. Rudy De Busscher on twitter: @rdebusscher
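As one concrete example of the MicroProfile 4.0 APIs Payara implements, here is a minimal readiness check sketch; the check name and the stubbed database ping are assumptions made for illustration.

```java
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

// A minimal MicroProfile Health readiness probe; the runtime exposes it
// automatically under /health/ready alongside other registered checks.
@Readiness
@ApplicationScoped
public class DatabaseReadinessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        boolean reachable = pingDatabase(); // placeholder for a real connectivity test
        return HealthCheckResponse.named("database")
                .status(reachable)
                .build();
    }

    private boolean pingDatabase() {
        return true; // stubbed out for the sketch
    }
}
```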
An airhacks.fm conversation with Shaun Smith (@shaunmsmith) about: the virtual conference problem, prerecorded talks, pre-recording and cheating, the Drive-In Conf in bulgaria, the state of Fn Java, building a scalable platform is harder than building the fn-project, lambdas and functions are starting to be used properly, migrating monoliths to lambda functions, deploying a JAX-RS resource as a function, moving from Oracle Cloud to Core Java at Oracle Labs, product manager for GraalVM, Maxwell, Maxine and GraalVM, airhacks.fm episode: From Maxwell over Maxine to Graal VM, SubstrateVM and Truffle, from Java bytecode to machine code, COBOL, WebAssembly, PHP, Python, R, LLVM, WebAssembly on CloudFlare, Java annotations vs. Java annotation processing, mapping Java Persistence API (JPA) ideas to Micronaut Data, Micronaut Data is based on conventions, JPA is based on defaults, Micronaut Data is similar to iBatis, small microservices become too expensive, you can serve a few million customers with a single monolith, the netflix monolith architecture, the overhead of kubernetes, Google Cloud Run, heroku-like services become popular again, Oracle Application Cloud Service, Google Cloud Run, multi-tier compilation for truffle, booting faster with GraalVM, Java Serialization with GraalVM, Java Espresso or running Java as a foreign language on Java, Espresso interprets Java bytecode, GraalVM introduces resource constraints for bytecode execution, GraalVM becomes a docker-like environment, GraalVM improves security guarantees, Java SecurityManager APIs on steroids with GraalVM, the gvisor project, WebLogic multi-tenancy features, GraalVM in the Oracle Database, stored procedures in the Oracle Database with GraalVM or the Oracle Multilingual Engine, GraalVM ships Java VisualVM, GraalVM Community Edition comes with the same license as openJDK, a benchmark suite for the JVM, GraalVM CE should perform faster than openJDK, GraalVM EE is a lot faster than GraalVM CE, GraalVM consumes less resources, GraalVM comes with partial escape analysis, GraalVM comes with the G1 garbage collector, GraalVM isolates are nested JVMs, GraalVM goes JVM-less, OpenJ9 vs. GraalVM performance, openJDK performance is competitive with openJ9, the AuroraJVM on the Oracle Database, Oracle Coherence GoldenGate HotCache and TopLink, running JPA backwards, debezium subscribes to XStream, the GraalVM advisory board. Shaun Smith on twitter: @shaunmsmith
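The Espresso and polyglot points are easier to picture with GraalVM's polyglot API, which this short sketch uses to evaluate JavaScript from Java in-process; Espresso applies the same Context machinery to running Java bytecode itself as a guest language. This is a minimal sketch, assuming a GraalVM runtime with the JavaScript language installed.

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

// Evaluate a guest language (JavaScript) from Java, in the same process.
public class PolyglotDemo {
    public static void main(String[] args) {
        try (Context context = Context.create()) {
            Value result = context.eval("js", "21 + 21");
            System.out.println("js says: " + result.asInt()); // prints 42
        }
    }
}
```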
On The Cloud Pod this week, the team discusses the future of the podcast and how they'll know they've made it when listeners use Twitter to bombard Ryan with hatred when he's wrong. A big thanks to this week's sponsors: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights Amazon gives Justin a long overdue birthday present. Google wants to educate the people. Azure has a new best friend but could they be a wolf in sheep’s clothing? General News: Goodbye, Friend The Apache Foundation has decided to send Mesos to the attic. This makes us sad because we loved the concept. Amazon Web Services: Happy Birthday, Justin New AWS WAF Bot Control to reduce unwanted website traffic. This is great! AWS is releasing the Amazon Route 53 Resolver DNS firewall to defend against DNS-level threats. Pricing is interesting on this one. AWS launches CloudWatch Metric Streams. After years of complaints, they're finally fixing this issue. AWS Lambda@Edge changes duration billing granularity from 50ms down to 1ms. Nice price cut! AWS Direct Connect announces MACsec encryption for dedicated 10Gbps and 100Gbps connections at select locations. AWS has fulfilled their promise to Justin — three years later. Amazon announces new predictable pricing model up to 90% lower and Python Support moves to GA for CodeGuru Reviewer. If this goes down next week, blame Ryan. Google Cloud Platform: So Pretty Google is releasing an open-source set of JSON dashboards. This is super important. Google announces free AI and machine learning training for fraud detection, chatbots and more. We recommend you check these out. Google Cloud's Database Migration Service is now generally available. Everything is so beautiful on paper. Google introduces request priorities for Cloud Spanner APIs. This just reinforces the fact that we don't know how Cloud Spanner works. Azure: Best Friends Microsoft’s new low-code programming language, Power FX, is in public preview. Terrible name. Microsoft announces new solutions for Oracle WebLogic on Azure Virtual Machines. They're running WebLogic on Azure because of some product requirement. The U.S. Army moves Microsoft HoloLens-based headset from prototyping to production phase. You don't get JEDI, but you get HoloLens! Microsoft launches Azure Orbital to deepen the value chain for geospatial earth imagery on cloud. Reminded us to watch Lord of War again, it's a good movie. Oracle: Win Dinner With Larry Oracle offers free cloud migration to lure new customers. Oracle CEO Larry Ellison will fly you to his private island — but if you don't sign up, you have to make your own way back. Oracle and Microsoft expand interconnection to Frankfurt, adding a third location in EMEA. Don't invite Oracle into your data center. TCP Lightning Round Anyone who makes fun of the Canadian accent wins, so Justin takes this week's point and the lead, leaving scores at Justin (5), Ryan (3), Jonathan (5).
Other headlines mentioned: Azure Kubernetes Service (AKS) now supports node image autoupgrade in public preview Public preview of Azure Kubernetes Service (AKS) run-command feature Amazon WorkSpaces webcam support now generally available Amazon VPC Flow Logs announces out-of-the-box integration with Amazon Athena AWS WAF now supports Labels to improve rule customization and reporting Amazon EKS is now FedRAMP-High Compliant AWS Budgets announces CloudFormation support for budget actions AWS Systems Manager Parameter Store now supports easier public parameter discoverability AWS Systems Manager Run Command now displays more logs and enables log download from the console Amazon EC2 now allows you to copy Amazon Machine Images across AWS GovCloud, AWS China and other AWS Regions AWS Systems Manager Parameter Store now supports removal of parameter labels Announcing Amazon Forecast Weather Index for Canada Things Coming Up Public Sector Summit Online — April 15–16 Discover cloud storage solutions at Azure Storage Day — April 29 AWS Regional Summits — May 10–19 AWS Summit Online Americas — May 12–13 Microsoft Build — May 19–21 (Digital) Google Financial Services Summit — May 27th Harness Unscripted Conference — June 16–17 Google Cloud Next — Not announced yet (one site says Moscone is reserved June 28–30) Google Cloud Next 2021 — October 12–14, 2021 AWS re:Invent — November 29–December 3 — Las Vegas Oracle Open World (no details yet)
An airhacks.fm conversation with Antonio Goncalves (@agoncal) about: a C64 with tapes, writing thousands of Basic lines, the Power Cartridge and assembly, the "10 GOTO 10" trick, line renumbering with the Power Cartridge, the arkanoid game, from BASIC to assembly, Peeks and Pokes, Pascal, a prolog to modulog transpiler, programming chips in C++ for a telecom company, discovering Java and WebLogic, the amazing minitel, minitel was huge in France, building Java Server Pages on WebLogic in 1999, joining WebLogic in London, digging holes to find water, the Java EE 5 book with Glassfish in 2007, the Java EE 7 book in 2013, talking at Devoxx about JUnit 4, moving from WebLogic to GlassFish, Java EE is the Esperanto of runtimes and servers, Marc Fleury at Paris JUG, the unknown student from Iran, paying back by reviewing a book, self-publishing books, the Java EE 8 drama, the politics in Java EE 8 were stronger than technical innovation, the Java Injection spec, JSR-330, the CDI drama, the road to quarkus, Graeme Rocher's Micronaut talk, from Spring via Micronaut to Quarkus, the Practicing Quarkus and Understanding Quarkus books, Quarkus hot reload is impressive, GraalVM with Quarkus is just -Pnative, at start everything is already optimized with Quarkus, Helidon is an interesting alternative to Quarkus, Helidon's CLI is useful, WebLogic customers get support for Helidon, Antonio Goncalves on twitter: @agoncal, Antonio's github account https://github.com/agoncal and blog antoniogoncalves.org
An airhacks.fm conversation with Cedric Beust (@cbeust) about: the Apple II was the first love, building an Apple II emulator, the C64 domination, starting with Basic, then switching to 6502 assembly, cracking games for fun, learning Pascal, starting to study Math because Computer Science was not available, working as an administrator at school, switching to an Amiga 1000 then an Amiga 2000, joining the demo scene, the impact of remote applications as a PhD, working with C++ and CORBA, C++ language involvement, meeting Bjarne Stroustrup, evolving a language is hard, starting with Java in 1996, joining Sun Labs in 1998, implementing "persona" at Sun Labs with Java, Sun was not the right place to work with Java, applying at Inprise to work on the Borland Application Server, meeting the WebLogic developers at a party, joining WebLogic, C++ was hard to work with, Java was a breath of fresh air, the EJB container team was 10 developers, writing EJBGen, working on Java annotations, the relation between EJBGen and xdoclet, Attribute Oriented Programming with XDoclet, the metadata should live near the Java code, joining the JCP to create Java Annotations, starting at Google to work on AdWords, motivated by the shortcomings of JUnit, TestNG was created in 2004, WebLogic vs. WebSphere, tests should depend on each other, TestNG was an exploration of a modern framework, Google's mobile team were 5 people in 2005, starting a mobile Gmail project at Google on J2ME, Java Mobile, Google's acquisition of Android, working with Andy Rubin to develop a Java-based OS, a team of 5 developers started to build Android, Android was strategic for Larry Page, users should be in power: this was the spirit of Android, Android development was "Top Secret", leaving Google to join a startup, building internal tools for supervision at LinkedIn, creating a calendar assistant at a startup, starting as a "firefighter" at Yahoo in the Java space, starting okta, okta is a "universal" SSO, implementing SSO across companies at okta, okta's backend is written in Java. Cedric Beust on twitter: @cbeust, Cedric's blog
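The "tests should depend on each other" idea is the signature TestNG feature; here is a minimal sketch with invented test names showing how dependsOnMethods skips downstream tests, rather than failing them noisily, when a prerequisite breaks.

```java
import org.testng.annotations.Test;

// A sketch of TestNG's test-dependency model with hypothetical test names.
public class CheckoutFlowTest {

    @Test
    public void login() {
        // ... authenticate against the test environment
    }

    @Test(dependsOnMethods = "login")
    public void addItemToCart() {
        // ... runs only after login() has passed
    }

    @Test(dependsOnMethods = "addItemToCart")
    public void checkout() {
        // ... the whole chain is SKIPPED, not failed, if an upstream step breaks
    }
}
```

Compared with classic JUnit's fully independent tests, this makes one root cause produce one failure plus skips, instead of a cascade of misleading red tests.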
An airhacks.fm conversation with Romain Grecourt (@rgrecourt) about: the introduction of clean Java EE 6 API guidelines by Bill Shannon, the guidelines were implemented by Romain, the Maven Versioning Rules by Bill Shannon, predictable groupIds, artifactIds and package names in Java EE 6, helidon comes with a flat classloader, in helidon there is no distinction between helidon's and third-party libraries, Java EE 7 fixed the uncompilable API issue, the API jar is the implementation of the API, Java EE APIs from different vendors may vary, the javax API was not meant to be universal, Bill Shannon was one of the Solaris architects, the "Oracle Native Developer", GlassFish v2 and v3 were "bleeding edge", early GlassFish versions were built with Apache Ant, WebLogic multi-tenancy and vertical scaling, WebLogic build system modernization, migration from Jira and Mercurial to GitHub, migration from svn to git, GlassFish started with cvs then transitioned to svn, KDE's svn to git migration, during the transition from Java EE GlassFish to Jakarta EE GlassFish some history got lost, the "Java For Cloud" project, "Java For Cloud" is the ancestor of Helidon, WebLogic 8 was very fast, GlassFish v3 was internally modularized, Helidon was inspired by Java 8 functional programming capabilities and expressjs, Java For Cloud was "Functional First and Reactive First", Java For Cloud became the Helidon Web Server, Helidon SE would compete with Vert.x, Reactive Programming is Helidon's implementation detail, Helidon supports Java Loom, Helidon SE is faster than Helidon MicroProfile, CQRS might help with database scalability, the Helidon CLI is written in Java and translated with GraalVM to a native executable, the vuejs CLI developer experience inspired the Helidon CLI, GraalVM: the goodness of Go and the greatness of Java, the Helidon CLI will support pluggable extensions, Helidon comes with a home-made templating framework, wad.sh - the "Watch and Deploy" tool, jib - daemon-less docker image builds, incremental Docker re-builds, Helidon and direct support for Kubernetes, the minimalistic, beautiful YAML, xdoclet and Attribute Oriented Programming, maven has no knowledge about plugins, maven vs. gradle, the Thirsty Bear GlassFish party, Romain Grecourt on twitter: @rgrecourt, helidon's slack channel
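A taste of the "Functional First" Helidon SE style mentioned above, sketched against the Helidon SE 2.x API; the route and port are placeholders, and the API differs across Helidon versions.

```java
import io.helidon.webserver.Routing;
import io.helidon.webserver.WebServer;

// A minimal, expressjs-flavored Helidon SE web server sketch.
public class Main {
    public static void main(String[] args) {
        Routing routing = Routing.builder()
                .get("/greet", (req, res) -> res.send("Hello from Helidon SE!"))
                .build();

        WebServer.builder(routing)
                .port(8080)
                .build()
                .start(); // serves until the JVM exits
    }
}
```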
An airhacks.fm conversation with Lukas Eder (@lukaseder) about: a Unisys 8086, don't break your dad's computer, playing with "format", starting with QBasic at age 12, serial-cable chat programs in QBasic, Turbo Pascal at 15, changing the font in the BIOS, starting a CMS with PHP and MySQL, no transactions, no connection pools in PHP, the beginning with serverless and CGI, Java is not a website technology, Java static pages vs. PHP includes, enterprise PHP: Zend Framework, from PHP to Java, the PHP 4 to PHP 5 migration and the assignment operator, enjoying Java 1.3, Ant vs. Maven 1, a reporting project for a telco company with Java and Hibernate, writing backends in SQL and frontends with XSLT, stateless, functional programming with XSQL and SQL, the jooq manual was built with XSLT, apache Cocoon and XSLT, Servlets and the Java Message Service (JMS) with WebLogic, from the Hibernate query builder to jooq in 2006, cascading interfaces which feel like SQL, everyone built a query builder, rewriting jooq - jooq2 in 2008, queryDSL - the abstraction across multiple query languages, jooq only abstracts SQL, dynamic "where" clauses with the criteria query, jooq stands for: j-object oriented query, jooq started with stored procedure support, SQLJ the preprocessor, Pro*C - the C preprocessor for Oracle to generate boring glue code, jooq 1 was a procedural query builder, the jooq 2 DSL API looks like SQL and uses the query builder layer, the database-first design, SQL is not composable, SQL: different syntax on different levels, 1000 lines of jooq code is not unusual, DSLContext - the starting point, commercial support for jooq is available, database migrations with jooq, opensource vs. commercial edition, dependency on products, saving costs with opensource, focus on Jakarta EE, Java EE, MicroProfile API vs. direct runtime dependencies, working with dynamic SQL and jooq, database vs. Java first. Lukas Eder on twitter: @lukaseder
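The dynamic "where" clause point is worth a sketch. This uses jOOQ's plain-SQL string API so it runs without generated classes; the table and column names are invented for illustration.

```java
import static org.jooq.impl.DSL.field;
import static org.jooq.impl.DSL.noCondition;
import static org.jooq.impl.DSL.table;

import org.jooq.Condition;
import org.jooq.DSLContext;
import org.jooq.Record;
import org.jooq.Result;

// A sketch of composing a dynamic WHERE clause with jOOQ.
public class DynamicWhereExample {

    public static Result<Record> findBooks(DSLContext ctx, String title, Integer year) {
        // Start from a neutral predicate and add clauses only when a filter is present.
        Condition condition = noCondition();
        if (title != null)
            condition = condition.and(field("TITLE", String.class).likeIgnoreCase("%" + title + "%"));
        if (year != null)
            condition = condition.and(field("PUBLISHED_YEAR", Integer.class).eq(year));

        return ctx.select()
                  .from(table("BOOK"))
                  .where(condition) // composes to zero, one, or two predicates
                  .fetch();
    }
}
```

Because conditions are ordinary values, the query stays type-checked and composable where raw SQL string concatenation would not, which is the "SQL is not composable" problem jOOQ addresses.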
The conversation covers: An overview of Ravi's role as an evangelist — an often misunderstood, but important technology enabler. Balancing organizational versus individual needs when making decisions. Some of the core motivations that are driving cloud native migrations today. Why Ravi believes in empowering engineers to make business decisions. Some of the top misconceptions about cloud native. Ravi also provides his own definition of cloud native. How cloud native architectures are forcing developers to “shift left.” Links https://harness.io/ Twitter: https://twitter.com/ravilach Harness community: https://community.harness.io/ Harness Slack: https://harnesscommunity.slack.com/
Transcript
Emily: Hi everyone. I'm Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product's value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn't talk about them. Instead, we talk a lot about technical reasons. I'm hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you'll join me.
Welcome to The Business of Cloud Native, I am your host Emily Omier. And today I'm chatting with Ravi Lachhman. Ravi, I want to always start out with, first of all, saying thank you—
Ravi: Sure, excited to be here.
Emily: —and second of all, I like to have you introduce yourself, in your own words. What do you do? Where do you work?
Ravi: Yes, sure. I'm an evangelist for Harness. So, what an evangelist does, I focus on the ecosystem, and I always like the joke, I marry people with software because when people think of evangelists, they think of a televangelist. Or at least that's what I told my mother and she believes me still. I focus on the ecosystem Harness plays in. And so, Harness is a continuous delivery as a service company. So, what that means: all of the confidence-building steps that you need to get software into production, such as approvals and test orchestration, Harness helps you do with lots of convention, and as a service.
Emily: So, when you start your day, walk me through what you're actually doing on a typical day?
Ravi: A typical day—dude, I wish there was a typical day because we wear so many hats as a start-up here, but kind of a typical day for me and a typical day for my team, I ended up reading a lot. I probably read about two hours a day, at least during the business day. Now, for some people that might not be a lot, but for me, that's a lot. So, I'll usually catch up with a lot of technology news and news in general, just to kind of see how certain things are playing out. So, a big fan of The New Stack, big fan of InfoQ. I also like reading Hacker News for more emotional reading. The big orange angry site, I call Hacker News. And then really just interacting with the community and teams at large. So, I'm the person I used to make fun of, you know, quote-unquote, “thought leader.” I used to not understand what they do, then I became one that was like, “Oh, boy.” [laughs]. And so just providing guidance for some of our field teams, some of the marketing teams around the cloud-native ecosystem, what I'm seeing, what I'm hearing, my opinion on it. And that's pretty much it. And I get to do fun stuff like this, talking on podcasts, always excited to talk to folks and talk to the public.
And then kind of just a mix of, say, making some sort of demos, or writing scaffolding code, just exploring new technologies. I'm pretty fortunate in my day to day activities.
Emily: And tell me a little bit more about marrying people with software. Are you the matchmaker? Are you the priest, what role?
Ravi: I can play all parts of the marrying lifecycle. Sometimes I'm the groom, sometimes I'm the priest. But I'm really helping folks make technical decisions. So, it's kind of a joke because I get the opportunity to take a look at a wide swath of technology. And so just helping folks make technical decisions. Oh, is this new technology hot? Does this technology make sense? Does this project have vitality? What do you think? I just play, kind of, master of ceremonies for folks who are making technology decisions.
Emily: What are some common decisions that you help people with, and common questions that they have?
Ravi: A lot of times it comes around to common questions about technology. It's always finding rationale. Why are you leveraging a certain piece of technology? The ‘why' question is always important. Let's say that you're a forward-thinking engineer or a forward-thinking technology leader. They also read a lot, and so if they come across, let's say a new hot technology, or if they're on Twitter, seeing, yeah, this particular project's getting a lot of retweets, or they go in GitHub and see oh, this project has little stars, or forks. What does that mean? So, part of my role when talking to people is actually to kind of help slow that roll down, saying, “Hey, what's the business rationale behind you making a change? Why do you actually want to go about leveraging a certain, let's say, technology?” I'm just taking more of a generic approach, saying, “Hey, the shiny penny today might not be the shiny penny tomorrow.” And also just providing some sort of guidance like, “Hey, let's take a look at project vitality. Let's take a look at some other metrics that projects have, like defect close ratio—you know, how often its updates are happening, what's your security posture?” And so just walking through more of, I would say, the non-fun tasks or non-functional tasks, and also looking at how to operationalize something like, “Hey, given you want to make sure you're maintaining innovation, and making sure that you're maintaining business controls, what are some best operational practices?” You know, want to go for gold, or don't boil the ocean, it's helping people make decisive decisions.
Emily: What do you see as sort of the common threads that connect to the conversations that you have?
Ravi: Yeah, so I think a lot of the common threads are usually like people say, “Oh, we have to have it. We're going to fall behind if you don't use XYZ technology.” And when you really start talking to them, it's like, let's try to line up some sort of technical debt or business problem that you have, and how are you going to solve these particular technical challenges? It's something that, in the space I play in, which is ironic, it's the double-edged sword, I call it ‘chasing conference tech.' So, sometimes people see a really hot project, if my team implements this, I can go speak at a conference about a certain piece of technology. And it's like, eh, is that a really rational reason? Maybe. It kind of goes into taking the conversation slightly somewhere else.
One of the biggest challenges I think, let's say if you're kind of climbing the engineering ranks—and this is something that I had to do as I went from a junior to a staff to a principal engineer in my roles—with that it's always having some sort of portfolio. So, if you speak at a conference, you have a portfolio, people can Google your name, funny pictures of you are not the only things that come up, but some sort of technical knowledge, and sometimes that's what people are chasing. So, it's really trying to balance that emotional decision with what's best for the firm, what's best for you, and just what's best for the team.
Emily: That's actually a really interesting question—sometimes what's best for the individual engineer is not what's best for the organization. And when I say individual engineer, maybe it's not one individual, but five, or the team. How do you sort of help piece together and help people understand here's the business reason, that's organization-wide, but here's my personal motivation, and how do I reconcile these, and is there a way even to get both?
Ravi: There actually is a way to get both. I call it the 75/25 percent rule. And let's take all the experience away from the engineers, to start with a blank slate. It has to do with the organization. An organization needs to set up engineers to be successful in being innovative. And so if we take the timeline or the scale all the way back to hiring, so when I like to hire folks, I always like to look at—my ratio is a little bit different than 75/25. I'm more of a 50/50. You bring 50 percent of the skills, and you'll learn 50 percent of the skills, versus more conservative organizations would say, “You know what? You have 75 percent of the skills, if you can learn 25 percent of the skills, this job would be interesting to you.” Versus if you have to learn 80 percent, it's going to be frustrating for the individual. And so having that kind of leeway to make decisions, and also knowing that technical change can take a lot of time, I think, as an engineer—speaking of the software engineering profession as a whole—how do you build your value? So, your value is usually calculated in two parts. It's calculated in your business domain experience and your technical skills. And so when you go project to project—and this is what might be more of, hey, if you're facing too big of a climb, you'll usually change roles. Nobody is in their position for a decade. Gone are the days that you're a lifetime engineer on one project or one product. It's kind of a given that you'll change around that because you're building your repertoire in two places: you're building domain experience, and you're building technical experience. And so knowing when to pick your battles, as cliche as that sounds, oh, you know what, this particular technology, this shiny penny came out. I seen a lot of it when Kubernetes came out, like, “Oh, we have to have it.” But—or even a lot of the cloud-native and container-based and all the ‘et cetera accessories' as I call it, as those projects get steam surrounding it. It's, “We have to have it.” It's like, eh. It's good for resume building, but there are things to do on your own also to learn it. I think we live in a day of open source. And so as an engineer, if I want to learn a new skill, I don't necessarily have to wait for my organization to implement it.
I could go and play with something like Katacoda, I can go do things on my own, I can learn and then say, “You know what, this is a good fit. I can make a bigger play to help implement it in the organization than just me wanting to learn it.” Because a lot of the learning is free these days, which I think is amazing. I know that was a long-winded answer. But I think you can kind of quench the thirst of knowledge by playing with it on your own, and then, if it makes sense, you can make a much better case to the business or to technology leadership to make change.
Emily: And what do you think the core business motivations are for most of the organizations that you end up talking to?
Ravi: Yeah, [unintelligible] core motivation to leveraging cloud-native technology, it really depends on organization to organization. I'm pretty fortunate that I get to span, I think, a wide swath of organizations—so from startups to pretty established enterprises—I kind of talk about the pretty established enterprises. A lot of the business justification, it might not be a technical justification, but there's a pseudo-technical business reason; a lot of times, though, when I talk to folks, their big concern is portability. And so, like, hey, if you take a look at the dollars and cents rationale behind certain things, the big play there is portability. So, if you're leveraging—we can get into the definition of what cloud-native resources are, but a big draw to that is being portable—and so, hopefully, you're not tied down to a single provider, or single purveyor, and you have the ability to move. Now, that also ties into agility. Supposedly, if you're able to use ubiquitous hardware or semi-ubiquitous software, you were able to move a little bit faster. But again, what I usually see is folks' main concern is portability. And then also with that is [unintelligible] up against scale. And so as—looking at ways of reducing resources, if you could use generics, you're able to shop around a little bit better, either internally or externally, and help provide scale for a softer or lesser cost.
Emily: And how frequently do you think the engineers that you talked to are aware of those core business motivations?
Ravi: Hmm, it really depends on—I'm always giving you the ‘depends' answer because talking to a wide swath of folks—where I see there's more emotion involved in a good way if there's closer alignment to the business—which is something hard to do. I think it is slowly eroding and chipping away. I've definitely seen this during my career. It's the old stodgy business-first technology argument, right. Like, modern teams, they're very well [unintelligible] together. So, it's not an us versus them or cat versus dog argument, “Oh, why do these engineers want to take their sweet time?” versus, “Why does the business want us to act so fast?” So, having the engineers empowered to make decisions, and having them looked at as the center of innovation instead of a cost center, is fairly key. And so having that type of rationale, like, hey, allowing the engineers to give input into feature development, even requirement development, is something I've seen change throughout my career. It used to be a very special thing to do requirements building, versus most of the projects that I've worked on now—as an engineer, we're very, very well attuned to the requirements with the business.
Emily: Do you think there's anything that gets lost in translation?
Ravi: Oh, absolutely. As people, we're emotional.
And so if we're all the sum total of our experiences—so let's say if someone asked you and me a question, Emily, we would probably have different answers for that person, just because maybe we have differences in opinions, differences of sum totals of experience. And I might say, “Hey, try this or this,” and then you might say, “Try that or that.” So, it really depends. Being lost in translation is always—it's been a fundamental problem in requirements gathering and it's continued to be a fundamental problem. I think just taking that question a step further is how you go about combating that. I think having very shortened feedback cycles is very important. So, if you have to make any sort of adjustments—gone are the days, I think, when I started my career; waterfall was becoming unpopular, but the first project or two I was on was very waterfall-ish just because of the size of the project we worked on, we had to agree on lots of things; we were building something for six months. Versus, if you look at today, modern development methodologies like Agile, or Scaled Agile, a lot of the feedback happens pretty regularly, which can be exhausting, but decisions are made all the time.
Emily: Do you think in addition to mistranslations, do you think there are any misconceptions? And I'm talking about sort of on both sides of this equation, you know, business leaders or business motivations, and then also technologists, and let's refocus back to talk about cloud-native in particular. What sort of misconceptions do you think are sort of floating out there about cloud-native and what it means?
Ravi: Yeah, so what cloud-native means—it means something different to everybody. If you listen to your podcast for a couple episodes and asked any one of the guests the question, we all would give you a different answer. So, in my definition of cloud-native—and then I'll get back to what some of the misconceptions are—I have a very basic definition: cloud-native means two pillars. It means your architecture, or your platform, needs to be ephemeral, and it needs to be immutable. So, it needs to be able to be short-lived, and be consistent, which are two things that are at odds with each other. But if you kind of talk to folks that, hey, they might be a little more slanted towards the business, they have this idea that cloud-native will solve all your problems. So, it reminds me a lot of big data back in the day. “Oh, if you have a Hadoop cluster, it will solve all of our logistics and shipping problems.” No. That's the technology. If you have Kubernetes, it will solve all of our problems. No. That's the technology. It's just a conduit of helping you make changes. And so just making sure that they understand that, hey, cloud-native doesn't mean that you get the checkmark that, “Oh, you know what? We're stable. We're robust. We can scale by using all cloud-native technologies,” because cloud-native technologies are actually quite complicated. If you're introducing a lot of complexity to your architecture, does it make sense? Does that make sense? Does it give you the value you're looking for? Because at the end of the day, and this is kind of something, the older I get, the more I believe it, is that your customers don't care how you did something; they care what the result is.
So, if your web application's up, they don't care if you're running a simple LAMP stack, they just care that the application is up, versus using the latest Kubernetes stack, but using some sort of cloud-native NoSQL database, and we're using [Istio], and we're using, pick your flavor du jour of cloud-native technology, your end customer actually doesn't care how you did it. They care what happened.
Emily: We can talk about misconceptions that other people have, but is there anything that continues to surprise you?
Ravi: Yeah, I think the biggest misconception is that there's very limited choice. And so I'll play devil's advocate, I think the CNCF, the Cloud Native Computing Foundation, there's lots of projects, I've seen the CNCF, they have something called the CNCF Landscape, and I seen it grow from 200 cards, it was 1200 cards at KubeCon, I guess, end of last year in San Diego, and it's hovering around 1500 cards. So, these cards means there's projects or vendors that play in this space. Having that much choice—this is usually surprising to people because they—if you're thinking of cloud-native, it's like saying Kleenex today, and you think of Kubernetes or other auxiliary products or projects that surround that. And a lot of the misconception would be that it's helping solve for complexity. It's the quintessential computer science argument. All you do in computer science is move complexity around like an abacus. We move it left to right. We're just shifting it around, and so by leveraging certain technologies there's a lot of complication, a lot of burden that's brought in. For example, if you want to leverage, let's say, a service mesh like Istio, Istio will not solve all your networking problems. In fact, it's going to introduce a whole set of problems. And I could talk about my biggest outage, and one of the things I see with cloud-native is a lot of skills are getting shifted left because you're codifying areas that were not codified before. But that's something I would love to talk about.
Emily: Tell me about your biggest outage that sounds interesting.
Ravi: Yeah, I didn't know how it would manifest itself. It wasn't, I think, until, like, years later that I had the aha moment. I used to think it was me, it probably still is me, but—so the year was 2013, and I was working for a client, and we were—it's actually a large news site—and so we were in the midst of modernizing their application, or their streaming application. And so it was one of the first applications to actually go to AWS. And so my background is in Java, so I was a Java software engineer, or J2EE or JEE engineer, and having to start working more in infrastructure was kind of a new thing, so I was very fortunate that up until 2013-ish I didn't really touch the infrastructure. I was immune to that. And now, becoming a more senior engineer, I was in charge of the infrastructure for the application—which is kind of odd—but what ended up happening—this is going to be kind of funny—since I was on one of the first teams to go to AWS, the networking team wouldn't touch the configurations. So, when we were testing things, and [unintelligible] environments, we had our VPC CIDR rules—so the traffic rules—wide open. And then as we were going into production, there were rules where we had to limit traffic to a CIDR range. So, up until 2013, I thought a C-I-D-R, like a cider, was something you drink. I was like, “What? Like apple cider?” So, this shows you how much I know.
So, basically, I had to configure the VPC or Virtual Private Cloud networking rules. Finally, when we deployed the application, unbeknownst to me, CIDR calculation is a significant-digit calculation. So, the larger the number you divide by, the more IPs you let in. And so instead of dividing by 16, I divided by 8. I was like, “Oh, you'll have a bigger number if you divide by a smaller number.” I ended up cutting off half the traffic of the internet when we deployed to production. So, that was a very not smooth way of doing something. But how did this manifest itself? So, the experts, who would have been the networking team, refused to look at my configuration because it was a public cloud. “Nope, you don't have a slot in our data center, we won't look at it.” And poor me, as a JEE or J2EE engineer, I had very little networking experience. Now, if you fast forward to what this means today, a lot of the cloud-native stack, again, slicing and dicing these CNCF cards, a lot of this is exposing different, let's say, verticals or dimensions to engineers that they haven't really seen before. A lot of it is networking related, a lot of it can be storage related. And so, as a software engineer, these are verticals that I never had to deal with before. Now, it's kind of ironic that in 2020, hey, yes, you will be dealing with certain configurations because, hey, it's code. So, it's shifting the burden left towards the developer that, “Oh, you know what, you know networking—” or, “You do need to know your app, so here's some Istio rules that you need to include in your packaging of your application.” Which might make folks scratch their heads. So, yeah, again, it's like shifting complexity away from folks that have traditional expertise towards the developer. Now, times are changing. I seen a lot of this in years gone by, “Oh, no. These are pieces of code. We don't want to touch it.” Being the more traditional or legacy operations team, versus today, everybody—it's kind of the merging of the two worlds. The going joke is all developers are becoming infrastructure engineers, and infrastructure engineers are becoming software engineers. So, it's the perfect blend of two worlds coming together.
Emily: That's interesting. And I now think I understand what you mean by skills shifting left. Developers have to know more, and more, and more. But I'm also curious, there's also people who talk about how Kubernetes, one of its failures is that it forces this shift left of skills and that the ideal world is that developers don't need to interact with it at all. That's just a platform team. What do you think about that?
Ravi: These are awesome questions. These are things I'm very passionate about. I definitely seen the evolution. So, I've been pretty fortunate that I was jumping on the application infrastructure shift around 2014, 2015, so right when Kubernetes was coming of age. So, most of my background was in distributed systems. So, I was making very large distributed Java applications. And so when Kubernetes came out, the teams that I worked on, the applications that were deployed to Kubernetes were actually owned by the app dev team. The infrastructure team wouldn't even touch the Kubernetes cluster. It was like, “Oh, this is a development tool. This is not a platform tool.” The platform teams that I was interacting with in 2015, 2016, as Kubernetes became more popular than ever, they were the legacy—well, hate to say legacy because it's kind of my background too—they were the remaining middleware engineers.
We maintained a web server cluster, we maintained the message broker cluster, we maintained XYZ distributed Java infrastructure cluster. And so when looking at a tool like Kubernetes, or even at the different platform-as-a-service offerings—so the PaaS I leveraged early- to mid-2010s was Red Hat OpenShift, before and after the Kubernetes migration inside of OpenShift. And so looking at how teams are set up, it used to be, “Oh, this is an app dev item. This is what houses your application.” Versus today, because the workloads going onto platforms such as Kubernetes are so critical, you really need that systems engineering bubble of expertise. You really need those platform engineers to understand how to best scale, how to best purvey, and maintain a platform like Kubernetes. Also, one of the odd things is—going back to your point, Emily—why things were tossed over to the development team; or, going back to being a software engineer myself, do we care what the end system is? So, it used to be—I'll talk about Java-land here for a minute, to give you kind of a long-winded answer: back in Java-land, we really used to care about the target system, not necessarily for an application that has one node, but if we had to develop a clustered application. So, if we have more than one node talking to each other, or a stateful application, we really had to start developing to a specific target system. Okay, I know how JBoss WildFly clusters or I know how IBM WebSphere or WebLogic clusters. And so when we're designing our applications, we had to make sure that we play well into those clustering mechanisms. With Kubernetes, since it's generic, you don't necessarily have to play into those clustering mechanisms because there's a basic understanding. But that's been the biggest Achilles heel in Kubernetes. It wasn't designed for those types of workloads, stateful workloads that don't like dying very often. That's kind of been the push or pull. It's just a tool, and it's very generic, so you can assume that the target platform will behave a certain way. And you slowly start backing off the case that you're building to a specific target platform. But as Kubernetes has evolved, especially with the operator framework, you actually are starting to build to Kubernetes in 2018, 2019, 2020.
Emily: It actually brought up a question for me that, at risk of sounding naive myself, I feel like I never meet anybody who introduces themselves as a platform engineer. I meet all these developers, everyone's a developer evangelist, for example, or their background is as a developer, I feel like maybe once or twice, someone has introduced themselves as, “I'm a platform engineer,” or, “I'm an operations specialist.” I mean, is that just me? Is that a real thing?
Ravi: They're very real jobs. I think… it's like saying DevOps engineer: it means something else depending on who you talk to. So, I'll harp on ‘platform engineer.' Kind of like the evolution of the platform engineer: if you had talked to me in 2013, 2014, and said, “Hey, I'm a platform engineer,” I would think that you're a software engineer focused on platform tools. Like, “Hey, I focus on authentication, authorization.” You're building—let's say we had a dozen people on this call and we're working for Acme Incorporated, there's modules that transcend every one of our teams. Let's say logging, or let's say login, or let's say, some sort of look and feel.
So, the platform engineer, or the development-focused platform engineering team, would make common reusable modules throughout. Now, with the great rise of platforms as a service, like PCF, and OpenShift, and DCOS, there was kind of a shift. The middleware engineers that were maintaining the message broker clusters, maintaining your web application server clusters, they're kind of shifting towards one of those platforms. Even today, Kubernetes, pick your provider du jour of Kubernetes. And so those are where the platform engineers are today. “Hey, I'm a platform engineer. I focus on OpenShift and Kubernetes.” Usually, they're very vertically focused on one or more specific platforms. And operations folks can run a very big gamut. If you put “operations” in quotes, usually they're systems or infrastructure engineers that are very focused on the infrastructure where the platforms run.
Emily: I'm obviously a words person, and it just seems like there's this vocabulary issue where everybody knows what a developer is, and so it's easy to say, “Oh, I'm a developer.” But then everything else that's related to engineering, there's not quite as much specificity, precisely because you said everybody has a slightly different understanding. It's kind of interesting.
Ravi: Yeah, it's like, I think as an engineer, we're not one for titles. So, I think an engineer is an engineer. It's so funny, a good example of that is Tim Berners-Lee, the person who created WWW, the World Wide Web. If you looked at his LinkedIn, he just says he's a web developer. And he invented WWW. So, usually engineering-level folks—myself included—are not ones for titles.
Emily: The example that you gave regarding the biggest outage of your career was basically a skills problem. Do you think that there's still a skills or knowledge issue in the cloud-native world?
Ravi: Oh, absolutely. We work for incentivization. You know, my mortgage is with PNC, and they require a payment every month, unfortunately. So, I do work for an employer. Incentivization is key. So, resume chasing, conference chasing: there's been some of that in the cloud-native world, but what ends up happening more often than not is that we're continuously shifting left. A talk I like to give is called, “The Engineering Burden is on the Rise.” Take a look at what, let's say, a software engineer was required to do in 2010 versus what a software engineer is required to do today in 2020. And there's a lot more burden in infrastructure that, as a software engineer, you didn't have to deal with. Now, this has to do with two things, or actually one particular movement. There's a movie company, or a video company, in Los Gatos, California, and there's a book company in South Lake Union in Seattle. And these two particular companies have given rise to what's called the full lifecycle developer. Basically, you operate what you run, or if you write it, you run it. So, that means that if you write a piece of code, you're in charge of the operations. You have support, you're in charge of the SLAs, SLOs, SLIs. You're ultimately responsible if a customer has a problem. And can you imagine the number of people, the amount of skill set that requires? There's this concept of a T-shaped skill set: you have to have experience in so many different platforms that it becomes a very big burden.
As an engineer, I don't envy anybody entering a team that's leveraging a lot of cloud-native technology, because most likely a lot of that onus will fall on the software engineer: to create the deployable, to define how you build it, to fly [unintelligible] in your CI stack, write the configuration that builds it, write the configuration that deploys it, write the networking rules, write how you test it, write the login interceptors. So, there's a lot going on. Emily: Is there anything else that you want to add about your experience with cloud-native that I haven't really thought to ask yet? Ravi: It's not all doom and gloom. I'm very positive on cloud-native technologies. I think it's a great equalizer. Kind of going back, and this might be a more intrinsic, 30-second answer here: if I wanted to learn certain skills in 2010, I basically had to be working for a firm. So, in 2010, I was working for IBM. There were certain distributed Java problems I wanted to solve, and I basically had to be working for a firm because the software licensing costs were so expensive, and that technology wasn't very democratized. Looking at cloud-native technology today, there's a big, big push for open source, and open source is an R&D methodology. That's what open source is; it helps alleviate acquisition problems, but not necessarily adoption problems. And you can learn a lot. Hey, you can pick up any project and just try to learn it, try to run it. Distributed-system skills that were very guarded, I would say, a decade ago are being opened up to the masses. And so there's a lot to drink from, and you can drink as much as you want from the CNCF, or cloud-native, garden hose. Emily: Do you have a software engineering tool that you cannot live without? Ravi: Recently, because I deal in a lot of YAML, I need a YAML linter. YAML is a whitespace-sensitive language. As a human, I can't always tell you where the spaces are; like, you have three spaces on one line, and the next line has four spaces. So, I use a YAML linter. It puts periods in for me so I can count them, because it's happened multiple times that my demo wasn't syntactically correct because I missed a space and couldn't see it on my screen. Emily: And how can listeners connect with you? Ravi: Oh, yeah. You can hit me up on Twitter @ravilach, R-A-V-I-L-A-C-H. Or come visit us at Harness at www.harness.io. I run the Harness community at community.harness.io. We have a Slack channel and a Discourse, and I'm always excited to interact with people. Emily: Thanks for listening. I hope you've learned just a little bit more about the business of cloud-native. If you'd like to connect with me or learn more about my positioning services, look me up on LinkedIn: I'm Emily Omier, that's O-M-I-E-R, or visit my website, which is emilyomier.com. Thank you, and until next time. Announcer: This has been a HumblePod production. Stay humble.
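A concrete instance of the whitespace trap Ravi mentions: in the made-up snippet below, one key is indented a single space too far, which is nearly invisible on screen but breaks the file. A linter such as yamllint catches it before a demo does; the file name and contents are illustrative only:

    # deploy.yaml: 'ports' is indented three spaces while 'image' uses two
    services:
      web:
        image: nginx
         ports:
          - "80:80"

    $ yamllint deploy.yaml   # reports the indentation/syntax error with its line and column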
Today on the show we are very lucky to be joined by Chris Umbel and Shaun Anderson from Pivotal to talk about app transformation and modernization! Our guests help companies to update their systems and move into more up-to-date setups through the Swift methodology, and our conversation focuses on this journey from legacy code to a more manageable solution. We lay the groundwork for the conversation, defining a few of the key terms and concerns that arise for typical clients, and then Shaun and Chris share a bit about their approach to moving things forward. From there, we move into the Swift methodology and how it plays out on a project before considering the benefits of further modernization that can occur after the initial project. Chris and Shaun share their thoughts on measuring success, the advantages of their system, and how to avoid rolling back towards legacy code. For all this and more, join us on The Podlets Podcast, today! Follow us: https://twitter.com/thepodlets Website: https://thepodlets.io Feedback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues Hosts: Carlisia Campos Josh Rosso Duffie Cooley Olive Power Key Points From This Episode: A quick introduction to our two guests and their roles at Pivotal. Differentiating between application modernization and application transformation. Defining legacy and the important characteristics of technical debt and pain. The two-pronged approach at Pivotal; focusing on apps and the platform. The process of helping companies through app transformation and what it looks like. Overlap between the Java and .NET worlds; lessons to be applied to both. Breaking down the Swift methodology and how it is being used in app transformation. Incremental releases and slow modernization to avoid roll back to legacy systems. The advantages that the Swift methodology offers a new team. Possibilities of further modernization and transformation after a successful implementation. Measuring success in modernization projects in an organization using initial objectives. Quotes: “App transformation, to me, is the bucket of things that you need to do to move your product down the line.” — Shaun Anderson [0:04:54] “The pioneering teams set a lot of the guidelines for how the following teams can be doing their modernization work and it just keeps rolling down the track that way.” — Shaun Anderson [0:17:26] “Swift is a series of exercises that we use to go from a business problem into what we call a notional architecture for an application.” — Chris Umbel [0:24:16] “I think what's interesting about a lot of large organizations is that they've been so used to doing big bang releases in general.
This goes from software to even process changes in their organizations.” — Chris Umbel [0:30:58] Links Mentioned in Today’s Episode: Chris Umbel — https://github.com/chrisumbel Shaun Anderson — https://www.crunchbase.com/person/shaun-anderson Pivotal — https://pivotal.io/ VMware — https://www.vmware.com/ Michael Feathers — https://michaelfeathers.silvrback.com/ Steeltoe — https://steeltoe.io/ Alberto Brandolini — https://leanpub.com/u/ziobrando Swiftbird — https://www.swiftbird.us/ EventStorming — https://www.eventstorming.com/book/ Stephen Hawking — http://www.hawking.org.uk/ Istio — https://istio.io/ Stateful and Stateless Workload Episode — https://thepodlets.io/episodes/009-stateful-and-stateless/ Pivotal Presentation on Application Transformation: https://content.pivotal.io/slides/application-transformation-workshop Transcript: EPISODE 19 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.0] CC: Hi, everybody. Welcome back to The Podlets. Today, we have an exciting show. It's myself, Carlisia Campos. We have our usual guest hosts, Duffie Cooley, Olive Power and Josh Rosso. We also have two special guests, Chris Umbel. Did I say that right, Chris? [0:01:03.3] CU: Close enough. [0:01:03.9] CC: I should have checked before. [0:01:05.7] CU: Umbel is good. [0:01:07.1] CC: Umbel. Yeah. I'm not even a native English speaker, so you have to bear with me. Shaun Anderson. Hi. [0:01:15.6] SA: You said my name perfectly. Thank you. [0:01:18.5] CC: Yours is more standard American. Let's see, the topic of today is application modernization. Oh, I just found a word I cannot pronounce. That goes on my non-pronounceable words list. Also known as application transformation; are those two terms correctly used interchangeably? The experts in the house should say something. [0:01:43.8] CU: Yeah. I don't know that I would necessarily say that they're interchangeable. They're used interchangeably, I think, by the general population though. [0:01:53.0] CC: Okay. We're going to definitely dig into that, how it does not make sense to use them interchangeably, because just by the meaning, I would think so, but I'm also not in that world day-to-day the way Shaun and Chris are. By the way, please give us a brief introduction, the two of you. Why don't you go first, Chris? [0:02:14.1] CU: Sure. I am Chris Umbel. I believe it was probably actually pronounced Umbel in Germany, but I go with Umbel. My title this week is the – I think .NET App Transformation Journey Lead. Even though I focus on .NET modernization, it doesn't end there. I touch a little bit of everything with Pivotal. [0:02:34.2] SA: I'm Shaun Anderson and I share the same title of the week as Chris, except where you say .NET, I would say Java. In general, we play the same role and have slightly different focuses, but there's a lot of overlap. [0:02:48.5] CU: We get along, despite the .NET and Java thing. [0:02:50.9] SA: Usually. [0:02:51.8] CC: You both are coming from Pivotal, yeah? As most people should know, but I'm sure now everybody knows, Pivotal was just recently acquired. As of this date, which is, what are we? End of January.
This episode is going to take a while to release, but Pivotal was just acquired by VMware. Here we are. [0:03:10.2] SA: It's good to be here. [0:03:11.4] CC: All right. Somebody, one of you, maybe let's say Chris, because you brought this up: how does application modernization differ from application transformation? Because I think we need to lay the ground and lay the definitions before we can go off and talk about things and sound like experts, and make sure that everybody can follow us. [0:03:33.9] CU: Sure. I think you might even get different definitions, even from within our own practice. I'll at least lay it out as I see it. I think it's probably consistent with how Shaun's going to see it as well, but it's what we tell customers anyway. At the end of the day, app transformation is the larger [inaudible] bucket. That's going to include, say, just the re-hosting of applications: taking applications from point A to some new point B, without necessarily improving the state of the application itself. We'd say that that's not necessarily an exercise in paying down technical debt; it's just making some change to an application or its environment. Then on the modernization side, that's when things start to get potentially a little more architectural. That's when the focus becomes paying down technical debt and really improving the application itself, usually from an architectural point of view, and things start to look maybe a little bit more like rewrites at that point. [0:04:31.8] DC: Would you say that the transformation is more in line with re-platforming, for those of you that might think about it that way? [0:04:36.8] CU: We'd say that app transformation might include re-platforming and also the modernization. What do you think of that, Shaun? [0:04:43.0] SA: I would say transformation is not just the re-platforming, re-hosting and modernization, but also the practice of figuring out which should happen as well. There's a little bit more meta in there. Typically, app transformation to me is the bucket of things that you need to do to move your product down the line. [0:05:04.2] CC: Very cool. I have two questions before we start really digging into the show, still to lay the ground for everyone. My next question will be: are we talking about modernizing and transforming apps so they go to the cloud? Or is there a certain cut-off where we start thinking, "Oh, things need to get done differently for them to be called native." Is there a differentiation, or is one the same as the other, like the process will be the same either way? [0:05:38.6] CU: Yeah, there's definitely a distinction. The re-platforming bucket, that re-hosting bucket of things, is about your target state. At least for us coming out of Pivotal, we definitely had a product focus, where we're probably only going to be doing work if it intersects with our product, right? We're going to be doing re-platforming targeted, say, typically at a cloud environment, usually Cloud Foundry or something to that effect. Then modernization, while we're usually doing that with customers who have been running our platform, there's nothing to say that you necessarily need a cloud, or any cloud, to do modernization. We tend to, based on who we work for, but you could say that those disciplines and practices really are agnostic to where things run. [0:06:26.7] CC: Sorry, I was muted. I wanted to ask Shaun if you wanted to add to that. Do you have the same view? [0:06:33.1] SA: Yeah. I have the same view.
I think part of what makes our process unique that way is we're not necessarily trying to target a platform for deployment when we're going through the modernization part, anyway. We're really looking at how we can design this application to be the best application it can be. It just so happens that that tends to be more 12-factor compliant, which is very cloud compatible, but we don't start by trying to aim for a particular platform. [0:07:02.8] CC: All right. If everybody allows me, after this next question, I'll let the other hosts speak too. Sorry for monopolizing, but I'm so excited about this topic. Again, in the spirit of understanding what we're talking about, what do you define as legacy? Because that's what we're talking about, right? We’re definitely talking about a move up, a move forwards. We're not talking about regression and we're not talking about scaling down. We're talking about moving up to a modern technology stack. That means, that implies, we're talking about something that's legacy. What is legacy? Is it contextual? Do we have a hard definition? Is there a best practice to follow? Is there something public people can look at? Okay, if my app or system fits this recipe, then it's considered legacy, like a diagnosis that has a consensus. [0:07:58.0] CU: I can certainly tell you how you can't necessarily define legacy. One of the ways is by the year that it was written. You can certainly say that there are shops who are writing legacy code today. They're still writing legacy code. As soon as they're done with a project, it's instantly legacy. There are people that offer definitions, like the Michael Feathers definition, which is, I think, any application that doesn't have tests. I don't know that that fits what our practice necessarily sees legacy as. Basically, anything that's accrued a significant amount of technical debt, regardless of when the application was written or conceived, fits into that legacy bucket. Really, our work isn't as concerned about whether something's legacy or not as much as: is there pain that we can solve with our practice? Like I said, we've modernized things that were, for all intents and purposes, quite modern in terms of the year they were written. [0:08:53.3] SA: Yeah. I would double down on the pain. Legacy to us often is something that was written as a prototype a year ago. Now it's ready to prove itself. It's going to be scaled up, but it wasn't built with scale in mind, or something like that. Even though it may be the latest technology, it just wasn't built for the load, for example. Sometimes legacy can be, the pain is, we have applications on a mainframe and we can't find COBOL developers, and we're leasing a giant mainframe and it's costing a lot of money, right? There are different flavors of pain. It also could be something as simple as a data center move. Something like that, where we've got all of our applications running on iron and we need to go to a virtual data center somewhere, whether it's cloud or on-prem. Each one of those to us is legacy. It's all about the pain. [0:09:47.4] CU: I think, as miserable as that might sound, that's really where it starts: listening to that pain and hearing directly from customers what that pain is. It sounds terrible when you think about it, that you're always in search of pain, but that is indeed what we do, and we try to alleviate it in some way.
That pain is what dictates the solution that you come up with, because there are certain kinds of pain that aren't going to be solved with, say, a modernization approach, or a more platform-focused approach even. You have to listen and make sure that you're applying the right medicine to the right pain. [0:10:24.7] OP: Seems like an interesting thing, bringing in what you said, Chris, and then what you said earlier, Shaun. Shaun, you had mentioned the target platform doesn't necessarily matter, at least upfront. Then Chris, you had implied bringing the right thing in to solve the pain, or to help remedy the pain to some degree. I think what's maybe interesting about the perspectives of those on this call, and you two, is that a lot of times our entry points are a lot more focused on infrastructure and platform teams, where they have these objectives to solve, like cost and ability to scale and so on and so forth. It seems like your entry point, at least historically, is maybe a little bit more focused on finding pain points on more of the app side of the house. I'm wondering if that's a fair assessment, or if you could speak to how you find opportunities and what you're really targeting. [0:11:10.6] SA: I would say that's a fair assessment from the perspective of our services team. We're mainly app-focused, but it's almost a two-pronged approach, where there's platform pain and application pain. What we've seen is often solving one without the other is not a great solution, right? I think that's where it's challenging, because there's so much to know, right? It's hard to find one team or one person who can point out the pain on both sides. It just depends on, often, how the customer approaches us. If they are saying something like, “We’re a credit card company and we're getting our butts kicked by this other company, because they can do biometrics and we can't yet, because of the limitations of our application,” then we would approach it from the app-first perspective. If it's another pain point, where operations, day-two operations, is really suffering, where we can't scale, where we have issues that the platform is really good at solving, then we may start there. It always tends to merge together in the end. [0:12:16.4] CU: You might be surprised how much variety there is in terms of the drivers for people coming to us. There are a lot of cases where the work came to us by way of the platform work that we've done. It started with our sister team who focuses on the platform side of things. They solve the infrastructure problems ahead of us and then we close things out on the application side. If our account teams and our organization are really listening to each individual customer, you'll find that the pain is drastically different, right? There are some cases where the driver is cost and that's an easy one to understand. There are also drivers that are usually like a date, such as: this data center goes dark on this date and I have to do something about it. If I'm not out of that data center, then my apps no longer run. The solution to that is very different than the solution you would have to, "Look, my application is difficult for me to maintain. It takes me forever to ship features. Help me with that." There are two very different solutions to those problems, both of which are things that come our way. It's just that the former probably comes in by way of our platform team. [0:13:31.1] DC: Yeah, that’s an interesting space to operate in, the application transformation and stuff.
I've seen entities within some of the larger companies that represent this field as well. Sometimes that's called production engineering, or there are a few other examples of this that I'm aware of. I'm curious how you see that happening within larger companies. Do you find that there is a particular size entity that is actually striving to do this work with the tools that they have internally, or do you find that typically, most companies just need something like an application transformation practice, so you can come in and help them figure this part out? [0:14:09.9] SA: We've seen a wide variety, I think. One of them is maybe a company really has a commitment to get to the cloud, and they get a platform, and then they start putting some simple apps up, just to learn how to do it. Then they get stuck with, “Okay. Now how do we, with trust, get some of the workloads that are running our business onto it?” They will often bring us in at that point, because they haven't done it before. Experimenting with something that valuable to them usually means that they slow down. There are other times where we've come in to modernize applications, whether it's a particular business unit, for example, that may have been trying to get off the mainframe for the last two years. They’re smart people, but they get stuck again, because they haven't figured out how to do it. What often happens, and Chris can talk about some examples of this, is once we help them figure out how to modernize, or the recipes to follow to start getting their systems systematically onto the platform and modernized, they tend to like forming a competency area around it, where they'll start to staff it with the people who are really interested, and they take over where we started. [0:15:27.9] CU: There might be a little bit of bias to that response, in that typically, in order to even get in the door with us, you're probably a Fortune 100, or at least a 500, or government, or something to that effect. We're going to be seeing people that, one, have a mainframe to begin with, and two, would have, say, the capacity to fund a dedicated transformation team, or to build a unit around that. You could say that the smaller an organization gets, maybe the easier it is to just have the entire organization write software the modern way to begin with. At least on the large side, we do tend to see people try to build, they'll use different names for it, a dedicated center of excellence or practice around modernization. Our hope is to help them build that and hopefully put them in a position that that can eventually disappear, because eventually, you should no longer need that as a separate discipline. [0:16:26.0] JR: I think that's an interesting point. For me, I'd argue that you do need it going forward, because of the cognitive overhead between understanding how your application is going to thrive on today's complex infrastructure models and understanding how to write code that works. I think that one person having all of that in their head all the time is a little too much, a little too far to go sometimes. [0:16:52.0] CU: That's probably true. When you consider the size of the portfolios and the size of the modernization backlog that people have, I mean, people are going to be busy on that for a very long time anyway. Even if it is finite, it still has a very long life span at a minimum. [0:17:10.7] SA: At a certain point, it becomes like painting the Golden Gate Bridge.
As soon as you finish, you have to start again, because of technology changes, or business needs, and that sort of thing. It's probably a very dynamic organization, but there's a lot of overlap. The pioneering teams set a lot of the guidelines for how the following teams can be doing their modernization work, and it just keeps rolling down the track that way. It may be that people are busy modernizing applications off of WebLogic, or WebSphere, and it takes two years or more to get that completed for the enterprise. It was 20, 50 different projects. To them, it was brand-new each time, which is cool, actually, to come into that. [0:17:56.3] JR: I'm curious, and I'd definitely love to hear from Olive; I have one more question before I pass it along, and I think we'd all love to hear your thoughts on this. The question I have is: when you're going through your day-to-day, working on .NET and Java applications and helping people figure out how to go about modernizing them, what we've talked about so far represents some of the deeper architectural issues. You've already mentioned 12-factor apps and being able to move, or thinking about the way that you frame the application as far as the inputs it takes to configure it, or thinking about the lifecycle of those things. Are there some other common patterns that you see across the two practices, Java and .NET, that you think are concrete examples of things people should take away from this episode, that they could look at in their own apps if they're trying to get ahead of the game a little bit? [0:18:46.3] SA: I would say a big part of the commonality that Chris and I both work on a lot is that we have a methodology called the Swift methodology, which we use to help discover how the applications really want to behave and define a notional architecture that is, again, agnostic of the implementation details. We'll often come in with the same process, and I don't need to be a .NET expert in a .NET shop to figure out how the system really wants to be designed, or how you want to break things into microservices; the implementation is where those details come in. Chris and I both collaborate on a lot of that work. It makes you feel a little bit better about the output when you know that the technology isn't as important. You get to actually pick which technology fits the solution best, as opposed to starting with the technology and letting a solution form around it, if that makes sense. [0:19:42.4] CU: Yeah. I'd say the interesting thing is just how difficult it is, while we're going through the Swift process with customers, to get them not to get terribly attached to the nouns of the technology and the solution. They've usually gone in where it's not just a matter of the language, but they have something picked in their head already for data storage, for messaging, etc., and they're deeply attached to some of these decisions, deeply and emotionally attached to them. Fundamentally, when we're designing a notional architecture, as we call it, you should really be making decisions on what nouns you're going to pick based on that architecture, to use the tools that fit it. That's generally a bit of a process the customers have to go through. It's difficult for them to do that, because the more technical their stakeholders tend to be, often the more attached they are to the individual technology choices, and breaking that is a principal role for us.
[0:20:37.4] OP: Is there any help, or any investment, or any coordination with those vendors, the purveyors of the technologies that legacy applications are built on, or indeed the platforms they're running on? Is there any help on that side from those vendors with application transformation, or making those applications better? Or do organizations have to rely on a completely independent team like you guys to come in and help them with that? Do you understand my point? Like you mentioned WebLogic, WebSphere: do the purveyors of those platforms try and drive the transformation from within? Or do organizations who are running those apps have to rely on independent companies like you, or like us, to help them with that? [0:21:26.2] SA: I think some of it depends on what the goal of the modernization is. If it's something like, we no longer want to pay Oracle licensing fees, then of course the WebLogic teams aren't going to be happy to help. That's not always the case. Sometimes it's a case where we may have a lot of WebLogic. It's working fine, but we just don't like where it's deployed, and we'd like to containerize it, move it to Kubernetes, or something like that. In that case, they're more willing to help. At least in my experience, I've found that the technology vendors are rightfully focused just on upgrading things from their perspective, and they want to own the world, right? WebLogic will say, “Hey, we can do everything. We have clustering. We have messaging. We've got good access to data stores.” It's hard to find a technology vendor that has that broader vision, or the discipline to not try to fit their solutions into the problem when maybe they're not the best fit. [0:22:30.8] CU: It's a broad generalization, but specifically on the Java side it seems that, at least with app server vendors, the status quo is usually serving them quite well. Quite often, we have a bit of an adversarial relationship with them on occasion. I could certainly say that within the .NET space, we've worked relatively collaboratively with Microsoft on things like Steeltoe, which is, I wouldn't say a Spring Boot analog, but at least a microservice library that helps people achieve 12-factor cloud-nativeness. That's something where, I guess, Microsoft represents both the legacy side but also the future side, and we're part of a solution together there. [0:23:19.4] SA: Actually, that's a good point, because the other way that we're seeing vendors be involved is in creating operators on the Kubernetes side, or Cloud Foundry tiles: something that makes it easy for their system to still be used in the new world. That's definitely helpful as well. [0:23:38.1] CC: Yeah, that's interesting. [0:23:39.7] JR: Recently, some people on my team and I went through a training from both Shaun and Chris, interestingly enough in Colorado, about this thing called the Swift methodology. I know it's a really important methodology to how you approach some of the application transformation-like engagements. Could you two give us a high-level overview of what that methodology is? [0:24:02.3] SA: I want to hear Chris go through it, since I always answer that question first. [0:24:09.0] CU: Sure. I figured since you were the inventor, you might want to go with it, Shaun, but I'll give it a stab anyway. Swift is a series of exercises that we use to go from a business problem into what we call a notional architecture for an application.
The one thing that you'll hear Shaun say all the time, which I think is pretty apt, is that we're trying to understand how the application wants to behave. This is a very analog process, especially at the beginning. It's one where we get people who can speak about the business problem behind an application and the business processes behind an application. We get them into a room, a relatively large room typically, with a bunch of wall space, and we go through a series of exercises with them where we tease that business process apart. We start with a relatively lightweight version of Alberto Brandolini's EventStorming method, where we map out with the subject matter experts what all of the business events that occur in a system are. That is a non-technical exercise, a completely non-technical exercise. As a matter of fact, all of this uses sticky notes and arts and crafts. After we've gone through that process, we transition into the Boris diagram, which is an exercise of Shaun's design, where we take the domains, or at least the service candidates, that we've extrapolated from that EventStorming and start to draw out a notional architecture: an 80% idea of what we think the architecture is going to look like. We're going to do this for thin slices of that business problem. At that point, it starts to become something that a software developer might be interested in. We have an exercise called Snappy that generally occurs concurrently, which translates that message-flow Boris diagram into something that's at least a little bit closer to what a developer could act upon. Again, these are sticky-note and analog exercises that generally go on for about a week or so, things that we do interactively with customers, purely non-technical, at least at first, so that we can understand that problem and tell you an architecture that you can then act on. We try to position this as: customer, you already have all of the answers here. What we're going to do as facilitators of these exercises is try to pull those out of your head. You just don't know how to get to the truth, but you already know that truth, and we're going to design this architecture together. How did I do, Shaun? [0:26:44.7] SA: I couldn't have said it better myself. I would say one of the interesting things about this process is the reason it was developed the way it was: in the world of technology, and especially among engineers, I've definitely seen that you have two modes of thought when you come from the business world to the technical world. Often, engineers will approach a problem in a very different, very focused, blindered way compared to business folks. Ultimately, what we try to keep in mind is that the purpose of the software is to enable the business to run well. In order to do that, you really need to understand, at least at a high level, what the heck the business is doing. Surprisingly and almost consistently, the engineering team doing the work is separated from the business team enough that it's like playing the telephone game, right? Where the business folks say, “Well, I told them to do this.” The technical team is like, “Oh, awesome. Well then, we're going to use all this amazing technology and build something that really doesn't support you.” This process really brings everybody together to discover how the system really wants to behave. Also, as a side effect, you get everybody agreeing that yes, that is the way it's supposed to be.
It's exciting to see teams come together that actually never even worked together before. You see the light bulbs go on and say, “Oh, that's why you do that.” The end result is, in a week, we can go from nobody really knowing each other, or quite understanding the system as a whole, to having a backlog of work that we can prioritize based on the learnings that we have, and feeling pretty comfortable that the end result is going to be pretty close to how we want to get there. Then the biggest challenge is defining how we get from point A to point B. Part of the layering of the Swift method is knowing when to ask those questions. [0:28:43.0] JR: A micro follow-up, and then I'll keep my mouth shut for a little bit. Is there a place that people could go online to read about this methodology, or just get some ideas of what you just described? [0:28:52.7] SA: Yeah. You can go to swiftbird.us. That has a more public-facing, high-level overview of how the methodology works. Then there are also internal resources that are constantly being developed as well. That's where I would start. [0:29:10.9] CC: That sounds really neat. As always, we are going to have links in the show notes for all of this. I checked out the website for the EventStorming book. There is a resources page there that has a list of a bunch of presentations. Sounds very interesting. I wanted to ask Chris and Shaun, have you ever seen, or heard of, a case where a company went through the transformation or modernization process and then rolled back to their legacy system for any reason? [0:29:49.2] SA: That's actually a really good question. It implies that often, the way people think about modernization would be more of a big bang approach, right? Where at a certain point in time, we switch to the new system, and if it doesn't work, then we roll back. Part of what we try to do is have incremental releases, where we're actually putting small slices into production, so you're not rolling back a whole system from modern back to legacy. It's more that you have a week's worth of work going into production for one of those thin slices, like Chris mentioned. If that doesn't work, or there's something unexpected about it, then you're rolling back just a small chunk. You're not really jumping off a cliff for modernization. You're really taking baby steps. If it's two steps forward and one step back, you're still making a lot of really good progress. You're also gaining confidence as you go that in the end, in two years, you're going to have a completely shiny new modern system that you're comfortable with, because you're getting there an inch at a time, as opposed to taking a big leap. [0:30:58.8] CU: I think what's interesting about a lot of large organizations is that they've been so used to doing big bang releases in general. This goes from software to even process changes in their organizations. They’ve become so used to that that it often doesn't even cross their mind that it's possible to do something incrementally. We really do oftentimes have to spend time getting buy-in from them on that approach. You'd be surprised that even in industries that you'd think would be fantastic at managing risk, when you look at how they actually deal with the deployment and rolling out of software, they're oftentimes taking approaches that maximize their risk. There's no surer way to make something riskier than by doing a big bang.
Yeah, as Shaun mentioned, the specifics of Swift are to find a way to understand where, and get a roadmap for how, to carve out incremental slices, so that you can strangle a large monolithic system slowly over time. That's something that's pretty powerful. Once someone gets bought in on that, they absolutely see the value, because they're minimizing risk. They're making small changes. They're easy to roll back one at a time. You might see people who would stop somewhere along the way, and we wouldn't necessarily say that that's a problem, right? Just like not every app needs to be modernized, maybe there are portions of systems that could stay where they are. Is that a bad thing? I wouldn't necessarily say that it is. Maybe that's the best way for that organization. [0:32:35.9] DC: We've bumped into this idea now a couple of different times, and I think that both Chris and Shaun have brought this up. It's a little prelude to a show that we are planning on doing. One of the operative quotes from that show is, “the greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.” It's a quote attributed to Stephen Hawking. It speaks exactly to that, right? When you come to a problem with a solution already in your mind, it is frequently difficult to understand the problem on its merits, right? It's really interesting seeing that crop up again in this show. [0:33:08.6] CU: I think, oftentimes, the advantage of a very discovery-oriented method such as Swift is that it allows you to start from scratch with a problem set, with people maybe that you aren't familiar with, so you don't have some of that baggage and can ask the dumb questions to get to some of the real answers. Another phrase that I know Shaun likes to use is that our role as facilitators of this method is to ask dumb questions. I mean, you just can't put enough value on that, right? The only way that you're going to break that established thinking is by asking questions at the root. [0:33:43.7] OP: One question, actually: there was something recently that happened in the Kubernetes community which I thought was pretty interesting, and I'd like to get your thoughts on it. Istio, which is a project that operates as a service mesh, I'm sure you all are familiar with it, has recently decided to unmodernize itself in a way. It was originally developed as a set of microservices. They have had no end of difficulty in optimizing the different interactions between those services and the nodes. Then recently, they decided this might be a good example of when to monolith, versus when to microservice. I'm curious what your thoughts are on that, or if you have familiarity with it. [0:34:23.0] CU: I'm not going to necessarily speak too much to this. Time will tell as to whether the monolithing that they're doing at the moment is appropriate or not. Quite often, the starting point for us isn't necessarily a monolith. What it is is a proposed architecture coming from a customer that they're proud of: this is my microservice design. You'll see a simple system with maybe hundreds of nano-services. The surprise that they have is that the recommendation from us, coming out of our Swift sessions, is that actually, you're overthinking this. We're going to take that idea that you have anyway and maybe shrink it down to, say, tens of services, or just a handful of services.
I think one of the mistakes that people make within enterprises, or on microservices at the moment, is to say, “Well, that's not a microservice. It’s too big.” Well, what size dictates a microservice, right? Oftentimes, we at least conceptually are taking and combining services based on the customer's architecture; it's very common. [0:35:28.3] SA: Monoliths aren't necessarily bad. I mean, people use the word almost as a pejorative: “Oh, you have a monolith.” In our world it's like, well, monoliths are bad when they're bad. If they're not bad, then that's great. The corollary to that is micro-servicing for the sake of micro-servicing isn't necessarily a good thing either. When we go through the Boris exercise, really what we're doing is showing how domains, or capabilities, relate to each other. That happens to map really well, in our opinion, to first-cut microservices, right? You may have an order service, or a customer service, that manages some of that. Just because we map capabilities and how they relate to each other, it doesn't mean the implementation can't even be a single monolith, componentized inside, right? That's part of what we try really hard to do: avoid the religion of monolith versus microservices, or even having to spend a lot of time trying to define what a microservice is to you. It's really more of, well, a system wants to behave this way. Now, surprise, you just did domain-driven design and mapped out some good 12-factor-compliant microservices, should you choose to build it that way, but there are other constraints that always apply at that point. [0:36:47.1] OP: Is there more traction in organizations implementing this methodology on net new business, rather than currently running businesses or applications? Are there situations, more so, that you have seen where a new project, or new functionality within a business, starts to drive and implement this methodology, and then it creeps through the other lines of business within the organization because that first one was successful? [0:37:14.8] CU: I'd say that based on the nature of who our customers are as an app transformation practice, who those customers are and what their problems are, we're generally used to having a starting point of a process or software that exists already. There's nothing at all to mandate that it has to be that way. As a matter of fact, with folks from our labs organization, we've used these methods in what you could probably call greener fields. At the end of the day, when you have a process, or even a candidate process, something that doesn't exist yet, as long as you can get those ideas onto sticky notes and onto a wall, this is a very valid way of turning ideas into an architecture and an architecture into software. [0:37:59.4] SA: We've seen that happen in practice a couple of times, where maybe a piece of the methodology was used, like EventStorming, just to get a feel for how the business wants to behave. Then to rapidly try something out in maybe more of an evolutionary architecture approach, an MVP approach: let's just build something from a user perspective, just to solve this problem, and then try it out. If it starts to catch hold, then iterate back and drill into it a little bit more and say, “All right. Now we know this is going to work.” We're modernizing something that may be two weeks old just because, hooray, we proved it's valuable.
We didn't necessarily have to spend as much upfront time on designing that as we would on a system that's already proven itself to be of business value. [0:38:49.2] OP: This might be a bit of a broad question, but what defines success in projects like this? I mean, we mentioned cost earlier, and maybe some of the drivers are to move off certain mainframes and things like that. If you're undergoing an application transformation, it seems to me like it's an ongoing thing. How do enterprises try to evaluate that return on investment? How does it relate to success criteria? I mean, faster release times, etc., potentially might be one, but how is that typically evaluated, with somebody internally saying, “Look, we are running a successful project”? [0:39:24.4] SA: I think part of what we try to do upfront is identify what the objectives are for a particular engagement. Often, those objectives start out with one thing, right? It's too costly to keep paying Oracle or IBM for WebLogic or WebSphere. As we go through and talk through what types of things we can solve, those objectives get added to, right? It may be that the first thing, our primary objective, is that we need to start moving workloads off of the mainframe, or workloads off of WebLogic or WebSphere, or something like that. There are other objectives that are part of this too, which can include things as interesting as developer happiness, right? They have a large team of 150 developers that are really just getting sick of doing the same old thing and want new technology. That's actually a success criterion, maybe down the road a little bit, but it's more of a nice-to-have. It's a long-winded way of saying: when we start these projects, when we incept them, we usually start out with, let's talk through what our objectives are and how we measure success, the key results for those objectives. As we're iterating through, we keep measuring ourselves against those. Sometimes the objectives change over time, which is fine, because you learn more as you're going through it. Part of that incremental, iterative process is measuring yourself along the way, as opposed to waiting until the end. [0:40:52.0] CC: Yeah, makes sense. I guess these projects, as you say, are continuous and constantly self-adjusting and self-analyzing, re-evaluating the success criteria as they go along. Yeah, so that's interesting. [0:41:05.1] SA: One other interesting note, though: personally, one way we like to measure ourselves is when one project is moving along and the customer starts to form other projects that are similar; then we know, “Okay, great. It's taking hold.” Now other teams are starting to do the same thing. We've become the cool kids and people want to be like us. The only reason that happens is when you're able to show success, right? Then other teams want to be able to replicate that. [0:41:32.9] CU: The customers' OKRs, oftentimes, can be a little bit easier to understand. Sometimes they're not. Typically, they involve time or money, where I'm trying to take release times from X to Y, or decrease my spend from X to Y. The way that we, I think, measure ourselves as a team is around how clean we leave the campsite when we're done. We want the customers to be able to run with this and to continue to do this work and to be experts. As much as we'd love to take money from someone forever, we have a lot of people to help, right?
Our goal is to help build that practice and center of excellence and expertise within an organization, so that as their goals or ideas change, they have a team to help them with that, and we can ride off into the sunset and go help other customers. [0:42:21.1] CC: We are coming up to the end of the episode, unfortunately, because this has been such a great conversation. It turned out to be more of an interview style, which was great. It was great getting the chance to pick your brains, Chris and Shaun. Going along with the interview format, I'd like to ask you: is there any question that wasn't asked, but you wish had been? The intent here is to illuminate what this process is, for us and for people who are listening, especially people who might be in pain but might be thinking this is just normal. [0:42:58.4] CU: That's an interesting one. I guess to some degree, that pain is unfortunately normal. That's just unfortunate. Our role is to help solve that. I think complacency is the absolute worst thing in an organization. If there is pain, rather than saying that the solution won't work here, let's start to talk about solutions to it. We've seen customers of all shapes and sizes. No matter how large or cumbersome they might be, we've seen a lot of big organizations make great progress. If your organization's in pain, you can use them as an example. There is light at the end of the tunnel. [0:43:34.3] SA: It's usually not a train. [0:43:35.8] CU: Right. Usually not. [0:43:39.2] SA: Other than that, I think you asked all the questions that we always try to convey to customers: how we do things, what modernization is. There's probably a little bit about re-platforming, doing the bare minimum to get something onto the cloud. We didn't talk a lot about that, but it's a little bit less meta, anyway. It's more technical and more recipe-driven as you discover what the workload looks like. It's more about: is it something we can easily do a CF push on, or just create a container and move it up to the cloud with minimal changes? Conceptually, there's not a lot of complexity. Implementation-wise, there are still a lot of challenges there too. They're just not as fun for me to talk about, anyway. [0:44:27.7] CC: Maybe that's a good excuse to have some of our colleagues back on here with you. [0:44:30.7] SA: Absolutely. [0:44:32.0] DC: Yeah, in a previous episode we talked about persistence and state, those sorts of things, and how they relate to your applications, and how, when you're thinking about re-platforming, that affects even just where you're planning on putting those applications. For us, that question comes up quite a lot. That's almost step zero: trying to figure out the state model and those sorts of things. [0:44:48.3] CC: That episode was named States in Stateless Apps, I think. We are at the end, unfortunately. It was so great having you both here. Thank you Duffie, Shaun, Chris, and, going by the order I'm seeing people on my video, Josh and Olive. Until next time. Please make sure to let us know your feedback. Subscribe. Give us a thumbs up. Give us a like. You know the drill. Thank you so much. Glad to be here. Bye, everybody. [0:45:16.0] JR: Bye all. [0:45:16.5] CU: Bye. [END OF EPISODE] [0:45:17.8] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing.
[END]
This week, Kyle and Dan discuss SSH port knocking, a possible MOS API for WebLogic patching, and the excellent DMSViewer tool. Dan also shares some thoughts on the Unified Compare Reports in PeopleTools 8.56+. Show Notes Classic Nav De-supported @ 3:00 MOS API for Patching @ 7:45 https://github.com/oracle/weblogic-deploy-tooling https://github.com/oracle/weblogic-image-tool GetMOSPatch Utility DMSViewer @ 17:00 Port Knocking SSH @ 20:30 Unified Compare Reports in 8.56/8.57 @ 29:30
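The port-knocking approach Kyle and Dan discuss is usually implemented with a small daemon that watches for a secret sequence of connection attempts before opening the SSH port to that client. A minimal sketch using the common knockd daemon; the knock sequence, firewall rule, and host names below are illustrative, not from the episode:

    # /etc/knockd.conf
    [openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        command     = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        tcpflags    = syn

    # From the client: knock first, then connect normally
    $ knock myserver 7000 8000 9000
    $ ssh user@myserver

Until the right sequence arrives, port 22 stays filtered, so casual scanners never see an open SSH service.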
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Critical Patch For WebLogic https://isc.sans.edu/forums/diary/Critical+Actively+Exploited+WebLogic+Flaw+Patched+CVE20192729/25050/ Exim Exploits Against Other Mail Servers https://isc.sans.edu/forums/diary/Quick+Detect+Exim+Return+of+the+Wizard+Attack/25052/ SANSFIRE Presentations (to be published soon) https://isc.sans.edu/presentations
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Sodinokibi Ransomware Exploits WebLogic Server Vulnerability https://blog.talosintelligence.com/2019/04/sodinokibi-ransomware-exploits-weblogic.html Facebook Leaking Sellers' Exact Locations https://www.7elements.co.uk/resources/blog/facebooks-burglary-shopping-list/ Revive Adserver Deserialization Vulnerability https://www.revive-adserver.com/security/revive-sa-2019-001/ AutoMacTC: Automating Mac Forensics Triage https://www.crowdstrike.com/blog/automating-mac-forensic-triage/ Kroll Artifact Parser And Extractor (KAPE) https://learn.duffandphelps.com/kape
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
WebLogic Update https://isc.sans.edu/diary.html?storyid=24890 Docker Hub Breach https://success.docker.com/article/docker-hub-user-notification
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Unpatched Vulnerability in WebLogic Exploited https://isc.sans.edu/forums/diary/Unpatched+Vulnerability+Alert+WebLogic+Zero+Day/24880/ Collecting Windows Service Accounts https://isc.sans.edu/forums/diary/Service+Accounts+Redux+Collecting+Service+Accounts+with+PowerShell/24882/ Confluence Vulnerability Exploited by GandCrab https://blog.alertlogic.com/active-exploitation-of-confluence-vulnerability-cve-2019-3396-dropping-gandcrab-ransomware/ New Microsoft Security Baseline for Windows 10 / Windows Server https://blogs.technet.microsoft.com/secguide/2019/04/24/security-baseline-draft-for-windows-10-v1903-and-windows-server-v1903/
This week, Kyle and Dan talk about using Vagrant snapshots with Vagabond, strategies for managing psft_customizations.yaml files and demonstrating Fluid to end users. Dan shares a story of how an unpatched WebLogic server can leave your PeopleSoft application vulnerable to hackers. Show Notes Securing the Oracle Listener @ 1:30 Vagabond and Vagrant Snapshots @ 2:00 Vault and Hiera @ 7:45 Consul Key-Value Store Remote Desktop Spanning @ 14:00 Why You Apply CPU Patches - A Story @ 16:30 CPU Patching Wishlist @ 23:30 DPK and Middleware-only @ 33:15 psft_customizations.yaml strategies @ 39:00 Upgrading Interaction Hub @ 50:00 Demoing Fluid @ 57:00
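On the Vagrant snapshot workflow: snapshots let you checkpoint a PeopleSoft image VM before a risky change and roll back in seconds instead of rebuilding. A quick sketch with stock Vagrant snapshot commands; the snapshot name is made up:

    $ vagrant snapshot save pre-patch      # checkpoint the running VM
    $ vagrant snapshot list                # list saved checkpoints
    $ vagrant snapshot restore pre-patch   # roll back after a failed experiment
    $ vagrant snapshot delete pre-patch    # clean up once you're happy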
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Many Large Websites Affected by Branch.io XSS Flaw https://www.vpnmentor.com/blog/dom-xss-bug-affecting-tinder-shopify-yelp/ Medtronic Pacemakers Disable Remote Update https://www.medtronic.com/content/dam/medtronic-com/us-en/corporate/documents/REV-Medtronic-2090-Security-Bulletin_FNL.pdf IBM Updates WebSphere https://www-01.ibm.com/support/docview.wss?uid=swg22016254 Incomplete JET Database Patch https://blog.0patch.com/2018/10/patching-re-patching-and-meta-patching.html
This week on the podcast, Kyle has some follow-up on PIA installations and discovers some unpleasant Unified Navigation behavior. Dan shares some tips for Facter and testing out the redeploy option with the DPK. Show Notes PIA Installation FUP @ 1:30 Facter and FACTER_ override @ 4:30 Redeploy and WebLogic @ 8:00 CPU Patching @ 18:30 Unified Nav and SSO @ 23:00 IDDA Logging
This week on the podcast, Dan shares a Vagrant plugin to help with Root Certificates and a change to WebLogic certifications with 8.56. Then Kyle and Dan discuss running VERSION and how to deal with bad cache. Show Notes Vagrant-certificates-CA Plugin @ 2:00 WebLogic Version Changes @ 9:00 Cache Clearing @ 14:30 VERSION @ 19:00 Web Profile Cache Folder @ 26:30 When Cache Goes Bad
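On the bad-cache discussion: when web server cache goes stale, a common remedy is to stop the PIA domain, purge the web profile cache folder, and restart. A rough sketch; the domain name 'peoplesoft' and site name 'ps' are typical defaults, so adjust the paths for your install:

    $ cd $PS_CFG_HOME/webserv/peoplesoft/bin
    $ ./stopPIA.sh
    $ rm -rf ../applications/peoplesoft/PORTAL.war/ps/cache/*   # web profile cache for site 'ps'
    $ ./startPIA.sh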
This week on the podcast, Dan and Kyle talk about cloud provisioning tools and how they could work for PeopleSoft Admins, Mike Ripley's POC for securing WebLogic, and changes to the Change Assistant installation process. Show Notes LinuxFilter Proof of Concept for WebLogic @ 1:30 Insecure PeopleSoft Settings @ 6:00 Random Default Local Node Name @ 9:30 ps-terraform and working with cloud provisioning tools @ 15:00 Terraform and Oracle Cloud Change Assistant Installation Changes @ 26:00 Headless PeopleTools Patching with Change Assistant @ 35:00
This week on the podcast, Dan and Kyle discuss a WebLogic exploit used for currency mining, Dan revisits the Health Center in the latest PeopleSoft Images, and Kyle explains why you need to review the Invalid View project. Show Notes Crypto Currency Exploit on PeopleSoft Web Servers @ 1:30 How to make $250k on PeopleSoft without leaving home WebLogic and Monero miner Health Center with PeopleSoft Images @ 6:00 Elasticsearch Clustering Improvements in 8.56 @ 17:00 Invalid Views @ 19:00
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Oracle E-Business Suite Server Can Be Attacked via WebLogic https://www.onapsis.com/blog/oracle-january-cpu-analysis-64-patches-affect-business-critical-applications Microsoft Resumes Patches for AMD Systems https://www.amd.com/en/corporate/speculative-execution Speculations About Yet Another CPU Attack https://skyfallattack.com Smiths Medfusion 4000 Vulnerabilities https://github.com/sgayou/medfusion-4000-research/blob/master/doc/README.md#summary
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
WebLogic Flaw Used to Install Monero Crypto Coin Miner https://isc.sans.edu/forums/diary/Campaign+is+using+a+recently+released+WebLogic+exploit+to+deploy+a+Monero+miner/23191/ Fake Anti-Virus Pages Popping Up Like Weeds https://isc.sans.edu/forums/diary/Fake+antivirus+pages+popping+up+like+weeds/23207/ Apple Spectre/Meltdown Patches https://support.apple.com/en-us/HT201222 Meltdown Patch Fallout https://kb.pulsesecure.net/articles/Pulse_Secure_Article/KB43600/?l=en_US&fs=Search&pn=1&atype= https://forums.sandboxie.com/phpBB3/viewtopic.php?t=25114 https://support.microsoft.com/en-us/help/4072699/january-3-2018-windows-security-updates-and-antivirus-software WPA3 Announced https://www.wi-fi.org/news-events/newsroom/wi-fi-alliance-introduces-security-enhancements
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Campaign is using a recently released WebLogic exploit to deploy a Monero miner https://isc.sans.edu/forums/diary/Campaign+is+using+a+recently+released+WebLogic+exploit+to+deploy+a+Monero+miner/23191/ Misc News about Meltdown and Spectre https://www.qualcomm.com/company/product-security/bulletins AMD Processor Flaw http://seclists.org/fulldisclosure/2018/Jan/12 Western Digital MyCloud Backdoor http://gulftech.org/advisories/WDMyCloud%20Multiple%20Vulnerabilities/125
This week on the podcast, Dan has follow-up on using Hiera with Puppet environments and capturing WebLogic logs in Elasticsearch, and Kyle shares his thoughts on the Solaris "change". Then Kyle discusses in-depth failover testing and how Unified Navigation behaves when app servers fail. Show Notes Solaris is "not quite dead yet" @ 2:00 How Oracle manages acquired technologies @ 11:00 Fluid Training with Jim Marion @ 18:00 New Oracle Version Numbers @ 20:00 Failover Testing @ 29:00 How Unified Navigation Behaves when IB is down @ 38:00 Hiera command line Follow-up @ 51:00 Logstash and WebLogic logs @ 53:00 Managing Synchronous Messages? @ 56:30
This week on the podcast, we share Eric Bolinger's DPK module for WebLogic, Graham's 5 Things about PeopleSoft Images, more Fluid ideas, and dive into ELM's Find Learning page behavior. We finish the episode discussing Matt Tremblay's "Reverse Proxy Server with Docker" post. Show Notes Portal Navigation @ 4:00 psadminio-io_weblogic Puppet Module @ 12:00 Puppet Development Kit @ 19:30 SQLcl Webinar @ 21:15 Graham's 5 Things to Remember @ 27:00 --keep_vbox_alive Default Behavior "You are here" idea @ 29:30 PTPORTALREGISTRY Show the Path Idea ELM Find Learning @ 35:30 ELM Development @ 45:00 Increase Elasticsearch Log Level @ 50:30 curl -X PUT -d '{ "transient": { "logger._root": "DEBUG" }}' esadmin:password@elasticserver:9200/_cluster/settings Instant Reverse Proxy Servers with Docker @ 56:00
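As a side note for anyone who would rather script the Elasticsearch log-level change from those show notes than paste the curl command, the same PUT can be issued from plain Java with the JDK 11+ HTTP client. A minimal sketch, reusing the placeholder host and credentials from the show notes:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class EsLogLevel {
    public static void main(String[] args) throws Exception {
        // Same transient cluster setting the show notes apply with curl;
        // esadmin:password and elasticserver are the episode's placeholders.
        String body = "{ \"transient\": { \"logger._root\": \"DEBUG\" } }";
        String auth = Base64.getEncoder()
                .encodeToString("esadmin:password".getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://elasticserver:9200/_cluster/settings"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Because the setting is transient, it reverts on a full cluster restart, which suits temporary debugging.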
This week Kyle and Dan discuss UI improvements from Sasank and Dan's new Fluid Stylesheets, using Git and GitLab to manage DPK files, managing Favorites with Unified Navigation, and some "Gotchas" Kyle found during his 8.55 upgrade project. Show Notes Windows 6 month release schedule @ 4:00 Puppet 5 @ 8:00 Adding Breadcrumbs to Fluid @ 9:30 Sasank's UI Fixes: Search Box Focus Auto-expand the Navigator Fluid Background Images @ 13:00 psadmin.io Fluid Stylesheets @ 13:30 App Designer on Ubuntu @ 25:45 Enhancing Forms with Event Mapping @ 26:45 New Window Behavior @ 28:00 Jim Marion's New Window Bookmarklet Using the DPK to deploy POCs @ 33:00 Gitlab Follow-up @ 35:30 POSH-GIT @ 41:30 Kyle's DPKFiles Setup @ 45:15 WebLogic and MP4 files @ 50:30 Favorites and Unified Navigation @ 52:00 Process Categories @ 59:30
This week on the podcast, Dan and Kyle launch a new course about Deployment Packages. Dan tests out a new text editor and discovers you can run OPatch on MOS. Kyle digs into Jolt Failover options with the IB and brainstorms some great configuration ideas. Show Notes Mastering PeopleSoft Administration: Deployment Packages @ 1:30 Enhancing and Extending the DPK Talk Dynamic Config Puppet Manifest @ 6:30 Shift-Backspace and App Designer @ 10:15 Atom.io @ 11:30 How SQL Server runs Linux @ 15:30 WebLogic 12c REST Services @ 12:30 Oracle REST Data Services QAS Benefits @ 27:30 OPatch on MOS @ 29:45 Oh No! Story: DR Testing @ 32:50 Jolt Failover and the IB @ 43:00 integrationGateway.properties file @ 56:00
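Since WebLogic 12c REST services come up in this episode: WebLogic 12.2.1 ships a RESTful management API on the admin server, and a quick health poll is a natural first use. A hedged sketch; the /management/weblogic/latest/domainRuntime/serverRuntimes path follows Oracle's 12.2.1 documentation, while the host, port, and credentials below are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class WlsRestCheck {
    public static void main(String[] args) throws Exception {
        // Admin server host/port and credentials are placeholders.
        String auth = Base64.getEncoder()
                .encodeToString("weblogic:Passw0rd".getBytes());

        // GET the runtime state of every managed server in the domain.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://adminhost:7001/management/weblogic/"
                        + "latest/domainRuntime/serverRuntimes"
                        + "?fields=name,state&links=none"))
                .header("Authorization", "Basic " + auth)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```

The fields and links query parameters trim the response down to just the server names and states.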
This week on the podcast, Dan tries a different Remote Desktop tool, uses RSS feeds to monitor PeopleSoft data, and compares SQL Explain Plans with SQL Developer. Then Kyle gives a great overview of the PeopleSoft Test Framework and what you need to know before using it. Show Notes mRemoteNG @ 1:30 VirtuaWin and Windows 10 @ 6:30 Troubleshooting Monitor Issues @ 8:30 Query Trees and Patches @ 10:30 Query Tree Export Bug Managing Servlet Config Data @ 15:30 How Colton Works JMX and JVM/WebLogic monitoring @ 17:45 WLST Script to capture WebLogic data RSS Feeds, PeopleSoft and monitoring Integration Broker @ 23:30 Comparing Explain Plans in SQL Developer @ 32:00 COBOL Unicode Compile @ 36:00 Getting Started on PTF @ 39:00 Storing and Recording Tests @ 45:00 PTF Reports and Usage Monitor @ 51:45 Integrating Tests with Daily Development @ 63:00
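On the JMX topic: before writing a WLST script, it's worth knowing the same JVM numbers are exposed in-process by the JDK's platform MXBeans. A minimal, self-contained example (plain JDK, no WebLogic dependencies assumed):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.ThreadMXBean;

public class JvmStats {
    public static void main(String[] args) {
        // Heap usage: the first number to watch on a web server JVM.
        // (getMax() can return -1 if no maximum is configured.)
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("Heap: %d MB used of %d MB max%n",
                heap.getUsed() >> 20, heap.getMax() >> 20);

        // Thread counts can hint at stuck execute threads.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.printf("Threads: %d live, %d peak%n",
                threads.getThreadCount(), threads.getPeakThreadCount());
    }
}
```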
This week we talk about HR Image 17, the new Security Automation tool, and share some comments from David Kurtz about PeopleSoft on 12c. Then, Kyle dives into WebLogic Servlet Filters and shares how filters can be used with PeopleSoft. Show Notes The psadmin.io Community Security Deployment Tool - PeopleBooks Link Disable the LLE warning in Interaction Hub Image 2: set LLE_DEPRECATION_WARN_LEVEL=NONE (or ONCE) Kyle's PSEatCookies Filter PeopleBooks - Kerberos Filter for Desktop Single Signon Essentials of Filters Chain-of-responsibility design pattern Episode #6 - TokenChpoken and Other Security Issues
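For anyone new to servlet filters, the shape Kyle describes is small. An illustrative skeleton (not Kyle's actual PSEatCookies filter) using the standard javax.servlet API:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class RequestLogFilter implements Filter {
    @Override
    public void init(FilterConfig config) throws ServletException {
        // Init-params from web.xml would be read here.
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res,
                         FilterChain chain) throws IOException, ServletException {
        // Inspect (or rewrite) the request before the target servlet sees it.
        HttpServletRequest http = (HttpServletRequest) req;
        System.out.println("Filtered request: " + http.getRequestURI());
        // Hand off to the next link in the chain (chain-of-responsibility).
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { }
}
```

A filter like this is registered with filter and filter-mapping entries in the web application's web.xml, which is also where the chain-of-responsibility ordering is set.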
In episode 4, Dan and Kyle talk about demo environments and why the traditional need for demos may change, the PeopleSoft Update Manager, and how to manage PeopleSoft Images. We also discuss the recent WebLogic vulnerability and ask for input on our upcoming Tools and Documentation podcast.
In episode 3 of The PeopleSoft Administrator Podcast, Dan and Kyle talk about HTTPS. We discuss what HTTPS is and how to implement HTTPS with WebLogic. Dan shares how to mitigate the newer SSL attacks, and offers tips and tricks to help manage certificates and simplify configuring HTTPS. We also review some of our PeopleTools 8.55 predictions after the OpenWorld presentations were released.
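One certificate chore from that episode that lends itself to scripting is catching certificates before they expire. A minimal sketch in plain Java; the keystore file name and password are placeholders, not a real PIA configuration:

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import java.util.Enumeration;

public class CertExpiry {
    public static void main(String[] args) throws Exception {
        // Keystore path, type, and password are placeholders.
        KeyStore keystore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("keystore.jks")) {
            keystore.load(in, "password".toCharArray());
        }

        // Print the expiry date of every X.509 entry so renewals aren't missed.
        Enumeration<String> aliases = keystore.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            Certificate cert = keystore.getCertificate(alias);
            if (cert instanceof X509Certificate) {
                System.out.println(alias + " expires "
                        + ((X509Certificate) cert).getNotAfter());
            }
        }
    }
}
```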