In this episode, Lois Houston and Nikita Abraham discuss the basics of MySQL installation with MySQL expert Perside Foster. Perside covers every key step, from preparing your environment and selecting the right software, to installing MySQL, setting up secure initial user accounts, configuring the system, and managing updates efficiently. MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Welcome back to another episode of the Oracle University Podcast. I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I'm joined by Lois Houston, Director of Innovation Programs. Lois: Hi everyone! In our last episode, we spoke about the Oracle MySQL ecosystem and its various components. We also discussed licensing, security, and some key tools. What's on the agenda for today, Niki? 00:52 Nikita: Well, Lois, today, we're going beyond tools and features to talk about installing MySQL. Whether you're setting up MySQL for the first time or looking to understand its internal structure a little better, this episode will be a valuable guide. Lois: And we're lucky to have Perside Foster back with us. Perside is a MySQL Principal Solution Engineer at Oracle. Hi Perside! Say I wanted to get started and install MySQL. What factors should I keep in mind before I do that? 
01:23 Perside: The first thing to consider is the environment for the database server. MySQL is supported on many different Linux distributions. You can also run it on Windows or Apple macOS. You can run MySQL on a variety of host platforms. You can use dedicated servers in a server room or virtual machines in a data center. Developers might prefer to deploy on Docker or Kubernetes containers. And don't forget, you can deploy HeatWave, the MySQL cloud version, in many different clouds. MySQL has great multithreading capability. It also has support for Non-Uniform Memory Access or NUMA. This is particularly important if you run large systems with hundreds of concurrent connections. MySQL's storage engine, InnoDB, makes effective use of your available memory. It stores your active data in a buffer pool. This greatly improves access time compared to reading straight from disk. Of course, SSDs and other solid state media are much faster than hard disks. But don't forget, MySQL can make full use of that performance benefit too. Redundancy is very important for the MySQL server. Hardware with redundant power supplies, storage media, and network connections can make all the difference to your uptime. Without redundancy, a single point of failure can bring down the server. 03:26 Nikita: Got it. Perside, from where can I download the different editions of MySQL? Perside: Our most popular software is the MySQL Community Edition. It is available at no cost from mysql.com for many platforms. This version is why MySQL is the most popular database for web applications. And it is also open source. MySQL Enterprise Edition is the commercial edition. It is fully supported by Oracle. You can get it from support.oracle.com as an Oracle customer. If you want to try out the enterprise features but are not yet a customer, you can get the latest version of MySQL as a trial edition from edelivery.oracle.com. 
Because MySQL is open source, you can get the source code from either mysql.com or GitHub. Most people don't need the source, but any developer who wants to modify the code or even contribute back to the project is welcome to do so. 04:43 Lois: Perside, can you walk us through MySQL's release model? Perside: This is divided into LTS and Innovation releases, each with a different target audience. LTS stands for long-term support. MySQL 8.4 is an LTS release and will be supported for several years. LTS releases are feature-stable. When you install an LTS release, you can apply future bug fixes and security patches without changing any behavior in the product. The bug fixes and security patches are designed to be backward compatible. This means you can upgrade easily from previous releases. LTS releases come every two years. This allows you to maintain a stable system without having to change your underlying application too frequently. You will not be forced to upgrade after two years. You can continue to enjoy support for an LTS release for up to eight years. Along with LTS releases, we also have Innovation releases. These contain the latest leading-edge features that are developed even in the middle of an LTS cycle. You can upgrade from LTS to Innovation and back again, depending on which features you require in your application. Innovation releases have a much more rapid cadence. You can get the latest features every quarter. This means Innovation releases are supported only for their specific release. So, if you're on the Innovation track, you must upgrade more frequently. All editions of MySQL are shipped as both LTS and Innovation releases. This includes the self-managed editions and also HeatWave in the cloud. You can treat both LTS and Innovation releases as production-ready. This means they are generally available releases. Innovation does not mean beta-quality software. 
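The release rules Perside describes lend themselves to a quick sketch. The following Python snippet is purely illustrative, not an Oracle tool; treating 8.4 as the current LTS series is our assumption, since that is the only LTS release the episode names explicitly:

```python
# Illustrative sketch only: classify a MySQL version string by release
# track, following the model described above. The set of LTS series is
# an assumption (the episode names only 8.4); extend it as new LTS
# releases ship.
LTS_SERIES = {(8, 4)}

def release_track(version: str) -> str:
    """Return 'LTS' or 'Innovation' for a version string like '8.4.2'."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return "LTS" if (major, minor) in LTS_SERIES else "Innovation"

print(release_track("8.4.2"))  # LTS
print(release_track("9.1.0"))  # Innovation
```

A deployment script could use a check like this to decide how often to plan upgrades: quarterly on the Innovation track, or on a multi-year cycle for LTS.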
You get the same quality support from Oracle whether you're using LTS or Innovation software. The MySQL client software and other tools will operate with both LTS and Innovation releases. 07:43 Nikita: What are connectors in the context of MySQL? Perside: Connectors are the language-specific software components that connect your application to MySQL. You should use the latest version of connectors. Connectors are also production-ready, generally available software. They will work with any version of MySQL that is supported at the time of the connector's release. 08:12 Nikita: How does MySQL integrate with Docker and other container platforms? Perside: You might already be familiar with the Docker store. It is used for getting containerized images of software. As an Oracle customer, you might be familiar with My Oracle Support. It provides support and updates, in the form of patches, for all supported Oracle software. MySQL works well with virtualization and container platforms, including Docker. You can get images of the Community Edition from Docker Hub. If you are an Enterprise Edition customer, you can get images from My Oracle Support or from the Oracle Container Registry. 09:04 Lois: What resources are available for someone who wants to know more about MySQL? Perside: MySQL has detailed documentation. You should familiarize yourself with the documentation as you prepare to install MySQL. The reference manuals for both Community and Enterprise editions are available at the Developer Zone at dev.mysql.com. Oracle customers also have access to the knowledge base at support.oracle.com. It contains support information on use cases and reference architectures. The product team regularly posts announcements and technical articles to several blogs. These blogs often contain pre-release announcements of upcoming features to help you prepare for your next project. Also, you'll find deep dives into technical topics and complex problems that MySQL solves. 
This includes some problems specific to highly available architectures. We also feature individual blogs from high-profile members of our team. These include the MySQL Community evangelist lefred. He posts about upcoming events and interesting features. Also, Dimitri Kravchuk offers blogs that provide deep dives into performance. 10:53 Nikita: Ok, now that I have all this information and am prepped and ready, how do I actually install MySQL on my operating system? What's the process like? Perside: You can install MySQL on various operating systems, depending on your needs. These might include several distributions of Linux or UNIX, Windows, macOS, Oracle Linux based on the Unbreakable Enterprise Kernel, Solaris, and FreeBSD. As always, the MySQL documentation provides full details on supported operating systems. It also provides the specific installation steps for each operating system. Plus, it tells you how to perform the initial configuration and further administrative steps. If you're installing on Windows, you have a couple of options. First, the MySQL Installer utility is the easiest way to install MySQL. It installs MySQL and performs the initial configuration based on options that you choose at installation time. It includes not only the MySQL server, but also the most important connectors, the MySQL Shell client, the MySQL Workbench client with a user interface, and common utilities for troubleshooting and administration. It also installs several sample databases, models, and documentation. It's the easiest way to install MySQL because it uses an installation wizard. It lets you select your installation target location, what components to install, and other options. 12:47 Lois: But what if I want to have more control? Perside: For more control over your installation, you can install MySQL from the binary zip archive. 
This does not include samples, supporting tools, or connectors; it contains only the application binaries, which you can install anywhere you want. This means that the initial configuration is not performed by selecting options through a wizard. Instead, you must configure the Windows service and the MySQL configuration file yourself. Linux installation is more varied. This is because of the many different distributions and the flexibility they offer. On many distributions of Linux, you can use the package manager native to that distribution. For example, you can use the yum package manager in Oracle Linux to install RPM packages. You can also use a binary archive to install only the files. Which method you choose depends on several factors: how much you know about MySQL's files and configuration, and about the operating system on which you're going to do the installation; any applicable standards or operating procedures within your own company's IT infrastructure; how much control you need over this installation; and how flexible a method you need. For example, the RPM package for Oracle Linux installs the files in specific locations and sets up a specific service and MySQL user account. 14:54 Transform the way you work with Oracle Database 23ai! This cutting-edge technology brings the power of AI directly to your data, making it easier to build powerful applications and manage critical workloads. Want to learn more about Database 23ai? Visit mylearn.oracle.com to pick from our range of courses and enroll today! 15:18 Nikita: Welcome back! Is there a way for me to extend the functionality of MySQL beyond its default capabilities? Perside: Much of MySQL's behavior is standard and always exists when you install the server. However, you can configure some additional behaviors by extending MySQL with plugins or components. 
Plugins operate closely with the server; by calling APIs exposed by the server, they add features by providing extra functions or variables. Not only do they add variables, they can also interact with the server's global variables and functions. That makes them work as if they are dynamically loadable parts of the server itself. Components also extend functionality, but they are separate from the server and extend its functionality through a service-based architecture. You can also extend MySQL in other ways-- by creating stored procedures, triggers, and functions with standard SQL and MySQL extensions to that language, or by creating external dynamically loaded user-defined functions. 16:49 Lois: Perside, can we talk about the initial user accounts? Perside: A MySQL account identifier is more than just a username and password. It consists of three elements, two that identify the account, and one that is used for authentication. The three elements are the username, which is used to log in from the client; the hostname element, which identifies a computer or set of computers; and the password, which must be provided to gain access to MySQL. The hostname is the part of the account identifier that controls where the user can log in. It is typically a DNS computer name or an IP address. You can use a wildcard, which is the percent sign (%), to allow the named user to log in from any connected host, or you can use the wildcard as part of an IP address to allow login from a limited range of IP addresses. 17:58 Nikita: So, what happens when I install MySQL on my computer? Perside: When you first install MySQL on your computer, it installs several system accounts. The only user account that you can log in to is the administrative account. That's called the root account. Depending on the installation method that you use, you'll either see the initial root password on the console as you install the server, or you can read it from the log file. 
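The hostname wildcard Perside describes can be made concrete with a short sketch. This is a simplified illustration, not MySQL's actual matching logic; it maps MySQL's % wildcard onto Python's fnmatch patterns:

```python
import fnmatch

# Simplified sketch of MySQL-style host matching: '%' matches any
# sequence of characters, as in an account like 'app'@'192.168.1.%'.
# This is illustrative only, not MySQL's real implementation.
def host_matches(pattern: str, client_host: str) -> bool:
    # Translate MySQL's '%' wildcard into fnmatch's '*' wildcard.
    return fnmatch.fnmatch(client_host, pattern.replace("%", "*"))

print(host_matches("%", "anywhere.example.com"))    # True
print(host_matches("192.168.1.%", "192.168.1.42"))  # True
print(host_matches("192.168.1.%", "10.0.0.5"))      # False
```

The first pattern lets the named user connect from any host; the second restricts logins to one IP range, which is the tighter configuration you would normally prefer.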
For security reasons, the password of a new account, such as the root account, must be changed. MySQL prevents you from executing any other operation with that account until you have changed the password. 18:46 Lois: What are the system requirements for installing and running MySQL? Perside: The MySQL service must run as a system-level user. Each operating system has its own method for creating such a user. All operating systems follow the same general principles. However, when using the MySQL Installer on Windows or the RPM package installation on Oracle Linux, each installation process creates and configures the system-level user for you. 19:22 Lois: Perside, since MySQL is always evolving, how do I upgrade it when newer versions become available? Perside: When you upgrade MySQL, you have to bring the server down so that the upgrade process can replace all of the relevant binary executable files and, if necessary, update the data and configuration to suit the new software. The safest thing to do is to back up your whole MySQL environment. This includes not only your data but also files such as binaries and configuration files, and logical elements, including triggers, stored procedures, user configuration, and anything else that's required to rebuild your system. The upgrade process gives you two main options. An in-place upgrade uses your existing data directory. After you shut down your MySQL server process, you either replace the package or binaries with new versions, or you install the new binary executables in a new location and point your symbolic links to this new location. The server process detects that the data directory belongs to an earlier version and performs any required upgrade checks. 20:46 Lois: Thank you, Perside, for taking us through the practical aspects of using MySQL. If you want to learn about the MySQL architecture, visit mylearn.oracle.com and search for the MySQL 8.4 Essentials course. 
Nikita: Before you go, we wanted to take a minute to thank you for taking the Oracle University Podcast survey that we put out at the end of last year. Your insights were invaluable and will help shape our future episodes. Lois: And if you missed taking the survey but have feedback to share, you can write to us at ou-podcast_ww@oracle.com. That's ou-podcast_ww@oracle.com. We'd love to hear from you. Join us next week for a discussion on MySQL database design. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 21:45 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
In this episode, hosts Lois Houston and Nikita Abraham speak with senior OCI instructor Mahendra Mehra about the capabilities of self-managed nodes in Kubernetes, including how they offer complete control over worker nodes in your OCI Container Engine for Kubernetes environment. They also explore the various options that are available to effectively manage your Kubernetes deployments. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Hello and welcome to the Oracle University Podcast! I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Last week, we discussed how OKE virtual nodes can offer you a complete serverless Kubernetes experience. Nikita: Yeah, and in today's episode, we'll focus on self-managed nodes, where you get complete control over the worker nodes within your OKE environment. We'll also talk about how you can manage your Kubernetes deployments. 00:57 Lois: To tell us more about this, we have Mahendra Mehra, a senior OCI instructor with Oracle University. Hi Mahendra! Welcome back! Let's get started with self-managed nodes. Can you tell us what they are? 
Mahendra: In Container Engine for Kubernetes, a self-managed node is essentially a worker node that you personally create and host on a compute instance or instance pool within the compute service. Unlike managed nodes or virtual nodes, self-managed nodes are not grouped into node pools by default. They are often referred to as Bring Your Own Nodes, also abbreviated as BYON. If you wish to streamline administration and manage multiple self-managed nodes collectively, you can utilize the compute service to create a compute instance pool for hosting these nodes. This allows for greater flexibility and customization in your Kubernetes environment. 01:58 Nikita: Mahendra, what are some practical usage scenarios for OKE self-managed nodes? Mahendra: These nodes offer a range of advantages for specific use cases. Firstly, for specialized workloads, leveraging the compute service allows you to configure compute instances with shape and image combinations that may not be available for managed nodes or virtual nodes. This includes options like GPU shapes for hardware-accelerated workloads or high-frequency processor cores for demanding high-performance computing tasks. Secondly, if you require complete control over your compute instance configuration, self-managed nodes are the ideal choice. This gives you the flexibility to tailor each node to your specific requirements. Additionally, self-managed nodes are particularly well suited for Oracle Cloud Infrastructure cluster networks. These nodes provide high-bandwidth, low-latency RDMA connectivity, making them a preferred option for certain networking setups. Lastly, the use of compute instance pools with self-managed nodes enables the creation of infrastructure for handling complex distributed computing tasks. This can greatly enhance the efficiency of your Kubernetes environment. Consider these points carefully to determine the optimal use of OKE self-managed nodes in your deployments. 
03:30 Lois: What do we need to consider before creating a self-managed node and integrating it into a cluster? Mahendra: There are two crucial aspects to address. Firstly, you need to confirm that the cluster to which you plan to add a self-managed node is configured appropriately. Secondly, it's essential to choose the right image for the compute instance hosting the self-managed node. 03:53 Nikita: Can you dive a little deeper into these prerequisites? Mahendra: To successfully integrate a self-managed node into your cluster, you must ensure that the cluster is an enhanced cluster. This is a crucial prerequisite for the addition of self-managed nodes. The flannel CNI plugin for pod networking should be utilized, not the VCN-native pod networking CNI plugin. This ensures optimal pod networking for your self-managed nodes. The control plane nodes of the cluster must be running Kubernetes version 1.25 or later. This is essential for compatibility and optimal performance. Lastly, maintain compatibility between the Kubernetes versions on control plane nodes and worker nodes, with a maximum allowable difference of two minor versions. This ensures a smooth and stable operation of your Kubernetes environment. Keep these cluster requirements in mind as you prepare to add self-managed nodes to your OKE cluster. 04:55 Lois: What about the image requirements when creating self-managed nodes? Mahendra: Choose either an Oracle Linux 7 or an Oracle Linux 8 image for your self-managed nodes. Ensure that the selected image has a release date of March 28, 2023 or later. Obtain the image OCID, also known as the Oracle Cloud Identifier, from the respective sources. When specifying an image, be mindful of the Kubernetes version it contains. It's your responsibility to select an image with a Kubernetes version that aligns with the Kubernetes version skew support policy. Keep in mind that Container Engine for Kubernetes does not automatically check the compatibility. 
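Since, as Mahendra notes, Container Engine for Kubernetes does not check compatibility for you, the two-minor-version skew rule described above is worth enforcing in your own tooling. A hypothetical helper, not part of any OKE product:

```python
# Sketch of the version-skew check described above: control plane and
# worker nodes may differ by at most two Kubernetes minor versions.
# Hypothetical helper, not part of any OKE tooling.
def skew_is_supported(control_plane: str, worker: str, max_skew: int = 2) -> bool:
    cp_major, cp_minor = (int(p) for p in control_plane.split(".")[:2])
    w_major, w_minor = (int(p) for p in worker.split(".")[:2])
    return cp_major == w_major and abs(cp_minor - w_minor) <= max_skew

print(skew_is_supported("1.27", "1.25"))  # True
print(skew_is_supported("1.28", "1.25"))  # False
```

A provisioning script could run this check against the image's bundled Kubernetes version before creating the compute instance that will host the self-managed node.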
So it's up to you to ensure harmony between the Kubernetes version on the self-managed node and the cluster's control plane nodes. These considerations will help you make informed choices when configuring images for your self-managed nodes. 05:57 Nikita: I really like the flexibility and customization OKE self-managed nodes offer. Now I want to switch gears a little and ask you about OCI Service Operator for Kubernetes. Can you tell us a bit about it? Mahendra: OCI Service Operator for Kubernetes is an open-source Kubernetes add-on that transforms the way we manage and connect OCI resources within our Kubernetes clusters. This powerful operator enables you to effortlessly create, configure, and interact with OCI resources directly from your Kubernetes environment, eliminating the need for constant navigation among the Oracle Cloud Infrastructure Console, CLI, and other tools. With the OCI Service Operator, you can seamlessly leverage kubectl to call the operator framework APIs, providing a streamlined and efficient workflow. 06:53 Lois: On what framework is the OCI Service Operator built? Mahendra: OCI Service Operator for Kubernetes is built using the open-source Operator Framework toolkit. The Operator Framework manages Kubernetes-native applications called operators in an effective, automated, and scalable way. The Operator Framework comprises essential components like the Operator SDK. This leverages the Kubernetes controller-runtime library, providing high-level APIs and abstractions for writing operational logic. Additionally, it offers tools for scaffolding and code generation. 07:35 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free! So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. 
Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 08:14 Nikita: Welcome back! Mahendra, are there any other components within OCI Service Operator to manage Kubernetes deployments? Mahendra: The other essential component is the Operator Lifecycle Manager, also abbreviated as OLM. OLM extends Kubernetes by introducing a declarative approach to install, manage, and upgrade operators within a cluster. The OCI Service Operator for Kubernetes is intelligently packaged as an Operator Lifecycle Manager bundle, simplifying the installation process on Kubernetes clusters. This comprehensive bundle encapsulates all necessary objects and definitions, including CRDs, RBACs, ConfigMaps, and deployments, making it effortlessly deployable on a cluster. 09:02 Lois: So much that users can take advantage of! What about OCI Service Operator's integration with other OCI services? Mahendra: One of its standout features is its seamless integration with a range of OCI services. The first one is Autonomous Database, specifically tailored for transaction processing, mixed workloads, analytics, and data warehousing. Enjoy automated patching, upgrades, and tuning, allowing routine maintenance tasks to be performed without human intervention. The next on the list is MySQL HeatWave, a fully managed database service designed for developing and deploying secure cloud-native applications using the widely adopted MySQL open-source database. Third on the list is the OCI Streaming service. Experience a fully managed, scalable, and durable solution for ingesting and consuming high-volume data streams in real time. Next is Service Mesh. This service offers a set of capabilities to facilitate communication among microservices within a cloud-native application. The communication is centrally managed and secured, ensuring a smooth and secure interaction. 
The OCI Service Operator for Kubernetes serves as a versatile bridge, seamlessly connecting your Kubernetes clusters with these powerful Oracle Cloud Infrastructure services. 10:31 Nikita: That's awesome! I've also heard about Ingress Controllers. Can you tell us what they are? Mahendra: A Kubernetes Ingress Controller serves as the enforcer of rules defined in a Kubernetes Ingress. Its primary role is to manage, load balance, and route incoming traffic to specific service pods residing on worker nodes within the cluster. At the heart of this process is the Kubernetes Ingress Resource. Think of it as a blueprint, a rich configuration holding routing rules and options, specifically crafted for handling HTTP and HTTPS traffic. It serves as a powerful orchestrator for managing external communication with services inside the cluster. 11:15 Lois: Mahendra, how do Ingress Controllers bring about efficiency? Mahendra: Efficiency comes with consolidation. With a single ingress resource, you can neatly gather routing rules for multiple services. This eliminates the need to create a Kubernetes service of type LoadBalancer for each service seeking external or private network traffic. The OCI native ingress controller is a powerhouse. It crafts an OCI Flexible Load Balancer, your gateway to efficient request handling. The OCI native ingress controller seamlessly adapts to changes in routing rules with real-time updates. 11:53 Nikita: And what about integration with an OKE cluster? Mahendra: Absolutely. It harmonizes with the cluster for streamlined traffic management. Operating as a single pod on a randomly selected worker node, it ensures a balanced workload distribution. 12:08 Lois: Moving on, let's talk about running applications on ARM-based nodes and GPU nodes. We'll start with ARM-based nodes. Mahendra: Typically, developers use ARM-based worker nodes in Kubernetes clusters to develop and test applications. 
Selecting the right infrastructure is crucial for optimal performance. 12:28 Nikita: What kind of options do developers have when running applications on ARM-based nodes? Mahendra: When it comes to running applications on ARM-based nodes, you have a range of options at your fingertips. First up, consider the choice between ARM-based bare metal shapes and flexible VM shapes. Each comes with its own unique advantages. Now, let's talk about the heart of it all, the Ampere A1 Compute instances. These instances are driven by the cutting-edge Ampere Altra processor, ensuring high performance and efficiency for your workloads. You must specify the ARM-based node pool shapes during cluster or node pool creation. Whether you choose to navigate through the user-friendly console, leverage the flexibility of the API, or command with precision through the CLI, the process remains seamless. 13:23 Lois: Can you define pods to run exclusively on ARM-based nodes within a heterogeneous cluster setup? Mahendra: In scenarios where a cluster comprises node pools with ARM-based shapes alongside other shapes, such as AMD64, you can employ a powerful tool called a node selector in the pod specification. This allows you to precisely dictate that an application should exclusively run on ARM-based worker nodes, ensuring your workload aligns with the desired architecture. 13:55 Nikita: And before we end this episode, can you explain why developers must run applications on GPU nodes? Mahendra: Originally designed for graphics manipulation, GPUs prove highly efficient in parallel data processing. This makes them a top choice for deploying data-intensive applications. Our GPU nodes utilize cutting-edge NVIDIA graphics cards, ensuring efficient and powerful data processing. Seamless access to this computing prowess is made possible through CUDA libraries. To ensure smooth integration, be sure to select a GPU shape and opt for an Oracle Linux GPU image preloaded with the essential CUDA libraries. 
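The node selector Mahendra describes for ARM-based scheduling can be pictured as a minimal pod manifest. It is shown here as a Python dictionary for readability; kubernetes.io/arch is the standard well-known Kubernetes node label, while the pod and image names are hypothetical placeholders:

```python
# Minimal sketch of a pod spec pinned to ARM-based worker nodes via
# nodeSelector. Pod and image names are hypothetical placeholders.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "arm-demo"},
    "spec": {
        # Well-known Kubernetes label: schedule only on arm64 nodes.
        "nodeSelector": {"kubernetes.io/arch": "arm64"},
        "containers": [
            {"name": "app", "image": "example.com/app:latest"},
        ],
    },
}

print(pod_manifest["spec"]["nodeSelector"])  # {'kubernetes.io/arch': 'arm64'}
```

Serialized to YAML, the same structure is what you would pass to kubectl apply; a GPU workload would typically instead request an extended resource such as nvidia.com/gpu in the container's resource limits.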
CUDA here is Compute Unified Device Architecture, which is a parallel computing platform and application-programming interface model created by NVIDIA. It allows developers to use NVIDIA graphics-processing units for general-purpose processing, rather than just rendering graphics. 14:57 Nikita: Thank you, Mahendra, for another insightful session. We appreciate you joining us today. Lois: For more information on everything we discussed, go to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. You'll find plenty of demos and skill checks to supplement your learning. Join us next week when we'll discuss vital security practices for your OKE clusters on OCI. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 15:28 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Jeff Geerling, Owner of Midwestern Mac, joins Corey on Screaming in the Cloud to discuss the importance of storytelling, problem-solving, and community in the world of cloud. Jeff shares how and why he creates content that can appeal to anybody, rather than focusing solely on the technical qualifications of his audience, and how that strategy has paid off for him. Corey and Jeff also discuss the impact of leading with storytelling as opposed to features in product launches, and what's been going on in the Raspberry Pi space recently. Jeff also expresses the impact that community has on open-source companies, and reveals his take on the latest moves from Red Hat and HashiCorp. About Jeff: Jeff is a father, author, developer, and maker. He is sometimes called "an inflammatory enigma". Links Referenced: Personal webpage: https://jeffgeerling.com/ Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. A bit off the beaten path of the usual cloud-focused content on this show, today I'm speaking with Jeff Geerling, YouTuber, author, content creator, enigma, and oh, so much more. Jeff, thanks for joining me. Jeff: Thanks for having me, Corey. Corey: So, it's hard to figure out where you start versus where you stop, but I do know that as I've been exploring a lot of building up my own home lab stuff, suddenly you are right at the top of every Google search that I wind up conducting. I was building my own Kubernetes cluster on top of a Turing Pi 2, and sure enough, your teardown was the first thing that I found that, to be direct, was well-documented, and made it understandable. 
And that's not the first time this year that that's happened to me. What do you do exactly?Jeff: I mean, I do everything. And I started off doing web design and then I figured that design is very, I don't know, once it started transitioning to everything being JavaScript, that was not my cup of tea. So, I got into back-end work, databases, and then I realized to make that stuff work well, you got to know the infrastructure. So, I got into that stuff. And then I realized, like, my home lab is a great place to experiment on this, so I got into Raspberry Pis, low-power computing efficiency, building your own home lab, all that kind of stuff.So, all along the way, with everything I do, I always, like, document everything like crazy. That's something my dad taught me. He's an engineer in radio. And he actually hired me for my first job, he had me write an IT operations manual for the Radio Group in St. Louis. And from that point forward, that's—I always start with documentation. So, I think that was probably what really triggered that whole series. It happens to me too; I search for something, I find my old articles or my own old projects on GitHub or blog posts because I just put everything out there.Corey: I was about to ask, years ago, I was advised by Scott Hanselman to—the third time I find myself explaining something, write a blog post about it because it's easier to refer people back to that thing than it is for me to try and reconstruct it on the fly, and I'll drop things here and there. And the trick is, of course, making sure it doesn't sound dismissive and like, “Oh, I wrote a thing. Go read.” Instead of having a conversation with people. But as a result, I'll be Googling how to do things from time to time and come up with my own content as a result.It's at least a half-step up from looking at forums and the rest, where I realized halfway through that I was the one asking the question. 
Like, “Oh, well, at least this is useful for someone.” And I, for better or worse, at least have a pattern of going back and answering how I solved a thing after I get there, just because otherwise, it's someone asked the question ten years ago and never returns, like, how did you solve it? What did you do? It's good to close that loop.Jeff: Yeah, and I think over 50% of what I do, I've done before. When you're setting up a Kubernetes cluster, there's certain parts of it that you're going to do every time. So, whatever's not automated or the tricky bits, I always document those things. Anything that is not in the readme, is not in the first few steps, because that will help me and will help others. I think that sometimes that's the best success I've found on YouTube is also just sharing an experience.And I think that's what separates some of the content that really drives growth on a YouTube channel or whatever, or for an organization doing it because you bring the experience, like, I'm a new person to this Home Assistant, for instance, which I use to automate things at my house. I had problems with it and I just shared those problems in my video, and that video has, you know, hundreds of thousands of views. Whereas these other people who know way more than I could ever know about Home Assistant, they're pulling in fewer views because they just get into a tutorial and don't have that perspective of a beginner or somebody that runs into an issue and how do you solve that issue.So, like I said, I mean, I just always share that stuff. Every time that I have an issue with anything technological, I put it on GitHub somewhere. And then eventually, if it's something that I can really formulate into an outline of what I did, I put a blog post up on my blog. 
I still, even though I write I don't know how many words per week that goes into my YouTube videos or into my books or anything, I still write two or three blog posts a week that are often pretty heavy into technical detail.Corey: One of the challenges I've always had is figuring out who exactly I'm storytelling for when I'm putting something out there. Because there's a plethora, at least in cloud, of beginner content of, here's how to think about cloud, here's what the service does, here's why you should use it et cetera, et cetera. And that's all well and good, but often the things that I'm focusing on presuppose a certain baseline level of knowledge that you should have going into this. If you're trying to figure out the best way to get some service configured, I probably shouldn't have to spend the first half of the article talking about what AWS is, as a for instance. And I think that inherently limits the size of the potential audience that would be interested in the content, but it's also the kind of stuff that I wish was out there.Jeff: Yeah. There's two sides to that, too. One is, you can make content that appeals to anybody, even if they have no clue what you're talking about, or you can make content that appeals to the narrow audience that knows the base level of understanding you need. So, a lot of times with—especially on my YouTube channel, I'll put things in that is just irrelevant to 99% of the population, but I get so many comments, like, “I have no clue what you said or what you're doing, but this looks really cool.” Like, “This is fun or interesting.” Just because, again, it's bringing that story into it.Because really, I think on a base level, a lot of programmers especially don't understand—and infrastructure engineers are off the deep end on this—they don't understand the interpersonal nature of what makes something good or not, what makes something relatable. 
And trying to bring that into technical documentation a lot of times is what differentiates a project. So, one of the products I love and use and recommend everywhere and have a book on—a best-selling book—is Ansible. And one of the things that brought me into it and has brought so many people is the documentation started—it's gotten a little bit more complex over the years—but it started out as, “Here's some problems. Here's how you solve them.”Here's, you know, things that we all run into, like how do you connect to 12 servers at the same time? How do you have groups of servers? Like, it showed you all these little examples. And then if you wanted to go deeper, there was more documentation linked out of that. But it was giving you real-world scenarios and doing it in a simple way. And it used some little easter eggs and fun things that made it more interesting, but I think that that's missing from a lot of technical discussion and a lot of technical documentation out there is that playfulness, that human side, the get from Point A to Point B and here's why and here's how, but here's a little interesting way to do it instead of just here's how it's done.Corey: In that same era, I was one of the very early developers behind SaltStack, and I think one of the reasons that Ansible won in the market was that when you started looking into SaltStack, it got wrapped around its own axle talking about how it uses ZeroMQ for a full mesh between all of the systems there, as long—sorry [unintelligible 00:07:39] mesh network that all routes—not really a mesh network at all—it talks through a single controller that then talks to all of its subordinate nodes. Great. That's awesome. How do I use this to install a web server, is the question that people had. And it was so in love with its own cleverness in some ways. 
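The kind of documentation Jeff is praising starts from the problem: a minimal Ansible inventory that puts hosts into groups so one command can hit all of them at once. A rough sketch of what those "connect to 12 servers at the same time" examples look like (hostnames here are made up for illustration):

```ini
# inventory.ini — hypothetical inventory grouping a dozen servers
[webservers]
web[01:08].example.com   ; range pattern expands to web01 through web08

[dbservers]
db[01:04].example.com

[production:children]    ; a group composed of other groups
webservers
dbservers
```

With a file like that in place, an ad-hoc command such as `ansible production -i inventory.ini -m ping` contacts every host in both groups in one shot, which is exactly the real-world scenario the docs opened with before sending you deeper.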
Ansible was always much more approachable in that respect and I can't overstate just how valuable that was for someone who just wants to get the problem solved. Jeff: Yeah. I also looked at something like NixOS. It's kind of like the Arch of distributions of— Corey: You must be at least this smart to use it in some respects— Jeff: Yeah, it's— Corey: —has been my experience with every bit of its documentation. Jeff: [laugh]. There's, like, this level of pride in what it does, that doesn't get to ‘and it solves this problem.' You can get there, but you have to work through the barrier of, like, we're so much better, or—I don't know what—it's not that. Like, it's just it doesn't feel like, “You're new to this and here's how you can solve a problem today, right now.” It's more like, “We have this golden architecture and we want you to come up to it.” And it's like, well, but I'm not ready for that. I'm just this random developer trying to solve the problem. Corey: Right. Like, they should have someone hanging out in their IRC channel and just watch for a week of who comes in and what questions they have when they're just getting started, and address those. Oh, you want to wind up just building a Nix box on EC2 for development? Great, here's how you do that, and here's how to think about your workflow as you go. Instead, I found that I had to piece it together from a bunch of different blog posts and the rest, and each one presupposed that I had different knowledge coming into it than the others. And I felt like I was getting tangled up very easily. Jeff: Yeah, and I think it's telling that a lot of people pick up new technology through blog posts and Substack and Medium and whatever [Tedium 00:09:19], all these different platforms, because it's somebody that's solving a problem and relating that problem, and then you have the same problem. A lot of times in the documentation, they don't take that approach. 
They're more like, here's all our features and here's how to use each feature, but they don't take a problem-based approach. And again, I'm harping on Ansible here with how good the documentation was, but it took that approach: you have a bunch of servers, you want to manage them, you want to install stuff on them, and all the examples flowed from that. And then you could get deeper into the direct documentation of how things worked. As a polar opposite of that, in a community that I'm very much involved in still—well, not as much as I used to be—is Drupal. Their documentation was great for developers but not so great for beginners, and that was always—it still is—a difficulty in that community. And I think it's a difficulty in many, especially open-source, communities where you're trying to build the community, get more people interested, because that's where the great stuff comes from. It doesn't come from one corporation that controls it, it comes from the community of users who are passionate about it. And it's also tough because for something like Drupal, it gets more complex over time, and the complexity kind of kills off the initial ability to think, like, wow, this is a great little thing and I can get into it and start using it. And a similar thing is happening with Ansible, I think. When I got started, there were a couple hundred modules. Now there's, like, 4,000 modules, or I don't know how many, and there's all these collections, and there's namespaces now, all these things that feel like Java overhead type things leaking into it. And that diminishes that ability for me to see, like, oh, this is my simple tool that solves these problems. Corey: I think that that is a lost art in the storytelling side of even cloud marketing, where they're so wrapped around how they do what they do that they forget, customers don't care. Customers care very much about their problem that they're trying to solve. 
If you have an answer for solving that problem, they're very interested. Otherwise, they do not care. That seems to be a missing gap.Jeff: I think, like, especially for AWS, Google, Azure cloud platforms, when they build their new services, sometimes you're, like, “And that's for who?” For some things, it's so specialized, like, Snowmobile from Amazon, like, there's only a couple customers on the planet in a given year that needs something like that. But it's a cool story, so it's great to put that into your presentation. But some other things, like, especially nowadays with AI, seems like everybody's throwing tons of AI stuff—spaghetti—at the wall, seeing what will stick and then that's how they're doing it. But that really muddies up everything.If you have a clear vision, like with Apple, they just had their presentation on the new iPhone and the new neural engine and stuff, they talk about, “We see your heart patterns and we tell you when your heart is having problems.” They don't talk about their AI features or anything. I think that leading with that story and saying, like, here's how we use this, here's how customers can build off of it, those stories are the ones that are impactful and make people remember, like, oh Apple is the company that saves people's lives by making watches that track their heart. People don't think that about Google, even though they might have the same feature. Google says we have all these 75 sensors in our thing and we have this great platform and Android and all that. But they don't lead with the story.And that's something where I think corporate Apple is better than some of the other organizations, no matter what the technology is. But I get that feeling a lot when I'm watching launches from Amazon and Google and all their big presentations. It seems like they're tech-heavy and they're driven by, like, “What could we do with this? 
What could you do with this new platform that we're building,” but not, “And this is what we did with this other platform,” kind of building up through that route. Corey: Something I've been meaning to ask someone who knows for a while, and you are very clearly one of those people: I spend a lot of time focusing on controlling cloud costs and I used to think that Managed NAT Gateways were very expensive. And then I saw the current going rates for Raspberries Pi. And that has been a whole new level of wild. I mean, you mentioned a few minutes ago that you use Home Assistant. I do too. But I was contrasting the price between a late-model Raspberry Pi 4—late model; it's three years old at this point if memory serves, maybe four—versus a used small form factor PC from HP, and the second was less expensive and far more capable. Yeah, it draws a bit more power and it's a little bit larger on the shelf, but it was basically no contest. What has been going on in that space? Jeff: I think one of the big things is we're at a generational improvement with those small form-factor, little, tiny-sized, almost NUC-sized PCs that were used all over the place in corporate environments. I still—like, every doctor's office you go to, every hospital, they have, like, a thousand of these things. So, every two or three or four years, however long it is on their contract, they just pop all those out the door and then you get an E-waste company that picks up a thousand of these boxes and they got to offload them. So, the nice thing is that it seems like a year or two ago, that really started accelerating to the point where the price was driven down below 100 bucks for a fully built-out little x86 mini PC. 
Sure, it's, you know, like you said, a few generations old and it pulls a little bit more power, usually six to eight watts at least, versus a Raspberry Pi at two to three watts, but especially for those of us in the US, electricity is not that expensive so adding two or three watts to your budget for a home lab computer is not that bad.The other part of that is, for the past two-and-a-half years because of the global chip shortages and because of the decisions that Raspberry Pi made, there were so few Raspberry Pis available that their prices shot up through the roof if you wanted to get one in any timely fashion. So, that finally is clearing up, although I went to the Micro Center near me yesterday, and they said that they have not had stock of Raspberry Pi 4s for, like, two months now. So, they're coming, but they're not distributed evenly everywhere. And still, the best answer, especially if you're going to run a lot of things on it, is probably to buy one of those little mini PCs if you're starting out a home lab.Or there's some other content creators who build little Kubernetes clusters with multiple mini PCs. Three of those stack up pretty nicely and they're still super quiet. I think they're great for home labs. I have two of them over on my shelf that I'm using for testing and one of them is actually in my rack. And I have another one on my desk here that I'm trying to set up for a five gigabit home router since I finally got fiber internet after years with cable and I'm still stuck on my old gigabit router.Corey: Yeah, I wound up switching to a Protectli, I think is what it's called for—it's one of those things I've installed pfSense on. Which, I'm an old FreeBSD hand and I haven't kept up with it, but that's okay. It feels like going back in time ten years, in some respects—Jeff: [laugh].Corey: —so all right. And I have a few others here and there for various things that I want locally. 
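Jeff's wattage numbers make for an easy back-of-envelope check. A quick sketch of the math (the $0.15/kWh rate is an assumed round figure for US residential power, not from the episode):

```python
# Rough annual running-cost comparison for a 24/7 home lab box.
# Wattages come from the conversation above; the electricity
# rate is an assumed round number, not a quoted figure.

HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.15  # assumed average US residential rate

def annual_cost(watts: float) -> float:
    """USD cost to run a device continuously for a year at the given draw."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * RATE_USD_PER_KWH

pi_cost = annual_cost(3)       # Raspberry Pi at ~3 W
mini_pc_cost = annual_cost(8)  # used mini PC at ~8 W

print(f"Pi:      ${pi_cost:.2f}/yr")        # → Pi:      $3.94/yr
print(f"Mini PC: ${mini_pc_cost:.2f}/yr")   # → Mini PC: $10.51/yr
print(f"Delta:   ${mini_pc_cost - pi_cost:.2f}/yr")  # → Delta:   $6.57/yr
```

At those assumptions the mini PC costs only a few dollars more per year to run, which is the point being made: the extra watts barely register on a US power bill.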
But invariably, I've had the WiFi controller; I've migrated that off. That lives on an EC2 box in Ohio now. And I do wind up embracing cloud services when I want things to stay up and be consistently available, but for small stuff locally, I mean, I have an antenna on the roof doing an ADS-B receiver dance that's plugged into a Pi Zero. I have some backlogged stuff on this, but they've gotten expensive as alternatives have dropped in price significantly. But what I'm finding as I'm getting more into 3D printing and a lot of hobbyist maker tools out there, everything is built with the Raspberry Pi in mind; it has the mindshare. And yeah, I can get something with similar specs that's equivalent, but then I've got to do a whole bunch of other stuff as soon as it gets into controlling hardware via GPIO pins or whatnot. And I have to think about it very differently. Jeff: Yeah, and that's the tough thing. And that's the reason why Raspberry Pis, even though they're three years old, even though they're hard to get, still are fetching—on the used market—way more than the original MSRP. It's just crazy. But the reason for that is the Raspberry Pi organization. And there's two: there's the Raspberry Pi Foundation, whose goals are to increase educational computing and accessibility of computers for kids and learning and all that, and then there's the Raspberry Pi Trading Company that makes the Raspberry Pis. The Trading Company has engineers who sit there 24/7 working on the software, working on the kernel drivers, working on hardware bugs, listening to people on the forums and in GitHub and everywhere—and they're all English-speaking people there; they're over in the UK—and they manufacture their own boards. 
So, there's a lot of things on top of that. Even though they're using some Broadcom silicon that's a little bit locked down and not completely open-source like some other chips might be, there's a phone number you can call if you need support, there's a forum with activity where you can get help, and the software is supported. And there's a newer Linux kernel and the kernel is updated all the time. So, all those advantages mean you get a little package that will work, it'll sip two watts of power, sitting there 24/7. It's reliable hardware. There's so many people that use it that it's so well tested that almost any problem you could ever run into, someone else has, and there's a blog post or a forum post talking about it. And even though the hardware is not super powerful—it's three years old—you can add on a Coral TPU and do face recognition and object recognition. And throw in Frigate for Home Assistant to get notifications on your phone when your mom walks up to the door. There's so many things you can do with them and they're so flexible that they're still so valuable. I think that they really knocked it out of the park with that model, the Raspberry Pi 4, and the Compute Module 4, which is still impossible to get. I have not been able to buy one for two years now. Luckily, I bought 12 two-and-a-half years ago [laugh], otherwise I would be running out for all my projects that I do. Corey: Yeah. I got two at the moment and two empty slots in the Turing Pi 2, which I'll care more about if I can actually get the thing up and booted. But it presupposes you have a Windows computer or otherwise, ehh, watch this space; more coming. Great. Like, do I build a virtual machine on top of something else? It leads down the path super quickly of places I thought I'd escaped from. Jeff: Yeah, you know, outside of the Pi realm, that's the state of the communities. It's a lot of, like, figuring out your own things. 
I did a project—I don't know if you've heard of MrBeast—but we did a project for him that involves a hundred single-board computers. We couldn't find Raspberry Pis, so we had to use a different single-board computer that was available. And so, I bought an older one thinking, oh, this is, like, three or four years old—it's older than the Pi 4—and there must be enough support now. But still, there were, like, little rough edges everywhere I went, and we ended up making them work, but it took us probably an extra 30 to 40 hours of development work to get those things running the same way as a Raspberry Pi. And that's just the way of things. There's so much opportunity. If one of these Chinese manufacturers that makes most of these things decided, you know what? We're going to throw tons of money into building support for these things, get some English-speaking members on these forums to build up the community, all that stuff—I think they could have a shot at Raspberry Pi's giant portion of the market. But so far, I haven't really seen that happen. So far, they're spamming hardware. And it's like, the hardware is awesome. These chips are great if you know how to deal with them and how to get the software running and how to deal with Linux issues, but if you don't, then they're not great because you might not even get the thing to boot. Corey: I want to harken back to something you said a minute ago, where there's value in having a community around something, where you can see everyone else has already encountered a problem like this. I think that folks who weren't around for the rise of cloud have no real insight into how difficult it used to be just getting servers into racks and everything up, and okay, they're identical, and seven of them are working, but that eighth one isn't for some strange reason. And you spend four hours troubleshooting what turns out to be a bad cable or something not seated properly and it's awful. 
Cloud got away from a lot of that nonsense. But it's important—at least to me—to not be Captain Edgecase, where if you pick some new cloud provider and Google for how to set up a load balancer and no one's done it before you, that's not great. Whereas if I'm googling now in the AWS realm and no one has done the thing I'm trying to do, that should be something of a cautionary flag of maybe this isn't how most people go about approaching production. Really think twice about this. Jeff: Yep. Yeah, we ran into that on a project I was working on that was using Magento—which I don't know if anybody listening uses Magento, but it's not fun—and we ran into some things where it's like, “We're doing this, and it says that they do this on their official supported platform, but I don't know how, because the code just doesn't exist here.” So, we ran into some weird edge cases on AWS with some massive infrastructure for the databases, and I ran into scaling issues. But even there, there were forum posts in AWS here and there that had little nuggets that helped us to figure out a way to get around it. And like you say, that is a massive advantage for AWS. And we ran into an issue with, we were one of the first customers trying out the new Lambda functions for RDS—or I don't remember exactly what it was called initially—but we ended up not using that. But we ran into some of these issues and figured out we were the first customer running into this weird scaling thing when we had a certain size of database trying to use it with these Lambda calls. And eventually, they got those things solved, but with AWS, they've seen so many things and some other cloud providers haven't seen these things. 
So, when you have certain types of applications that need to scale in certain ways, that is so valuable, and the community of users, the ability to pull from that community when you need to hire somebody in an emergency—like, we need somebody to help us get this project done and we're having this issue—you can find somebody that is, like, okay, I know how to get you from Point A to Point B and get this project out the door. You can't do that on certain platforms. And open-source projects, too. We've always had that problem in Drupal. The amount of developers who are deep into Drupal to help with the hard problems is not vast, so the ones who can do that stuff, they're all hired off and paid a handsome sum. And if you have those kinds of problems you realize, I'm either going to need to pay a ton of money or we're just going to have to not do that thing that we wanted to do. And that's tough. Corey: What I've found, sort of across the board, has been that there's a lot of, I guess, open-source community ethos that has bled into a lot of this space, and I wanted to make sure that we have time to talk about this because I was incensed a while back when Red Hat decided, “Oh, you know that whole ten-year commitment on CentOS? That project that we acquired and are now basically stabbing in the face?”—disclosure: I used to be part of the CentOS project years ago when I was on network staff for the Freenode IRC network—then it was, “Oh yeah, we're just going to basically undermine our commitments to you and now you can pay us if you want to get that support there.” And that really set me off. It was nice to see you were right there as well, in almost lockstep with me, pointing out that this is terrible, just as far as breaking promises you've made to customers. Has your anger cooled any? Because mine hasn't. Jeff: It has not. My temper has cooled. My anger has not. I don't think that they get it. 
After all the backlash that they got after that, I don't think that the VP-level folks at Red Hat understand that this is already impacting them and will impact them much more in the future, because people like me and you, people who help other people build infrastructure and people who recommend operating systems and people who recommend patterns and things, we're just going to drop off using CentOS because it doesn't exist. Well, it does exist, and some other people are saying, “Oh, it's actually better to use this new CentOS, you know, Stream. Stream is amazing.” It's not. It's not the same thing. It's different. And— Corey: I used to work at a bank. That was not an option. I mean, granted, at the bank for the production systems it was always RHEL, but being able to spin up a pre-production environment without having to pay license fees on every VM. Yeah. Jeff: Yeah. And not only that, they did this announcement and framed it a certain way, and the community immediately saw through it. You know, I think that they're just angry about something, and whether it was a NASA contract with Rocky Linux, or whether it was something Oracle did, who knows, but it seems petty in retrospect, especially in comparison to the amount of backlash that came out of it. And I really don't think that they understand that the thing that they had with Red Hat Enterprise Linux is not a massive growth opportunity for Red Hat. It's, in some ways, a dying product; in terms of compared to using cloud stuff, it doesn't matter. You could use CoreOS, you could use NixOS, you could use anything, it doesn't really matter. For people like you and me, we just want to deploy our software. And if it's containers, it really doesn't matter. It's just the people in government or in certain organizations that have these rules that you have to use whatever, FIPS and all that kind of stuff. 
So, it's not like it's a hyper-growth opportunity for them.CentOS was, like, the only reason why all the software, especially on the open-source side, was compatible with Red Hat because we could use CentOS and it was easy and simple. They took that—well, they tried to take that away and everybody's like, “That's—what are you doing?” Like, I posted my blog post and I think that sparked off quite a bit of consternation, to the point where there was a lot of personal stuff going on. I basically said, “I'm not supporting Red Hat Enterprise Linux for any of my work anymore.” Like, “From this point forward, it's not supported.”I'll support OpenELA, I'll support Rocky Linux or Oracle Linux or whatever because I can get free versions that I don't have to sign into a portal and get a license and download the license and integrate it with my CI work. I'm an open-source developer. I'm not going to pay for stuff or use 16 free licenses. Or I was reached out to and they said, “We'll give you more licenses. We'll give you extra.” And it's like, that's not how this works. Like, I don't have to call Debian and Ubuntu and [laugh] I don't even have to call Oracle to get licenses. I can just download their software and run it.So, you know, I don't think they understood the fact that they had that. And the bigger problem for me was the two-layer approach to destroying all the trust that the community had. First was in, I think it was 2019 when they said—we're in the middle of CentOS 8's release cycle—they said, “We're dropping CentOS 8. It's going to be Stream now.” And everybody was up in arms.And then Rocky Linux and [unintelligible 00:27:52] climbed in and gave us what we wanted: basically, CentOS. So, we're all happy and we had a status quo, and Rocky Linux 9 and [unintelligible 00:28:00] Linux nine came out after Red Hat 9, and the world was a happy place. And then they just dumped this thing on us and it's like, two major release cycles in a row, they did it again. 
Like, I don't know what this guy's thinking, but in one of the interviews, one of the Red Hat representatives said, “Well, we wanted to do this early in Red Hat 9's release cycle because people haven't started migrating.” It's like, well, I already did all my automation upgrades for CI to get all my stuff working in Rocky Linux 9 which was compatible with Red Hat Enterprise Linux 9. Am I not one of the people that's important to you?Like, who's important to you? Is it only the people who pay you money or is it also the people that empower your operating system to be a premier Enterprise Linux operating system? So, I don't know. You can tell. My anger has not died down. The amount of temper that I have about it has definitely diminished because I realize I'm talking at a wall a lot of times, when I'm having conversations on Twitter, private conversations and email, things like that.Corey: People come to argue; they don't come to actually have a discussion.Jeff: Yeah. I think that they just, they don't see the community aspect of it. They just see the business aspect. And the business aspect, if they want to figure out ways that they can get more people to pay them for their software, then maybe they should provide more value and not just cut off value streams. It doesn't make sense to me from a long-term business perspective.From a short term, maybe there were some clients who said, “Oh, shoot. We need this thing stable. We're going to pay for some more licenses.” But the engineers that those places are going to start making plans of, like, how do we make this not happen again. And the way to not make that happen, again is to use, maybe Ubuntu or maybe [unintelligible 00:29:38] or something. Who knows? 
But it's not going to be increasing our spend with Red Hat.Corey: That's what I think a lot of companies are missing when it comes to community as well, where it's not just a place to go to get support for whatever it is you're doing and it's not a place [where 00:29:57] these companies view prospective customers. There's more to it than that. There has to be a social undercurrent on this. I look at the communities I spend time in and in some of them dating back long enough, I've made lifelong significant friendships out of those places, just through talking about our lives, in addition to whatever the community is built around. You have to make space for that, and companies don't seem to fully understand that.Jeff: Yeah, I think that there's this thing that a community has to provide value and monetizable value, but I don't think that you get open-source if you think that that's what it is. I think some people in corporate open-source think that corporate open-source is a value stream opportunity. It's a funnel, it's something that is going to bring you more customers—like you say—but they don't realize that it's a community. It's like a group of people. It's friends, it's people who want to make the world a better place, it's people who want to support your company by wearing your t-shirt to conferences, people want to put on your red fedora because it's cool. Like, it's all of that. And when you lose some of that, you lose what makes your product differentiated from all the other ones on the market.Corey: That's what gets missed. I think that there's a goodwill aspect of it. People who have used the technology and understand its pitfalls are likelier to adopt it. I mean, if you tell me to get a website up and running, I am going to build an architecture that resembles what I've run before on providers that I've run on before because I know what the failure modes look like; I know how to get things up and running. 
If I'm in a hurry, trying to get something out the door, I'm going to choose the devil that I know, on some level. Don't piss me off as a community member and incentivize me to change that estimation the next time I've got something to build. Well, that doesn't show up on this quarter's numbers. Well, we have so little visibility into how decisions get made at many companies that you'll never know that you have a detractor who's still salty about something you did five years ago, and that's the reason the bank decided not to, because that person called in their political favors to torpedo that deal and have a sweetheart offer from your competitor, et cetera and so on and so forth. It's hard to calculate the actual cost of alienating goodwill. But—

Jeff: Yeah.

Corey: I wish companies had a longer memory for these things.

Jeff: Yeah. I mean, and thinking about that, like, there was also the HashiCorp incident where they kind of torpedoed all developer goodwill with their Terraform and other—Terraform especially, but also other products. Like, I probably, through my book and through my blog posts and my GitHub examples, have brought a lot of people into the HashiCorp ecosystem through Vagrant use, and through Packer and things like that. At this point, because of the way that they treated the open-source community with the license change, a guy like me is not going to be enthusiastic about it anymore, and I'm going to—I already had started looking at alternatives for Vagrant because it doesn't mesh with modern infrastructure practices for local development as much, but now it's like that enthusiasm is completely gone. Like, I had that goodwill, like you said earlier, and now I don't have that goodwill, and I'm not going to spread that, I'm not going to advocate for them, I'm not going to wear their t-shirt [laugh], you know, when I go out and about, because it just doesn't feel as clean and cool and awesome to me as it did a month ago. And I don't know what the deal is.
It's partly the economy, money's drying up, things like that, but I don't understand how the people at the top can't see these things. Maybe it's just that their organization isn't set up to show the benefits from the engineers underneath. I know some of these engineers are, like, “Yeah, I'm sorry. This was dumb. I still work here because I get a paycheck, but you know, I can't say anything on social media, but thank you for saying what you did on Twitter.” Or X.

Corey: Yeah. It's nice being independent, where you don't really have to fear the, well, if I say this thing online, people might get mad at me and stop doing business with me or fire me. It's, well, yeah, I mean, I would have to say something pretty controversial to drive away every client and every sponsor I've got at this point. And I don't generally have that type of failure mode when I get it wrong. I really want to thank you for taking the time to talk with me. If people want to learn more, where's the best place for them to find you?

Jeff: Old school, my personal website, jeffgeerling.com. I link to everything from there; I have an About page with a link to every profile I've ever had, so check that out. It links to my books, my YouTube, all that kind of stuff.

Corey: There's something to be said for picking a place to contact you that will last the rest of your career, as opposed to, back in the olden days, my first email address was the one that my ISP gave me 25 years ago. I don't use that one anymore.

Jeff: Yep.

Corey: And having to tell everyone I corresponded with that it was changing was a pain in the butt. We'll definitely put a link to that one in the [show notes 00:34:44]. Thank you so much for taking the time to speak with me. I appreciate it.

Jeff: Yeah, thanks. Thanks so much for having me.

Corey: Jeff Geerling, YouTuber, author, content creator, and oh so very much more. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment that we will, of course, read [in action 00:35:13], just as soon as your payment of compute modules for Raspberries Pi shows up in a small unmarked bag.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
More and more companies that develop open-source software are moving away from the standard licensing models common in the open-source world. Instead, they are opting for a more commercial license. The software often remains open source, but organizations that want to use it must buy a license. On top of that, there is the controversy around RHEL. Some examples of parties that have chosen a more commercial licensing model are MongoDB, Elasticsearch, and, recently, HashiCorp. They are far from the only ones, though, and it is happening more and more often. The cause of this switch seems to come down simply to making money. It is apparently difficult for open-source companies to generate enough revenue from support, installation, and migration services for their software, which is why they feel forced to adopt a different licensing model. Is open-source software perhaps too good these days, making support unnecessary?

The RHEL controversy

Then there is also the so-called controversy around RHEL (Red Hat Enterprise Linux). Previously, anyone could access its source code and binaries, but Red Hat recently changed that: only enterprise customers still get access. With this move, Red Hat wants to cut the ground from under the feet of parties that offered a clone of RHEL under their own name with their own support package. It meant that AlmaLinux and Rocky Linux suddenly ran into trouble, as did Oracle Linux, which is a clone of RHEL. Red Hat's move caused quite a stir in the open-source world and dented the company's image. Competitor SUSE decided to announce its own Red Hat-compatible clone, but the various parties in the market soon found each other, and OpenELA, the Open Enterprise Linux Association, was born. It is a collaboration between Oracle, SUSE, and CIQ (the parent company of Rocky Linux).
OpenELA will deliver a Linux distribution that is backwards compatible with Red Hat and is also meant to offer long-term support. More on this in this episode of Techzine Talks.
Welcome to episode 221 of The Cloud Pod podcast - where the forecast is always cloudy! This week your hosts, Justin, Jonathan, Ryan, and Matthew look at some of the announcements from AWS Summit, as well as try to predict the future - probably incorrectly - about what's in store at Next 2023. Plus, we talk more about the Storm attack, SFTP connectors (and no, that isn't how you get to the Moscone Center for Next), Llama 2, Google Cloud Deploy, and more! Titles we almost went with this week: Now You Too Can Get Ignored by Google Support via Mobile App; The Tech Sector Apparently Believes Multi-Cloud is Great… We Hate You All; The Cloud Pod Now Wants All Your HIPAA Data; The Meta Llama is Spreading Everywhere; The Cloud Pod Recursively Deploys Deploy. A big thanks to this week's sponsor: Foghorn Consulting, which provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and cloud backlogs as soon as next week.
In recent weeks, no topic has been as hotly debated as access to the RHEL source code. Alongside numerous dissenting voices from the community, articles from competing companies were also a central part of the coverage. Time to take a step back and look at Red Hat's perspective once more. Together with Jan Wildeboer (EMEA Evangelist, with Red Hat for 18 years), we discuss the announcements, the motivations behind them, and the questions currently being asked most often. It is also important to put the history of the respective projects into context.
Coming up in this episode * The Catchup Episode (We've missed so much!) * The Red Hat Recap * Browser Watch...ing! * Some feedback, and a focus The Video Podcast (https://youtu.be/ZKm9vgJzAO8) https://youtu.be/ZKm9vgJzAO8 401 Audio Timestamps 0:00 Cold Open 2:16 The Gentoo Checkin 11:33 We Have a Lemmy! 19:24 Red Hat Recap 46:09 Browser Watch 1:05:21 Feedback 1:23:05 Community Focus: Linux Matters 1:27:03 App Focus: Jerboa & Memmy 1:34:48 Next Time: Debian 1:37:09 Stinger Banter Gentoo check in - Use the Handbook! (https://wiki.gentoo.org/wiki/Handbook:Main_Page) The wiki (https://wiki.gentoo.org/wiki/Main_Page) is just great in general. Lemmy (https://join-lemmy.org/) The Linux User Space Lemmy instance (https://lemmy.linuxuserspace.show/) feddit's community browser (https://browse.feddit.de/) Another Lemmy explorer (https://lemmyverse.net/communities) Announcements
In this summer episode, Guillaume, Emmanuel, and Arnaud go through the news from the start of the summer. Java, Rust, and Go on the languages side, Micronaut and Quarkus for the frameworks, but also WebGPU, agility, DDD, surveys, lots of tools, and above all artificial intelligence in every flavor (in databases, in cars…). Recorded on July 21, 2023. Episode download: LesCastCodeurs-Episode-298.mp3

News

Languages

The Go 1.21 release candidate supports WASM and WASI natively https://go.dev/blog/go1.21rc

StringBuilder or String concatenation? https://reneschwietzke.de/java/the-stringbuilder-advise-is-dead-or-isnt-it.html StringBuilder used to be the recommendation, notably because it created fewer objects. But the JVM has evolved, and the compiler or the JIT replaces concatenation with efficient code. A few small exceptions: cold code (e.g. at startup time) that is still interpreted can benefit from StringBuilder; another case is concatenation in loops, which the JIT may not be able to optimize; and the "fluent" StringBuilder style is more efficient (inlined?).
These rules do not change if objects are stringified in order to be concatenated.

GPT-4 is not a revolution https://thealgorithmicbridge.substack.com/p/gpt-4s-secret-has-been-revealed According to rumor, there is a lot of secrecy but no single 1-trillion-parameter model; rather, eight models of 220 billion parameters combined cleverly. Researchers were expecting a breakthrough, but it is an evolution and not particularly new; the method had already been implemented by researchers at Google (now at OpenAI). They held back the competition with these rumors of a breakthrough, but eight LLaMAs may perhaps be able to rival GPT-4.

Google's open source blog has an article on 5 myths (or not) about learning and using Rust https://opensource.googleblog.com/2023/06/rust-fact-vs-fiction-5-insights-from-googles-rust-journey-2022.html It takes more than 6 months to learn Rust: mostly false; a few weeks to 3-4 months max. The Rust compiler is not as fast as people would like: true! Unsafe code and interop are the biggest challenges: false; it is rather macros, ownership/borrowing, and asynchronous programming. Rust provides great compile error messages: true. Rust code is high quality: true.

InfoQ publishes a new guide on pattern matching for Java's switch https://www.infoq.com/articles/pattern-matching-for-switch/ Pattern matching supports all reference types. The article covers the null case; the use of "guarded" patterns with the when keyword; the importance of the order of the cases; how pattern matching interacts with a switch's default; the scope of pattern variables; one pattern per case label; a single match-all case per switch block; exhaustiveness of type coverage; the use of generics; and error handling with MatchException.

Libraries

Micronaut 4 is out https://micronaut.io/2023/07/14/micronaut-framework-4-0-0-released/ Minimum languages: Java 17, Groovy 4, and Kotlin 1.8
Support for the latest GraalVM version. Uses the GraalVM Reachability Metadata Repository to ease the use of Native Image. Gradle 8. A new Expression Language, evaluated at compile time, not at runtime (for security and ahead-of-time-compilation reasons). Support for virtual threads. A new HTTP layer, eliminating reactive stack frames when the reactive mode is not used. Experimental support for io_uring and HTTP/3. Annotation-based filters. The HTTP client now uses the Java HTTP Client. Micronaut client and server generation from an OpenAPI file. YAML support no longer uses the SnakeYAML dependency (which had security issues). The transition to Jakarta is complete. And many other module updates. Coverage by InfoQ https://www.infoq.com/news/2023/07/micronaut-brings-virtual-thread/

Quarkus 3.2 and LTS https://quarkus.io/blog/quarkus-3-2-0-final-released/ https://quarkus.io/blog/quarkus-3-1-0-final-released/ https://quarkus.io/blog/lts-releases/

Infrastructure

Red Hat shares the sources of its distribution through its Customer Portal, impacting the community that builds on it https://almalinux.org/blog/impact-of-rhel-changes/ Red Hat announced another massive change that affects all rebuilds and forks of Red Hat Enterprise Linux. Going forward, Red Hat will publish the source code for RHEL RPMs only behind its customer portal. Since all RHEL clones depend on the published sources, this once again disrupts the entire Red Hat ecosystem.
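The features from the InfoQ pattern-matching guide summarized earlier (guarded patterns with `when`, a dedicated null case, case ordering, exhaustiveness) can be sketched in a few lines of Java 21. The `Shape` hierarchy and the `describe` method here are invented purely for illustration, not taken from the article:

```java
// Minimal sketch of Java 21 pattern matching for switch.
// Sealed hierarchy lets the compiler check exhaustiveness.
sealed interface Shape permits Circle, Square {}
record Circle(double radius) implements Shape {}
record Square(double side) implements Shape {}

public class PatternDemo {
    static String describe(Shape s) {
        return switch (s) {
            case null -> "no shape";                            // null gets its own case
            case Circle c when c.radius() > 10 -> "big circle"; // guarded pattern: must come first
            case Circle c -> "circle";                          // order matters: unguarded after guarded
            case Square sq -> "square";                         // all sealed subtypes covered: no default needed
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Circle(42))); // big circle
        System.out.println(describe(new Square(2)));  // square
        System.out.println(describe(null));           // no shape
    }
}
```

Note how the guarded `Circle` case has to precede the unguarded one; reversing them is a compile-time dominance error, which is the "importance of the order of the cases" point from the guide.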
An analysis of Red Hat's choice on the distribution of the RHEL source code https://dissociatedpress.net/2023/06/24/red-hat-and-the-clone-wars/

A response from Red Hat to the fires started by the announcement that the RHEL sources would no longer be distributed publicly https://www.redhat.com/en/blog/red-hats-commitment-open-source-response-gitcentosorg-changes and a link to one of those fires, from a prominent person in the Ansible community https://www.jeffgeerling.com/blog/2023/im-done-red-hat-enterprise-linux

Oracle asks to keep Linux open and free https://www.oracle.com/news/announcement/blog/keep-linux-open-and-free-2023-07-10/ Following the IBM/Red Hat announcement, Oracle asks to keep Linux open and free. IBM does not want to publish the RHEL code because it has to pay its engineers, even though Red Hat was able to sustain its business model for years. The article revisits CentOS, which IBM "killed" in 2020. Oracle continues its efforts to keep Linux open and free. Oracle Linux will remain RHEL-compatible through version 9.2; after that it will be difficult to maintain compatibility. Oracle is hiring Linux developers. Oracle invites IBM to take Oracle's downstream and distribute it.

SUSE forks RHEL https://www.suse.com/news/SUSE-Preserves-Choice-in-Enterprise-Linux/ SUSE is the company behind Rancher, NeuVector, and SUSE Linux Enterprise (SLE). It announces a fork of RHEL, with $10M of investment in the project over the coming years and guaranteed compatibility with RHEL and CentOS.

Web

Google sells its domain name service to Squarespace https://www.reddit.com/r/webdev/comments/14agag3/squarespace_acquires_google_domains/ And it wasn't free, so we weren't supposed to be the product :wink: Squarespace is an American company specialized in website building. Squarespace has long been a Google Workspace reseller. The sale should close in Q3 2023.

A short introduction to WebGPU in French
https://blog.octo.com/connaissez-vous-webgpu/

Data

With the rise of Large Language Models, vector databases are increasingly discussed, for storing "embeddings" (vectors of floating-point numbers semantically representing text, or even images). One article explains that vectors are the new JSON in relational databases like PostgreSQL https://jkatz05.com/post/postgres/vectors-json-postgresql/ The article notably covers the pgvector extension, a PostgreSQL extension that adds support for vectors as a column type https://github.com/pgvector/pgvector

Google Cloud announces the integration of this vector extension into Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL https://cloud.google.com/blog/products/databases/announcing-vector-support-in-postgresql-services-to-power-ai-enabled-applications There is also a video, a Colab notebook, and a more technically detailed article using LangChain https://cloud.google.com/blog/products/databases/using-pgvector-llms-and-langchain-with-google-cloud-databases

We also see Elastic improving Lucene to use SIMD instructions to speed up vector computations (dot product, Euclidean distance, cosine similarity) https://www.elastic.co/fr/blog/accelerating-vector-search-simd-instructions

Tooling

The StackOverflow 2023 survey https://survey.stackoverflow.co/2023/ The survey polled 90,000 developers in 185 countries. Developers are more numerous (+2%) than last year to work on site (16% on site, 41% remote, 42% hybrid). Developers are also increasingly using artificial intelligence tools, with 70% of them saying they use them (44%) or plan to use them (25%) in their work. The most popular programming languages are still JavaScript, Python, and HTML/CSS.
The most popular web frameworks are Node, React, and jQuery. The most popular databases are PostgreSQL, MySQL, and SQLite. The most popular operating systems are Windows, then macOS and Linux. The most popular IDEs are Visual Studio Code, Visual Studio, and IntelliJ IDEA.

The different kinds of motions in Vim https://www.barbarianmeetscoding.com/boost-your-coding-fu-with-vscode-and-vim/moving-blazingly-fast-with-the-core-vim-motions/

JetBrains also joins the trend of AI assistants in the IDE https://blog.jetbrains.com/idea/2023/06/ai-assistant-in-jetbrains-ides/ An integration with OpenAI, but also smaller JetBrains-specific LLMs. An integrated chat to talk to the assistant, plus the ability to insert code snippets where the cursor is. You can select code and ask the assistant to explain what that piece of code does, but also to suggest a refactoring, or to fix potential problems. You can ask it to generate the JavaDoc of a method, a class, and so on, or to suggest a method name (based on its content). Commit message generation. A JetBrains AI account is required for access.

Some more or less well-known macOS commands https://saurabhs.org/advanced-macos-commands caffeinate — to keep the Mac awake; pbcopy / pbpaste — to interact with the clipboard; networkQuality — to measure internet speed; sips — to manipulate/resize images; textutil — to convert Word, text, and HTML files; screencapture — to take a screenshot; say — to give a voice to your commands.

The ArgoCD community survey https://blog.argoproj.io/cncf-argo-cd-rollouts-2023-user-survey-results-514aa21c21df

An open-source, cross-platform API client for GraphQL, REST, WebSockets, Server-sent events, and gRPC https://github.com/Kong/insomnia

Architecture

Modernizing architecture with domain-driven discovery https://www.infoq.com/articles/architecture-modernization-domain-driven-discovery/?utm_source=twitter&utm_medium=link&utm_campaign=calendar A very detailed article on modernizing your architecture using a Domain-Driven Discovery approach, done in five steps: Frame the problem – clarify the problem you are solving, the people affected, the desired outcomes, and the solution constraints. Analyze the current state – explore the existing business processes and system architecture to establish a baseline for improvement. Explore the future state – design a modernized architecture based on bounded contexts, set strategic priorities, evaluate options, and create future-state solutions. Build a roadmap – create a plan to modernize the architecture over time based on workstreams or desired outcomes.

Recently, Sfeir launched its development blog at https://www.sfeir.dev/ with plenty of technical articles on many topics: front end, back end, cloud, data, AI/ML, mobile; also trends and success stories. For example, among the latest articles: Alan Turing, Local Storage in JavaScript, preparing React certifications, and the impact of cybersecurity on the cloud.

Demis Hassabis announces he is working on an AI named Gemini that will surpass ChatGPT https://www.wired.com/story/google-deepmind-demis-hassabis-chatgpt/ Demis Hassabis, CEO of Google DeepMind and creator of AlphaGo and AlphaFold, is working on an AI named Gemini that would surpass OpenAI's ChatGPT. Similar to GPT-4, but with techniques taken from AlphaGo. Still in development; it will take several more months. A replacement for Bard?
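As a toy illustration of the vector operations mentioned in the Data section above (dot product, Euclidean distance, cosine similarity over float "embeddings"): the 3-dimensional vectors below are made up for the example; real embeddings have hundreds or thousands of dimensions, and libraries like Lucene vectorize these loops with SIMD.

```java
// Naive scalar versions of the three similarity measures used in vector search.
public class VectorDemo {
    static double dot(float[] a, float[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i]; // sum of pairwise products
        return s;
    }

    static double euclidean(float[] a, float[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            s += d * d; // squared differences
        }
        return Math.sqrt(s);
    }

    static double cosine(float[] a, float[] b) {
        // dot product normalized by the two vector lengths; 1.0 means same direction
        return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
    }

    public static void main(String[] args) {
        float[] cat = {1f, 0f, 1f};
        float[] kitten = {0.9f, 0.1f, 1f}; // semantically close, so high cosine similarity
        System.out.printf("cosine=%.3f euclidean=%.3f%n",
                cosine(cat, kitten), euclidean(cat, kitten));
    }
}
```

A vector index (pgvector, Lucene) essentially ranks stored rows by one of these measures against a query embedding; the SIMD work at Elastic speeds up exactly these inner loops.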
Methodologies

Approaching agility through individuals' past (development) traumas https://www.infoq.com/articles/trauma-informed-agile/?utm_campaign=infoq_content&utm_source=twitter&utm_medium=feed&utm_term=culture-methods We all carry development trauma that makes it difficult to collaborate with others, a crucial part of working in agile software development. Leading in a trauma-informed way is not practicing unsolicited psychotherapy, and it does not excuse destructive behaviors without addressing them. Being more trauma-aware in your leadership can help everyone act more maturely and be more cognitively available, especially in emotionally difficult situations. In trauma-informed workplaces, people pay more attention to their physical and emotional state. They also rely more on the power of intention, set goals in a less manipulative way, and are able to be empathetic without taking on other people's problems.
Law, society, and organization

Mercedes is adding artificial intelligence to its cars https://azure.microsoft.com/en-us/blog/mercedes-benz-enhances-drivers-experience-with-azure-openai-service/ A three-month beta test program for now. "Hey Mercedes" voice assistance. It lets you talk to the car to find your way, put together a recipe, or simply have a conversation. They are working on plugins to book a restaurant or buy movie tickets.

Free software vs open source in the context of artificial intelligence, by Sacha Labourey https://medium.com/@sachalabourey/ai-free-software-is-essential-to-save-humanity-86b08c3d4777 There is a lot of talk about AI and open source, but the dimension of end-user control is missing. Stallman created the FSF out of fear of the notion of humans augmented by software controlled by others (brain implants, etc.), hence the GPL and its virality, which propagates the ability to see and modify the code you run. In the AI debate, it is not only open source (breaking the oligopoly) but also free software that is at stake.

The madness of the European Cyber Resilience Act (CRA) https://news.apache.org/foundation/entry/save-open-source-the-impending-tragedy-of-the-cyber-resilience-act Within the EU, the Cyber Resilience Act (CRA) is now making its way through the legislative process (with a key vote due on July 19, 2023). This law will apply to a wide range of software (and hardware with embedded software) in the EU. The intention of this regulation is good (and arguably long overdue): to make software much more secure. The CRA takes a binary yes/no approach and treats everyone the same. The CRA would regulate open-source projects unless they have "a fully decentralized development model".
But OSS models are complex mixtures of pure OSS and software vendors. Commercial companies and open-source projects will have to be much more careful about which participants can work on the code, what funding they take, and which patches they can accept. Some of the obligations are practically impossible to meet, for example the obligation to "deliver a product without known exploitable vulnerabilities". The CRA requires disclosure of serious unpatched and exploited vulnerabilities to ENISA (an EU institution) within a timeframe measured in hours, before they are fixed (completely contrary to good security practice). Once again, a good idea originally, but implemented so badly that it risks doing a lot of damage.

Octave Klaba, with his brother Miro and the Caisse des Dépôts, is finalizing the creation of Synfonium, which will now acquire 100% of Qwant and 100% of Shadow. Synfonium is 75% owned by Jezby Venture & Deep Code and 25% by the CDC. https://twitter.com/i/web/status/1673555414938427392 One of Synfonium's roles is to create the critical mass of B2C and B2B users and customers who will be able to use all these free and paid services. There you will find the search engine, the free services, the collaborative suite, the social login, but also the services of their tech partners.
The goal is to create an EU SaaS cloud platform that respects our European values and laws.

Yann LeCun: "Artificial intelligence will amplify human intelligence" https://www.europe1.fr/emissions/linterview-politique-dimitri-pavlenko/yann-lecun-li[…]gence-artificielle-va-amplifier-lintelligence-humaine-4189120

Conferences

The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors: September 2-3, 2023: SRE France SummerCamp - Chambéry (France); September 6, 2023: Cloud Alpes - Lyon (France); September 8, 2023: JUG Summer Camp - La Rochelle (France); September 14, 2023: Cloud Sud - Remote / Toulouse (France); September 18, 2023: Agile Tour Montpellier - Montpellier (France); September 19-20, 2023: Agile en Seine - Paris (France); September 19, 2023: Salon de la Data Nantes - Nantes (France) & Online; September 21-22, 2023: API Platform Conference - Lille (France) & Online; September 22, 2023: Agile Tour Sophia Antipolis - Valbonne (France); September 25-26, 2023: BIG DATA & AI PARIS 2023 - Paris (France); September 28-30, 2023: Paris Web - Paris (France); October 2-6, 2023: Devoxx Belgium - Antwerp (Belgium); October 6, 2023: DevFest Perros-Guirec - Perros-Guirec (France); October 10, 2023: ParisTestConf - Paris (France); October 11-13, 2023: Devoxx Morocco - Agadir (Morocco); October 12, 2023: Cloud Nord - Lille (France); October 12-13, 2023: Volcamp 2023 - Clermont-Ferrand (France); October 12-13, 2023: Forum PHP 2023 - Marne-la-Vallée (France); October 19-20, 2023: DevFest Nantes - Nantes (France); October 19-20, 2023: Agile Tour Rennes - Rennes (France); October 26, 2023: Codeurs en Seine - Rouen (France); October 25-27, 2023: ScalaIO - Paris (France); October 26-27, 2023: Agile Tour Bordeaux - Bordeaux (France); October 26-29, 2023: SoCraTes-FR - Orange (France); November 10, 2023: BDX I/O - Bordeaux (France); November 15, 2023: DevFest Strasbourg - Strasbourg (France); November 16, 2023: DevFest Toulouse - Toulouse (France); November 23, 2023: DevOps D-Day #8 - Marseille (France); November 30, 2023: PrestaShop Developer Conference - Paris (France); November 30, 2023: WHO run the Tech - Rennes (France); December 6-7, 2023: Open Source Experience - Paris (France); December 7, 2023: Agile Tour Aix-Marseille - Gardanne (France); December 8, 2023: DevFest Dijon - Dijon (France); December 7-8, 2023: TechRocks Summit - Paris (France)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs Contact us via Twitter https://twitter.com/lescastcodeurs Submit a crowdcast or a crowdquestion. Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/
OpenAI's window to build their moat is closing, but they have a powerful friend stepping up to help seal the deal. Plus, our reaction to Oracle's very spicy response to Red Hat.
The Linux world is abuzz with controversy as IBM Red Hat puts the source code for Red Hat Enterprise Linux (RHEL) behind a paywall, leaving CentOS Stream as the only accessible option. Meanwhile, Oracle emphasizes its commitment to Linux freedom, offering open access to binaries and source code for their RHEL-compatible distribution, Oracle Linux. On the other hand, SUSE takes a bold step by forking RHEL and investing millions in developing their own RHEL-compatible distribution, free from restrictions. The battle for Linux supremacy is heating up, with each company vying for dominance and championing their respective visions of openness and innovation. Time Stamps: 0:00 - Welcome to the Rundown 0:39 - Intel Shucks NUC Products 7:18 - Kentik Announces Azure Observability 10:18 - El Capitan Reporting for Duty 15:39 - Microsoft Issues Patch Targeting Malicious Drivers 18:53 - InfluxData Sees Influx of Angry Customers 23:01 - Vector Capital Acquires Riverbed Technology 26:52 - Oracle Wants Linux to Remain Open and Free 31:34 - SUSE Working On Another Red Hat 35:21 - The Battle Over Red Hat Enterprise Linux 39:11 - The Weeks Ahead 40:56 - Thanks for Watching Follow our Hosts on Social Media Tom Hollingsworth: https://www.twitter.com/NetworkingNerd Stephen Foskett: https://www.twitter.com/SFoskett Follow Gestalt IT Website: https://www.GestaltIT.com/ Twitter: https://www.twitter.com/GestaltIT LinkedIn: https://www.linkedin.com/company/Gestalt-IT Tags: #Rundown, @Intel, @IntelBusiness, @KentikInc, @Azure, @Microsoft, #Observability, @InfluxDB, @Riverbed, @Capital_Vector, @RedHat, @Oracle, @OracleCloud, #Linux, #OpenSource,
In this episode, Lois Houston and Nikita Abraham are joined by Autumn Black to discuss MySQL Database, a fully-managed database service powered by the integrated HeatWave in-memory query accelerator. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Twitter: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Deepak Modi, Ranbir Singh, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript:

00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. Hello and welcome to the Oracle University Podcast. You're listening to our second season, Oracle Database Made Easy. I'm Lois Houston, Director of Product Innovation and Go to Market Programs with Oracle University.

00:39 And with me is Nikita Abraham, Principal Technical Editor. Hi, everyone. In our last episode, we had a really fascinating conversation about Oracle Machine Learning with Cloud Engineer Nick Commisso. Do remember to catch that episode if you missed it. Today, we have with us Autumn Black, who's an Oracle Database Specialist. Autumn is going to take us through MySQL, the free version and the Enterprise Edition, and MySQL Data Service.

01:08 We're also going to ask her about HeatWave. So let's get started. Hi, Autumn. So tell me, why is MySQL such a popular choice for developers? MySQL is the number one open-source database and the second most popular database overall after the Oracle Database.
According to a Stack Overflow survey, MySQL has been for a long time and remains the number one choice for developers, primarily because of its ease of use, reliability, and performance. 00;01;39;17 - 00;02;08;22 And it's also big with companies? MySQL is used by the world's most innovative companies. This includes Twitter, Facebook, Netflix, and Uber. It is also used by students and small companies. There are different versions of MySQL, right? What are the main differences between them when it comes to security, data recovery, and support? MySQL comes in two flavors: free version or paid version. 00;02;08;24 - 00;02;45;05 MySQL Community, the free version, contains the basic components for handling data storage. Just download it, install it, and you're ready to go. But remember, free has costs. That stored data is not exactly secure and data recovery is not easy and sometimes impossible. And there is no such thing as free MySQL Community support. This is why MySQL Enterprise Edition was created, to provide all of those missing important pieces: high availability, security, and Oracle support from the people who build MySQL. 00;02;45;10 - 00;03;09;24 You said MySQL is open source and can be easily downloaded and run. Does it run on-premises or in the cloud? MySQL runs on a local computer, company's data center, or in the cloud. Autumn, can we talk more about MySQL in the cloud? Today, MySQL can be found in Amazon RDS and Aurora, Google Cloud SQL, and Microsoft Azure Database for MySQL. 00;03;09;27 - 00;03;35;23 They all offer a cloud-managed version of MySQL Community Edition with all of its limitations. These MySQL cloud services are expensive and it's not easy to move data away from their cloud. And most important of all, they do not include the MySQL Enterprise Edition advanced features and tools. And they are not supported by the Oracle MySQL experts. 
00;03;35;25 - 00;04;07;03 So why is MySQL Database Service in Oracle Cloud Infrastructure better than other MySQL cloud offerings? How does it help data admins and developers? MySQL Database Service in Oracle Cloud Infrastructure is the only MySQL database service built on MySQL Enterprise Edition and 100% built, managed, and supported by the MySQL team. Let's focus on the three major categories that make MySQL Database Service better than the other MySQL cloud offerings: ease of use, security, and enterprise readiness. 00;04;07;03 - 00;04;44;24 MySQL DBAs tend to be overloaded with mundane database administration tasks. They're responsible for many databases, their performance, security, availability, and more. It is difficult for them to focus on innovation and on addressing the demands of lines of business. MySQL is fully managed on OCI. MySQL Database Service automates all those time-consuming tasks so they can improve productivity and focus on higher value tasks. 00;04;44;26 - 00;05;07;13 Developers can quickly get all the latest features directly from the MySQL team to deliver new modern apps. They don't get that on other clouds that rely on outdated or forked versions of MySQL. Developers can use the MySQL Document Store to mix and match SQL and NoSQL content in the same database as well as the same application. 00;05;07;19 - 00;05;30;26 Yes. And we're going to talk about MySQL Document Store in a lot more detail in two weeks, so don't forget to tune in to that episode. Coming back to this, you spoke about how MySQL Database Service or MDS on OCI is easy to use. What about its security? MDS security first means it is built on Gen 2 cloud infrastructure. 00;05;30;28 - 00;05;57;13 Data is encrypted for privacy. Data is on OCI block volume. So what does this Gen 2 cloud infrastructure offer? Is it more secure? Oracle Cloud is secure by design and architected very differently from the Gen 1 clouds of our competitors. 
Gen 2 provides maximum isolation and protection. That means Oracle cannot see customer data and users cannot access our cloud control computer. 00;05;57;15 - 00;06;27;09 Gen 2 architecture allows us to offer superior performance on our compute objects. Finally, Oracle Cloud is open. Customers can run Oracle software, third-party options, open source, whatever you choose without modifications, trade-offs, or lock-ins. Just to dive a little deeper into this, what kind of security features does MySQL Database Service offer to protect data? Data security has become a top priority for all organizations. 00;06;27;12 - 00;06;55;17 MySQL Database Service can help you protect your data against external attacks, as well as internal malicious users with a range of advanced security features. Those advanced security features can also help you meet industry and regulatory compliance requirements, including GDPR, PCI, and HIPAA. When a security vulnerability is discovered, you'll get the fix directly from the MySQL team, from the team that actually develops MySQL. 00;06;55;19 - 00;07;22;16 I want to talk about MySQL Enterprise Edition that you brought up earlier. Can you tell us a little more about it? MySQL Database Service is the only public cloud service built on MySQL Enterprise Edition, which includes 24/7 support from the team that actually builds MySQL, at no additional cost. All of the other cloud vendors are using the Community Edition of MySQL, so they lack the Enterprise Edition features and tools. 00;07;22;22 - 00;07;53;24 What are some of the default features that are available in MySQL Database Service? MySQL Enterprise scalability, also known as the thread pool plugin, data-at-rest encryption, native backup, and OCI built-in native monitoring. You can also install MySQL Enterprise Monitor to monitor MySQL Database Service remotely.
MySQL works well with your existing Oracle investments like Oracle Data Integrator, Oracle Analytics Cloud, Oracle GoldenGate, and more. 00;07;53;27 - 00;08;17;20 MySQL Database Service customers can easily use Docker and Kubernetes for DevOps operations. So how much of this is managed by the MySQL team and how much is the responsibility of the user? MySQL Database Service is a fully managed database service. A MySQL Database Service user is responsible for logical schema modeling, query design and optimization, and defining data access and retention policies. 00;08;17;22 - 00;08;44;26 The MySQL team is responsible for providing automation for operating system installation, database and OS patching, including security patches, backup, and recovery. The system backs up the data for you, but in an emergency, you can restore it to a new instance with a click. Monitoring and log handling. Security with advanced options available in MySQL Enterprise Edition. 00;08;44;28 - 00;09;01;18 And of course, maintaining the data center for you. To use MDS, users must have an OCI tenancy and a compartment, and belong to a group with the required policies. 00;09;01;21 - 00;09;28;28 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You'll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all of which is available free to subscribers. So get going. Pick a course of your choice, get certified, join the Oracle University Learning Community, and network with your peers. If you're already an Oracle MyLearn user, go to MyLearn to begin your journey. 00;09;29;03 - 00;09;40;24 If you have not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started. 00;09;40;27 - 00;10;05;20 Welcome back! Autumn, tell us about the system architecture of MySQL Database Service. A database system is a logical container for the MySQL instance.
It provides an interface enabling management of tasks, such as provisioning, backup and restore, monitoring, and so on. It also provides a read and write endpoint, enabling you to connect to the MySQL instance using the standard protocols. 00;10;05;28 - 00;10;31;27 And what components does a MySQL Database Service DB system consist of? A compute instance, an Oracle Linux operating system, the latest version of MySQL Server Enterprise Edition, a virtual network interface card, VNIC, that attaches the DB system to a subnet of the virtual cloud network, and network-attached high-performance block storage. Is there a way to monitor how the MySQL Database Service is performing? 00;10;31;29 - 00;10;59;29 You can monitor the health, capacity, and performance of your Oracle Cloud Infrastructure MySQL Database Service resources by using metrics, alarms, and notifications. The MySQL Database Service metrics enable you to measure useful quantitative data about your MySQL databases such as current connection information, statement activity and latency, host CPU, memory, and disk I/O utilization, and so on. 00;11;00;03 - 00;11;23;15 You can use metrics data to diagnose and troubleshoot problems with MySQL databases. What should I keep in mind about managing the MySQL database? Stopping a MySQL Database Service system stops billing for OCPUs, but you also cannot connect to the DB system. During MDS automatic update, the operating system is upgraded along with patching of the MySQL server. 00;11;23;17 - 00;11;49;15 Metrics are used to measure useful data about the MySQL Database Service system. You turn on automatic backups by updating the MDS configuration to enable them. MDS backups can be removed by using the details pages in OCI and clicking Delete. Thanks for that detailed explanation on MySQL, Autumn. Can you also touch upon MySQL HeatWave? Why would you use it over traditional methods of running analytics on MySQL data?
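The metrics Autumn mentions lend themselves to simple threshold alarms. As a rough illustration only - the threshold, window, and function name below are invented for this sketch and are not OCI alarm semantics:

```python
# Sketch: evaluate a CPU-utilization alarm over MDS-style metric samples.
# Thresholds and the sustained-sample window are illustrative assumptions.

def should_alarm(samples, threshold=90.0, sustained=3):
    """Fire when `sustained` consecutive samples exceed `threshold` percent."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= sustained:
            return True
    return False

print(should_alarm([85, 92, 95, 97, 60]))  # True: three consecutive samples over 90%
print(should_alarm([85, 92, 60, 95, 97]))  # False: never three in a row
```

A real alarm would be configured in the OCI console or API against the published MDS metrics rather than computed client-side; the sketch only shows the shape of the rule.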
00;11;49;18 - 00;12;18;01 Many organizations choose MySQL to store their valuable enterprise data. MySQL is optimized for Online Transaction Processing, OLTP, but it is not designed for Online Analytic Processing, OLAP. As a result, organizations that need to efficiently run analytics on data stored in MySQL database move their data to another database, such as Amazon Redshift, to run analytic applications. 00;12;18;04 - 00;12;41;22 MySQL HeatWave is designed to enable customers to run analytics on data that is stored in MySQL database without moving data to another database. What are the key features and components of HeatWave? HeatWave is built on an innovative in-memory analytics engine that is architected for scalability and performance, and is optimized for Oracle Cloud Infrastructure, OCI. 00;12;41;24 - 00;13;05;29 It is enabled when you add a HeatWave cluster to a MySQL database system. A HeatWave cluster comprises a MySQL DB system node and two or more HeatWave nodes. The MySQL DB system node includes a plugin that is responsible for cluster management, loading data into the HeatWave cluster, query scheduling, and returning query results to the MySQL database system. 00;13;06;02 - 00;13;29;15 The HeatWave nodes store data in memory and process analytics queries. Each HeatWave node contains an instance of the HeatWave engine. The number of HeatWave nodes required depends on the size of your data and the amount of compression that is achieved when loading the data into the HeatWave cluster. Various aspects of HeatWave use machine-learning-driven automation that helps to reduce database administrative costs. 00;13;29;18 - 00;13;52;11 Thanks, Autumn, for joining us today. We're looking forward to having you again next week to talk to us about Oracle NoSQL Database Cloud Service. To learn more about MySQL Database Service, head over to mylearn.oracle.com and look for the Oracle Cloud Data Management Foundations Workshop.
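The sizing rule described above - node count depends on data size and the compression achieved at load time - can be sketched as a back-of-the-envelope estimator. The per-node capacity and default compression ratio below are placeholder assumptions for illustration, not Oracle figures:

```python
import math

# Sketch: estimate HeatWave node count from data size and compression ratio.
# NODE_CAPACITY_GB is an assumed in-memory capacity per node, not a real limit.
NODE_CAPACITY_GB = 400

def estimate_nodes(data_size_gb, compression_ratio=2.0, min_nodes=2):
    """Clusters comprise two or more HeatWave nodes, per the transcript."""
    compressed = data_size_gb / compression_ratio
    return max(min_nodes, math.ceil(compressed / NODE_CAPACITY_GB))

print(estimate_nodes(100))   # small data set still needs the two-node minimum
print(estimate_nodes(4000))  # 4000 GB / 2.0 = 2000 GB compressed -> 5 nodes
```

In practice HeatWave's own machine-learning-driven automation (Autopilot) recommends the node count after sampling your data, so a calculation like this is only a rough planning aid.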
Until next time, this is Nikita Abraham and Lois Houston signing off. 00;13;52;14 - 00;16;33;05 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Welcome to another episode of Diocast. In today's episode we talk about a subject that has generated a lot of controversy in the Linux community: the changes announced by Red Hat to the way it distributes the source code of Red Hat Enterprise Linux, or RHEL, one of the company's main products. For those who don't know, Red Hat is one of the largest free software companies in the world, and RHEL is an operating system aimed at servers, offering stability, security, and long-term support. Many companies use RHEL as the basis for their services and applications, and pay a license to have access to Red Hat's support. What happened is that, on June 21, Red Hat announced it would change how it makes the RHEL source code available, making it harder to create derivative distributions such as Rocky Linux, AlmaLinux, and Oracle Linux. These distributions are called RHEL clones, since they are practically identical to the original system, but without the trademark and with much lower service costs (when there are any). The change caused great agitation in the Linux community, since many institutions and developers depend on these clones for access to a reliable operating system compatible with RHEL, without necessarily being able to pay for it. Beyond that, the change was seen as a way for Red Hat to suffocate the clones and try to force users to migrate to its product. At least in the short term, it seems things are not going the way they expected. In this episode, we discuss our view of the possible motives behind Red Hat's decision, the consequences for users and for the developers of the RHEL clones, and the reactions of the projects involved. Is Red Hat being unethical, or just defending its interests? Will the RHEL clones manage to adapt to the new situation, or will they lose ground in the market?
--- Supporters of this episode ✅ Diostore is running a spectacular promotion! We are celebrating Free Shipping Week: on purchases of R$49.90 or more you get free shipping to all regions of Brazil! Visit diostore.com.br now and take advantage! ✅ StreamYard is a virtual studio that lets you run professional live streams, interacting with your guests and your audience on the main social networks: https://streamyard.com/?fpr=diolinux --- Leave your comment on the episode post to be read on the next show. https://diolinux.com.br/podcast/red-hat-e-a-sua-cartada.html
In this episode, hosts Lois Houston and Nikita Abraham are joined once again by Alex Bouchereau to discuss how you can use Oracle Database Cloud Service to deploy Oracle Databases in the cloud. They also talk through the fundamentals of Oracle Cloud Infrastructure database system instances, including bare metal and virtual machine shapes and their storage architecture. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Twitter: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Deepak Modi, Ranbir Singh, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: http://traffic.libsyn.com/oracleuniversitypodcast/Oracle_University_Podcast_S02_EP04.mp3 00;00;00;00 - 00;00;38;21 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Product Innovation and Go to Market Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. 00;00;38;22 - 00;01;07;29 Hi, everyone. In our last episode, we discussed Oracle Exadata Cloud Service with our Oracle Database Specialist Alex Bouchereau. If you missed that episode, please remember to go back and give it a listen. We're so excited to have Alex with us again and today she's going to talk about the fundamentals of OCI database system instances on Oracle Cloud Infrastructure. We'll also ask her about the DB system for bare metal and virtual machine shapes. 00;01;08;02 - 00;01;40;22 Hi, Alex. Can you tell us about the Oracle Database Cloud Service? 
Oracle Database Cloud Service provides you the ability to deploy Oracle databases to the cloud. You can deploy Enterprise Edition or Standard Edition 2 and any database version from 11.2 and later. You have the option to deploy using virtual machine or bare metal shapes. Database Cloud Service VM and bare metal DB systems are deployed in Oracle Cloud Infrastructure regions around the world. 00;01;40;24 - 00;02;12;01 With Database Cloud Service, you manage the database instance, including provisioning, patching, backup, and disaster recovery using OCI cloud automation tools, such as the OCI console, CLI, and API. There is also an SDK that supports a number of different languages. Using VM shapes, you can deploy Real Application Clusters and scale your storage requirements. Walk us through the various Oracle database editions. You know, from Standard Edition 2 to Enterprise Edition Extreme Performance. 00;02;12;03 - 00;02;37;15 What are the key differences in terms of features, add-ons, and licensing options? Beginning with Standard Edition 2, features included with the service are Multitenant Pluggable Database and Tablespace Encryption. Moving to Enterprise Edition, you add additional database features, such as Data Guard and the EM packs for data masking and subsetting, tuning, and diagnostics. 00;02;37;17 - 00;03;14;06 With Enterprise Edition High Performance, in addition to the base Enterprise Edition, you add Management and Lifecycle Management packs as well as Advanced Security and Database Vault and Label Security. And finally, Enterprise Edition Extreme Performance has all the previously discussed features, plus Active Data Guard, Oracle RAC, and Database In-Memory. Note that all packages include Oracle Database Transparent Data Encryption, TDE. You have an option to bring your own license, BYOL, which means you could use your organization's existing Oracle Database software licenses.
00;03;14;12 - 00;03;44;27 What's the difference between bare metal instances and virtual machines in the context of Oracle Cloud Infrastructure? OCI is the only public cloud that supports BM and VMs using the same set of APIs, hardware, firmware, software stack, and networking infrastructure. Bare metal instances are single tenant and you have full control over the resources provisioned within the service. Virtual machines follow a multitenant model and share servers that are not overprovisioned. 00;03;45;00 - 00;04;11;00 So how is the customer billed for these services? For databases using bare metal and virtual machine infrastructure, Oracle Cloud Infrastructure uses per-second billing. This means that OCPU and storage usage is billed by the second with a minimum usage period of one minute for virtual machine DB systems and one hour for bare metal DB systems. 00;04;11;00 - 00;04;39;29 For bare metal servers, there is an infrastructure charge along with the billing for OCPUs. Alex, can DBCS be customized? I mean, are there different kinds of DBCS offerings to fulfill the various needs of our customers? The maximum number of OCPUs, RAM, and storage for your database depends on the shape you choose. There are two types of DB systems on virtual machines. A single-node or one-node virtual machine DB system consists of one virtual machine. 00;04;40;02 - 00;05;13;27 Two-node virtual machine DB systems consist of two virtual machines on separate servers. Virtual machine DB systems use Oracle Cloud Infrastructure block storage. Bare metal systems use NVMe local storage and are therefore not scalable. Storage scaling is available for all VM shapes. Online OCPU scaling is available for bare metal systems or for two-node RAC VMs. Alex, help us understand what Database Cloud Service on bare metal is. 00;05;13;29 - 00;05;43;07 How does it differ from other Oracle Database Service offerings?
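The per-second billing rule described above (a minimum of one minute for virtual machine DB systems and one hour for bare metal) works out mechanically as follows. The rate in the cost helper is a made-up example, not an Oracle price:

```python
# Sketch of the billing rule from the transcript: per-second billing with a
# minimum billable period that depends on the DB system type.

MIN_BILLABLE_SECONDS = {"vm": 60, "bare_metal": 3600}

def billable_seconds(seconds_used, system_type):
    """Round usage up to the minimum billable period for the system type."""
    return max(seconds_used, MIN_BILLABLE_SECONDS[system_type])

def cost(seconds_used, system_type, rate_per_ocpu_second, ocpus):
    """Illustrative only: rate_per_ocpu_second is a placeholder, not a price."""
    return billable_seconds(seconds_used, system_type) * rate_per_ocpu_second * ocpus

# 30 seconds of use is rounded up to the applicable minimum:
print(billable_seconds(30, "vm"))          # 60
print(billable_seconds(30, "bare_metal"))  # 3600
```

Note this sketch ignores the separate infrastructure charge the transcript mentions for bare metal servers.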
Database Cloud Service on bare metal is a database service offering that enables customers to deploy and manage full-featured Oracle databases on bare metal servers. Bare metal offers proven and predictable performance with local NVMe storage on dedicated servers that provide high IOPS with extremely low latency. Bare metal DB systems consist of a single bare metal server running Oracle Linux with locally attached NVMe storage. 00;05;43;07 - 00;06;11;25 If the node fails, you can simply launch another system and restore the databases from current backups. Bare metal DB systems provide dedicated resources that are ideal for databases that are performance intensive. And how do you start a bare metal instance? When you launch a bare metal DB system, you select the single Oracle Database edition that applies to all the databases on that DB system. 00;06;11;28 - 00;06;49;08 The selected edition cannot be changed. Each DB system can have multiple database homes, which can be different versions. Each database home can have multiple databases, which are, of course, the same version as the database home. Are you attending Oracle CloudWorld 2023? Learn from experts, network with peers, and find out about the latest innovations when Oracle CloudWorld returns to Las Vegas from September 18 through 21. CloudWorld is the best place to learn about Oracle solutions from the people who build and use them. 00;06;49;12 - 00;07;17;22 In addition to your attendance at CloudWorld, your ticket gives you access to Oracle MyLearn and all of the cloud learning subscription content as well as three free certification exam credits. This is valid from the week you register through 60 days after the conference. So what are you waiting for? Register today. Learn more about Oracle CloudWorld at www.oracle.com/cloudworld. 00;07;17;24 - 00;07;46;08 Welcome back. Alex, can you explain how Automatic Storage Management interfaces with disks in OCI? ASM directly interfaces with the disks.
When you provision a bare metal system, you will indicate the percentage of storage assigned to DATA storage, which holds user data and database files. Your choice is 40% or 80%. The remaining percentage is assigned to RECO storage, consisting of database redo logs, archive logs, and Recovery Manager backups. 00;07;46;08 - 00;08;20;14 Storage is continuously monitored for any faults. Any disk that fails will be taken offline. Where the shapes list a maximum amount of usable space in DATA and RECO, the reservations for rebalancing are already taken into account. The root user has complete control over the storage subsystem so customization and tuning are possible. But the service already optimally configures these by default according to best practices. 00;08;20;18 - 00;08;58;20 So what are the key benefits of running databases on virtual machines? Database Service on VMs is a Database Service offering that enables customers to build, scale, and manage full-featured Oracle databases on virtual machines. The key benefits of running databases on VMs are cost-effectiveness, ease of getting started, durable and scalable storage, and the ability to run Real Application Clusters, RAC, to improve availability. Database Cloud Service on VMs is built on the same high performance, highly available cloud infrastructure used by all Oracle Cloud Infrastructure services. 00;08;58;22 - 00;09;23;21 RAC databases will run in a single availability domain (AD) while ensuring each node is on a separate physical rack for high availability. When you launch a virtual machine DB system, you select the Oracle Database edition and version that applies to the database on that DB system. The selected edition cannot be changed. Depending on your selected Oracle Database edition and version, your DB system can support multiple pluggable databases, PDBs.
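The DATA/RECO split described earlier (40% or 80% to DATA, the remainder to RECO) is simple arithmetic; the function and key names below are just for this sketch:

```python
# Sketch of the bare metal storage split from the transcript:
# DATA holds user data and database files, RECO holds redo logs,
# archive logs, and Recovery Manager backups.

def storage_split(total_gb, data_pct):
    """Return the DATA/RECO allocation for a given total and DATA percentage."""
    if data_pct not in (40, 80):
        raise ValueError("DATA percentage must be 40 or 80")
    data = total_gb * data_pct / 100
    return {"DATA": data, "RECO": total_gb - data}

print(storage_split(1000, 80))  # {'DATA': 800.0, 'RECO': 200.0}
```

As the transcript notes, the usable-space figures quoted for each shape already account for rebalancing reservations, so real numbers will be somewhat lower than this raw split.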
Virtual machine database systems offer compute shapes with 1 to 24 cores to support customers with small to large database processing requirements. A virtual machine DB system database uses Oracle Cloud Infrastructure block storage instead of local storage. You specify a storage size when you launch the DB system, and you can scale up the storage as needed at any time. 00;09;59;17 - 00;10;27;26 On-demand up and down scaling of database OCPU use, without interrupting database operations on two-node Oracle Real Application Cluster deployments, allows customers to meet application needs while minimizing costs. To change the number of CPU cores on an existing virtual machine DB system, you will change the shape of that DB system. Can you tell us about the one-node virtual machine DB systems in OCI? 00;10;27;28 - 00;11;01;22 For one-node virtual machine DB systems, Oracle Cloud Infrastructure provides a fast provisioning option that allows you to create a DB system using Logical Volume Manager, LVM, as your storage management software. Fast provisioning for an Oracle Database with Logical Volume Manager increases developer productivity. Can we also choose to use Automatic Storage Management with DBCS? You have a choice to select ASM storage management. You will select Oracle Grid Infrastructure when you provision your DB system to use Oracle Automatic Storage Management. 00;11;01;25 - 00;11;32;03 This is required for RAC deployments. Virtual machine DB system deployments with ASM support database versions 11.2 and above, whereas virtual machine DB system deployments with LVM support database versions 12.2 and above. When you provision your virtual machine DB system, you will identify the amount of available storage. This is the amount of block storage, in gigabytes, to allocate to the virtual machine DB system. 00;11;32;05 - 00;11;56;05 Available storage can be scaled up or down as needed after provisioning your DB system.
You will also indicate total storage, the total block storage, in gigabytes, used by the virtual machine DB system. The amount of available storage you select determines this value. Oracle charges for the total storage used. What our customers really want to know is how their data is kept secure. 00;11;56;11 - 00;12;26;29 Lay it out for us, Alex. Identity management, network controls, and encryption are all built into Database Cloud Service virtual machine and bare metal DB systems to provide end-to-end security. Oracle Data Safe is an integrated security service available at no additional cost. With it, you can do a few things. Identify configuration drift through overall security assessments. This helps you identify gaps and then fix the issues. Flag risky users or behavior, and see if they're all well controlled. 00;12;27;02 - 00;12;54;14 Audit user activity. Track risky actions. Raise alerts and streamline compliance checks. Discover sensitive data. Know where it is and how much you have. Mask sensitive data. Three out of four DB instances are copies for dev/test. This lets you remove that risk. To help meet security compliance standards, you can optionally enable FIPS, STIG, and SELinux options. 00;12;54;20 - 00;13;15;04 Thank you, Alex, for guiding us through all of this. To learn more about Oracle Database Cloud Service, please visit mylearn.oracle.com and take a look at our free Oracle Cloud Data Management Foundations Workshop. You'll also find quizzes that you can take to test your understanding of these concepts. That brings us to the end of this episode. 00;13;15;06 - 00;13;39;11 We're very excited about our episode next week in which we'll be talking with Rohit Rahi, Vice President of OCI Global Delivery, and Bill Lawson, Senior Director of Cloud Applications Product Management for Oracle University. We'll be talking about the free Oracle Cloud Infrastructure and Cloud Apps training and certifications that are available for a limited time.
So you really won't want to miss that episode! 00;13;39;13 - 00;16;23;28 Until next time, this is Lois Houston and Nikita Abraham signing off. That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
SambaBox is an enterprise directory solution based on open-source code that provides a cost-effective, stand-alone, and powerful corporate alternative that meets global standards. Los Angeles, California--(Newsfile Corp. - November 21, 2022) - SambaBox is an enterprise directory solution that helps reduce operating costs, developed by Profelis, the expert Linux and open source technologies developer, based on the open source Samba4 infrastructure. SambaBox stands out as a global corporate alternative to the now generic Microsoft Active Directory in this category and will soon be running seamlessly on all cloud platforms. SambaBox is an enterprise directory solution that helps reduce operating costs, developed by Profelis. To view an enhanced version of this graphic, please visit: https://images.newsfilecorp.com/files/8552/145021_9c58f42ec4e2480d_001full.jpg Widely subscribed to by public institutions, universities, the private sector, manufacturers, solution providers, and cloud and service providers, SambaBox retains its cost advantage even as the number of users increases. Featuring a user-friendly, web-based interface, SambaBox supports operating systems ranging from Pardus, RedHat, SuSE, and Oracle Linux to Windows and macOS. What sets it apart from its competitors is that since 2017, SambaBox has eliminated single-brand dependence and the need for third-party support. Profelis Managing Partner Caglar Ulkuderner notes that SambaBox was developed in answer to the need to move away from technological dependence on a single provider and in search of a cost-effective solution in the enterprise market, adding, "The market size for authentication will exceed 34 billion USD in the next five years. It is quite clear that solutions compatible with cloud and K8s systems will be greatly sought after." Working seamlessly in offline environments such as on intranets, SambaBox is the only product on the market that can manage more than three thousand GPOs via a web interface.
Highlighting that it is this capability that makes it suitable for use in critical projects with information security sensitivity, Ulkuderner said, "SambaBox offers a more responsive solution that goes beyond Microsoft Active Directory, Amazon's sml Directory, and Google's directory service, which have become generic in this category. The fact that SambaBox is a software appliance makes it easy to install, and this has won over users. In addition to North America, we have also launched in Oceania and Latin America." Profelis Managing Partner Caglar Ulkuderner. To view an enhanced version of this graphic, please visit: https://images.newsfilecorp.com/files/8552/145021_9c58f42
On The Cloud Pod this week, AWS Enterprise Support adds incident detection and response, the announcement of Google Cloud Spanner, and Oracle expands to Spain. Thank you to our sponsor, Foghorn Consulting, which provides top notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you're having trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week. Episode Highlights ⏰ AWS Enterprise Support adds incident detection and response ⏰ You can now get a 90-day free trial of Google Cloud Spanner ⏰ Oracle opens its newest cloud infrastructure region in Spain Top Quote
Hello everyone! This video was recorded for the VMconf 22 Vulnerability Management conference, vmconf.pw. I will be talking about my open source project Scanvus. This project is already a year old and I use it almost every day. Scanvus (Simple Credentialed Authenticated Network VUlnerability Scanner) is a vulnerability scanner for Linux. Currently it supports Ubuntu, Debian, CentOS, RedHat, Oracle Linux, and Alpine distributions, but in general it works for any Linux distribution supported by the Vulners Linux API. The purpose of this utility is to get a list of packages and the Linux distribution version from some source, make a request to an external vulnerability detection API (only the Vulners Linux API is currently supported), and show the vulnerability report. Watch the video version of this episode on my YouTube channel. Read the full text of this episode with all links on the avleonov.com blog.
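The workflow described above (collect an OS fingerprint and package list, send them to a vulnerability detection API) can be sketched as a request-body builder. The field names here are my reading of the Vulners Linux audit API and may not match what Scanvus actually sends; treat them as assumptions:

```python
import json

# Sketch: build the kind of request body a scanner like Scanvus might send
# to the Vulners Linux API. Field names ("os", "version", "package") are
# assumptions for illustration, not a verified Scanvus payload.

def build_audit_request(os_name, os_version, packages):
    """Assemble an OS fingerprint plus package list into one request body."""
    return {"os": os_name, "version": os_version, "package": packages}

body = build_audit_request(
    "oraclelinux", "8",
    ["openssl-1.1.1k-7.el8_6.x86_64", "kernel-4.18.0-372.9.1.el8.x86_64"],
)
print(json.dumps(body, indent=2))
```

The response from such an API would then be rendered into the vulnerability report the episode describes.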
On The Cloud Pod this week, the team weighs the merits of bitcoin mining versus hacking. Plus: AWS Trusted Advisor prioritizes Support customers, Google provides impenetrable protection from a major DDoS attack, and Oracle Linux 9 is truly unbreakable. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
Just before we head off on vacation ourselves, RETBleed is the latest security vulnerability making waves. At the same time, this July is the big month of the 9.x releases: Rocky Linux 9.0, Oracle Linux 9 and vim 9.0 are waiting to be tested. SUSE and Red Hat ship new versions of their management software, while CutefishOS and Debian 9 face the end of the road. And while part of the FLOSS community ponders the implications of Lennart Poettering's new employer, a potential new audience for Proxmox is emerging. Links for this episode:
CVE-2022-29900: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2022-29900
CVE-2022-29901: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2022-29901
RETBleed details: https://comsec.ethz.ch/research/microarch/retbleed/
RETBleed details: https://hackaday.com/2022/07/15/this-week-in-security-retbleed-post-quantum-python-atomicwrites-and-the-mysterious-cuteboi/
Linux on Spectre: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/spectre.html
Retbleed patches in Linux 5.19-rc7: https://linuxnews.de/2022/07/linux-5-19-rc7-enthaelt-patches-gegen-retbleed/
MicrocodeDecryptor: https://github.com/chip-red-pill/MicrocodeDecryptor
vim 9.0 release notes: https://www.vim.org/vim90.php
Debian 10 (Buster) upgrade guide: https://www.debian.org/releases/buster/amd64/release-notes/ch-upgrading.en.html
Debian 11 (Bullseye) upgrade guide: https://www.debian.org/releases/bullseye/amd64/release-notes/ch-upgrading.en.html
Debian 11.4 release notes: https://www.debian.org/News/2022/20220709.de.html
Rocky Linux 9.0 release notes: https://docs.rockylinux.org/release_notes/9_0/
Peridot build system: https://github.com/rocky-linux/peridot
elementaryOS June updates: https://blog.elementary.io/updates-for-june-2022/
Uyuni 2022.06 release notes: https://www.uyuni-project.org/doc/2022.06/release-notes-uyuni-server.html#_version_2022_06
SUSE Manager 4.3 release notes: https://www.suse.com/releasenotes/x86_64/SUSE-MANAGER/4.3/index.html
Uyuni container issue: https://github.com/uyuni-project/uyuni/issues/306#issuecomment-1179598796
Uyuni Ansible collection: https://github.com/stdevel/ansible-collection-uyuni
Red Hat Satellite 6.11 release notes: https://access.redhat.com/documentation/en-us/red_hat_satellite/6.11/html-single/release_notes
Oracle Linux 9 release notes: https://docs.oracle.com/en/operating-systems/oracle-linux/9/relnotes9.0/
Canonical blog on Firefox Snapcraft problems: https://ubuntu.com//blog/improving-firefox-snap-performance-part-3
Lennart Poettering leaves Red Hat: https://www.phoronix.com/scan.php?page=news_item&px=Systemd-Creator-Microsoft
VMware licensing plans: https://tech.slashdot.org/story/22/05/27/214227/broadcom-to-focus-on-rapid-transition-to-subscription-for-vmware
Wartungsfenster podcast on the Broadcom/VMware deal: https://wartungsfenster.podigee.io/9-technical-debt-as-a-service
Talk "IT-Infrastruktur außerhalb des Docker-/K8-/OpenStack-Hypes" by Jens Schanz: https://osad-munich.org/wp-content/uploads/2019/12/2019_OSAD-IT-Infrastruktur-au%C3%9Ferhalb-des-Docker-K8-Openstack-Hypes.pdf
Cutefish OS development frozen: https://linuxnews.de/2022/07/cutefish-os-entwicklung-eingefroren/
Cutefish OS fork announced: https://www.reddit.com/r/linux/comments/vwd0m8/i_am_about_to_fork_cutefishos_and_i_need_your_help/
Asahi Linux July update: https://asahilinux.org/2022/07/july-2022-release/
Sneak preview of the SUSE D-Installer: https://www.phoronix.com/scan.php?page=news_item&px=D-Installer-0.4
E01: https://ageofdevops.de/index.php/podcast/januar-2022/
Ventoy: https://github.com/ventoy/Ventoy
OpenGrok: https://github.com/oracle/opengrok
• 00:00:34 - Cinnamon 5.4 • 00:09:40 - Radeon Memory Visualizer • 00:15:55 - Fedora Cloud returns • 00:20:50 - Oracle Linux 9 • 00:27:42 - RubyGems authentication • 00:33:00 - Proton 7.0-3 • 00:39:40 - KDE Plasma 5.25 • 00:50:02 - browser-linux • 01:01:50 - Kdenlive 22.04.2 • 01:09:10 - K-9 Mail & Thunderbird • 01:16:08 - Ubuntu Core 22 • 01:20:25 - IE support discontinued • 01:28:25 - Adobe Photoshop Web
• 00:33 Linux 5.18 • 06:25 RHEL 9 • 11:32 AlmaLinux 9 • 14:43 Oracle Linux 8.6 • 17:34 Ultramarine Linux • 22:27 Kdenlive 22.04.1 • 25:43 GitLab and VS Code • 30:27 Wine 7.9 • 32:26 Steam Deck update • 33:24 SteamOS 3.2 • 35:19 Project Zomboid 41.71 • 36:03 Over 3,000 games on Steam Deck! • 41:22 Wayland 1.21 Alpha • 43:50 NoiseTorch compromised. ОСёЛ Telegram channel
This week we tried to migrate our Matrix instance to our own data center, it didn't go well. We discuss our mistakes and what our next steps are. Your calls, your emails we cover it all in this episode! -- During The Show -- 00:55 Steve's Weekend Client Prep 02:23 Caller Tony Client received a police call Transparent Proxy Managed Switches DNS Resolver/Forwarder Firewall Rules Squid Proxy Dragon's Tail 11:20 VLAN questions - Tyler TP Link G105E (http://www.amazon.com/dp/B00N0OHEMA/?tag=minddripmedia-20) HP OfficeConnect Switch (https://www.ebay.com/itm/373794096000?epid=9030355100&hash=item5707dd4380:g:htwAAOSwhlZhk9rb) Multiple Network Cards Managed Switch Cisco HP 1920 Series Stacking 17:30 Light Automation - John All smart bulbs have a failsafe Shelly 1 (https://shelly.cloud/products/shelly-1-smart-home-automation-relay/) Smart Assistant Smart stuff at the switch GE Enbrighten Zwave Plus (http://www.amazon.com/dp/B07226MG2T/?tag=minddripmedia-20) UltraPro Switch (http://www.amazon.com/dp/B07B3LXZJ9/?tag=minddripmedia-20) Lutron Switches (https://www.lutron.com/en-US/Products/Pages/WholeHomeSystems/RadioRA2/Overview.aspx) Inovelli (https://inovelli.com/products/switches/) 24:46 User recommends software - Brett DerbyNet (https://jeffpiazza.github.io/derbynet/) Pinewood Derby Software Real Time stats, Live instant replay, and more! 27:03 RHCE Questions - MHJ RHCE is hard Skills Based RHCSA is required before you can be called a RHCE Class is not required, but will help you a lot VTC Class 36:50 Tim Asked The guy with MoCA adapters that fail... He has to be sure that any splitters along the way are compatible (will pass the right frequencies). Also, if the cable is connected to CATV, he may need an in-line filter that will prevent his in-house MoCA signalling from propagating up the line to the cable company or the neighbors if the neighborhood infrastructure is old enough. 37:20 sunjam Asked Curious about custom local domain name resolution. 
Setup avahi for hostname.local on my pi, but curious about adding services to *.home in addition to .local Pseudo TLD Reserved Pseudo Domains mDNS 39:21 CubicleNate Asked I want to know where to get low(er) cost used rack mountable storage solutions. What interfaces are used between them? I have a raid 10 btrfs array I want to grow. eSATA Enclosure (https://www.datoptic.com/ec/hardware-raid-five-5-bay-1u-rackmount-esata-interface.html) InfiniBand (https://en.wikipedia.org/wiki/InfiniBand) 42:19 Caller George Digitize VHS C tapes Janacane Converter (http://www.amazon.com/dp/B07NPFJJ7K/?tag=minddripmedia-20) VHS player with built in converter to DVD 46:48 Linux News Wire Slackware 15 Announcement (http://www.slackware.com/announce/15.0.php) Slackware 15 The Register (https://www.theregister.com/2022/02/07/slackware/) Tiny Core Linux 13 (https://www.tomshardware.com/news/tiny-core-linux-13-released) Peppermint 11 Release (https://news.itsfoss.com/peppermint-11-release/) Trisquel 10 Release (https://trisquel.info/en/trisquel-10-nabia-release-announcement) Openmandriva 4.3 Released (https://www.openmandriva.org/en/news/article/openmandriva-lx-4-3-released) Libre Office 73 Community (https://blog.documentfoundation.org/blog/2022/02/02/libreoffice-73-community/) AMD Linux Client Hiring (https://www.phoronix.com/scan.php?page=news_item&px=AMD-Linux-Client-Hiring-2022) RISC V Investment from Intel (https://riscv.org/whats-new/2022/02/intel-corporation-makes-deep-investment-in-risc-v-community-to-accelerate-innovation-in-open-computing/) Intel Invests Billion into RISC V (https://www.zdnet.com/article/intel-invests-in-open-source-risc-v-processors-with-a-billion-dollars-in-new-chip-foundries/) 3dfx Glide Coming to Linux (https://www.tomshardware.com/news/3dfx-glide-linux-gaming) Local Privilege Escalation (https://www.crowdstrike.com/blog/hunting-pwnkit-local-privilege-escalation-in-linux/) 12 Year Old Local Privilege Escalation on Linux 
(https://www.cpomagazine.com/cyber-security/all-linux-distributions-affected-by-12-year-old-pwnkit-local-privilege-escalation-bug-allowing-an-attacker-to-execute-commands-as-root/) Samba Code Execution Bug (https://www.helpnetsecurity.com/2022/02/02/samba-bug-may-allow-code-execution-as-root-on-linux-machines-nas-devices-cve-2021-44142/) Argo CD High Severity Flaw (https://www.theregister.com/2022/02/04/argo_cd_0day_kubernetes/) Google Open Source Story (https://thenewstack.io/a-kubernetes-documentary-shares-googles-open-source-story/) Oracle Linux in Windows Store (https://www.theregister.com/2022/02/02/oracle_linux_microsoft/) Github Sponsor Only Repos (https://techcrunch.com/2022/02/02/github-introduces-sponsor-only-repositories/) 48:39 MDM Data Migration Linux Delta Matrix Server Digital Ocean Bill was doubling 12,000+ Users Data Migration took 4+ days Ansible and Containers Postgres issues Learned a lot What is your baseline? Possible Solutions -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/272) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! 
Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed) Special Guest: Steve Ovens.
EP315 - Is Oracle Linux better with HugePages? Join our Telegram channel to receive exclusive content about Oracle Database: https://t.me/joinchat/AAAAAEb7ufK-90djaVuR4Q
Jim Grisanzio from Oracle Developer Relations talks with Simon Coter and Simon Hayler from the Oracle Linux and Virtualization Product Management organization about Oracle VirtualBox, the community, the roadmap, and integration with Oracle Cloud Infrastructure (OCI).
Simon Coter, Director, Oracle Linux and Virtualization: https://twitter.com/scoter80
Simon Hayler, Sr. Principal Technical Product Manager, Oracle Linux and Virtualization: https://twitter.com/simonhayler1965
VirtualBox for Dummies: https://blogs.oracle.com/scoter/post/ebook-virtualbox-for-dummies
Oracle VirtualBox: https://www.virtualbox.org/
Oracle Cloud Infrastructure: https://www.oracle.com/cloud/
Oracle Cloud Infrastructure with Oracle VM VirtualBox: https://blogs.oracle.com/virtualization/post/journey-to-oracle-cloud-infrastructure-with-oracle-vm-virtualbox
https://www.oracle.com/a/ocom/docs/oracle-vm-vb-oci-export-20190502-5480003.pdf
Jim Grisanzio, Oracle Developer Relations: https://twitter.com/jimgris
https://www.linkedin.com/in/jimgris/
https://developer.oracle.com/team/
https://oraclegroundbreakers.libsyn.com/
Life Leadership with Leila Singh: All things... Coaching, Career & Personal Brand!
In today's episode of the mi-brand HQ podcast, I am speaking to Thomas Lee. Thomas Lee has been leading the Oracle Linux & Virtualization business in the APJ region since 2017. He is based in Singapore, and reports to HQ in the US. Together with his team, his goal is to help customers in their journey to Digital Transformation & Automation, Hybrid IT, DevOps, Security and multi-cloud strategies, with their Oracle Open cloud infrastructure portfolio. Thomas is a seasoned and international professional, with 25 years of experience across the IT industry (software, hardware, and services), working for both start-up companies and large IT corporations, in Europe and in Asia. For the past 10 years, he held multiple executive roles at both HP and ORACLE, in the APAC region. Prior to that, he worked for 15 years in the EMEA region. He holds a Bachelor's degree in International Business from a French business school. He also graduated from an Executive Management program at INSEAD. In today's episode, Thomas will be sharing –
• What creates a WINNING CULTURE, & the importance of raising the bar
• His strategies for becoming a GREAT vs. a GOOD LEADER, & the ONE THING to communicate to your team!
• How DIVERSITY in the workplace is essential for success in business and in tech
• How your DREAMS directly influence a WINNING MINDSET and free you of limitations!
• What to consider when working across cultures, to enable integration and inclusion
You can connect with Thomas on LinkedIn at – https://www.linkedin.com/in/thomas-lee-894b57b/ The Life Leadership Podcast – with Leila Singh, is all things Coaching, Career & Personal Branding! This podcast is for ambitious career professionals, especially aspiring executives, working in the technology industry, wanting to uncover your real potential, create new possibilities and accelerate your career - to BE DO & HAVE more, whilst redefining your success, in work, relationships, health and much more. 
Life Leadership: Creating a life and career of choice, fulfilment and new possibilities! As well as discussing common coaching topics and challenges that my clients overcome, I will also explore aspects of career advancement and personal branding in the workplace. And of course, continue to interview high-achieving leaders and execs in the tech space, who have carved out a successful career in their field, overcome challenges, and are openly willing to share their career journey, learnings and insights with you. Please SUBSCRIBE to this podcast, leave a REVIEW and SHARE with those that may benefit from this content. If you would like to learn more about working with me, Direct Message me on LinkedIn or email me at hello@leilasingh.com Connect directly with me here - www.linkedin.com/in/leila-singh/ Register here to receive your copy of The mi-brand Personal Brand Playbook - www.leilasingh.com/go/playbook And check out - >>> This article by https://BestPodcasts.co.uk, who curated a list of the Best Career Podcasts of 2023, offering unique and actionable insights to help you achieve your career goals - https://www.bestpodcasts.co.uk/best-career-podcasts/ with our podcast ‘Life Leadership' featuring in the Top 5! >>> https://blog.Feedspot.com whose editorial team extensively researched and curated a list of the Top 15 Life Leadership Podcasts across all platforms, featuring 'Life Leadership' in the Top 3! With ranking based on factors including - Podcast content quality - Episode consistency - Age of podcast - Engagement & shares of the podcast across social platforms. 15 Best Life Leadership Podcasts You Must Follow in 2023 (feedspot.com)
EP81 - Is it possible to install Oracle Database 19c on Oracle Linux 8? Join our Telegram channel to receive exclusive content about Oracle Database: https://t.me/joinchat/AAAAAEb7ufK-90djaVuR4Q
Coming up in this episode 1. XFace gets a new face 2. CentOS rocks your world 3. Mozilla Watch 4. We wish upon a star 5. And we think inside the box Preshow Wowstick 1F on Amazon (https://www.amazon.com/Wowstick-Electric-Screwdriver-Cordless-Lithium-ion/dp/B07H27G9NF/ref=sr_1_4?dchild=1&keywords=wowstick&qid=1609986722&sr=8-4) XFCE 4.14 --> 4.16 Official Announcement (https://www.xfce.org/about/news/?post=1608595200) Official Tour, highlighting the new color palette (https://www.xfce.org/about/tour416) https://9to5linux.com/xfce-4-16-desktop-environment-officially-released-this-is-whats-new TL;DR - Fractional scaling. New Icons. Cleaned up dialogs and settings menus. CentOS Linux announcement CentOS (https://www.centos.org) Official Announcement (https://blog.centos.org/2020/12/future-is-centos-stream/) Old "Rough" Timeline Fedora --> wait some years --> Centos Stream --> wait a few weeks --> RHEL --> wait some more time --> Centos New "Rough" Timeline Fedora --> wait some years --> Centos Stream --> wait a few weeks --> RHEL Rocky Linux (https://rockylinux.org) Project Lenix (https://www.projectlenix.org/) Oracle Linux (https://www.oracle.com/linux/) Mozilla Watch... Visual refresh meta bug (https://bugzilla.mozilla.org/show_bug.cgi?id=proton) AKA proton --> photon https://www.techradar.com/news/get-ready-for-a-radical-firefox-redesign https://www.ghacks.net/2021/01/02/mozilla-is-working-on-a-firefox-design-refresh/ https://www.debugpoint.com/2021/01/firefox-proton/ Wishlist for 2021 Dan I think the Pine 64 future (https://www.pine64.org/2020/11/15/november-update-kde-pinephone-ce-and-a-peek-into-the-future/) looks good. My wish is that the time frame is closer to the 6 month range for the next gen stuff to start to materialize. Intel gets their new graphics stack rolled out for laptops. Leo Pine/Pi Moar Cores! Enthusiast server platform. Firefox becomes a major player again. Nature abhors a monoculture. 
Joe My long suffering 4k laptop screen woes remedied (installers that support Hidpi) RPi sata or M.2 ports (even MMC) Housekeeping Email us (mailto:contact@linuxuserspace.show) SANS Internet Stormcast (https://isc.sans.edu/podcast.html) Support us at Patreon (https://patreon.com/linuxuserspace) Join us on Telegram (https://linuxuserspace.show/telegram) Follow us on twitter (https://twitter.com/LinuxUserSpace) Check out Linux User Space (https://linuxuserspace.show) on the web App Focus Gnome Boxes This episode's app: Gnome Boxes (https://help.gnome.org/users/gnome-boxes/stable/) Boxes on flathub (https://flathub.org/apps/details/org.gnome.Boxes) Next Time We discuss January's distro of the month, Solus (https://getsol.us) Join us in two weeks when we return to the Linux User Space
EP75 - Oracle Linux: The best alternative to CentOS. Join our Telegram channel to receive exclusive content about Oracle Database: https://t.me/joinchat/AAAAAEb7ufK-90djaVuR4Q
A history of Linux distributions - CentOS, Fedora, Red Hat, Mandrake, Chrome OS, Oracle Linux, Cloud Ready and Gentoo are all in a boat. CentOS falls overboard.
Creating a VM is not rocket science in ESXi, and it is quite similar to creating VMs in any hypervisor such as VMware Workstation or VirtualBox. In the last article, we covered the deployment and configuration of ESXi Server on a bare-metal machine. In today's article, we'll deploy a VM with Oracle Linux 7 on the ESXi Server. For a better understanding, I would recommend referring to my own article on this here
Installation of Oracle Enterprise Linux 6.7 on a physical server, along with customized storage, for a deployment of Oracle Database. For a better understanding, I would recommend referring to my own article on this here
IBM just spent $34 billion to buy a software company that gives away its primary product for free. On Sunday, IBM said it would acquire Red Hat, best known for its Red Hat Enterprise Linux operating system. Red Hat is an open source software company that gives away the source code for its core products. That means anyone can download them for free. And many do. Oracle even uses Red Hat's source code for its own Oracle Linux product.
We review Meltdown and Spectre responses from various BSD projects, show you how to run CentOS with bhyve, note that GhostBSD 11.1 is out, and look at the case against the fork syscall. This episode was brought to you by
Headlines
More Meltdown
Much has happened this week, but before we get into a status update of the various mitigations on the other BSDs, some important updates: Intel has recalled the microcode update it issued on January 8th. It turns out this update can cause Haswell and Broadwell based systems to randomly reboot, with some frequency. (https://newsroom.intel.com/news/intel-security-issue-update-addressing-reboot-issues/) AMD has confirmed that its processors are vulnerable to both variants of Spectre, and that the fix for variant #2 will require a forthcoming microcode update, in addition to OS level mitigations (https://www.amd.com/en/corporate/speculative-execution) Fujitsu has provided a status report for most of its products, including SPARC hardware (https://sp.ts.fujitsu.com/dmsp/Publications/public/Intel-Side-Channel-Analysis-Method-Security-Review-CVE2017-5715-vulnerability-Fujitsu-products.pdf) The Register of course has some commentary (https://www.theregister.co.uk/2018/01/12/intel_warns_meltdown_spectre_fixes_make_broadwells_haswells_unstable/) If new code is needed, Intel will need to get it right: the company already faces numerous class action lawsuits. Data centre operators already scrambling to conduct unplanned maintenance will not be happy about the fix reducing stability. AMD has said that operating system patches alone will address the Spectre bounds check bypass bug. Fixing Spectre's branch target injection flaw will require firmware fixes that AMD has said will start to arrive for Ryzen and EPYC CPUs this week. The Register has also asked other server vendors how they're addressing the bugs. 
Oracle has patched its Linux, but has told us it has “No comment/statement on this as of now” in response to our query about its x86 systems, x86 cloud, Linux and Solaris on x86. The no comment regarding Linux is odd as fixes for Oracle Linux landed here (https://linux.oracle.com/errata/ELSA-2018-4006.html) on January 9th. SPARC-using Fujitsu, meanwhile, has published advice (PDF) revealing how it will address the twin bugs in its servers and PCs, and also saying its SPARC systems are “under investigation”. Response from OpenBSD: (https://undeadly.org/cgi?action=article;sid=20180106082238) 'Meltdown, aka "Dear Intel, you suck"' (https://marc.info/?t=151521438600001&r=1&w=2) Theo de Raadt's response to Meltdown (https://www.itwire.com/security/81338-handling-of-cpu-bug-disclosure-incredibly-bad-openbsd-s-de-raadt.html) That time in 2007 when Theo talked about how Intel x86 had major design problems in their chips (https://marc.info/?l=openbsd-misc&m=118296441702631&w=2) OpenBSD gets a Microcode updater (https://marc.info/?l=openbsd-cvs&m=151570987406841&w=2) Response from Dragonfly BSD: (http://lists.dragonflybsd.org/pipermail/users/2018-January/313758.html) The longer response in four commits One (http://lists.dragonflybsd.org/pipermail/commits/2018-January/627151.html) Two (http://lists.dragonflybsd.org/pipermail/commits/2018-January/627152.html) Three (http://lists.dragonflybsd.org/pipermail/commits/2018-January/627153.html) Four (http://lists.dragonflybsd.org/pipermail/commits/2018-January/627154.html) Even more Meltdown (https://www.dragonflydigest.com/2018/01/10/20718.html) DragonflyBSD master now has full IBRS and IBPB support (http://lists.dragonflybsd.org/pipermail/users/2018-January/335643.html) IBRS (Indirect Branch Restricted Speculation): The x86 IBRS feature requires corresponding microcode support. It mitigates the variant 2 vulnerability. 
If IBRS is set, near returns and near indirect jumps/calls will not allow their predicted target address to be controlled by code that executed in a less privileged prediction mode before the IBRS mode was last written with a value of 1 or on another logical processor so long as all RSB entries from the previous less privileged prediction mode are overwritten. Speculation on Skylake and later requires these patches ("dynamic IBRS") be used instead of retpoline. If you are very paranoid or you run on a CPU where IBRS=1 is cheaper, you may also want to run in "IBRS always" mode. IBPB (Indirect Branch Prediction Barrier): Setting of IBPB ensures that earlier code's behavior does not control later indirect branch predictions. It is used when context switching to new untrusted address space. Unlike IBRS, IBPB is a command MSR and does not retain its state. DragonFlyBSD's Meltdown Fix Causing More Slowdowns Than Linux (https://www.phoronix.com/scan.php?page=article&item=dragonfly-bsd-meltdown&num=1) NetBSD HOTPATCH() (http://mail-index.netbsd.org/source-changes/2018/01/07/msg090945.html) NetBSD SVS (Separate Virtual Space) (http://mail-index.netbsd.org/source-changes/2018/01/07/msg090952.html) Running CentOS with Bhyve (https://www.daemon-security.com/2018/01/bhyve-centos-0110.html) With the addition of UEFI in FreeBSD (since version 11), users of bhyve can use the UEFI boot loader instead of the grub2-bhyve port for booting operating systems such as Microsoft Windows, Linux and OpenBSD. 
The following page provides the information necessary for setting up bhyve with UEFI boot loader support: https://wiki.freebsd.org/bhyve/UEFI

Features have been added to vmrun.sh to make it easier to set up the UEFI boot loader, but the following is required to install the UEFI firmware package:

# pkg install -y uefi-edk2-bhyve

With graphical support, you can use a VNC client like TigerVNC, which can be installed with the following command:

# pkg install -y tigervnc

In most corporate or government environments, the Linux of choice is RHEL or CentOS. Utilizing bhyve, you can test and install CentOS in a bhyve VM the same way you would deploy a Linux VM in production. The first step is to download the CentOS ISO (for this tutorial I used the CentOS minimal ISO): http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1708.iso

I normally use a ZFS volume (zvol) when running bhyve VMs. Run the following command to create a zvol (ensure you have enough disk space to perform these operations):

# zfs create -V20G -o volmode=dev zroot/centos0

(zroot in this case is the zpool I am using.) Similar to my previous post about vmrun.sh, you need certain items to be configured on FreeBSD in order to use bhyve. The following commands are necessary to get things running:

```
echo "vfs.zfs.vol.mode=2" >> /boot/loader.conf
kldload vmm
ifconfig tap0 create
sysctl net.link.tap.up_on_open=1
ifconfig bridge0 create
ifconfig bridge0 addm em0 addm tap0
ifconfig bridge0 up
```

(Replace em0 with whatever your physical interface is.) There are a number of utilities that can be used to manage bhyve VMs, and I am sure there is a way to use vmrun.sh to run Linux VMs, but since all of the HowTos for running Linux use the bhyve command line, the following script is what I use for running CentOS with bhyve. 
```
#!/bin/sh
# General bhyve install/run script for CentOS
# Based on scripts from pr1ntf and lattera

HOST="127.0.0.1"
PORT="5901"
ISO="/tmp/centos.iso"
VMNAME="centos"
ZVOL="centos0"
SERIAL="nmda0A"
TAP="tap1"
CPU="1"
RAM="1024M"
HEIGHT="800"
WIDTH="600"

if [ "$1" = "install" ]; then
    # Kill it before starting it
    bhyvectl --destroy --vm=$VMNAME
    bhyve -c $CPU -m $RAM -H -P -A \
        -s 0,hostbridge \
        -s 2,virtio-net,$TAP \
        -s 3,ahci-cd,$ISO \
        -s 4,virtio-blk,/dev/zvol/zroot/$ZVOL \
        -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT \
        -s 30,xhci,tablet \
        -s 31,lpc -l com1,/dev/$SERIAL \
        -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
        $VMNAME
    # Kill it after
    bhyvectl --destroy --vm=$VMNAME
elif [ "$1" = "run" ]; then
    # Kill it before starting it
    bhyvectl --destroy --vm=$VMNAME
    bhyve -c $CPU -m $RAM -w -H \
        -s 0,hostbridge \
        -s 2,virtio-net,$TAP \
        -s 4,virtio-blk,/dev/zvol/zroot/$ZVOL \
        -s 29,fbuf,tcp=$HOST:$PORT,w=$WIDTH,h=$HEIGHT \
        -s 30,xhci,tablet \
        -s 31,lpc -l com1,/dev/$SERIAL \
        -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
        $VMNAME &
else
    echo "Please type install or run"
fi
```

The variables at the top of the script can be adjusted to fit your own needs. With the addition of the graphics output protocol in UEFI (UEFI-GOP), a VNC console is launched and hosted with the HOST and PORT settings. There is a password option available for the VNC service, but the connection should be treated as insecure. It is advised to only listen on localhost with the VNC console and tunnel into the host of the bhyve VM. Now, with the ISO copied to /tmp/centos.iso and the script saved as centos.sh, you can run the following command to start the install:

# ./centos.sh install

At this point, using vncviewer (on the local machine, or over an SSH tunnel), you should be able to bring up the console and run the CentOS installer as normal. The absolutely most critical item is to resolve an issue with the booting of UEFI after the installation has completed. 
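The "tunnel into the host" advice above can be sketched as follows. This is a minimal sketch, assuming placeholder names: bhyve.example.com and the user name are not from the tutorial, and only the port matches the PORT variable in the script. The commands are printed rather than executed so you can review them first.

```shell
# Assumptions: replace bhyve.example.com and "user" with your bhyve host and login.
BHYVE_HOST="bhyve.example.com"
VNC_PORT="5901"

# Forward the VM's VNC console over SSH so the unauthenticated fbuf
# service never has to listen on a public interface.
TUNNEL_CMD="ssh -N -L ${VNC_PORT}:127.0.0.1:${VNC_PORT} user@${BHYVE_HOST}"
echo "$TUNNEL_CMD"

# In a second terminal, point the viewer at the local end of the tunnel:
echo "vncviewer 127.0.0.1:${VNC_PORT}"
```

Because bhyve's fbuf VNC connection is unencrypted, this keeps the console bound to localhost on both ends while still letting you reach the installer remotely.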
Because of the path used in bhyve, you need to run the following to be able to boot CentOS after the installation:

# cp -f /mnt/sysimage/boot/efi/EFI/centos/grubx64.efi /mnt/sysimage/boot/efi/EFI/BOOT

With this change made, the same script can be used to launch your CentOS VM as needed:

# ./centos.sh run

If you are interested in a better solution for managing your Linux VMs, take a look at the various bhyve management ports in the FreeBSD ports tree.

Interview - newnix architect - @newnix (https://bsd.network/@newnix)

News Roundup

GhostBSD 11.1 - FreeBSD for the desktop (https://distrowatch.com/weekly.php?issue=20180108#ghostbsd)

GhostBSD is a desktop-oriented operating system which is based on FreeBSD. The project takes the FreeBSD operating system and adds a desktop environment, some popular applications, a graphical package manager, and Linux binary compatibility. GhostBSD is available in two flavours, MATE and Xfce, and is currently available for 64-bit x86 computers exclusively. I downloaded the MATE edition, which is available as a 2.3GB ISO file.

Installing: GhostBSD's system installer is a graphical application which begins by asking us for our preferred language, which we can select from a list. We can then select our keyboard's layout and our time zone. When it comes to partitioning we have three main options: let GhostBSD take over the entire disk using UFS as the file system, create a custom UFS layout, or take over the entire disk using ZFS as the file system. UFS is a classic and quite popular file system; it is more or less FreeBSD's equivalent to Linux's ext4. ZFS is a more advanced file system with snapshots, multi-disk volumes, and optional deduplication of data. I decided to try the ZFS option. Once I selected ZFS I didn't have many more options to go through. I was given the chance to set the size of my swap space and choose whether to set up ZFS as a plain volume, with a mirrored disk for backup, or in a RAID arrangement with multiple disks.
I stayed with the plain, single-disk arrangement. We are then asked to create a password for the root account and a username and password for a regular user account. The installer lets us pick our account's shell, with the default being fish, which seemed unusual. Other shells, including bash, csh, tcsh, ksh, and zsh, are available. The installer goes to work copying files and offers to reboot our computer when it is done.

Early impressions: The newly installed copy of GhostBSD boots to a graphical login screen where we can sign into the account we created during the install process. Signing into our account loads the MATE 1.18 desktop environment. I found MATE to be responsive, and applications were quick to open. Early on I noticed odd window behaviour where windows would continue to slide around after I moved them with the mouse, as if the windows were skidding on ice. Turning off compositing in the MATE settings panel corrected this behaviour. I also found the desktop's default font (Montserrat Alternates) to be hard on my eyes, as the font is thin and, for lack of a better term, bubbly. Fonts can be easily adjusted in the settings panel. A few minutes after I signed into my account, a notification appeared in the system tray letting me know software updates were available. Clicking the update icon brings up a small window showing us a list of package updates and, if any are available, updates to the base operating system. FreeBSD, and therefore GhostBSD, separates the core operating system from the applications (packages) which run on it. This means we can update the core of the system separately from the applications. GhostBSD's core remains relatively static and minimal while applications are updated on a semi-rolling schedule. When we are updating the core operating system, the update manager will give us the option of rebooting the system to finish the process.
We can dismiss this prompt to continue working, but the wording of the prompt may be confusing. When asked if we want to reboot to continue the update process, the options presented to us are "Continue" or "Restart". The Continue option closes the update manager and returns us to the MATE desktop. The update manager worked well for me; the only issue I ran into was when I dismissed the update manager and then wanted to install updates later. There are two launchers for the update manager, one in MATE's System menu and one in the settings panel. Clicking either of these launchers didn't accomplish anything. Running the update manager from the command line simply caused the process to lock up until killed. I found that once I had dismissed the update manager, I had to wait until I logged in again to use it. Alternatively, I could use a command line tool or the OctoPkg package manager to install package updates.

Conclusions: For most of my time with GhostBSD, I was impressed and happy with the operating system. GhostBSD builds on a solid, stable FreeBSD core. We benefit from FreeBSD's performance and its large collection of open source software packages. The MATE desktop was very responsive in my trial, and the system is relatively light on memory, even when run on ZFS, which has a reputation for taking up more memory than other file systems.

FreeBSD Looks At Making Wayland Support Available By Default (https://www.phoronix.com/scan.php?page=news_item&px=FreeBSD-Wayland-Availability)

There's an active discussion this week about making Wayland support available by default on FreeBSD. FreeBSD has working Wayland support -- well, assuming you have working Intel / Radeon graphics -- and does have Weston and some other Wayland components available via FreeBSD Ports. FreeBSD has offered working Wayland support that is "quite usable" for more than one year. But it's not too easy to get going with Wayland on FreeBSD.
Right now, those FreeBSD desktop users wanting to use or develop with Wayland need to rebuild the GTK3 toolkit, Mesa, and other packages with Wayland support enabled. This call for action is about allowing WAYLAND=on to be made the default. This move would allow these dependencies to be built with Wayland support by default, but for the foreseeable future FreeBSD will continue defaulting to X.Org-based sessions. The FreeBSD developers mostly acknowledge that Wayland is the future and that the cost of enabling Wayland support by default is just slightly larger packages, but that weight is still leaner than the size of the X.Org code-base and its dependencies.

FreeBSD vote thread (https://lists.freebsd.org/pipermail/freebsd-ports/2017-December/111906.html)
TrueOS flipped the switch already (https://github.com/trueos/trueos-core/commit/f48dba9d4e8cefc45d6f72336e7a0b5f42a2f6f1)

fork is not my favorite syscall (https://sircmpwn.github.io/2018/01/02/The-case-against-fork.html)

This article has been on my to-write list for a while now. In my opinion, fork is one of the most questionable design choices of Unix. I don't understand the circumstances that led to its creation, and I grieve over the legacy rationale that keeps it alive to this day. Let's set the scene. It's 1971 and you're a fly on the wall in Bell Labs, watching the first edition of Unix being designed for the PDP-11/20. This machine has a 16-bit address space with no more than 248 kilobytes of memory. They're discussing how they're going to support programs that spawn new programs, and someone has a brilliant idea. "What if we copied the entire address space of the program into a new process running from the same spot, then let them overwrite themselves with the new program?" This got a rousing laugh out of everyone present, then they moved on to a better design which would become immortalized in the most popular and influential operating system of all time.
At least, that's the story I'd like to have been told. In actual fact, the laughter became consensus. There's an obvious problem with this approach: every time you want to execute a new program, the entire process space is copied and promptly discarded when the new program begins. Usually when I complain about fork, this is the point when its supporters play the virtual memory card, pointing out that modern operating systems don't actually have to copy the whole address space. We'll get to that, but first -- First Edition Unix does copy the whole process space, so this excuse wouldn't have held up at the time. By Fourth Edition Unix (the next one for which kernel sources survived), they had wised up a bit, and started only copying segments when they faulted.

This model leads to a number of problems. One is that the new process inherits all of the parent's file descriptors, so you have to close them all before you exec another process. However, unless you're manually keeping tabs on your open file descriptors, there is no way to know which file handles you must close! The hack that solves this is CLOEXEC, the first of many hacks that deal with fork's poor design choices. This file descriptor problem balloons a bit - consider, for example, if you want to set up a pipe. You have to establish a piped pair of file descriptors in the parent, then close every fd but the pipe in the child, then dup2 the pipe file descriptor over the (now recently closed) file descriptor 1. By this point you've probably had to do several non-trivial operations and utilize a handful of variables from the parent process space, which hopefully were on the stack so that we don't end up copying segments into the new process space anyway. These problems, however, pale in comparison to my number one complaint with the fork model. Fork is the direct cause of the stupidest component I've ever heard of in an operating system: the out-of-memory (aka OOM) killer.
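The descriptor-inheritance behavior described above is visible even from /bin/sh, where every `( ... )` subshell is a forked child; a minimal sketch:

```shell
#!/bin/sh
# A forked child inherits the parent's open file descriptors --
# the behavior that CLOEXEC exists to opt out of.
tmpfile=$(mktemp)
echo "inherited" > "$tmpfile"
exec 3< "$tmpfile"          # open fd 3 in the parent shell
child_saw=$( (cat <&3) )    # the ( ... ) subshell is a forked child
exec 3<&-                   # close fd 3 in the parent again
rm -f "$tmpfile"
echo "$child_saw"           # prints "inherited"
```

The child never opened fd 3 itself; it simply got a copy of the parent's descriptor table at fork time, which is exactly why un-closed descriptors leak into exec'd programs.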
Say you have a process which is using half of the physical memory on your system and wants to spawn a tiny program. Since fork "copies" the entire process, you might be inclined to think that this would make fork fail. But on Linux and many other operating systems since, it does not fail! They agree that it's stupid to copy the entire process just to exec something else, but because fork is Important for Backwards Compatibility, they just fake it and reuse the same memory map (except read-only), then trap the faults and actually copy later. The hope is that the child will get on with it and exec before this happens. However, nothing prevents the child from doing something other than exec - it's free to use the memory space however it desires! This approach leads to memory overcommitment - Linux has promised memory it does not have. As a result, when it really does run out of physical memory, Linux will just kill off processes until it has some memory back. Linux makes an awfully big fuss about "never breaking userspace" for a kernel that will lie about memory it doesn't have, then kill programs that try to use the back-alley memory they were given. That this nearly 50-year-old crappy design choice has come to this astonishes me. Alas, I cannot rant forever without discussing the alternatives. There are better process models that have been developed since Unix! The first attempt I know of is BSD's vfork syscall, which is, in a nutshell, the same as fork but with severe limitations on what you can do in the child process (i.e. nothing other than calling exec straight away). There are loads of problems with vfork. It only handles the most basic of use cases: you cannot set up a pipe, cannot set up a pty, and can't even close open file descriptors you inherited from the parent. Also, you couldn't really be sure of what variables you were and weren't allowed to edit, considering the limitations of the C specification.
Overall, this syscall ended up being pretty useless. Another model is posix_spawn, which is a hell of an interface. It's far too complicated for me to detail here, and in my opinion far too complicated to ever consider using in practice. Even if it could be understood by mortals, it's a really bad implementation of the spawn paradigm - it basically operates like fork backwards, and inherits many of the same flaws. You still have to deal with children inheriting your file descriptors, for example, only now you do it in the parent process. It's also straight-up impossible to make a genuine pipe with posix_spawn. (Note: a reader corrected me - this is indeed possible via posix_spawn_file_actions_adddup2.) Let's talk about the good models - rfork and spawn (at least, if spawn is done right). rfork originated in plan9 and is a beautiful little coconut of a syscall, much like the rest of plan9. They also implement fork, but it's a special case of rfork. plan9 does not distinguish between processes and threads - all threads are processes and vice versa. However, new processes in plan9 are not the everything-must-go fuckfest of your typical fork call. Instead, you specify exactly what the child should get from you. You can choose to include (or not include) your memory space, file descriptors, environment, or a number of other things specific to plan9. There's a cool flag that makes it so you don't have to reap the process, too, which is nice because reaping children is another really stupid idea. It still has some problems, mainly around creating pipes without tremendous file descriptor fuckery, but it's basically as good as the fork model gets. (Note: Linux offers this via the clone syscall now, but everyone just fork+execs anyway.) The other model is the spawn model, which I prefer. This is the approach I took in my own kernel for KnightOS, and I think it's also used in NT (Microsoft's kernel). I don't really know much about NT, but I can tell you how it works in KnightOS.
Basically, when you create a new process, it is kept in limbo until the parent consents to begin. You are given a handle with which you can configure the process - you can change its environment, load it up with file descriptors to your liking, and so on. When you're ready for it to begin, you give the go-ahead and it's off to the races. The spawn model has none of the flaws of fork. Both fork and exec can be useful at times, but spawning is much better for 90% of their use cases. If I were to write a new kernel today, I'd probably take a leaf from plan9's book and find a happy medium between rfork and spawn, so you could use spawn to start new threads in your process space as well. To the brave OS designers of the future, ready to shrug off the weight of legacy: please reconsider fork.

Enable ld.lld as bootstrap linker by default on amd64 (https://svnweb.freebsd.org/changeset/base/327783)

For some time we have been planning to migrate to LLVM's lld linker. Having a man page was the last blocking issue for using ld.lld to link the base system kernel + userland, now addressed by r327770. Link the kernel and userland libraries and binaries with ld.lld by default, for additional test coverage. This has been a long time in the making. On 2015-04-13 I submitted an upstream tracking issue, LLVM PR 23214: [META] Using LLD as FreeBSD's system linker. Since then, 85 individual issues were identified and submitted as dependencies. These have been addressed along with two and a half years of other lld development and improvement. I'd like to express deep gratitude to upstream lld developers Rui Ueyama, Rafael Espindola, George Rimar, and Davide Italiano. They put in substantial effort in addressing the issues we found affecting FreeBSD/amd64.
To revert to using ld.bfd as the bootstrap linker, in /etc/src.conf set:

WITHOUT_LLD_BOOTSTRAP=yes

If you need to set this, please follow up with a PR or post to the freebsd-toolchain mailing list explaining how the default WITH_LLD_BOOTSTRAP failed for your use case. Note that GNU ld.bfd is still installed as /usr/bin/ld, and will still be used for linking ports. ld.lld can be installed as /usr/bin/ld by setting in /etc/src.conf:

WITH_LLD_IS_LD=yes

A followup commit will set WITH_LLD_IS_LD by default, possibly after Clang/LLVM/lld 6.0 is merged to FreeBSD.

Release notes: Yes
Sponsored by: The FreeBSD Foundation
Followup: https://www.mail-archive.com/svn-src-all@freebsd.org/msg155493.html

***

Beastie Bits
- BSDCan 2017 Interview with Peter Hessler, Reyk Floeter, and Henning Brauer (https://undeadly.org/cgi?action=article;sid=20171229080944) - video (https://www.youtube.com/watch?v=e-Xim3_rJns)
- DSBMD (https://freeshell.de/~mk/projects/dsbmd.html)
- 34c3 talk - May contain DTraces of FreeBSD (https://media.ccc.de/v/34c3-9196-may_contain_dtraces_of_freebsd)
- Scripts to run an OpenBSD mirror, rsync and verify (https://github.com/bluhm/mirror-openbsd)
- Old School PC Fonts (https://int10h.org/oldschool-pc-fonts/readme/)

Feedback/Questions
- David - Approach and Tools for Snapshots and Remote Replication (http://dpaste.com/33HKKEM#wrap)
- Brian - Help getting my FreeBSD systems talking across the city (http://dpaste.com/3QWFEYR#wrap)
- Malcolm - First BSD Meetup in Stockholm happened and it was great (http://dpaste.com/1Z9Y8H1)
- Brad - Update on TrueOS system (http://dpaste.com/3EC9RGG#wrap)

***
Show: 14
Show Overview: Brian and Tyler address some of the many layers of security required in a container environment. This show will be part of a series on container and Kubernetes security. They look at security requirements in the Container Host, Container Content, Container Registry, and Software Build Processes.

Show Notes and News:
- 10 Layers of Container Security
- Google, VMware and Pivotal announced a Hybrid Cloud partnership with Kubernetes
- Google and Cisco announced a Hybrid Cloud partnership with Kubernetes (and more)
- Docker adds support for Kubernetes to Docker EE
- Rancher makes Kubernetes the primary orchestrator
- Microsoft announces new Azure Container Service, AKS
- Oracle announced Kubernetes on Oracle Linux (and some installers)
- Heptio announces new tools

Topic 1 - Let's start at the bottom of the stack with the security needed on a container host.
- Linux namespaces - isolation
- Linux capabilities and SECCOMP - restrict routes, ports, limiting process calls
- SELinux (or AppArmor) - mandatory access controls
- cgroups - resource management

Topic 2 - Next in the stack, or outside the stack, are the sources of container content.
- Trusted sources (known registries vs. public registries, e.g. DockerHub)
- Scanning the content of containers
- Managing the versions and patches of container content

Topic 3 - Once we have the content (applications), we need a secure place to store and access it - container registries.
- Making a registry highly available
- Who manages and audits the registry?
- How to scan containers within a registry?
- How to cryptographically sign images?
- Identifying known registries
- Process for managing the content in a registry (tagging, versioning/naming, etc.)
- Automated policies (patch management, getting new content, etc.)

Topic 4 - Once we have secure content (building blocks) and a secure place to store the container images, we need to think about a secure supply chain of the software - the build process.
- Does a platform require containers, or can it accept code?
- Can it manage secure builds?
- How to build automated triggers for builds?
- How to audit those triggers (webhooks, etc.)?
- How to validate / scan / test code at different stages of a pipeline? (static analysis, dynamic analysis, etc.)
- How to promote images to a platform? (automated, manual promotion, etc.)

Feedback?
Email: PodCTL at gmail dot com
Twitter: @PodCTL
Web: http://podctl.com
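The host-level controls from Topic 1 map directly onto container-runtime flags. As an illustrative sketch (the image name `myapp` and the seccomp profile path are placeholders, and the command is printed rather than executed):

```shell
#!/bin/sh
# Topic 1's controls expressed as docker run flags (sketch, not executed):
#   --read-only                   immutable root filesystem
#   --cap-drop / --cap-add        minimal Linux capabilities
#   --security-opt seccomp=...    SECCOMP syscall filter profile
#   --pids-limit/--memory/--cpus  cgroup resource limits
RUN_CMD="docker run --read-only \
--cap-drop=ALL --cap-add=NET_BIND_SERVICE \
--security-opt seccomp=/etc/docker/seccomp-default.json \
--pids-limit=100 --memory=256m --cpus=0.5 myapp"
echo "$RUN_CMD"
```

Namespaces come for free with any container runtime; the point of the sketch is that capabilities, seccomp, and cgroup limits all have to be asked for explicitly.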
The DevOps kids have decided to come up with a new term, “observability.” We get to the bottom of the WTF barrel on what that is - it sounds like a good word-project. Also, there’s a spate of Kubernetes news, as always, and some interesting acquisitions. Plus, a micro iOS 11 review.

Meta, follow-up, etc. Patreon (https://www.patreon.com/sdt) - like anyone who starts these things, I have no idea WTF it is, if it’s a good idea, or if I should be ashamed. Need some product/market fit. Check out the Software Defined Talk Members Only White-Paper Exiguous podcast over there. Join us all in the SDT Slack (http://www.softwaredefinedtalk.com/slack).

Is “observability” just “instrumentation”? Write-up (https://medium.com/@copyconstruct/monitoring-and-observability-8417d1952e1c) from Cindy Sridharan. This guy (https://medium.com/@steve.mushero/observability-vs-monitoring-is-it-about-active-vs-passive-or-dev-vs-ops-14b24ddf182f): “Thinking directionally, Monitoring is the passive collection of Metrics, logs, etc. about a system, while Observability is the active dissemination of information from the system. Looking at it another way, from the external ‘supervisor’ perspective, I monitor you, but you make yourself Observable.” So, yes: if developers actually make their code monitorable and manageable…easy street! It’s a good detailing of that important part of DevOps. Cloud Native Java (http://amzn.to/2jPJHcv) has a good example with the default “observability” attributes for apps, and then an overview of Zipkin tracing.

Weekly k8s News: Heptio gets funding (https://www.geekwire.com/2017/heptio-raises-25m-series-b-funding-round-kubernetes-takes-world/), now “has raised $33.5 million in funding to date.” I think we’ll cover this press release in a WP episode. Also, something called “StackPointCloud” now with Istio (https://thenewstack.io/stackpointcloud-drops-istio-service-mesh-integration/).
Mesosphere adding K8s support (https://techcrunch.com/2017/09/06/mesosphere-says-its-not-bowing-to-kubernetes/) - “Guagenti also noted that he believes that Mesosphere is currently a leader in the container space, both in terms of the number of containers its users run in production and in terms of revenue (though the company sadly didn’t share any numbers).” "I think it’s fair to call Kubernetes the de facto standard for how enterprises will do container orchestration,” Derrick Harris (http://news.architecht.io/issues/with-oracle-on-board-kubernetes-has-to-be-the-de-facto-standard-for-container-orchestration-73880). Is Kubernetes Repeating OpenStack’s Mistakes? (https://www.mirantis.com/blog/is-kubernetes-repeating-openstacks-mistakes/) - Boris throwing bombs Meanwhile, an abstract of a containers penetration study (https://redmonk.com/fryan/2017/09/10/cloud-native-technologies-in-the-fortune-100/), from RedMonk: "Docker, is running at 71% across Fortune 100 companies. Kubernetes usage is running in some form at 54%, and Cloud Foundry usage is at 50%” This update from the Cloud Foundry Foundation (https://www.cloudfoundry.org/update-containers-2017-research-shows/) is a little more, er, “responsible” in pointing out flaws. Instead it just says there’s lots of growth and tire-kicking: 2016/2017 y/y shows those evaluating containers went up from 31% to 42%, while “using” ticked up a tad from 22% to 25%, n=540. Oracle’s in the CNCF (https://techcrunch.com/2017/09/13/oracle-joins-the-cloud-native-computing-foundation-as-a-platinum-member/) club! K8s on Oracle Linux, K8s for Oracle Public Cloud. “At this point, there really can’t be any doubt that Kubernetes is winning the container orchestration wars, given that virtually every major player is now backing the project, both financially and with code contributions.” James checks in on Red Hat (http://redmonk.com/jgovernor/2017/09/21/red-hat-is-pretty-good-at-being-red-hat/). 
Acquisitions & more! Rackspace acquires Datapipe (https://techcrunch.com/2017/09/11/rackspace-acquires-datapipe-as-it-looks-to-expand-its-managed-cloud-business/) “The reason we’re buying them is that we want to extend our leadership in multi-cloud services,” Rackspace chief strategy officer Matt Bradley told me. “It’s a sign and signal that we’re going for it.” Bradley expects that the combined company will make Rackspace the largest private cloud player and the largest managed hosting service. Datadog acquires Logmatic.io to add log management to its cloud monitoring platform (https://techcrunch.com/2017/09/07/datadog-acquires-logmatic-io-to-add-log-management-to-its-cloud-monitoring-platform/) Puppet Acquires Distelli (https://www.geekwire.com/2017/puppet-acquires-distelli-bolster-cloud-computing-automation-platform/), known for their Kubernetes dashboard. Jay Lyman at 451 (https://451research.com/report-short?entityId=93381). Sizing Puppet: “The company has grown to more than 500 employees, and has estimated annual revenue in the $100m range.” Coverage from Susan Hall: “What we haven’t had up to this point is all the requisite automation for moving infrastructure code and application code through any kind of automated delivery lifecycle” and now they gots that. https://thenewstack.io/puppet-will-extend-infrastructure-automation-capabilities-distelli-acquisition/ “In May, the company launched its Kubernetes dashboard K8S. It allows users to connect repositories, build images from source, then deploy them to that Kubernetes cluster. You can also set up automated pipelines to push images from one cluster to another, promote software from test/dev to prod, quickly roll back and do all this in the context of one or more Kubernetes clusters… The Kubernetes service is offered as a hosted service or in an on-prem version. 
It provides notifications through Slack.” Google pays $1.1 billion for HTC team and non-exclusive IP license (https://www.axios.com/login-2487682498.html?rebelltitem=2&utm_medium=linkshare&utm_campaign=organic#rebelltitem2) Security Corner The Apple Effect? — Why BMW might get rid of car keys (http://www.autonews.com/article/20170915/OEM06/170919789/why-bmw-might-get-rid-of-car-keys) Don’t blame Apache — EQUIFAX OFFICIALLY HAS NO EXCUSE (https://www.wired.com/story/equifax-breach-no-excuse/) Is there anything to do here? Setup layers of credit cards? Require Touch ID (etc.) approval of all financial decisions and transactions in your “account”? Food & Safety like inspectors for security? Hackers respond to Face ID on the iPhone X (http://bgr.com/2017/09/21/iphone-x-release-date-soon-hackers-eye-face-id/) iOS 11 Coté has been running the beta. It seems fine. There’s the usual Re-arrangement of how some gestures work that’s jarring at first, but after using it for awhile, you forget what they even are. The extra control center stuff is nice. The Files.app is interesting, but not too featureful. The new photo formats are annoying because, you know, non-Apple things need to support it (which they seem to?) Bonus Links Coté gives up on defining DevOps, and more Interview about DevOpsDays Auckland (https://www.infoq.com/news/2017/09/michael-cote-devops-days-nz). Is Solaris dead yet? Strongly confirmed rumors that Oracle is shutting it down (http://www.zdnet.com/article/sun-set-oracle-closes-down-last-sun-product-lines/). This guy has written a big Solaris-brain to Linux-brain manifesto/guide (http://www.brendangregg.com/blog/2017-09-05/solaris-to-linux-2017.html), plus: “[n]owadays, Sun is a cobweb-covered sign at the Facebook Menlo Park campus, kept as a warning to the next generation.” SICK BURN! 
Layoffs and more (http://dtrace.org/blogs/bmc/2017/09/04/the-sudden-death-and-eternal-life-of-solaris/): “In particular, that employees who had given their careers to the company were told of their termination via a pre-recorded call — “robo-RIF’d” in the words of one employee — is both despicable and cowardly.” HPE We Can See The New Hewlett Clearly Now, Says CEO Whitman (http://blogs.barrons.com/techtraderdaily/2017/09/05/hpe-we-can-see-the-new-hewlett-clearly-now-says-ceo-whitman/?mod=BOLBlog) - AI in storage arrays, Docker in OneView. Clearly (http://www.marketwatch.com/story/hp-enterprise-has-yet-another-confusing-plan-to-simplify-itself-2017-09-05)? Making money (https://www.cnbc.com/2017/09/05/hpe-earnings-q3-2017.html). They bought CTP!? (https://www.cloudtp.com/doppler/hewlett-packard-enterprise-to-acquire-cloud-technology-partners/) Selling hardware to cloud providers is rough (https://www.nextplatform.com/2017/09/06/prospects-leaner-meaner-hpe/). Huawei New board (http://talkincloud.com/cloud-services/chinas-huawei-braces-board-revamp-western-markets-beckon). Microsoft app support. We can all agree on food Someone has to pay attention to this real world stuff (http://www.nationalgeographic.com/magazine/2017/09/holland-agriculture-sustainable-farming/). This Tiny Country Feeds the World More on VMware/AWS The possible failures in the partnership (https://www.bloomberg.com/news/articles/2017-09-01/how-vmware-s-partnership-with-amazon-could-end-up-backfiring) - sort of an odd article in that the larger point is “maybe it won’t work.” Meanwhile, Matt Asay does some loopty-loops on it all (https://www.theregister.co.uk/2017/09/11/kubernetes_envy/). JEE Code put in github (http://go.theregister.com/feed/www.theregister.co.uk/2017/09/06/oracle_java_ee_java_se_github/). They’re giving it over to the Eclipse Foundation (https://blogs.oracle.com/theaquarium/opening-up-ee-update). Probably a good idea. 
VMware’s OpenStack: Little report from 451 (https://451research.com/report-short?entityId=93303&type=mis&alertid=693&contactid=0033200001wgKCKAA2&utm_source=sendgrid&utm_medium=email&utm_campaign=market-insight&utm_content=newsletter&utm_term=93303-VMware+sheds+free+version+of+its+OpenStack+distribution). “Going forward, users pay a onetime $995-per-CPU socket license fee, in addition to ongoing support.” Recommendations Brandon: Prophets of Rage (http://prophetsofrage.com/). Matt: American Gods (https://en.wikipedia.org/wiki/American_Gods_(TV_series)), the TV show. Zero History (https://www.amazon.com/Zero-History-Blue-William-Gibson-ebook/dp/B003YL4AGC/): finale(?) to William Gibson’s Blue Ant trilogy. LOT (https://www.lot2046.com/): a subscription-based service which distributes a basic set of clothing, footwear, essential self-care products, accessories, and media content. Engineering the End of Fashion (https://www.ssense.com/en-gb/editorial/fashion/engineering-the-end-of-fashion) Coté: Rick & Morty (http://amzn.to/2xrHo3L). These cultural guides (http://www.commisceo-global.com/country-guides) are fucking awesome! See America (http://www.commisceo-global.com/country-guides/usa-guide), Australia (http://www.commisceo-global.com/country-guides/australia-guide), and Latvia (http://www.commisceo-global.com/country-guides/latvia-guide) (no one sang at the meals I was at!). Cardenal Mendoza (https://www.thewhiskyexchange.com/p/996/cardenal-mendoza-brandy-solera-gran-reserva), brandy de jerez (https://en.wikipedia.org/wiki/Brandy_de_Jerez). And, you know, cognac/brandy in general - be a fucking adult already, you damn kids.
This week on the podcast, Kyle and Dan talk about planning PeopleTools and catch-up projects, BI Publisher security, and how to turn off excessive BI Publisher logging. We also talk about slowly killing COBOL with PeopleSoft (it's not dead yet) and using multiple Change Assistant installations.

Show Notes
- Collaborate Sessions: Advanced PS Admin (Dan and Kyle); Take Control of PeopleSoft Logs with Elasticsearch (Dan)
- New Elastic Stack 5.2 features @ 2:00
- Tabs v. Spaces - Language Analysis @ 3:00
- JQ - command line JSON parser @ 5:30 (choco install jq)
- ps-availability 2.0 @ 7:30
- SQL Server 2016 is Certified @ 13:30
- PT 8.55.14 DPK - more recent binaries @ 15:30 (and psft_cm.yaml)
- Killing COBOLs off slowly @ 19:30 (Idea: Replace Payroll COBOL with App Engine)
- COBOL DPK Follow-up @ 24:00
- BI Pub Security @ 26:15
- BI Publisher Logging @ 29:15
- Patch for BI Pub/Query Manager output @ 33:00
- OS Update from Kyle @ 34:45 (Oracle Linux or RedHat?)
- Planning PeopleTools and Catch-up Projects @ 43:00
- Kicking off a Catch-up Project @ 54:00
- Multiple Change Assistant Installations @ 57:30
A roundup of news that caught our attention in April 2016.

- Microsoft Build Developer Conference | March 30 – April 1, 2016
- Surprise announcement at Build 2016: Microsoft will support the Bash shell on Windows 10 this summer | TechCrunch Japan
- コはコンピューターのコ | Want a command line on Windows too? Let's try installing ConEmu and MSYS2
- [Breaking] Xamarin added to Visual Studio at no cost; native iOS/Android app development is now possible even with the free Community Edition. Build 2016 - Publickey
- Docker apps for Windows and Mac arrive in limited beta - ZDNet Japan
- Microsoft's microservices platform Azure Service Fabric heads to general availability | TechCrunch Japan
- Head of Oracle Linux moves to Microsoft | ZDNet
- No-Cost RHEL Developer Subscription now available – Red Hat Developer Blog

The sources for the news we cover are collected on an ongoing basis in the show's notebook, エッジのたたないポッドキャスト ネタ帳. I'm never quite sure what the mass media, or rather the media, are really saying.

MUSIC & SE from: SampleSwap and royalty-free music from 魔王魂 (Maoudamashii)
Erin sits down and talks to Mike Robinson about PlateSpin Migrate 9.2, PlateSpin Protect 10.3, and PlateSpin Forge 3.3. These releases add support for VMware vSphere 5.1, Red Hat Enterprise Linux (RHEL) 6, and Oracle Linux 6.