Virtualization: the act of creating a virtual version of something.
In this episode of the Hands-On IT podcast, Landon Miles interviews Anthony Maxwell, a software engineer at Automox. They discuss Anthony's journey from IT operations to software engineering and his home lab setup, including his favorite projects, the skills he's learned, and how he applies them in his professional life. Anthony also provides insights into using Automox for policy compliance, offers advice for those looking to start their own home labs, and shares his thoughts on virtualization, operating systems, and staying current with technology trends. This episode originally aired on July 25, 2024.
Join the vBrownBag crew for an insightful session with guest (and host!) Damian Karlson as he breaks down OpenStack for VMware administrators. From Broadcom's shake-up to cultural, operational, and technical migration differences, Damian offers a practical, grounded walk-through of what it means to move from VMware to OpenStack. ☕️ Chapters 00:00:00 – Introduction 00:07:10 – Why organizations are leaving VMware 00:20:45 – Technical differences 00:33:00 – Operational differences 00:43:30 – Culture shift and community resources Resources: https://www.openstack.org/vmware-migration-to-openstack/vmware-to-openstack-migration-guide https://www.openstack.org/coa/ https://www.linkedin.com/in/damiankarlson/ #OpenStack #VMware #OpenInfra #CloudMigration #vBrownBag
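For VMware administrators weighing the move Damian describes, a natural first exercise is scripting against OpenStack's APIs, much as you might script against vSphere today. Below is a minimal sketch, assuming the openstacksdk package and a clouds.yaml profile named "mycloud" (both assumptions for illustration, not details from the session):

```python
# Minimal sketch: the OpenStack analogue of "list my VMs" for a vSphere admin.
# Assumes openstacksdk is installed and a clouds.yaml profile named "mycloud"
# exists -- both hypothetical here.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

# Servers are OpenStack's equivalent of vSphere VMs.
for server in conn.compute.servers():
    print(f"{server.name}: {server.status}")

# Flavors play the role of VM hardware profiles (vCPU/RAM sizing).
for flavor in conn.compute.flavors():
    print(f"{flavor.name}: {flavor.vcpus} vCPU, {flavor.ram} MB RAM")
```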
Luis is an AWS Community Builder, CTO, and game developer! In this session, you'll learn how to build a live FAST (Free Ad-Supported TV) channel using AWS Elemental MediaLive. We'll walk through the end-to-end process: from ingest and transcoding, to dynamic ad insertion with MediaTailor and VOD integration via MediaConvert. This talk is perfect for engineers, architects, or media professionals looking to deliver scalable, serverless streaming solutions on AWS. 00:00 - Intro 04:50 - Building FAST Channels 06:20 - Key Concepts 18:10 - Architecture of the demo 21:00 - QRs for repo and player demo 22:25 - Building the demo live! 46:51 - Alternate Architectures 2 & 3 How to find Luis: https://www.linkedin.com/in/luis-valdivia-humareda/ Luis' links: https://github.com/lvaldivia/vbrownbag2025
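As a companion to the session, here's a minimal sketch of how you might inventory the AWS pieces Luis wires together: MediaLive channels for the live encode and MediaTailor playback configurations for ad insertion. It assumes boto3 and configured AWS credentials, and it's an illustration rather than code from the talk:

```python
# Minimal sketch: list the moving parts of a FAST channel -- MediaLive channels
# (live encode) and MediaTailor playback configurations (ad insertion).
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

medialive = boto3.client("medialive")
mediatailor = boto3.client("mediatailor")

# The live pipeline: each MediaLive channel ingests a source and transcodes it.
for channel in medialive.list_channels()["Channels"]:
    print(f"MediaLive channel {channel['Name']}: {channel['State']}")

# The ad layer: each playback configuration stitches ads into the stream.
for config in mediatailor.list_playback_configurations()["Items"]:
    print(f"MediaTailor config {config['Name']}: {config['PlaybackEndpointPrefix']}")
```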
Have you ever considered how a single server can support countless applications and workloads at once? In this episode, hosts Lois Houston and Nikita Abraham, together with Principal OCI Instructor Orlando Gentil, explore the sophisticated technologies that make this possible in modern cloud data centers. They discuss the roles of hypervisors, virtual machines, and containers, explaining how these innovations enable efficient resource sharing, robust security, and greater flexibility for organizations. Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! For the last two weeks, we've been talking about different aspects of cloud data centers. In this episode, Orlando Gentil, Principal OCI Instructor at Oracle University, joins us once again to discuss how virtualization, through hypervisors, virtual machines, and containers, has transformed data centers. 00:58 Lois: That's right, Niki. We'll begin with a quick look at the history of virtualization and why it became so widely adopted. Orlando, what can you tell us about that? Orlando: To truly grasp the power of virtualization, it's helpful to understand its journey from its humble beginnings with mainframes to its pivotal role in today's cloud computing landscape. It might surprise you, but virtualization isn't a new concept. Its roots go back to the 1960s with mainframes. In those early days, the primary goal was to isolate workloads on a single powerful mainframe, allowing different applications to run without interfering with each other. As we moved into the 1990s, the challenge shifted to underutilized physical servers. Organizations often had numerous dedicated servers, each running a single application, leading to significant waste of computing resources. This led to the emergence of virtualization as we know it today, primarily from the 1990s to the 2000s. The core idea here was to run multiple isolated operating systems on a single physical server. This innovation dramatically improved resource utilization and laid the technical foundation for cloud computing, enabling the scalable and flexible environments we rely on today. 02:26 Nikita: Interesting. So, from an economic standpoint, what pushed traditional data centers to change and opened the door to virtualization? Orlando: In the past, running applications often meant running them on dedicated physical servers. This led to a few significant challenges. First, more hardware purchases. Every new application, every new project often required its own dedicated server. This meant constantly buying new physical hardware, which quickly escalated capital expenditure.
Secondly, and hand-in-hand with more servers, came higher power and cooling costs. Each physical server consumed power and generated heat, necessitating significant investment in electricity and cooling infrastructure. The more servers, the higher these operational expenses became. And finally, a major problem was unused capacity. Despite investing heavily in these physical servers, it was common for them to run well below their full capacity. Applications typically didn't need 100% of a server's resources all the time. This meant we were wasting valuable compute power, memory, and storage, effectively wasting resources and diminishing the return on investment from those expensive hardware purchases. These economic pressures became a powerful incentive to find more efficient ways to utilize data center resources, setting the stage for technologies like virtualization. 04:05 Lois: I guess we can assume virtualization emerged as a financial game-changer. So, what kind of economic efficiencies did virtualization bring to the table? Orlando: From a CapEx or capital expenditure perspective, companies spent less on servers and data center expansion. From an OpEx or operational expenditure perspective, fewer machines meant lower electricity, cooling, and maintenance costs. It also sped up provisioning. Spinning up a new VM took minutes, not days or weeks. That improved agility and reduced the operational workload on IT teams. It also created a more scalable, cost-efficient foundation, which made virtualization not just a technical improvement, but a financial turning point for data centers. This economic efficiency is exactly what cloud providers like Oracle Cloud Infrastructure are built on, using virtualization to deliver scalable, pay-as-you-go infrastructure. 05:09 Nikita: Ok, Orlando. Let's get into the core components of virtualization. To start, what exactly is a hypervisor? Orlando: A hypervisor is a piece of software, firmware, or hardware that creates and runs virtual machines, also known as VMs. Its core function is to allow multiple virtual machines to run concurrently on a single physical host server. It acts as a virtualization layer, abstracting the physical hardware resources like CPU, memory, and storage, and allocating them to each virtual machine as needed, ensuring they can operate independently and securely. 05:49 Lois: And are there types of hypervisors? Orlando: There are two primary types of hypervisors. Type 1 hypervisors, often called bare metal hypervisors, run directly on the host server's hardware. This means they interact directly with the physical resources, offering high performance and security. Examples include VMware ESXi, Oracle VM Server, and KVM on Linux. They are commonly used in enterprise data centers and cloud environments. In contrast, type 2 hypervisors, also known as hosted hypervisors, run on top of an existing operating system like Windows or macOS. They act as an application within that operating system. Popular examples include VirtualBox, VMware Workstation, and Parallels. These are typically used for personal computing or development purposes, where you might run multiple operating systems on your laptop or desktop. 06:55 Nikita: We've spoken about the foundation provided by hypervisors. So, can we now talk about the virtual entities they manage: virtual machines? What exactly is a virtual machine and what are its fundamental characteristics?
Orlando: A virtual machine is essentially a software-based virtual computer system that runs on a physical host computer. The magic happens with the hypervisor. The hypervisor's job is to create and manage these virtual environments, abstracting the physical hardware so that multiple VMs can share the same underlying resources without interfering with each other. Each VM operates like a completely independent computer with its own operating system and applications. 07:40 Lois: What are the benefits of this? Orlando: Each VM is isolated from the others. If one VM crashes or encounters an issue, it doesn't affect the other VMs running on the same physical host. This greatly enhances stability and security. A powerful feature is the ability to run different operating systems side-by-side on the very same physical host. You could have a Windows VM, a Linux VM, and even other specialized OSes, all operating simultaneously. Consolidating workloads directly addresses the unused capacity problem. Instead of one application per physical server, you can now run multiple workloads, each in its own VM, on a single powerful physical server. This dramatically improves hardware utilization, reducing the need for constant new hardware purchases and lowering power and cooling costs. And by consolidating workloads, virtualization makes it possible for cloud providers to dynamically create and manage vast pools of computing resources. This allows users to quickly provision and scale virtual servers on demand, tapping into these shared pools of CPU, memory, and storage as needed, rather than being tied to a single physical machine. 09:10 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest technology. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 09:54 Nikita: Welcome back! Orlando, let's move on to containers. Many see them as a lighter, more agile way to build and run applications. What's your take? Orlando: A container packages an application and all its dependencies, like libraries and other binaries, into a single, lightweight executable unit. Unlike a VM, a container shares the host operating system's kernel, running on top of the container runtime process. This architectural difference provides several key advantages. Containers are incredibly portable. They can be taken virtually anywhere, from a developer's laptop to a cloud environment, and run consistently, eliminating "it works on my machine" issues. Because containers share the host OS kernel, they don't need to bundle a full operating system themselves. This results in significantly smaller footprints and less administration overhead compared to VMs. They are faster to start. Without the need to boot a full operating system, containers can start up in seconds, or even milliseconds, providing rapid deployment and scaling capabilities. 11:12 Nikita: Ok. Throughout our conversation, you've spoken about the various advantages of virtualization but let's consolidate them now. Orlando: From a security standpoint, virtualization offers several crucial benefits. Each VM operates in its own isolated sandbox.
This means if one VM experiences a security breach, the impact is generally contained to that single virtual machine, significantly limiting the spread of potential threats across your infrastructure. Containers also provide some isolation. Virtualization allows for rapid recovery. This is invaluable for disaster recovery or undoing changes after a security incident. You can implement separate firewalls, access rules, and network configurations for each VM. This granular control reduces the overall exposure and attack surface across your virtualized environments, making it harder for malicious actors to move laterally. Beyond security, virtualization also brings significant operational and agility benefits for IT management. Virtualization dramatically improves operational efficiency and agility. Things are faster. With virtualization, you can provision new servers or containers in minutes rather than days or weeks. This speed allows for quicker deployment of applications and services. It becomes much simpler to deploy consistent environments using templates and preconfigured VM images or containers. This reduces errors and ensures uniformity across your infrastructure. It's more scalable. Virtualization makes your infrastructure far more scalable. You can reshape VMs and containers to meet changing demands, ensuring your resources align precisely with your needs. These operational benefits directly contribute to the power of cloud computing, especially when we consider virtualization's role in enabling cloud and scalability. Virtualization is the very backbone of modern cloud computing, fundamentally enabling its scalability. It allows multiple virtual machines to run on a single physical server, maximizing hardware utilization, which is essential for cloud providers. This capability is the core of infrastructure-as-a-service offerings, where users can provision virtualized compute resources on demand. Virtualization makes services globally scalable. Resources can be easily deployed and managed across different geographic regions to meet worldwide demand. Finally, it provides elasticity, meaning resources can be automatically scaled up or down in response to fluctuating workloads, ensuring optimal performance and cost efficiency. 14:21 Lois: That's amazing. Thank you, Orlando, for joining us once again. Nikita: Yeah, and remember, if you want to learn more about the topics we covered today, go to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: Well, that's all we have for today. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 14:40 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
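To make the hypervisor discussion concrete, here's a minimal sketch that queries a type 2 hypervisor from the host operating system. It assumes VirtualBox's VBoxManage CLI is installed and on PATH (VirtualBox being one of the hosted hypervisors Orlando names); the script itself is an illustration, not something from the episode:

```python
# Minimal sketch: ask a type 2 hypervisor (VirtualBox) about its guests.
# Assumes the VBoxManage CLI is installed and on PATH -- an assumption for
# illustration; the episode doesn't prescribe a specific tool.
import subprocess

def vbox(*args: str) -> str:
    """Run a VBoxManage subcommand and return its stdout."""
    result = subprocess.run(
        ["VBoxManage", *args], capture_output=True, text=True, check=True
    )
    return result.stdout

# Each registered VM is an isolated guest sharing the host's physical hardware.
print("All VMs:", vbox("list", "vms"))

# Only some of them are consuming CPU and RAM right now.
print("Running VMs:", vbox("list", "runningvms"))
```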
Have you ever wondered where all your digital memories, work projects, or favorite photos actually live in the cloud? In this episode, Lois Houston and Nikita Abraham are joined by Principal OCI Instructor Orlando Gentil to discuss cloud storage. They explore how data is carefully organized, the different ways it can be stored, and what keeps it safe and easy to find. Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hey there! Last week, we spoke about the differences between traditional and cloud data centers, and covered components like CPU, RAM, and operating systems. If you haven't listened to the episode yet, I'd suggest going back and listening to it before you dive into this one. Nikita: Joining us again is Orlando Gentil, Principal OCI Instructor at Oracle University, and we're going to ask him about another fundamental concept: storage. 01:04 Lois: That's right, Niki. Hi Orlando! Thanks for being with us again today. You introduced cloud data centers last week, but tell us, how is data stored and accessed in these centers? Orlando: At a fundamental level, storage is where your data resides persistently. Data stored on a storage device is accessed by the CPU and, for specialized tasks, the GPU. The RAM acts as a high-speed intermediary, temporarily holding data that the CPU and the GPU are actively working on. This cyclical flow ensures that applications can effectively retrieve, process, and store information, forming the backbone of our computing operations in the data center. 01:52 Nikita: But how is data organized and controlled on disks? Orlando: To effectively store and manage data on physical disks, a structured approach is required, which is defined by file systems and permissions. The process begins with disks. These are the raw physical storage devices. Before data can be written to them, disks are typically divided into partitions. A partition is a logical division of a physical disk that acts as if it were a separate physical disk. This allows you to organize your storage space and even install multiple operating systems on a single drive. Once partitions are created, they are formatted with a file system. 02:40 Nikita: Ok, sorry but I have to stop you there. Can you explain what a file system is? And how is data organized using a file system? Orlando: The file system is the method and the data structure that an operating system uses to organize and manage files on storage devices. It dictates how data is named, stored, retrieved, and managed on the disk, essentially providing the roadmap for data. Common file systems include NTFS for Windows and ext4 or XFS for Linux.
Within this file system, data is organized hierarchically into directories, also known as folders. These containers help to logically group related files, which are the individual units of data, whether they are documents, images, videos, or applications. Finally, overseeing this entire organization are permissions. 03:42 Lois: And what are permissions? Orlando: Permissions define who can access specific files and directories and what actions they are allowed to perform-- for example, read, write, or execute. This access control, often managed by user, group, and other permissions, is fundamental for security, data integrity, and multi-user environments within a data center. 04:09 Lois: Ok, now that we have a good understanding of how data is organized logically, can we talk about how data is stored locally within a server? Orlando: Local storage refers to storage devices directly attached to a server or computer. The three common types are hard disk drives, solid state drives, and NVMe drives. Hard disk drives are traditional storage devices using spinning platters to store data. They offer large capacity at a lower cost per gigabyte, making them suitable for bulk data storage when high performance isn't the top priority. Unlike hard disks, solid state drives use flash memory to store data, similar to USB drives but on a larger scale. They provide significantly faster read and write speeds, better durability, and lower power consumption than hard disks, making them ideal for operating systems, applications, and frequently accessed data. Non-Volatile Memory Express, or NVMe, is a communication interface specifically designed for solid state drives that connects directly to the PCI Express bus. NVMe offers even faster performance than traditional SATA-based solid state drives by reducing latency and increasing bandwidth, making it the top choice for demanding workloads that require extreme speed, such as high-performance databases and AI applications. Each type serves different performance and cost requirements within a data center. While local storage is essential for immediate access, data centers also heavily rely on storage that isn't directly attached to a single server. 05:59 Lois: I'm guessing you're hinting at remote storage. Can you tell us more about that, Orlando? Orlando: Remote storage refers to data storage solutions that are not physically connected to the server or client accessing them. Instead, they are accessed over the network. This setup allows multiple clients or servers to share access to the same storage resources, centralizing data management and improving data availability. This architecture is fundamental to cloud computing, enabling vast pools of shared storage that can be dynamically provisioned to various users and applications. 06:35 Lois: Let's talk about the common forms of remote storage. Can you run us through them? Orlando: One of the most common and accessible forms of remote storage is Network Attached Storage or NAS. NAS is a dedicated file storage device connected to a network that allows multiple users and client devices to retrieve data from a centralized disk capacity. It's essentially a server dedicated to serving files. A client connects to the NAS over the network, and the NAS then provides access to files and folders. NAS devices are ideal for scenarios requiring shared file access, such as document collaboration, centralized backups, or serving media files, making them very popular in both home and enterprise environments.
While NAS provides file-level access over a network, some applications, especially those requiring high performance and direct block-level access to storage, need a different approach. 07:38 Nikita: And what might this approach be? Orlando: Internet Small Computer System Interface, or iSCSI, which provides block-level storage over an IP network. iSCSI is a standard that allows the SCSI protocol, traditionally used for local storage, to be sent over IP networks. Essentially, it enables servers to access storage devices as if they were directly attached, even though they are located remotely on the network. This means it can leverage standard Ethernet infrastructure, making it a cost-effective solution for creating high-performance, centralized storage accessible over an existing network. It's particularly useful for server virtualization and database environments where block-level access is preferred. While iSCSI provides block-level access over standard IP, for environments demanding even higher performance, lower latency, and greater dedicated throughput, a specialized network is often deployed. 08:47 Nikita: And what's this specialized network called? Orlando: Storage Area Network or SAN. A Storage Area Network or SAN is a high-speed network specifically designed to provide block-level access to consolidated shared storage. Unlike NAS, which provides file-level access, a SAN presents storage volumes to servers as if they were local disks, allowing for very high performance for applications like databases and virtualized environments. While iSCSI SANs use Ethernet, many high-performance SANs utilize Fibre Channel for even faster and more reliable data transfer, making them a cornerstone of enterprise data centers where performance and availability are paramount. 09:42 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest technology. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 10:26 Nikita: Welcome back! Orlando, are there any other popular storage paradigms we should know about? Orlando: Beyond file-level and block-level storage, cloud environments have popularized another flexible and highly scalable storage paradigm: object storage. Object storage is a modern approach to storing data, treating each piece of data as a distinct, self-contained unit called an object. Unlike file systems that organize data in a hierarchy or block storage that breaks data into fixed-size blocks, object storage manages data as flat, unstructured objects. Each object is stored with a unique identifier and rich metadata, making it highly scalable and flexible for massive amounts of data. This service handles the complexity of storage, providing access to vast repositories of data. Object storage is ideal for use cases like cloud-native applications, big data analytics, content distribution, and large-scale backups thanks to its immense scalability, durability, and cost effectiveness. While object storage is excellent for frequently accessed data in rapidly growing data sets, sometimes data needs to be retained for very long periods but is accessed infrequently.
For these scenarios, a specialized low-cost storage tier, known as archive storage, comes into play. 12:02 Lois: And what's that exactly? Orlando: Archive storage is specifically designed for long-term backup and retention of data that you rarely, if ever, access. This includes critical information, like old records, compliance data that needs to be kept for regulatory reasons, or disaster recovery backups. The key characteristic of archive storage is an extremely low cost per gigabyte, achieved by optimizing for infrequent access rather than speed. Historically, tape backup systems were the common solution for archiving, where data from a data center is moved to tape. In modern cloud environments, this has evolved into cloud backup solutions. Cloud-based archiving leverages highly cost-effective cloud storage tiers that are purpose-built for long-term retention, providing a scalable and often more reliable alternative to physical tapes. 13:05 Lois: Thank you, Orlando, for taking the time to talk to us about the hardware and software layers of cloud data centers. This information will surely help our listeners to make informed decisions about cloud infrastructure to meet their workload needs in terms of performance, scalability, cost, and management. Nikita: That's right, Lois. And if you want to learn more about what we discussed today, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: In our next episode, we'll take a look at more of the fundamental concepts within modern cloud environments, such as hypervisors, virtualization, and more. I can't wait to learn more about it. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 13:47 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
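To ground the object and archive storage model in code, here's a minimal sketch using the S3 API via boto3 as a stand-in; the bucket and key names are hypothetical, and OCI and other clouds expose analogous object storage APIs and archive tiers:

```python
# Minimal sketch: object storage plus an archive tier, via the S3 API (boto3).
# Bucket and key names are hypothetical; other clouds offer analogous APIs.
import boto3

s3 = boto3.client("s3")

# An object = data + a unique key + metadata, in a flat namespace (no real folders).
s3.put_object(
    Bucket="example-bucket",
    Key="reports/2024/q3.csv",
    Body=b"region,revenue\nemea,100\n",
    Metadata={"source": "finance-export"},
)

# Rarely accessed data can go straight to a low-cost archive storage class.
s3.put_object(
    Bucket="example-bucket",
    Key="backups/2019-archive.tar",
    Body=b"...",
    StorageClass="GLACIER",
)

# Retrieval uses the object's key, not a file-system path.
obj = s3.get_object(Bucket="example-bucket", Key="reports/2024/q3.csv")
print(obj["Body"].read().decode())
```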
In this podcast, we dive deep into the current state of enterprise virtualization. We dig into what's changed with Nutanix, VergeIO, Proxmox, XCP-ng, and Scale Computing, talk about where you should turn if you're trying to find your way out of the grip of VMware by Broadcom, and more! Send us a text. Support the show. This video is brought to you by us! Check out HomeLab Gear here: https://homelabgear.shop/ Visit our website here: https://2guystek.tv/ for all things 2GT! And thank you so much for listening!
Curious about what really goes on inside a cloud data center? In this episode, Lois Houston and Nikita Abraham chat with Principal OCI Instructor Orlando Gentil about how cloud data centers are transforming the way organizations manage technology. They explore the differences between traditional and cloud data centers, the roles of CPUs, GPUs, and RAM, and why operating systems and remote access matter more than ever. Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Today, we're covering the fundamentals you need to be successful in a cloud environment. If you're new to cloud, coming from a SaaS environment, or planning to move from on-premises to the cloud, you won't want to miss this. With us today is Orlando Gentil, Principal OCI Instructor at Oracle University. Hi Orlando! Thanks for joining us. 01:01 Lois: So Orlando, we know that Oracle has been a pioneer of cloud technologies and has been pivotal in shaping modern cloud data centers, which are different from traditional data centers. For our listeners who might be new to this, could you tell us what a traditional data center is? Orlando: A traditional data center is a physical facility that houses an organization's mission critical IT infrastructure, including servers, storage systems, and networking equipment, all managed on site. 01:32 Nikita: So why would anyone want to use a cloud data center? Orlando: The traditional model requires significant upfront investment in physical hardware, which you are then responsible for maintaining along with the underlying infrastructure like physical security, HVAC, backup power, and communication links. In contrast, cloud data centers offer a more agile approach. You essentially rent the infrastructure you need, paying only for what you use. In the traditional data center, scaling resources up and down can be a slow and complex process. In cloud data centers, scaling is automated and elastic, allowing resources to adjust dynamically based on demand. This shift allows businesses to move their focus from the constant upkeep of infrastructure to innovation and growth. The move represents a shift from maintenance to momentum, enabling optimized costs and efficient scaling. This fundamental shift in how IT infrastructure is managed and consumed is precisely what we mean by moving to the cloud. 02:39 Lois: So, when we talk about moving to the cloud, what does it really mean for businesses today? Orlando: Moving to the cloud represents the strategic transition from managing your own on-premise hardware and software to leveraging internet-based computing services provided by a third party.
This involves migrating your applications, data, and IT operations to a cloud environment. This transition typically aims to reduce operational overhead, increase flexibility, and enhance scalability, allowing organizations to focus more on their core business functions. 03:17 Nikita: Orlando, what's the "brain" behind all this technology? Orlando: A CPU or Central Processing Unit is the primary component that performs most of the processing inside the computer or server. It performs calculations, handling the complex mathematics and logic that drive all applications and software. It processes instructions, running tasks and operations in the background that are essential for any application. A CPU is critical for performance, as it directly impacts the overall speed and efficiency of the data center. It also manages system activities, coordinating user input, various application tasks, and the flow of data throughout the system. Ultimately, the CPU drives data center workloads from basic server operations to powering cutting-edge AI applications. 04:10 Lois: To better understand how a CPU achieves these functions and processes information so efficiently, I think it's important for us to grasp its fundamental architecture. Can you briefly explain the fundamental architecture of a CPU, Orlando? Orlando: When discussing CPUs, you will often hear about sockets, cores, and threads. A socket refers to the physical connection on the motherboard where a CPU chip is installed. A single server motherboard can have one or more sockets, each holding a CPU. A core is an independent processing unit within a CPU. Modern CPUs often have multiple cores, enabling them to handle several instructions simultaneously, thus increasing processing power. Think of it as having multiple mini CPUs on a single chip. Threads are virtual components that allow a single CPU core to handle multiple sequences of instructions, or threads, concurrently. This technology, often called hyperthreading, makes a single core appear as two logical processors to the operating system, further enhancing efficiency. 05:27 Lois: Ok. And how do CPUs process commands? Orlando: Beyond these internal components, CPUs are also designed based on different instruction set architectures, which dictate how they process commands. CPU architectures are primarily categorized into two designs-- Complex Instruction Set Computer or CISC and Reduced Instruction Set Computer or RISC. CISC processors are designed to execute complex instructions in a single step, which can reduce the number of instructions needed for a task, but often leads to higher power consumption. These are commonly found in traditional Intel and AMD CPUs. In contrast, RISC processors use a simpler, more streamlined set of instructions. While this might require more steps for a complex task, each step is faster and more energy efficient. This architecture is prevalent in ARM-based CPUs. 06:34 Are you looking to boost your expertise in enterprise AI? Check out the Oracle AI Agent Studio for Fusion Applications Developers course and professional certification—now available through Oracle University. This course helps you build, customize, and deploy AI Agents for Fusion HCM, SCM, and CX, with hands-on labs and real-world case studies. Ready to set yourself apart with in-demand skills and a professional credential? Learn more and get started today! Visit mylearn.oracle.com for more details. 07:09 Nikita: Welcome back! We were discussing CISC and RISC processors.
So Orlando, where are they typically deployed? Are there any specific computing environments and use cases where they excel? Orlando: On the CISC side, you will find them powering enterprise virtualization and server workloads, such as bare metal hypervisors and large databases where complex instructions can be efficiently processed. High performance computing that includes demanding simulations, intricate analysis, and many traditional machine learning systems. Enterprise software suites and business applications like ERP, CRM, and other complex enterprise systems that benefit from fewer steps per instruction. Conversely, RISC architectures are often preferred for cloud-native workloads such as Kubernetes clusters, where simpler, faster instructions and energy efficiency are paramount for distributed computing. Mobile device management and edge computing, including cell phones and IoT devices where power efficiency and compact design are critical. Cost-optimized cloud hosting supporting distributed workloads where the cumulative energy savings and simpler design lead to more economical operations. The choice between CISC and RISC depends heavily on the specific workload and performance requirements. While CPUs are versatile generalists, handling a broad range of tasks, modern data centers also heavily rely on another crucial processing unit for specialized workloads. 08:54 Lois: We've spoken a lot about CPUs, but our conversation would be incomplete without understanding what a Graphics Processing Unit is and why it's important. What can you tell us about GPUs, Orlando? Orlando: A GPU or Graphics Processing Unit is distinct from a CPU. While the CPU is a generalist, excelling at sequential processing and managing a wide variety of tasks, the GPU is a specialist. It is designed specifically for parallel, compute-heavy tasks. This means it can perform many calculations simultaneously, making it incredibly efficient for workloads like rendering graphics, scientific simulations, and especially in areas like machine learning and artificial intelligence, where massive parallel computation is required. In the modern data center, GPUs are increasingly vital for accelerating these specialized, data-intensive workloads. 09:58 Nikita: Besides the CPU and GPU, there's another key component that collaborates with these processors to facilitate efficient data access. What role does Random Access Memory play in all of this? Orlando: The core function of RAM is to provide faster access to information in use. Imagine your computer or server needing to retrieve data from a long-term storage device, like a hard drive. This process can be relatively slow. RAM acts as a temporary high-speed buffer. When your CPU or GPU needs data, it first checks RAM. If the data is there, it can be accessed almost instantaneously, significantly speeding up operations. This rapid access to frequently used data and programming instructions is what allows applications to run smoothly and systems to respond quickly, making RAM a critical factor in overall data center performance. While RAM provides quick access to active data, it's volatile, meaning data is lost when power is off--unlike persistent data storage, which holds the information that needs to remain available even after a system shuts down. 11:14 Nikita: Let's now talk about operating systems in cloud data centers and how they help everything run smoothly. Orlando, can you give us a quick refresher on what an operating system is, and why it is important for computing devices?
Orlando: At its core, an operating system, or OS, is the fundamental software that manages all the hardware and software resources on a computer. Think of it as a central nervous system that allows everything else to function. It performs several critical tasks, including managing memory, deciding which programs get access to memory and when, managing processes, allocating CPU time to different tasks and applications, managing files, organizing data on storage devices, handling input and output, facilitating communication between the computer and its peripherals, like keyboards, mice, and displays. And perhaps most importantly, it provides the user interface that allows us to interact with the computer. 12:19 Lois: Can you give us a few examples of common operating systems? Orlando: Common operating system examples you are likely familiar with include Microsoft Windows and macOS for personal computers, iOS and Android for mobile devices, and various distributions of Linux, which are incredibly prevalent in servers and increasingly in cloud environments. 12:41 Lois: And how are these operating systems specifically utilized within the demanding environment of cloud data centers? Orlando: The two dominant operating systems in data centers are Linux and Windows. Linux is further categorized into enterprise distributions, such as Oracle Linux or SUSE Linux Enterprise Server, which offer commercial support and stability, and community distributions, like Ubuntu and CentOS, which are developed and maintained by communities and are often free to use. On the other side, we have Windows, primarily represented by Windows Server, which is Microsoft's server operating system known for its robust features and integration with other Microsoft products. While both Linux and Windows are powerful operating systems, their licensing models can differ significantly, which is a crucial factor to consider when deploying them in a data center environment. 13:43 Nikita: In what way do the licensing models differ? Orlando: When we talk about licensing, the differences between Linux and Windows become quite apparent. For Linux, enterprise distributions come with associated support fees, which can be bundled into the initial cost or priced separately. These fees provide access to professional support and updates. On the other hand, community distributions are typically free of charge, with some providers offering basic community-driven support. Windows Server, in contrast, is a commercial product. Its license cost is generally included in the instance cost when using cloud providers or purchased directly for on-premise deployments. It's also worth noting that some cloud providers offer a bring your own license, or BYOL, program, allowing organizations to use their existing Windows licenses in the cloud, which can sometimes provide cost efficiencies. 14:46 Nikita: Beyond choosing an operating system, are there any other important aspects of data center management? Orlando: Another critical aspect of data center management is how you remotely access and interact with your servers. Remote access is fundamental for managing servers in a data center, as you are rarely physically sitting in front of them. The two primary methods that we use are SSH, or Secure Shell, and RDP, or Remote Desktop Protocol. Secure Shell is widely used for secure command line access to Linux servers. It provides an encrypted connection, allowing you to execute commands, transfer files, and manage your servers securely from a remote location.
The Remote Desktop Protocol is predominantly used for graphical remote access to Windows servers. RDP allows you to see and interact with the server's desktop interface, just as if you were sitting directly in front of it, making it ideal for tasks that require a graphical user interface. 15:54 Lois: Thank you so much, Orlando, for shedding light on this topic. Nikita: Yeah, that's a wrap for today! To learn more about what we discussed, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. In our next episode, we'll take a close look at how data is stored and managed. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 16:16 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
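Since the episode closes on remote access, here's a minimal sketch of SSH-based administration using the paramiko library; the hostname, username, and key path are hypothetical stand-ins:

```python
# Minimal sketch: encrypted remote administration over SSH with paramiko.
# Hostname, username, and key path below are hypothetical placeholders.
import os
import paramiko

client = paramiko.SSHClient()
# Auto-accepting unknown host keys is fine for a lab; pin host keys in production.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "server.example.com",
    username="opc",
    key_filename=os.path.expanduser("~/.ssh/id_rsa"),
)

# Commands, output, and file transfers all travel over the encrypted channel.
stdin, stdout, stderr = client.exec_command("uname -a && uptime")
print(stdout.read().decode())

client.close()
```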
When a company quietly builds world-class storage and virtualization software for twenty years, it usually means they have been too busy solving real problems to shout about it. That is what makes euroNAS and its founder, Tvrtko Fritz, such an interesting story. In this episode, I reconnect with Tvrtko after meeting him on the IT Press Tour in Amsterdam to learn how his company evolved from “NAS for the masses” into a trusted enterprise alternative in a market filled with bigger names. Tvrtko shares how euroNAS began with a simple idea that administrators should not have to battle complex infrastructure to keep systems running. Over time, that belief shaped a complete platform covering hyper-converged virtualization, Ceph-based storage, and instant backup and recovery. He recalls the story of a dentist who lost a full day of work waiting for a slow restore, which inspired euroNAS to create instant recovery that restores in seconds rather than hours. We also discuss how their intuitive graphical interface has turned Ceph from a daunting project that once took a week to set up into something that can be configured in twenty minutes. That change has opened advanced storage to universities, managed service providers, and enterprises handling petabyte-scale workloads. We also tackle a topic that many in IT are thinking about right now: VMware. With licensing changes frustrating customers, Tvrtko explains how euroNAS has become the quiet plan B for many organizations seeking stability and control. Its perpetual per-node licensing model removes the pressure of forced subscriptions, while tools such as the VM import wizard make migration faster and less painful. What stands out most is that Tvrtko still takes part in customer support himself, using real conversations to guide product development and keep the company close to the people who depend on it. Looking ahead, Tvrtko outlines how euroNAS is growing through partnerships with major hardware vendors and through its expanding role in AI infrastructure, where demand for scalable storage continues to rise. The conversation highlights the value of engineering-led companies that build with care, focus on reliability, and give customers genuine ownership of their systems. If you want to understand what practical innovation looks like in enterprise storage, this episode will remind you why simplicity still wins.
This week on The Data Stack Show, Alexander Patrushev joins John to share his journey from working on mainframes at IBM to leading AI infrastructure innovation at Nebius, with stops at VMware and AWS along the way. The discussion explores the evolution of AI and cloud infrastructure, the five pillars of successful machine learning projects, and the unique challenges of building and operating modern AI data centers—including energy consumption, cooling, and networking. Alexander also delves into the practicalities of infrastructure as code, the importance of data quality, and offers actionable advice for those looking to break into the AI field. Key takeaways include the need for strong data foundations, thoughtful project selection, and the value of leveraging existing skills and tools to succeed in the rapidly evolving AI landscape. Don't miss this great conversation. Highlights from this week's conversation include: Alexander's Background and Early Career at IBM (1:06); Moving From Mainframes to Virtualization at VMware (4:09); Transitioning to AWS and Machine Learning Projects (8:22); What Was Missed From Mainframes and the Rise of Public Cloud (9:03); Security, Performance, and Economics in Cloud Infrastructure (12:40); The Five Pillars of Successful Machine Learning Projects (15:02); Choosing the Right ML Project: Data, Impact, and Existing Solutions (18:01); Real-World AI and ML Use Cases Across Industries (19:42); Building Specialized AI Clouds Versus Hyperscalers (22:08); Performance, Scalability, and Reliability in AI Infrastructure (25:18); Data Center Energy Consumption and Power Challenges (28:41); Cooling, Networking, and Supporting Systems in AI Data Centers (30:06); Infrastructure as Code and Tooling in AI (31:50); Lowering Complexity for AI Developers and the Role of Abstraction (34:08); Startup Opportunities in the AI Stack (38:53); When to Fine-Tune or Post-Train Foundation Models (43:41); Comparing and Testing Models With Tool Use (47:49); Skills and Advice for Entering the AI Field (49:18); Final Thoughts and Encouragement for AI Newcomers (52:31). The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences. Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data. RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Ready to conquer DevOps fearlessly? Watch as Shala Werner walks through “DevOps Without the Oh, ****!” moments—sharing real-world stories, and her creative Terraproof sandbox for breaking things safely (without blowing up production)! Whether you're a seasoned engineer or just getting your feet wet, you'll pick up new strategies and learn how to turn nerves into a superpower. Stick around for laughs, relatable tech tales, and some solid advice on experimenting and failing smart.
Join an epic panel of AWS Heroes as they dive into their experiences with Kiro, the AI-powered IDE shaking up development workflows. From spec-driven coding to pricing discussions and game demos, this conversation mixes deep tech insights with fun moments. Whether curious about AI's impact on coding or just looking for some cloud community vibes, this session offers laughs, honest feedback, and expert viewpoints.
Looking for help navigating the changing environment around your VMware investment? What considerations do you need to make relative to staying on VMware versus moving to another solution? We have answers and guidance - check out this episode for an insightful discussion with David Stevens, Field Solution Architect, as we delve into the evolving landscape of virtualization. Despite recent changes from Broadcom, VMware still commands a significant market share, and David unpacks why. We explore the various paths VMware customers are taking today, from sticking with their current setup to exploring hypervisor alternatives, migrating to the cloud, or even transitioning to containers. This episode offers valuable anecdotes and practical insights for anyone navigating the complexities of modern IT infrastructure. Hear how Pure Storage is strategically positioned to assist organizations through these transitions, highlighting our deep integrations with VMware for over a decade and how investments in Azure VMware Solution provide a cost-effective cloud alternative. David also shares his "hot takes" on industry trends, a memorable "screwup" story, and crucial advice for the future of data management. Tune in to gain actionable takeaways on charting the right course for your VMware situation and beyond.
In this episode of the IoT For All Podcast, Gaurav Johri, co-founder and CEO of Doppelio, joins Ryan Chacon to discuss software validation and testing in IoT. The conversation covers the vital role of virtualization, the increasing complexity and distributed nature of connected products, the benefits of combining physical and virtual testing labs, the pitfalls of simulator-based approaches, intelligent automation in DevOps, the ROI of early validation, and future trends in AI, edge computing, and 5G. Gaurav Johri brings a wealth of expertise with over 25 years in steering multinational enterprises through the digital age. He has held global leadership positions at Mindtree, Onmobile, and Infosys. Johri's vision and passion for a future built on connected products shaped Doppelio as a pioneer in IoT testing. He is also a regular speaker at connected world events, such as AutomotiveIQ and IoT Tech Expo. Doppelio is a leading IoT test automation platform that enables enterprises to rapidly test connected products through advanced device virtualization at scale. Their solution creates "Doppels" (data twins) across diverse protocols, eliminating physical device dependency while enabling seamless co-existence of physical and virtual testing labs. They support comprehensive testing from simple sensors to complex industrial equipment, delivering 10x faster testing speeds, 80-90% coverage, and millions in operational savings. Trusted by Fortune 500 companies across connected elevators, medical devices, automotive, and security industries, Doppelio accelerates time-to-market while reducing field failure risks through intelligent automation. Discover more about IoT at https://www.iotforall.com Find IoT solutions: https://marketplace.iotforall.com More about Doppelio: https://doppelio.com Connect with Gaurav: https://www.linkedin.com/in/gaurav-johri/ (00:00) Intro (00:21) Gaurav Johri and Doppelio (00:56) IoT testing and its importance (03:56) Virtualization in IoT testing (06:10) Real-world examples of IoT testing (08:32) Physical vs. virtual testing labs (10:22) Limitations of simulator-based approaches (12:25) How do you enable rapid, scalable validation? (14:12) Role of intelligent automation in DevOps and CI/CD (15:43) The ROI of performing early software validation (17:35) Advice for modernizing IoT testing (19:26) Future of IoT testing with AI, edge, 5G (20:52) Learn more and follow up Subscribe to the Channel: https://bit.ly/2NlcEwm Join Our Newsletter: https://newsletter.iotforall.com Follow Us on Social: https://linktr.ee/iot4all
Unlock essential cloud security lessons with Rajeev Joshi and the vBrownBag team as they explore the most common AWS mistakes: misconfigured S3 buckets, over-permissive IAM, open ports, hard-coded secrets, and blind spots in logging. Hear real-world breach stories and learn practical best practices for safer cloud deployments, whether starting out or leveling up as a security engineer. #CloudSecurity #AWS #CyberSecurity #DevSecOps #CloudCareers #vBrownBag #Infosec Chapters 00:00:03 Cloud Security Introduction & Welcome 00:05:31 Why Cloud Security Matters: Data Breaches, Shared Responsibility 00:18:31 Top Mistakes: S3 Bucket and IAM Misconfigurations 00:31:12 Open Ports, Bad Security Groups, and Real-World Case Studies 00:40:29 Hard-Coded Secrets, Logging, and Monitoring 00:54:54 Best Practices, Careers, and Closing Advice Resources: https://www.linkedin.com/in/rajeev-joshi-7964b4221/
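In the spirit of the session's first mistake, misconfigured S3 buckets, here's a minimal sketch of an audit script; it assumes boto3 with read-only credentials and is an illustration, not code from the talk:

```python
# Minimal sketch: flag S3 buckets with no public access block configured.
# Assumes boto3 is installed and read-only AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        # All four flags (BlockPublicAcls, IgnorePublicAcls, BlockPublicPolicy,
        # RestrictPublicBuckets) should be True for a locked-down bucket.
        print(f"{name}: public access fully blocked = {all(config.values())}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: WARNING -- no public access block configured")
        else:
            raise
```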
Snigdha Joshi is a UI/UX Designer! In this session, we unravel how artificial intelligence is redefining the future of creative technology, from intuitive UI/UX design to dynamic content creation, generative art, immersive storytelling, and beyond. Gain insight into how AI is unlocking new dimensions of expression, streamlining design processes, and giving rise to transformative career roles at the intersection of imagination and intelligence. 00:00 - Intro 03:50 - Designing Tomorrow 05:00 - Fear of AI 10:29 - User Research 24:18 - Beyond Chat 36:37 - Tools that Do More 41:45 - Final Takeaway 51:22 - Q&A How to find Snigdha: https://www.linkedin.com/in/snigdha-joshi-20a476253/ Snigdha's links: https://snigdhajoshi.framer.website
In this episode of The Offset Podcast we're talking about something that's been on our mind for a while - virtualization. Specifically, how virtualization can help facilitate a color and post-production workflow. If you're new to the subject, this episode is a good primer on the essential components of virtualization. For those of you more experienced with virtualization, we believe strongly that virtualization will continue to play a large role in our industry and over the next few years become mainstream in many post and finishing workflows. Specific topics discussed include: What is virtualization and why is it important? Local hardware mentality vs virtualization; Local hypervisors and VMs; Virtualization servers (DIY local and/or cloud-based); Key vocabulary - bare metal, passthrough, etc.; The role of Remote Desktop/streaming and local clients in a virtualized setup; Why and how we're experimenting with fully virtualized workflows; The trickle-down effect of virtualization is gaining steam. If you like The Offset Podcast, we'd love it if you could do us a big favor. It'd help a lot if you could like and rate the show on Apple Podcasts, Spotify, YouTube, or wherever you listen/watch the show. Also if you liked this show consider supporting the podcast by buying us a cup of coffee. - https://buymeacoffee.com/theoffsetpodcast
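For listeners who want to try the local hypervisor route discussed in the episode, here's a minimal sketch that boots a VM with QEMU/KVM; the disk image and resource sizing are hypothetical, and GPU passthrough and streaming layers are deliberately omitted:

```python
# Minimal sketch: launch a local VM with QEMU/KVM (a DIY local hypervisor).
# Disk image path and sizing are hypothetical; assumes a Linux host with KVM
# and the qemu-system-x86_64 binary installed.
import subprocess

subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",            # use hardware virtualization on Linux/KVM hosts
    "-m", "16384",            # 16 GB RAM for the guest
    "-smp", "8",              # 8 virtual CPU cores
    "-drive", "file=grading-workstation.qcow2,format=qcow2",
    "-vga", "virtio",         # paravirtualized display device
], check=True)
```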
Apple Silicon provides so many different opportunities with virtualisation, but many of us still miss the Intel days when we could snapshot and use serial numbers to automate device enrollment. Tools like Tart provide opportunities to build workflows that allow us to configure and enhance the way we virtualise macOS. Hosts: Tom Bridge - @tbridge@theinternet.social Marcus Ransom - @marcusransom Selina Ali - LinkedIn Guests: Rob Potvin, Senior Consulting Engineer, Jamf - LinkedIn Links: MacADUK presentation https://www.youtube.com/watch?v=7DqS9bG3bkg Rob's Blog - https://www.motionbug.com/ Tart - https://tart.run/ Virtual Buddy - https://github.com/insidegui/VirtualBuddy Bushel - https://getbushel.app/ Orka - https://www.macstadium.com/orka Anka - https://veertu.com/anka-flow/ UTM - https://mac.getutm.app/ Great resource for VMs https://eclecticlight.co/ Sponsors: Kandji 1Password Nudge Security Material Security Watchman Monitoring If you're interested in sponsoring the Mac Admins Podcast, please email podcast@macadmins.org for more information. Get the latest about the Mac Admins Podcast, follow us on Twitter! We're @MacAdmPodcast! The Mac Admins Podcast has launched a Patreon Campaign! Our named patrons this month include Weldon Dodd, Damien Barrett, Justin Holt, Chad Swarthout, William Smith, Stephen Weinstein, Seb Nash, Dan McLaughlin, Joe Sfarra, Nate Cinal, Jon Brown, Dan Barker, Tim Perfitt, Ashley MacKinlay, Tobias Linder, Philippe Daoust, AJ Potrebka, Adam Burg, & Hamlin Krewson
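Here's a minimal sketch of scripting Tart the way the episode discusses, cloning a base macOS image and booting it; the base image reference follows Tart's published examples and the VM name is hypothetical:

```python
# Minimal sketch: clone and boot a macOS VM with the Tart CLI from Python.
# Assumes the tart binary is installed (e.g., via Homebrew) and on PATH;
# the VM name "build-runner" is a hypothetical placeholder.
import subprocess

BASE = "ghcr.io/cirruslabs/macos-sonoma-base:latest"
VM = "build-runner"

# Clone pulls the base image into a local VM you can configure and snapshot.
subprocess.run(["tart", "clone", BASE, VM], check=True)

# Boot the VM; --no-graphics suits headless CI-style workflows.
subprocess.run(["tart", "run", VM, "--no-graphics"], check=True)
```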
Janvi Bhagchandani is an AWS Community Builder and an AWS Cloud Captain! In this episode, she talks about personal branding and self-marketing within the AWS ecosystem while covering aspects like credentials as social proof, visible expertise, community recognition, and other psychological marketing angles that tie these elements together in the AWS context. 00:00 - Intro 05:43 - Cloud Credibility 07:46 - Authority Bias 16:01 - The 7-38-55 Rule 23:09 - The Primacy Effect 34:45 - The Reciprocity Principle 40:57 - Liking Principle 53:00 - Q&A How to find Janvi: https://www.linkedin.com/in/janvi-bhagchandani-b58853256/
In this episode we talk to AWS Hero Brian Hough: Vibe Coding with GenAI is fast and fun — until your app has to actually work in production. That's when reality hits: fragile APIs, missing auth, surprise AWS bills, strict constraints, and no clear path to scale. In this Dev Chat, I'll share what it takes to evolve from AI-generated MVPs to real-world, production-ready apps for millions of users. We'll talk infrastructure as code, scaling APIs, adding observability, and building systems that don't break under pressure. If you've used GenAI tools like Amazon Q, Bedrock, or your favorite code copilot, this session will help you ship faster and smarter. 00:00 - Intro 15:43 - Why Vibe Coding Isn't Enough 17:10 - The vibe coded initial app 18:30 - What could possibly go wrong? 24:42 - (Agenda) How we're going to fix the vibe coded app 27:55 - Fixing our vibe code workflow 29:06 - The Architecture 31:29 - Our Toolkit & Fixing all the things! 55:17 - The repo to play along at home! 55:23 - Q&A How to find Brian: https://www.linkedin.com/in/brianhhough/ https://brianhhough.com/ Brian's links: https://github.com/BrianHHough/aws-summit-2025
In this episode we talk to Kat Cosgrove, Head of Developer Advocacy at Minimus! We get into a bunch of topics: horror movies, cannibalism, contributing to open source, and how we (sometimes) destroy the things we love.
What if your phone didn't need to hold your data at all? In this episode of The Tech Trek, Amir sits down with Jared Shepard, CEO of Hypori, to explore how virtualization at the edge is transforming security, mobility, and data ownership. Jared breaks down Hypori's secure virtual mobile OS, originally built for the Department of Defense, and how it's now entering the enterprise and consumer spaces. From eliminating mobile device management to protecting sensitive data from AI exposure, this conversation is a wake-up call for any tech leader thinking about security at the edge.
Key Takeaways:
Hypori's virtual mobile OS allows users to access enterprise data securely without storing it on their device.
Virtualization collapses the attack surface by removing the edge device as a security risk.
U.S. enterprises prioritize convenience and security, while Europe pushes privacy due to GDPR—Hypori bridges both.
AI will soon enhance Hypori's platform through predictive resource allocation and network optimization.
The military's extreme security standards helped Hypori harden its platform far beyond typical commercial use cases.
Timestamped Highlights:
01:30 — What Hypori is and how it turns any device into a secure, data-less terminal
05:30 — Real-world BYOD use cases, from consultants to GDPR-compliant European enterprises
11:20 — How virtualization changes the AI risk equation and protects enterprise data from agentic threats
15:50 — Why cybersecurity should stop blaming users and start simplifying their responsibilities
18:45 — How virtualization shrinks the attack surface and simplifies network defense
22:59 — What it's like building for the Department of Defense and how that shaped Hypori's product
Quote of the Episode:
"Maybe it doesn't have to be a company's fight versus your fight for whose data belongs on your phone. What if we could just take that problem away?"
Resources Mentioned:
Hypori: www.hypori.com
Call to Action:
If this episode got you rethinking your mobile security strategy, share it with your team or your CIO. Subscribe to The Tech Trek for more conversations at the intersection of leadership, innovation, and real-world security.
Join Josh Lee and the vBrownBag crew for a lively conversation about why DevOps feels like learning a new language, and the real reasons why there aren't more junior SREs. Explore how layers of abstraction, culture, and tools make breaking in so tough, and hear how mentorship, networking, and a bit of career focus can make all the difference. Whether you're new to tech or a seasoned engineer, this talk delivers practical advice, fresh perspectives, and a few laughs. #DevOps #SRE #CareerAdvice #AIEngineering #TechLearning #vBrownBag #CloudCareers Chapters 00:00:05 DevOps Is a Foreign Language: Why There Are No Junior SREs 00:14:00 DevOps & Linguistics: Lessons from Language Learning 00:26:30 AI and the Modern Learning Stack 00:40:00 Mentorship, Networking, and Finding Your Why 00:50:00 Career Advice & Community Resources Resources: https://bsky.app/profile/joshleecreates.bsky.social https://osacom.io/events/2025/osaf-2025/ https://sessionize.com/osacon-2025/ https://altinity.com/blog/getting-started-with-altinitys-project-antalya
In Episode 182 of The Citrix Session, host Bill Sutton and Citrix's Todd Smith dive into the expanded capabilities of XenServer 8.4, Citrix's enterprise-grade hypervisor. No longer just for Citrix workloads, XenServer is now fully supported for all workloads under both Citrix Platform Licensing and UHMC—making it a strong contender for organizations exploring alternatives to VMware and Hyper-V.
Why are we still talking about virtualization? This week, Technology Now is returning to a classic topic in computing: virtualization. So, what's changed in the landscape that's brought virtualization back into the limelight, and how is it being used in our current technological landscape? Brad Parks, Chief Product & Go To Market Officer at HPE's recently acquired Morpheus Data, tells us more.
This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Aubrey Lovell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations.
About Brad Parks: https://www.linkedin.com/in/brad-parks-b190464/
Sources:
https://www.techtarget.com/searchitoperations/feature/The-history-of-virtualization-and-its-mark-on-data-center-management
https://inventivehq.com/history-of-virtualization/
Join Jenn Bergstrom and the vBrownBag crew for a deep dive into chaos engineering for machine learning and AI models. Discover how deliberate failure injection can improve system resilience, explore real-world experiments on AI model vulnerabilities, and learn why testing for failure is critical in today's fast-moving AI landscape. Whether you're an engineer, data scientist, or tech leader, this conversation is packed with practical insights, cautionary tales, and a touch of humor. #ChaosEngineering #MachineLearning #AI #vBrownBag #AIOps #ModelResilience #TechTalk Chapters: 00:00 – Introduction & vBrownBag Welcome 02:45 – What Is Chaos Engineering? 10:45 – Netflix, Chaos Monkey, and the Origins 16:40 – Chaos Engineering for AI & ML Models 27:00 – Non-Determinism in LLMs and Testing Challenges 46:00 – Organizational Adoption & Q&A Resources: https://www.linkedin.com/in/jenn-bergstrom/ https://amzn.to/44JBw5D - "Security Chaos Engineering: Sustaining Resilience in Software and Systems" https://amzn.to/4l3fFMi - "Chaos Engineering: Site reliability through controlled disruption" https://amzn.to/4kiqY1W - "Chaos Engineering: System Resiliency in Practice" https://netflix.github.io/chaosmonkey/
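To make the core idea concrete before you listen: chaos engineering injects failures on purpose and then checks whether the system degrades gracefully. Below is a minimal, hypothetical Python sketch of that loop for a model-serving call; call_model, the injected error rate, and the fallback response are all stand-ins, not anything demoed in the episode.

```python
import random
import time

def call_model(prompt: str) -> str:
    # Stand-in for a real inference call (e.g., an HTTP request to a model server).
    return f"answer to: {prompt}"

def chaotic_call_model(prompt: str, error_rate: float = 0.2, max_delay_s: float = 0.5) -> str:
    """Wrap the inference call with deliberate, configurable failure injection."""
    if random.random() < error_rate:
        raise TimeoutError("chaos: injected inference failure")
    time.sleep(random.uniform(0, max_delay_s))  # injected latency
    return call_model(prompt)

def resilient_query(prompt: str, retries: int = 2) -> str:
    """The behavior under test: does the caller degrade gracefully under failure?"""
    for _attempt in range(retries + 1):
        try:
            return chaotic_call_model(prompt)
        except TimeoutError:
            continue  # retry; a real system might back off or switch models
    return "fallback: cached or default response"

if __name__ == "__main__":
    for _ in range(5):
        print(resilient_query("classify this ticket"))
```

The experiment is the assertion around the wrapper, not the wrapper itself: you run it, observe whether callers see the fallback instead of an unhandled crash, and only then widen the blast radius.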
In this episode we talk to Rizel Scarlett - Tech Lead at Block. She talks about vibe coding, MCPs, how to do it right, and then a live demo! Join us for excellent insights & fun! 00:00 - Intro 07:05 - What is "vibe coding"? 13:34 - The Problem 15:22 - Enter MCPs 28:42 - Live Demo! 31:07 - The Dark Side 32:35 - How to be responsible How to find Rizel: https://www.linkedin.com/in/rizel-bobb-semple/ Rizel's links: https://block.github.io/goose/
This week we discuss telecom industry trends with SAP's Sandeep Chowdhury, covering 5G adoption, AI-powered network optimization, the promise of 6G, and sustainability challenges. Learn how telecom supply chains are evolving with digital transformation, circular economy practices, and integrated ecosystems to meet future connectivity and environmental demands. Come join us as we discuss the future of Supply Chain!
We spent the week learning keybindings, installing dependencies, and cramming for bonus points. Today, we score up and see how we did in the TUI Challenge.
Sponsored By:
Tailscale: Tailscale is a programmable networking software that is private and secure by default - get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
Unraid: A powerful, easy operating system for servers and storage. Maximize your hardware with unmatched flexibility.
Support LINUX Unplugged
Unlock the power of generative AI for cloud architecture! In this vBrownBag episode, Alex Kearns demonstrates how to build a well-architected review crew using agentic AI, AWS Bedrock, and open-source tools. Learn to automate AWS Well-Architected Framework reviews, leverage knowledge bases, and see a live demo analyzing CloudFormation templates. Whether you're a cloud consultant or developer, discover practical ways to scale best practices and save time with GenAI. #cloud #AWS #AI #GenerativeAI #WellArchitected #vBrownBag #CloudComputing #DevOps Chapters: 00:00 – Introduction & Guest Welcome 04:00 – Alex's Cloud & AI Journey 17:00 – Building the GenAI Review Crew 34:00 – Live Demo: Automated Well-Architected Review 53:00 – Q&A & Future of AI in Cloud Architecture Resources:
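For a flavor of what an automated review like this can look like, here is a minimal sketch using boto3's Bedrock Converse API. The model ID, region, pillar prompt, and file name are assumptions for illustration, not Alex's actual crew setup, which uses agentic tooling on top of Bedrock.

```python
import boto3

# Assumed model ID; any Bedrock text model your account has access to will do.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def review_template(template_body: str, pillar: str = "Security") -> str:
    """Ask a Bedrock model to critique a CloudFormation template against one pillar."""
    prompt = (
        f"Review this CloudFormation template against the AWS Well-Architected "
        f"{pillar} pillar. List concrete risks and suggested fixes.\n\n{template_body}"
    )
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 1024},
    )
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    with open("template.yaml") as f:  # hypothetical template path
        print(review_template(f.read()))
```

An agentic "crew" essentially runs many calls like this in parallel, one per pillar, grounded by a knowledge base, and then merges the findings.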
In this episode we talk to Justin Garrison - Head of Product at Sidero Labs, the makers of Talos! The Talos distro is a reimagining of Linux for distributed systems like Kubernetes. Talos strips away everything unnecessary—no shell, no SSH, no package manager—leaving just what you need to run K8s clusters. All system management is done through a secure API, eliminating configuration drift and reducing your attack surface with a read-only filesystem. 00:00 - Intro 06:25 - New AI business ideas! 11:51 - What does "API Driven Linux" mean? How to find Justin: justingarrison.com Justin's links: Talos: https://www.talos.dev/ Getting started: https://www.talos.dev/v1.10/introduction/quickstart/
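As a taste of what API-driven Linux means in practice, here is a minimal sketch that drives a Talos node entirely through talosctl, the client for that secure API, wrapped in Python. The node IP and cluster name are placeholders; the real walkthrough is the quickstart linked above.

```python
import subprocess

# Placeholder values; see the Talos quickstart for a real walkthrough.
NODE = "10.5.0.2"
CLUSTER = "demo-cluster"

def talosctl(*args: str) -> None:
    """Every management step is an authenticated API call; talosctl is the client."""
    subprocess.run(["talosctl", *args], check=True)

# Generate machine configs plus client credentials (talosconfig).
talosctl("gen", "config", CLUSTER, f"https://{NODE}:6443")

# Push declarative config to the node over the API: no SSH, no shell.
talosctl("apply-config", "--insecure", "--nodes", NODE, "--file", "controlplane.yaml")

# Bootstrap etcd on the first control-plane node, then fetch a kubeconfig.
talosctl("bootstrap", "--nodes", NODE, "--endpoints", NODE, "--talosconfig", "./talosconfig")
talosctl("kubeconfig", "--nodes", NODE, "--endpoints", NODE, "--talosconfig", "./talosconfig")
```

The point is that the entire lifecycle is config in, API calls out; there is no box to log into, which is exactly how configuration drift gets eliminated.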
Fresh off Red Hat Summit, Chris is eyeing an exit from NixOS. What's luring him back to the mainstream? Our highlights, and the signal from the noise from open source's biggest event of the year.
Sponsored By:
Tailscale: Tailscale is a programmable networking software that is private and secure by default - get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
Support LINUX Unplugged
First up in the news: Mint Monthly News, BackBlaze backups may be in trouble, you can run Arch inside Windows, the Linux kernel drops 486 and early 586 support, there's a new Raspberry Pi OS release, and the end of Windows 10 support brings new opportunities.
In security and privacy: openSUSE removes Deepin Desktop over security issues, and Proton threatens to quit Switzerland over a new surveillance law.
Then in our Wanderings: Bill goes mobile, Moss plays with a Pangolin, and Eric finally fixes his WiFi.
In our Innards section, we talk about virtual machines.
In Bodhi Corner, just a bit about theming.
The tech landscape is evolving faster than ever! Even if you've been in a traditional Ops role, knowing "how developers do" is essential knowledge. In this talk, Andrew Fawcett, Heroku VP of Developer Relations, will talk with us about how YOU can evolve your career and your development game using AI the RIGHT way.
Mike Fiedler, PyPI Safety and Security Engineer for the Python Software Foundation, joins the vBrownBag to talk about the risks of software supply chain insecurity, and the concrete actions that software consumers & producers can take to make their software safer. Chapters: 02:12 Introducing Mike 07:20 What is software supply chain security? 08:45 Recent examples of software supply chain compromises 12:15 How do we prevent compromises in open source software? 18:57 Software consumers & software producers in the software supply chain 21:32 Recommended practices for software consumers 42:40 Recommended practices for software producers 50:15 Where to find Mike, and audience questions Resources: https://lnk.bio/miketheman https://blog.pypi.org
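One of the simplest consumer-side practices in this space is verifying downloaded artifacts against pinned digests, the same idea behind pip's --require-hashes mode. A minimal sketch follows; the expected digest is a placeholder you would copy from the project's release page.

```python
import hashlib
import sys

# Known-good digest copied from a trusted source, e.g., the release page.
# Placeholder value here; pin the real one for your artifact.
EXPECTED_SHA256 = "<paste-known-good-sha256-here>"

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large artifacts don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    digest = sha256_of(sys.argv[1])
    if digest != EXPECTED_SHA256:
        sys.exit(f"hash mismatch: got {digest}")
    print("artifact matches the pinned digest")
```

The check is only as strong as where the pinned digest came from, which is why the episode spends so much time on producer-side signing and provenance as well.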
Kubernetes revolutionized the way software is built, deployed, and managed, offering engineers unprecedented agility and portability. But as Edera co-founder and CEO Emily Long shares, the speed and flexibility of containerization came with overlooked tradeoffs—especially in security. What started as a developer-driven movement to accelerate software delivery has now left security and infrastructure teams scrambling to contain risks that were never part of Kubernetes' original design.
Emily outlines a critical flaw: Kubernetes wasn't built for multi-tenancy. As a result, shared kernels across workloads—whether across customers or internal environments—introduce lateral movement risks. In her words, "A container isn't real—it's just a set of processes." And when containers share a kernel, a single exploit can become a system-wide threat.
Edera addresses this gap by rethinking how containers are run—not rebuilt. Drawing from hypervisor tech like Xen and modernizing it with memory-safe Rust, Edera creates isolated "zones" for containers that enforce true separation without the overhead and complexity of traditional virtual machines. This isolation doesn't disrupt developer workflows, integrates easily at the infrastructure layer, and doesn't require retraining or restructuring CI/CD pipelines. It's secure by design, without compromising performance or portability.
The impact is significant. Infrastructure teams gain the ability to enforce security policies without sacrificing cost efficiency. Developers keep their flow. And security professionals get something rare in today's ecosystem: true prevention. Instead of chasing billions of alerts and layering multiple observability tools in hopes of finding the needle in the haystack, teams using Edera can reduce the noise and gain context that actually matters.
Emily also touches on the future—including the role of AI and "vibe coding," and why true infrastructure-level security is essential as code generation becomes more automated and complex. With GPU security on their radar and a hardware-agnostic architecture, Edera is preparing not just for today's container sprawl, but tomorrow's AI-powered compute environments.
This is more than a product pitch—it's a reframing of how we define and implement security at the container level. The full conversation reveals what's possible when performance, portability, and protection are no longer at odds.
Learn more about Edera: https://itspm.ag/edera-434868
Note: This story contains promotional content. Learn more.
Guest: Emily Long, Founder and CEO, Edera | https://www.linkedin.com/in/emily-long-7a194b4/
Resources:
Learn more and catch more stories from Edera: https://www.itspmagazine.com/directory/edera
Learn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsac25
Keywords: sean martin, emily long, containers, kubernetes, hypervisor, multi-tenancy, devsecops, infrastructure, virtualization, cybersecurity, brand story, brand marketing, marketing podcast, brand story podcast
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to tell your Brand Story Briefing as part of our event coverage? Learn More
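Emily's "a container isn't real" point is easy to verify yourself. On a Linux host with Docker installed, this small sketch shows the host and a container reporting the same kernel, which is exactly the shared-kernel exposure that per-container isolation zones are meant to remove; the choice of the alpine image is arbitrary.

```python
import platform
import subprocess

def container_kernel(image: str = "alpine") -> str:
    """Ask a container which kernel it sees (requires a local Docker daemon)."""
    out = subprocess.run(
        ["docker", "run", "--rm", image, "uname", "-r"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# On a Linux host these two lines match: the "container" is just a set of
# processes scheduled by the host kernel, so a kernel exploit in one
# workload is a threat to every workload on the box.
print(f"host kernel:      {platform.release()}")
print(f"container kernel: {container_kernel()}")
```

(On macOS or Windows the values differ only because Docker runs a hidden Linux VM there, which is itself a hint at the hypervisor-style isolation Edera applies per workload.)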
We're excited to welcome Faye Ellis — Pluralsight instructor, AWS Hero, and one of my favorite people to learn from! In this session, Faye shares inexpensive and accessible ways you can start learning about Foundation Models (FMs), Large Language Models (LLMs), and AI. Whether you're just starting your AI journey or looking for practical experiments without a huge investment, this episode is packed with actionable insights. Faye also shares the top AI skills needed in 2025!
In this episode of the Future of ERP, Sandeep Chowdhury, Senior Director in SAP's Telecommunications Industry Business Unit, discusses how the telecom industry is evolving with advanced ERP systems to support Industry 4.0, 5G adoption, and smarter networks. He explains how emerging technologies like 6G and AI are transforming network operations and customer experiences. Sandeep also explores telecom's influence on other sectors, regulatory challenges, and the increasing importance of AI in automation and fraud prevention. Sustainability is a key theme, as he addresses energy use, e-waste, and circular economy efforts. Overall, Sandeep reveals how telecom is balancing innovation and environmental responsibility to shape the future of connectivity.
In this pre-event Brand Story On Location conversation recorded live from RSAC Conference 2025, Emily Long, Co-Founder and CEO of Edera, and Kaylin Trychon, Head of Communications, introduce a new approach to container security—one that doesn't just patch problems, but prevents them entirely.
Edera, just over a year old, is focused on reimagining how containers are built and run by taking a hardware-up approach rather than layering security on from the top down. Their system eliminates lateral movement and living-off-the-land attacks from the outset by operating below the kernel, resulting in simplified, proactive protection across cloud and on-premises environments.
What's notable is not just the technology, but the philosophy behind it. As Emily explains, organizations have grown accustomed to the limitations of containerization and the technical debt that comes with it. Edera challenges this assumption by revisiting foundational virtualization principles, drawing inspiration from technologies like Xen hypervisors, and applying them in modern ways to support today's use cases, including AI and GPU-driven environments.
Kaylin adds that this design-first approach means security isn't bolted on later—it's embedded from the start. And yet, it's done without disruption. Teams don't need to scrap what they have or undertake complex rebuilds. The system works with existing environments to reduce complexity and ease compliance burdens like FedRAMP.
For those grappling with infrastructure pain points—whether you're in product security, DevOps, or infrastructure—this conversation is worth a listen. Edera's vision is bold, but their delivery is practical. And yes, you'll find them roaming the show floor in bold pink—"mobile booth," zero fluff.
Listen to the episode to hear what it really means to be "secure by design" in the age of AI and container sprawl.
Learn more about Edera: https://itspm.ag/edera-434868
Note: This story contains promotional content. Learn more.
Guests:
Emily Long, Founder and CEO, Edera | https://www.linkedin.com/in/emily-long-7a194b4/
Kaylin Trychon, Head of Communications, Edera | https://www.linkedin.com/in/kaylintrychon/
Resources:
Learn more and catch more stories from Edera: https://www.itspmagazine.com/directory/edera
Learn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsac25
Keywords: emily long, kaylin trychon, sean martin, marco ciappelli, containers, virtualization, cloud, infrastructure, security, fedramp, brand story, brand marketing, marketing podcast, brand story podcast
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to tell your Brand Story Briefing as part of our event coverage? Learn More
Whitney Lee is a well-known figure in cloud computing. She is an international keynote speaker, the host of several successful streaming shows, and a Cloud Native Computing Foundation (CNCF) Ambassador. But here's the wild part: Whitney wrote her first line of code EVER when she was almost 40 years old, only 6 years ago.
Yufa Li is a Fullstack Software Engineer who has been freelancing for years! In this episode we get into ALL the questions: What was her journey like? What are the ups and downs of freelancing? What works, what doesn't, what you should charge, EVERYTHING! 00:00 - Intro 08:00 - Let's talk about freelancing 12:00 - Freelance Statistics 2025 14:30 - The Pros 21:30 - The Cons 29:00 - How to get started freelancing! 37:50 - Q&A 39:15 - Use a dedicated business account How to find Yufa: LinkedIn: https://www.linkedin.com/in/yufa-li/ Her blog: https://the-zen-programmer.beehiiv.com/subscribe
Amanda Ruzza is a DevOps Engineer, world-famous Jazz Bassist, and a Services Architect at Datadog! In this episode she shares how she 'migrated' traditional music studying techniques into learning Cloud and all things tech related! "Study is fun and it's all about falling in love with the journey."
In this episode of TechSurge, host Sriram Viswanathan sits down with Charlie Giancarlo, Chairman and CEO of Pure Storage, to discuss the evolution and future of data storage in the digital age. Charlie shares insights from his career, spanning his pivotal role in Cisco's success as a networking pioneer to his leadership in transforming Pure Storage into a leader in innovative storage solutions.
They explore the evolution of data center infrastructure, and the critical role of storage architecture in enabling AI and cloud technologies. Charlie also explains how Pure Storage's software-driven approach is creating new efficiencies and opportunities for enterprises, offering a compelling vision for a unified "data cloud" that breaks down data silos and unlocks new insights.
This episode delves into the intersections of networking, compute, storage, and AI, providing an essential perspective for anyone interested in the future of technology infrastructure.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform. Sign up for our newsletter at techsurgepodcast.com for exclusive insights and updates on upcoming TechSurge Live Summits.
Links:
Pure Storage Official Website: Explore Pure Storage's innovative data storage solutions.
Charlie Giancarlo's Biography: Learn more about Charlie Giancarlo, Chairman and CEO of Pure Storage.
Portworx by Pure Storage: Discover Portworx, the Kubernetes data services platform acquired by Pure Storage.
FlashBlade//EXA Announcement: Read about Pure Storage's FlashBlade//EXA, designed for high-performance computing and AI workloads.
Mark Tinderholt, Azure architect and author of the book "Mastering Terraform", joins the vBrownBag to talk about his top 5 Azure automation mistakes to avoid. Learn how to approach complex automation tasks and how to solve them, all while maintaining your sanity.
Du'An Lightfoot, AWS Developer Advocate, joins the vBrownBag crew to talk about how he went from frustrated tinkerer to confident builder. He'll demonstrate his actual workflow, and live-code a project using nothing but vibes and Amazon Q Developer. Come see how and why anyone with ideas can now be a developer. Chapters: 00:00 Introductions 02:49 What is vibe coding? 03:42 The importance of our foundational knowledge 07:12 Getting a high-level overview of your project 10:53 Digging deeper into specific implementations 17:30 Amazon Q Developer builds a website from scratch 25:22 Amazon Q Developer deploys the website to AWS 34:11 Documenting what it did 49:45 How Q Developer works Resources: https://twitter.com/@labeveryday https://www.linkedin.com/in/duanlightfoot https://aws.amazon.com/blogs/devops/code-security-scanning-with-amazon-q-developer/ https://x.com/KanikaTolver/status/1902488505109848176
This episode dives into the fascinating evolution of server technology, from room-sized mainframes to today's AI-powered cloud computing. It explores the innovations, rivalries, and key players—IBM, Microsoft, Unix pioneers, and the rise of Linux—that shaped the industry. The discussion covers the transition from minicomputers to personal computing, the impact of open-source software, and the shift toward containerization, hybrid cloud, and AI-driven infrastructure. With a focus on the forces driving technological progress, this episode unpacks the past, present, and future of server technology and its role in digital transformation.
Robby Stahl, technical account manager at Platform9, joins the vBrownBag crew to talk about vJailbreak, an open source tool that automates VM migration from VMware ESXi to KVM. Chapters: 00:00 Robby & Damian banter 04:49 What is vJailbreak? 10:06 vJailbreak on GitHub 13:45 A demo is attempted, but the demo gods do not approve 22:00 A video of the demo is attempted, but the video gods do not approve 23:40 Robby shares some successful customer anecdotes 34:12 Philosophizing ensues Resources: https://github.com/platform9/vjailbreak https://www.youtube.com/watch?v=seThilJ5ujM&list=PLUqDmxY3RncV-_mzIgL3P29Jssri7Y052&index=5 https://www.linkedin.com/in/robby-stahl/
Matthew Bonig, chief cloud architect at Defiance Digital, and co-author of the AWS CDK book, joins the vBrownBag crew to talk about leveraging observability & AI in software development. Chapters: 00:00 Roger, Damian, and Matthew have a bit of a chit-chat 03:27 Introducing Matthew
Aaron Hunter, AWS principal developer advocate, joins the vBrownBag crew to discuss general tips on preparing for certifications, the AWS AI/ML certification catalog, and great tools YOU can use to help pass an AWS AI/ML certification. Chapters: 00:00 Sean, the prodigal son, returns! There is much rejoicing across the lands!
Kyler Middleton joins Chris and Damian to talk about her free and open-source generative AI solution for the enterprise. The second link below includes a free 30 day subscription to Substack. You have Kyler's express permission to cancel on the last free day.
Madhura Maskasky, Co-Founder and Chief Product Officer at Platform9, talks about the current state of the VMware virtualization market, thoughts on migrations, and factors to consider to speed migrations.
SHOW: 899
SHOW TRANSCRIPT: The Cloudcast #899 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS"
SPONSORS:
Take your personal data back with Incogni! Use code CLOUDCAST at the link below and get 60% off an annual plan: http://incogni.com/cloudcast
SHOW NOTES:
Platform9 (website)
Platform9 Press Release on VMware Migrations
The Cloudcast #158 - Private Clouds with Platform9
The Serverless Cast #1 - Project Fission
Topic 1 - Welcome back to the show. For those that don't know, you are a long-time returning guest. It's been about 7 years since the last time we spoke, and this is also the third time we've talked to you and the folks over at Platform9. Before we jump in, tell us about your background…
Topic 2 - Our topic today is VMware migrations. As we've documented many times on the podcast, VMware and its customers are in flux. Many are evaluating whether they should stay or go. When you speak to customers considering this, what should they consider?
Topic 3 - In general, we keep hearing things like VMware ELAs that are now only on 3-year renewals, with price hikes of 3x, 5x, even 10x. What are you seeing? Is this a limited window of opportunity as customers make the decision to move off or lock in for another 3 years?
Topic 4 - I see two sides to this: the business side and the tech side. I see this potentially being much harder on the technical side. While it might be possible to build a business case, the technical lock-in to the hypervisor technology tends to be a factor in conversions: rebuilding years' worth of virtual networking, storage links, etc. How do you advise customers on this today?
Topic 5 - These migrations take time. Sometimes we are talking about installations in the tens of thousands of VMs, with migrations taking months to years to complete. Why would an organization go to the cost and the trouble of switching platforms?
Topic 6 - Let's say an organization decides to pull the trigger. My immediate thought goes to Day 1+ actions and operations. New platforms, new tools, new constructs. VMware installs tend to be very sticky because careers have been built on their product portfolio and expertise around that. Some engineers carry a large number of certifications and have a lot of money and time invested. How do you talk customers through this and help them?
Topic 7 - Let's close out there. Where can our listeners go for more information?
FEEDBACK?
Email: show at the cloudcast dot net
Bluesky: @cloudcastpod.bsky.social
Twitter/X: @cloudcastpod
