Podcasts about Multicloud

  • 451 PODCASTS
  • 1,351 EPISODES
  • 32m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • LATEST: Oct 24, 2025

POPULARITY (trend chart: 2017–2024)


Latest podcast episodes about Multicloud

CarahCast: Podcasts on Technology in the Public Sector
Unlocking Performance: The Future of Data Centers in the Public Sector with Equinix – Episode 2: Hybrid Multicloud Networking Sales Play

Oct 24, 2025 · 21:42


Unlock the Equinix podcast series to hear high-performance data center experts discuss how secure cloud architecture and edge data centers are driving innovation and cyber resilience in the Public Sector. Explore how Equinix Fabric and Network Edge enhance operational efficiency and security through colocation and hybrid multicloud strategies while maintaining regulatory compliance.

Oracle University Podcast
Cloud Data Centers: Core Concepts - Part 3

Oct 21, 2025 · 15:09


Have you ever considered how a single server can support countless applications and workloads at once?   In this episode, hosts Lois Houston and Nikita Abraham, together with Principal OCI Instructor Orlando Gentil, explore the sophisticated technologies that make this possible in modern cloud data centers.   They discuss the roles of hypervisors, virtual machines, and containers, explaining how these innovations enable efficient resource sharing, robust security, and greater flexibility for organizations.   Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------- Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! For the last two weeks, we've been talking about different aspects of cloud data centers. In this episode, Orlando Gentil, Principal OCI Instructor at Oracle University, joins us once again to discuss how virtualization, through hypervisors, virtual machines, and containers, has transformed data centers. 00:58 Lois: That's right, Niki. We'll begin with a quick look at the history of virtualization and why it became so widely adopted. Orlando, what can you tell us about that?  
Orlando: To truly grasp the power of virtualization, it's helpful to understand its journey from its humble beginnings with mainframes to its pivotal role in today's cloud computing landscape. It might surprise you, but virtualization isn't a new concept. Its roots go back to the 1960s with mainframes. In those early days, the primary goal was to isolate workloads on a single powerful mainframe, allowing different applications to run without interfering with each other. As we moved into the 1990s, the challenge shifted to underutilized physical servers. Organizations often had numerous dedicated servers, each running a single application, leading to significant waste of computing resources. This led to the emergence of virtualization as we know it today, primarily from the 1990s to the 2000s. The core idea here was to run multiple isolated operating systems on a single physical server. This innovation dramatically improved the resource utilization and laid the technical foundation for cloud computing, enabling the scalable and flexible environments we rely on today. 02:26 Nikita: Interesting. So, from an economic standpoint, what pushed traditional data centers to change and opened the door to virtualization? Orlando: In the past, running applications often meant running them on dedicated physical servers. This led to a few significant challenges. First, more hardware purchases. Every new application, every new project often required its own dedicated server. This meant constantly buying new physical hardware, which quickly escalated capital expenditure. Secondly, and hand-in-hand with more servers came higher power and cooling costs. Each physical server consumed power and generated heat, necessitating significant investment in electricity and cooling infrastructure. The more servers, the higher these operational expenses became. And finally, a major problem was unused capacity. 
Despite investing heavily in these physical servers, it was common for them to run well below their full capacity. Applications typically didn't need 100% of a server's resources all the time. This meant valuable compute power, memory, and storage sat idle, effectively wasting resources and diminishing the return on investment from those expensive hardware purchases. These economic pressures became a powerful incentive to find more efficient ways to utilize data center resources, setting the stage for technologies like virtualization. 04:05 Lois: I guess we can assume virtualization emerged as a financial game-changer. So, what kind of economic efficiencies did virtualization bring to the table? Orlando: From a CapEx or capital expenditure perspective, companies spent less on servers and data center expansion. From an OpEx or operational expenditure perspective, fewer machines meant lower electricity, cooling, and maintenance costs. It also sped up provisioning. Spinning up a new VM took minutes, not days or weeks. That improved agility and reduced the operational workload on IT teams. It also created a more scalable, cost-efficient foundation, which made virtualization not just a technical improvement, but a financial turning point for data centers. This economic efficiency is exactly what cloud providers like Oracle Cloud Infrastructure are built on, using virtualization to deliver scalable pay-as-you-go infrastructure. 05:09 Nikita: Ok, Orlando. Let's get into the core components of virtualization. To start, what exactly is a hypervisor? Orlando: A hypervisor is a piece of software, firmware, or hardware that creates and runs virtual machines, also known as VMs. Its core function is to allow multiple virtual machines to run concurrently on a single physical host server.
It acts as a virtualization layer, abstracting the physical hardware resources like CPU, memory, and storage, and allocating them to each virtual machine as needed, ensuring they can operate independently and securely. 05:49 Lois: And are there types of hypervisors? Orlando: There are two primary types of hypervisors. Type 1 hypervisors, often called bare-metal hypervisors, run directly on the host server's hardware. This means they interact directly with the physical resources, offering high performance and security. Examples include VMware ESXi, Oracle VM Server, and KVM on Linux. They are commonly used in enterprise data centers and cloud environments. In contrast, type 2 hypervisors, also known as hosted hypervisors, run on top of an existing operating system like Windows or macOS. They act as an application within that operating system. Popular examples include VirtualBox, VMware Workstation, and Parallels. These are typically used for personal computing or development purposes, where you might run multiple operating systems on your laptop or desktop. 06:55 Nikita: We've spoken about the foundation provided by hypervisors. So, can we now talk about the virtual entities they manage: virtual machines? What exactly is a virtual machine and what are its fundamental characteristics? Orlando: A virtual machine is essentially a software-based virtual computer system that runs on a physical host computer. The magic happens with the hypervisor. The hypervisor's job is to create and manage these virtual environments, abstracting the physical hardware so that multiple VMs can share the same underlying resources without interfering with each other. Each VM operates like a completely independent computer with its own operating system and applications. 07:40 Lois: What are the benefits of this? Orlando: Each VM is isolated from the others. If one VM crashes or encounters an issue, it doesn't affect the other VMs running on the same physical host.
This greatly enhances stability and security. A powerful feature is the ability to run different operating systems side-by-side on the very same physical host. You could have a Windows VM, a Linux VM, and even other specialized operating systems, all running simultaneously. Consolidating workloads directly addresses the unused capacity problem. Instead of one application per physical server, you can now run multiple workloads, each in its own VM, on a single powerful physical server. This dramatically improves hardware utilization, reducing the need for constant new hardware purchases and lowering power and cooling costs. And by consolidating workloads, virtualization makes it possible for cloud providers to dynamically create and manage vast pools of computing resources. This allows users to quickly provision and scale virtual servers on demand, tapping into these shared pools of CPU, memory, and storage as needed, rather than being tied to a single physical machine. 09:10 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest technology. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 09:54 Nikita: Welcome back! Orlando, let's move on to containers. Many see them as a lighter, more agile way to build and run applications. What's your take? Orlando: A container packages an application and all its dependencies, like libraries and other binaries, into a single, lightweight executable unit. Unlike a VM, a container shares the host operating system's kernel, running on top of the container runtime process. This architectural difference provides several key advantages. Containers are incredibly portable.
They can be taken virtually anywhere, from a developer's laptop to a cloud environment, and run consistently, eliminating "it works on my machine" issues. Because containers share the host OS kernel, they don't need to bundle a full operating system themselves. This results in significantly smaller footprints and less administration overhead compared to VMs. They are also faster to start. Without the need to boot a full operating system, containers can start up in seconds, or even milliseconds, providing rapid deployment and scaling capabilities. 11:12 Nikita: Ok. Throughout our conversation, you've spoken about the various advantages of virtualization, but let's consolidate them now. Orlando: From a security standpoint, virtualization offers several crucial benefits. Each VM operates in its own isolated sandbox. This means if one VM experiences a security breach, the impact is generally contained to that single virtual machine, significantly limiting the spread of potential threats across your infrastructure. Containers also provide some isolation. Virtualization allows for rapid recovery. This is invaluable for disaster recovery or undoing changes after a security incident. You can implement separate firewalls, access rules, and network configurations for each VM. This granular control reduces the overall exposure and attack surface across your virtualized environments, making it harder for malicious actors to move laterally. Beyond security, virtualization also brings significant advantages in terms of operational and agility benefits for IT management. Virtualization dramatically improves operational efficiency and agility. Things are faster. With virtualization, you can provision new servers or containers in minutes rather than days or weeks. This speed allows for quicker deployment of applications and services. It becomes much simpler to deploy consistent environments using templates and preconfigured VM images or containers.
This reduces errors and ensures uniformity across your infrastructure. Virtualization also makes your infrastructure far more scalable. You can reshape VMs and containers to meet changing demands, ensuring your resources align precisely with your needs. These operational benefits directly contribute to the power of cloud computing, especially when we consider virtualization's role in enabling cloud scalability. Virtualization is the very backbone of modern cloud computing, fundamentally enabling its scalability. It allows multiple virtual machines to run on a single physical server, maximizing hardware utilization, which is essential for cloud providers. This capability is the core of infrastructure-as-a-service offerings, where users can provision virtualized compute resources on demand. Virtualization makes services globally scalable. Resources can be easily deployed and managed across different geographic regions to meet worldwide demand. Finally, it provides elasticity, meaning resources can be automatically scaled up or down in response to fluctuating workloads, ensuring optimal performance and cost efficiency. 14:21 Lois: That's amazing. Thank you, Orlando, for joining us once again. Nikita: Yeah, and remember, if you want to learn more about the topics we covered today, go to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: Well, that's all we have for today. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 14:40 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
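The consolidation economics this episode describes, many lightly loaded dedicated servers replaced by a few well-utilized virtualization hosts, can be sketched in a few lines of Python. All utilization figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def hosts_needed(app_utilizations, target_utilization=0.8):
    """Compare one-server-per-app against consolidating the apps as VMs
    onto shared hosts filled up to a target utilization level."""
    dedicated = len(app_utilizations)   # one physical server per app
    total_load = sum(app_utilizations)  # in units of one host's capacity
    consolidated = math.ceil(total_load / target_utilization)
    return dedicated, consolidated

# Ten apps, each using only 10-25% of a server (hypothetical figures)
apps = [0.10, 0.15, 0.25, 0.10, 0.20, 0.15, 0.10, 0.25, 0.10, 0.15]
before, after = hosts_needed(apps)
print(f"dedicated servers: {before}, consolidated hosts: {after}")
# prints "dedicated servers: 10, consolidated hosts: 2"
```

Even with 20% headroom reserved per host, the same workloads fit on a fraction of the hardware, which is the CapEx and OpEx saving discussed above.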

Packet Pushers - Full Podcast Feed
TNO046: Prisma AIRS: Securing the Multi-Cloud and AI Runtime (Sponsored)

Oct 17, 2025 · 27:22


Multi-cloud, automation, and AI are changing how modern networks operate and how firewalls and security policies are administered. In today’s sponsored episode with Palo Alto Networks, we dig into offerings such as CLARA (Cloud and AI Risk Assessment) that help ops teams gain more visibility into the structure and workflows of their multi-cloud networks.

Coding talks with Vishnu VG
Oracle Multicloud: A New Era with Oracle Database@AWS

Oct 17, 2025 · 26:10


I attended Oracle Yatra in Bengaluru in July 2025, where AI and multicloud were core concepts discussed alongside Oracle Database concepts. It's interesting to see how Oracle is encouraging a multicloud strategy and AI capabilities within the Oracle ecosystem. In this article, we discuss an overview of the Oracle multicloud strategy through Oracle Database@AWS. https://builder.aws.com/content/31MWAULLazXegTJlsNVw8q36FLM/oracle-multicloud-a-new-era-with-oracle-databaseaws

Packet Pushers - Full Podcast Feed
NAN103: The Evolution of Multi-Cloud Networking

Oct 15, 2025 · 47:47


We’re thrilled to welcome Tim McConnaughy back to the podcast. Tim is a hybrid cloud network architect, author, and co-host of the Cables to Cloud podcast. He recently wrote a 5-part blog series titled 'Goodbye, Yellow Brick Road' that reflects on his career path, including his decision to leave a startup. We discuss the impetus...

CarahCast: Podcasts on Technology in the Public Sector
Achieving AI-Powered Portfolio Management for Federal Agencies

Oct 15, 2025 · 41:57


Watch the podcast to hear experts from Broadcom, Google Cloud and stackArmor discuss how agencies accelerate software delivery, improve customer experience and maintain compliance while meeting deadlines and staying within budget. Gain insights into how the Federal Government navigates FedRAMP's evolving framework, leverages AI tools for portfolio management and breaks down information silos with a unified platform.

Oracle University Podcast
Cloud Data Centers: Core Concepts - Part 2

Oct 14, 2025 · 14:16


Have you ever wondered where all your digital memories, work projects, or favorite photos actually live in the cloud?   In this episode, Lois Houston and Nikita Abraham are joined by Principal OCI Instructor Orlando Gentil to discuss cloud storage.   They explore how data is carefully organized, the different ways it can be stored, and what keeps it safe and easy to find.   Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------   Episode Transcript:    00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hey there! Last week, we spoke about the differences between traditional and cloud data centers, and covered components like CPU, RAM, and operating systems. If you haven't listened to the episode yet, I'd suggest going back and listening to it before you dive into this one.  Nikita: Joining us again is Orlando Gentil, Principal OCI Instructor at Oracle University, and we're going to ask him about another fundamental concept: storage. 01:04 Lois: That's right, Niki. Hi Orlando! Thanks for being with us again today. You introduced cloud data centers last week, but tell us, how is data stored and accessed in these centers?  Orlando: At a fundamental level, storage is where your data resides persistently. 
Data stored on a storage device is accessed by the CPU and, for specialized tasks, the GPU. The RAM acts as a high-speed intermediary, temporarily holding data that the CPU and the GPU are actively working on. This cyclical flow ensures that applications can effectively retrieve, process, and store information, forming the backbone of our computing operations in the data center. 01:52 Nikita: But how is data organized and controlled on disks? Orlando: To effectively store and manage data on physical disks, a structured approach is required, which is defined by file systems and permissions. The process begins with disks. These are the raw physical storage devices. Before data can be written to them, disks are typically divided into partitions. A partition is a logical division of a physical disk that acts as if it were a separate physical disk. This allows you to organize your storage space and even install multiple operating systems on a single drive. Once partitions are created, they are formatted with a file system. 02:40 Nikita: Ok, sorry but I have to stop you there. Can you explain what a file system is? And how is data organized using a file system? Orlando: The file system is the method and the data structure that an operating system uses to organize and manage files on storage devices. It dictates how data is named, stored, retrieved, and managed on the disk, essentially providing the roadmap for the data. Common file systems include NTFS for Windows and ext4 or XFS for Linux. Within this file system, data is organized hierarchically into directories, also known as folders. These containers help to logically group related files, which are the individual units of data, whether they are documents, images, videos, or applications. Finally, overseeing this entire organization are permissions. 03:42 Lois: And what are permissions?
Orlando: Permissions define who can access specific files and directories and what actions they are allowed to perform, for example, read, write, or execute. This access control, often managed by user, group, and other permissions, is fundamental for security, data integrity, and multi-user environments within a data center. 04:09 Lois: Ok, now that we have a good understanding of how data is organized logically, can we talk about how data is stored locally within a server? Orlando: Local storage refers to storage devices directly attached to a server or computer. The three common types are hard disk drives, solid state drives, and NVMe devices. Hard disk drives are traditional storage devices using spinning platters to store data. They offer large capacity at a lower cost per gigabyte, making them suitable for bulk data storage when high performance isn't the top priority. Unlike hard disks, solid state drives use flash memory to store data, similar to USB drives but on a larger scale. They provide significantly faster read and write speeds, better durability, and lower power consumption than hard disks, making them ideal for operating systems, applications, and frequently accessed data. Non-Volatile Memory Express, or NVMe, is a communication interface specifically designed for solid state drives that connects directly to the PCI Express bus. NVMe offers even faster performance than traditional SATA-based solid state drives by reducing latency and increasing bandwidth, making it the top choice for demanding workloads that require extreme speed, such as high-performance databases and AI applications. Each type serves different performance and cost requirements within a data center. While local storage is essential for immediate access, data centers also rely heavily on storage that isn't directly attached to a single server. 05:59 Lois: I'm guessing you're hinting at remote storage. Can you tell us more about that, Orlando?
Orlando: Remote storage refers to data storage solutions that are not physically connected to the server or client accessing them. Instead, they are accessed over the network. This setup allows multiple clients or servers to share access to the same storage resources, centralizing data management and improving data availability. This architecture is fundamental to cloud computing, enabling vast pools of shared storage that can be dynamically provisioned to various users and applications. 06:35 Lois: Let's talk about the common forms of remote storage. Can you run us through them? Orlando: One of the most common and accessible forms of remote storage is Network Attached Storage, or NAS. NAS is a dedicated file storage device connected to a network that allows multiple users and client devices to retrieve data from a centralized disk capacity. It's essentially a server dedicated to serving files. A client connects to the NAS over the network, and the NAS then provides access to files and folders. NAS devices are ideal for scenarios requiring shared file access, such as document collaboration, centralized backups, or serving media files, making them very popular in both home and enterprise environments. While NAS provides file-level access over a network, some applications, especially those requiring high performance and direct block-level access to storage, need a different approach. 07:38 Nikita: And what might this approach be? Orlando: Internet Small Computer System Interface, or iSCSI, which provides block-level storage over an IP network. iSCSI is a standard that allows the SCSI protocol, traditionally used for local storage, to be sent over IP networks. Essentially, it enables servers to access storage devices as if they were directly attached, even though they are located remotely on the network.
This means it can leverage standard Ethernet infrastructure, making it a cost-effective solution for creating high-performance, centralized storage accessible over an existing network. It's particularly useful for server virtualization and database environments where block-level access is preferred. While iSCSI provides block-level access over standard IP, for environments demanding even higher performance, lower latency, and greater dedicated throughput, a specialized network is often deployed. 08:47 Nikita: And what's this specialized network called? Orlando: Storage Area Network, or SAN. A Storage Area Network is a high-speed network specifically designed to provide block-level access to consolidated shared storage. Unlike NAS, which provides file-level access, a SAN presents storage volumes to servers as if they were local disks, allowing for very high performance for applications like databases and virtualized environments. While iSCSI SANs use Ethernet, many high-performance SANs utilize Fibre Channel for even faster and more reliable data transfer, making them a cornerstone of enterprise data centers where performance and availability are paramount. 09:42 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest technology. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 10:26 Nikita: Welcome back! Orlando, are there any other popular storage paradigms we should know about? Orlando: Beyond file-level and block-level storage, cloud environments have popularized another flexible and highly scalable storage paradigm: object storage.
Object storage is a modern approach to storing data, treating each piece of data as a distinct, self-contained unit called an object. Unlike file systems that organize data in a hierarchy, or block storage that breaks data into fixed-size blocks, object storage manages data as flat, unstructured objects. Each object is stored with a unique identifier and rich metadata, making it highly scalable and flexible for massive amounts of data. This service handles the complexity of storage, providing access to vast repositories of data. Object storage is ideal for use cases like cloud-native applications, big data analytics, content distribution, and large-scale backups thanks to its immense scalability, durability, and cost effectiveness. While object storage is excellent for frequently accessed data in rapidly growing data sets, sometimes data needs to be retained for very long periods but is accessed infrequently. For these scenarios, a specialized low-cost storage tier, known as archive storage, comes into play. 12:02 Lois: And what's that exactly? Orlando: Archive storage is specifically designed for long-term backup and retention of data that you rarely, if ever, access. This includes critical information, like old records, compliance data that needs to be kept for regulatory reasons, or disaster recovery backups. The key characteristic of archive storage is an extremely low cost per gigabyte, achieved by optimizing for infrequent access rather than speed. Historically, tape backup systems were the common solution for archiving, where data from a data center is moved to tape. In modern cloud environments, this has evolved into cloud backup solutions. Cloud-based archiving leverages highly cost-effective, durable cloud storage tiers that are purpose-built for long-term retention, providing a scalable and often more reliable alternative to physical tapes.
13:05 Lois: Thank you, Orlando, for taking the time to talk to us about the hardware and software layers of cloud data centers. This information will surely help our listeners to make informed decisions about cloud infrastructure to meet their workload needs in terms of performance, scalability, cost, and management.  Nikita: That's right, Lois. And if you want to learn more about what we discussed today, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course.  Lois: In our next episode, we'll take a look at more of the fundamental concepts within modern cloud environments, such as Hypervisors, Virtualization, and more. I can't wait to learn more about it. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 13:47 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.  
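The flat, metadata-rich object model this episode contrasts with file hierarchies can be illustrated with a toy in-memory store. The class and field names below are illustrative only, not any real object-storage API:

```python
import hashlib

class ToyObjectStore:
    """A minimal sketch of object storage: a flat namespace where each
    object lives under a unique key alongside user-defined metadata."""

    def __init__(self):
        self._objects = {}  # key -> (data, metadata); no directory tree

    def put(self, key, data, **metadata):
        # Store rich metadata with every object, plus a content checksum.
        metadata["size"] = len(data)
        metadata["etag"] = hashlib.md5(data).hexdigest()
        self._objects[key] = (data, metadata)

    def get(self, key):
        return self._objects[key]

store = ToyObjectStore()
store.put("backups/2025/db.dump", b"raw bytes",
          content_type="application/octet-stream")
data, meta = store.get("backups/2025/db.dump")
print(meta["size"], meta["content_type"])
```

The slashes in the key are just characters in a name: there is no real directory tree to traverse, which is one reason object stores scale to vast numbers of objects.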

Oracle University Podcast
AI Across Industries and the Importance of Responsible AI

Sep 30, 2025 · 18:55


AI is reshaping industries at a rapid pace, but as its influence grows, so do the ethical concerns that come with it.   This episode examines how AI is being applied across sectors such as healthcare, finance, and retail, while also exploring the crucial issue of ensuring that these technologies align with human values.   In this conversation, Lois Houston and Nikita Abraham are joined by Hemant Gahankari, Senior Principal OCI Instructor, who emphasizes the importance of fairness, inclusivity, transparency, and accountability in AI systems.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------- Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we spoke about how Oracle integrates AI capabilities into its Fusion Applications to enhance business workflows, and we focused on Predictive, Generative, and Agentic AI. Lois: Today, we'll discuss the various applications of AI. This is the final episode in our AI series, and before we close, we'll also touch upon ethical and responsible AI.  01:01 Nikita: Taking us through all of this is Senior Principal OCI Instructor Hemant Gahankari. Hi Hemant! AI is pretty much everywhere today. 
So, can you explain how it is being used in industries like retail, hospitality, health care, and so on?  Hemant: AI isn't just for sci-fi movies anymore. It's helping doctors spot diseases earlier and even discover new drugs faster. Imagine an AI that can look at an X-ray and say, hey, there is something sketchy here before a human even notices. Wild, right? Banks and fintech companies are all over AI. Fraud detection. AI has got it covered. Those robo advisors managing your investments? That's AI too. Ever noticed how e-commerce companies always seem to know what you want? That's AI studying your habits and nudging you towards that next purchase or binge watch. Factories are getting smarter. AI predicts when machines will fail so they can fix them before everything grinds to a halt. Less downtime, more efficiency. Everyone wins. Farming has gone high tech. Drones and AI analyze crops, optimize water use, and even help with harvesting. Self-driving cars get all the hype, but even your everyday GPS uses AI to dodge traffic jams. And if AI can save me from sitting in bumper-to-bumper traffic, I'm all for it. 02:40 Nikita: Agreed! Thanks for that overview, but let's get into specific scenarios within each industry.  Hemant: Let us take a scenario in the retail industry-- a retail clothing line with dozens of brick-and-mortar stores. Maintaining proper inventory levels in stores and regional warehouses is critical for retailers. In this low-margin business, being out of a popular product is especially challenging during sales and promotions. Managers want to delight shoppers and increase sales but without overbuying. That's where AI steps in. The retailer has multiple information sources, ranging from point-of-sale terminals to warehouse inventory systems. 
This data can be used to train a forecasting model that can make predictions, such as demand increase due to a holiday or planned marketing promotion, and determine the time required to acquire and distribute the extra inventory. Most ERP-based forecasting systems can produce sophisticated reports. A generative AI report writer goes further, creating custom plain-language summaries of these reports tailored for each store, instructing managers about how to maximize sales of well-stocked items while mitigating possible shortages. 04:11 Lois: Ok. How is AI being used in the hospitality sector, Hemant? Hemant: Let us take an example of a hotel chain that depends on positive ratings on social media and review websites. One common challenge they face is keeping track of online reviews, leading to missed opportunities to engage unhappy customers complaining on social media. Hotel managers don't know what's being said fast enough to address problems in real-time. Here, AI can be used to create a large data set from the tens of thousands of previously published online reviews. A textual language AI system can perform a sentiment analysis across the data to determine a baseline that can be periodically re-evaluated to spot trends. Data scientists could also build a model that correlates these textual messages and their sentiments against specific hotel locations and other factors, such as weather. Generative AI can extract valuable suggestions and insights from both positive and negative comments. 05:27 Nikita: That's great. And what about Financial Services? I know banks use AI quite often to detect fraud. Hemant: Unfortunately, fraud can creep into any part of a bank's retail operations. Fraud can happen with online transactions, from a phone or browser, and offsite ATMs too. Without trust, banks won't have customers or shareholders. Excessive fraud and delays in detecting it can violate financial industry regulations. 
Fraud detection combines AI technologies, such as computer vision to interpret scanned documents, document verification to authenticate IDs like driver's licenses, and machine learning to analyze patterns. These tools work together to assess the risk of fraud in each transaction within seconds. When the system detects a high risk, it triggers automated responses, such as placing holds on withdrawals or requesting additional identification from customers, to prevent fraudulent activity and protect both the business and its clients. 06:42 Nikita: Wow, interesting. And how is AI being used in the health industry, especially when it comes to improving patient care? Hemant: Medical appointments can be frustrating for everyone involved—patients, receptionists, nurses, and physicians. There are many time-consuming steps, including scheduling, checking in, interactions with the doctors, checking out, and follow-ups. AI can fix this problem by working through electronic health records to analyze lab results, paper forms, scans, and structured data, summarizing insights for doctors with the latest research and patient history. This helps practices reduce costs, boost earnings, and deliver faster, more personalized care. 07:32 Lois: Let's take a look at one more industry. How is manufacturing using AI? Hemant: A factory that makes metal parts and other products uses both visual inspections and electronic means to monitor product quality. A part that fails to meet the requirements may be reworked or repurposed, or it may need to be scrapped. The factory seeks to maximize profits and throughput by shipping as much good material as possible, while minimizing waste by detecting and handling defects early. The way AI can help here is with the quality assurance process, which creates X-ray images. This data can be interpreted by computer vision, which can learn to identify cracks and other weak spots, after being trained on a large data set.
In addition, problematic or ambiguous data can be highlighted for human inspectors. 08:36 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 09:20 Nikita: Welcome back! AI can be used effectively to automate a variety of tasks to improve productivity, efficiency, and cost savings. But I'm sure AI has its constraints too, right? Can you talk about what happens if AI isn't able to echo human ethics?  Hemant: AI can fail due to a lack of ethics.  AI can spot patterns, not make moral calls. It doesn't feel guilt, understand context, or take responsibility. That is still up to us.  Decisions are only as good as the data behind them. For example, health care AI underdiagnosing women because research data was mostly male. Artificial narrow intelligence tends to automate discrimination at scale. Recruiting AI downgraded resumes just because they contained the word "women's" (for example, women's chess club). Who is responsible when AI fails? For example, if a self-driving car hits someone, we cannot blame the car. Then who owns the failure? The programmer? The CEO? Can we really trust corporations or governments to have programmed AI correctly so that it isn't used for harm? So, it's clear that AI needs oversight to function smoothly. 10:48 Lois: So, Hemant, how can we design AI in ways that respect and reflect human values? Hemant: Think of ethics like a tree. It needs all parts working together. Roots represent intent. That is our values and principles. The trunk stands for safeguards, our systems, and structures. And the branches are the outcomes we aim for.
If the roots are shallow, the tree falls. If the trunk is weak, damage seeps through. The health of roots and trunk shapes the strength of our ethical outcomes. Fairness means nothing without ethical intent behind it. For example, a bank promotes its loan algorithm as fair. But it uses zip codes in decision-making, effectively penalizing people based on race. That's not fairness. That's harm disguised as data. Inclusivity depends on the intent of sustainability. Inclusive design isn't just a check box. It needs a long-term commitment. For example, controllers for gamers with disabilities are only possible because of sustained R&D and intentional design choices. Without investment in inclusion, accessibility is left behind. Transparency depends on the safeguard of robustness. Transparency is only useful if the system is secure and resilient. For example, a medical AI may be explainable, but if it is vulnerable to hacking, transparency won't matter. Accountability depends on the safeguards of privacy and traceability. You can't hold people accountable if there is no trail to follow. For example, after a fatal self-driving car crash, deleted system logs meant no one could be held responsible. Without auditability, accountability collapses. So remember, outcomes are what we see, but they rely on intent to guide priorities and safeguards to support execution. That's why humans must have a final say. AI has no grasp of ethics, but we do. 13:16 Nikita: So, what you're saying is ethical intent and robust AI safeguards need to go hand in hand if we are to truly leverage AI we can trust. Hemant: When it comes to AI, preventing harm is a must. Take self-driving cars, for example. Keeping pedestrians safe is absolutely critical, which means the technology has to be rock solid and reliable. At the same time, fairness and inclusivity can't be overlooked.
If an AI system used for hiring learns from biased past data, say, mostly male candidates being hired, it can end up repeating those biases, shutting out qualified candidates unfairly. Transparency and accountability go hand in hand. Imagine a loan rejection: if the AI's decision isn't clear or explainable, it becomes impossible for the applicant to challenge or understand why they were turned down. And of course, robustness supports fairness too. Loan approval systems need strong security to prevent attacks that could manipulate decisions and undermine trust.  We must build AI that reflects human values and has safeguards. This makes sure that AI is fair, inclusive, transparent, and accountable.  14:44 Lois: Before we wrap, can you talk about why AI can fail? Let's continue with your analogy of the tree. Can you explain how AI failures occur and how we can address them? Hemant: Root elements like do not harm and sustainability are fundamental to ethical AI development. When these roots fail, the consequences can be serious. For example, a clear failure of do not harm is AI-powered surveillance tools misused by authoritarian regimes. This happens because there were no ethical constraints guiding how the technology was deployed. The solution is clear: implement strong ethical use policies and conduct human rights impact assessments to prevent such misuse. On the sustainability front, training AI models can consume massive amounts of energy. This failure occurs because environmental costs are not considered. To fix this, organizations are adopting carbon-aware computing practices to minimize AI's environmental footprint. By addressing these root failures, we can ensure AI is developed and used responsibly with respect for human rights and the planet. An example of a robustness failure can be a chatbot hallucinating nonexistent legal precedents used in court filings. This could be due to training on unverified internet data and no fact-checking layer.
This can be fixed by grounding in authoritative databases. An example of a privacy failure can be an AI facial recognition database created without user consent. The reason being no consent was taken for data collection. This can be fixed by adopting privacy-preserving techniques. An example of a fairness failure can be generated images showing CEOs as white men and nurses as women or minorities. The reason being training on imbalanced internet images reflecting societal stereotypes. And the fix is to use a diverse set of images. 17:18 Lois: I think this would be incomplete if we don't talk about inclusivity, transparency, and accountability failures. How can they be addressed, Hemant? Hemant: An example of an inclusivity failure can be a voice assistant not understanding accents. The reason being training data lacked diversity. And the fix is to use inclusive data. An example of a transparency and accountability failure can be teachers not being able to challenge AI-generated performance scores due to opaque calculations. The reason being no explainability tools were used. The fix: high-impact AI needs human review pathways and explainability built in. 18:04 Lois: Thank you, Hemant, for a fantastic conversation. We got some great insights into responsible and ethical AI. Nikita: Thank you, Hemant! If you're interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the AI for You course. Until next time, this is Nikita Abraham…. Lois: And Lois Houston, signing off! 18:26 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.  
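To make the hiring-bias failure discussed in this episode concrete: disparate impact in decisions like hiring can be checked with a simple selection-rate audit. Below is a minimal Python sketch; the groups, sample data, and the 80% threshold (the common "four-fifths rule" heuristic) are illustrative assumptions, not something from the episode or any Oracle tool.

```python
# Toy fairness audit: demographic parity check on hiring decisions.
# Purely illustrative; the groups and data below are made up.

def selection_rates(decisions):
    """Compute the hire rate per group from (group, hired) pairs."""
    totals, hires = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def passes_four_fifths_rule(rates):
    """Flag disparate impact: lowest rate must be >= 80% of the highest."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi == 0 or lo / hi >= 0.8

decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = selection_rates(decisions)
print(rates)                           # {'A': 0.75, 'B': 0.25}
print(passes_four_fifths_rule(rates))  # False -> potential bias to investigate
```

A real audit would use much larger samples and statistical significance testing, but the same idea (compare outcomes across groups before deploying the model) underlies the fairness failures described above.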

nuboRadio -  Office 365 für Cloud-Worker und Teams
Multi-Cloud 2025 - Why the Mix Makes the Difference

nuboRadio - Office 365 für Cloud-Worker und Teams

Play Episode Listen Later Sep 29, 2025 11:21


Today we're heading high up, into the cloud. Or rather: into many clouds at once. Our topic is multi-cloud, a concept that is already a reality for many companies but still raises plenty of questions. What exactly does multi-cloud mean? Why do so many companies rely on several providers at the same time? And how can this variety be put to good use without getting lost in complexity? In this episode you'll get answers: practical, understandable, and with a look at the most important trends for 2025.

Software Defined Talk
Episode 539: The Final Demand

Software Defined Talk

Play Episode Listen Later Sep 26, 2025 56:03


This week, we cover Oracle's OpenAI deal, the RubyGems drama, and Atlassian buying DX. Plus, does anyone still use widgets? Watch the YouTube Live Recording of Episode (https://www.youtube.com/live/ptnxBcE_6FQ?si=lapKMarRCBFbeAET) 539 (https://www.youtube.com/live/ptnxBcE_6FQ?si=lapKMarRCBFbeAET) Runner-up Titles It's a two knob problem The healthy jaundice of success My homework is to go home Are you enjoying the widgets? I get you on the Ponzi Scheme Hanlon's Razor strikes again Blogging: Hardest form of social media Rundown Oracle Exclusive | Oracle, OpenAI Sign Massive $300 Billion Cloud Computing Deal (https://www.wsj.com/business/openai-oracle-sign-300-billion-computing-deal-among-biggest-in-history-ff27c8fe) Oracle and OpenAI are full of crap (https://bsky.app/profile/edzitron.com/post/3lynpe7zmas2k) OpenAI doesn't have the cash to pay Oracle $300 billion — raising it will test the very limits of private markets (https://sherwood.news/markets/openai-doesnt-have-the-cash-to-pay-oracle-usd300-billion-raising-it-will/) Nvidia stock jumps on $100 billion OpenAI investment as Huang touts 'biggest AI infrastructure project in history (https://finance.yahoo.com/news/nvidia-stock-jumps-on-100-billion-openai-investment-as-huang-touts-biggest-ai-infrastructure-project-in-history-171740509.html) Ruby Central Takes Over RubyGems (https://mjtsai.com/blog/2025/09/23/ruby-central-takes-over-rubygems/) Atlassian Atlassian acquires DX, a developer productivity platform, for $1B (https://techcrunch.com/2025/09/18/atlassian-acquires-dx-a-developer-productivity-platform-for-1b/) Atlassian acquires developer productivity startup DX for $1B (https://siliconangle.com/2025/09/18/atlassian-acquires-developer-productivity-startup-dx-1b/) The AI Shift: Static Software vs. 
Living AI Systems (https://cloudedjudgement.substack.com/p/clouded-judgement-91925-the-ai-shift) RSS co-creator launches new protocol for AI data licensing (https://techcrunch.com/2025/09/10/rss-co-creator-launches-new-protocol-for-ai-data-licensing/) Nvidia to Invest $5 Billion in Intel, Furthering Trump's Turnaround Plan (https://www.wsj.com/tech/ai/nvidia-intel-5-billion-investment-ad940533?mod=hp_lead_pos1) Relevant to your Interests Tesla Wants Out of the Car Business (https://www.theatlantic.com/technology/archive/2025/09/tesla-elon-musk-master-plan-robotaxi/684122/) Google is shutting down Tables, its Airtable rival | TechCrunch (https://techcrunch.com/2025/09/11/google-is-shutting-down-tables-its-airtable-rival/) Oracle's stock pump, Meta's $600B, Bronny Ellison and Warner Bros, European stereotypes (https://platformonomics.com/2025/09/platformonomics-tgif-99-september-12-2025/) Atlassian goes cloud-only, customers face integration issues (https://www.theregister.com/2025/09/09/atlassian_will_go_cloudonly_customers/) Getting a slice of the Kubernete$ management pie (https://newsletter.cote.io/p/getting-a-slice-of-the-kubernete) Cote on Multicloud (https://cote.io/2025/09/14/i-think-this-means-thing.html) ServiceNow Says Windsurf Gave Its Engineers a 10% Productivity Boost (https://bsky.app/profile/thenewstack.io/post/3lyvqw6lc6522) Most Work is Translation (https://open.substack.com/pub/aparnacd/p/most-work-is-translation?r=2d4o&utm_medium=ios) Microsoft warns users that Windows 10 is in its final days (https://go.theregister.com/feed/www.theregister.com/2025/09/16/windows_10_final_countdown/) How to use Tahoe's new Use Model shortcut to summarize articles (https://cote.io/2025/09/16/how-to-use-tahoes-new.html) Credit scores drop at fastest pace since the Great Recession | CNN Business (https://www.cnn.com/2025/09/16/economy/debt-credit-score-student-loans) Workday to buy AI firm Sana for $1.1 billion as HR software deal-making heats up 
(https://www.reuters.com/business/workday-buy-ai-firm-sana-11-billion-hr-software-deal-making-heats-up-2025-09-16/) Wasm 3.0 Completed - WebAssembly (https://webassembly.org/news/2025-09-17-wasm-3.0/) Exclusive: AI's ability to displace jobs is advancing quickly, Anthropic CEO says (https://www.axios.com/2025/09/17/anthropic-amodei-ai) From the facepalm community on Reddit: Meta's live AI cooking demo fails spectacularly (https://www.reddit.com/r/facepalm/s/VI8YmDY29p) Meta CTO explains the cause of its embarrassing smart glasses demo failures (https://www.engadget.com/wearables/meta-cto-explains-the-cause-of-its-embarrassing-smart-glasses-demo-failures-123011790.html) New H-1B rules sparked weekend chaos (https://www.morningbrew.com/stories/2025/09/22/new-h-1b-rules-sparked-weekend-chaos) The Man Calling Bullshit on the AI Boom (https://www.readtpa.com/p/the-man-calling-bullshit-on-the-ai?utm_campaign=post&utm_medium=web) Trump's H-1B visa fee isn't just about immigration, it's about fealty (https://www.theverge.com/report/782289/trumps-h-1b-visa-fee-isnt-about-immigration-its-about-fealty) Vivaldi takes a stand: keep browsing human | Vivaldi Browser (https://vivaldi.com/blog/keep-exploring/) Zoom Bets on Agentic AI With AI Companion 3.0 Amid Sluggish Growth (https://diginomica.com/zoom-unveils-ai-companion-30-betting-agentic-ai-drive-enterprise-growth) The Secret Service has dismantled a telecom threat near the UN. 
It could have disabled cell service in NYC (https://www.pbs.org/newshour/nation/the-secret-service-has-dismantled-a-telecom-threat-near-the-un-it-could-have-disabled-cell-service-in-nyc) Enterprise AI Looks Bleak, But Employee AI Looks Bright (https://www.dbreunig.com/2025/09/15/ai-adoption-at-work-play.html) Obot AI Secures $35M Seed to Build Enterprise MCP Gateway - obot (https://obot.ai/obot-ai-secures-35m-seed-to-build-enterprise-mcp-gateway/) Announcing the 2025 DORA Report | Google Cloud Blog (https://cloud.google.com/blog/products/ai-machine-learning/announcing-the-2025-dora-report/) Conferences Civo Navigate London (https://www.civo.com/navigate/london/2025), Coté speaking, September 30th. Texas Linux Fest (https://2025.texaslinuxfest.org), Austin, October 3rd to 4th. CF Day EU (https://events.linuxfoundation.org/cloud-foundry-day-europe/), Coté speaking, Frankfurt, October 7th, 2025. AI for the Rest of Us (https://aifortherestofus.live/london-2025), Coté speaking, October 15th-16th, London. Use code SDT20 for 20% off. Wiz Wizdom Conferences (https://www.wiz.io/wizdom), NYC November 3-5, London November 17-19 SREDay Amsterdam (https://sreday.com/2025-amsterdam-q4/), Coté speaking, November 7th. 
SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: Task (https://www.rottentomatoes.com/tv/task) Matt: OpenCore Legacy Patcher (https://dortania.github.io/OpenCore-Legacy-Patcher/) Photo Credits Header (https://unsplash.com/photos/black-ipad-on-white-table-Sw-JgeAosME)

Cloud Wars Live with Bob Evans
Gary Miller on Aligning Customer and Partner Success in the AI Era | Cloud Wars Live

Cloud Wars Live with Bob Evans

Play Episode Listen Later Sep 17, 2025 18:53


Gary Miller, Executive Vice President and Customer Success Officer, Oracle, talks to Bob Evans about how Oracle is helping customers navigate their AI journeys — whether they're just starting out or scaling enterprise-wide adoption. He shares how Oracle is embedding AI across its entire technology stack, aligning partner and customer success strategies, and empowering organizations through tools like Cloud Success Navigator, Innovation Studios, and democratized AI training to deliver real, measurable business value.

AI-Powered Customer Wins

The Big Themes:

Embedding AI Across the Entire Stack: Oracle is not just adding AI as a feature — it's fundamentally integrating AI into its entire technology stack. Gary Miller notes that many customers are surprised to discover that large language models are being trained and deployed on OCI, and that hundreds of AI capabilities are embedded directly into Fusion Applications and Oracle Database. Once customers understand this depth of integration, they quickly shift from curiosity to action, asking for guidance on how to adopt AI now, what use cases to prioritize, and how to define success.

Cloud Success Navigator Is Central to AI Adoption Strategy: The Oracle Cloud Success Navigator has emerged as a pivotal tool for AI and cloud adoption. What started as a promise in a previous conversation is now a robust, free digital platform that helps customers and partners create innovation roadmaps, prioritize features, and accelerate time to value. With over 6,000 customers and 235 partners using the platform since March, the tool enables organizations to track over 11,000 adopted features — including 450 AI-specific ones.

AI World 2025 Will Spotlight Real Customer Outcomes: At the upcoming AI World 2025 event, Oracle plans to go beyond product announcements to highlight customer success stories. Miller will host a keynote titled “Bold Outcomes,” featuring innovative customers and partners sharing their journeys. Oracle is also gamifying the learning experience with “AI Industry Adventure,” a theme-park-style game in Customer Success Central. Attendees will solve real-world industry challenges using Oracle Cloud AI solutions, making learning both interactive and fun.

The Big Quote: “Customers are often unaware of how Oracle has embedded AI capabilities across the whole stack. Once they understand that, then they ask us for expert guidance on how best to achieve their transformation goals using Oracle AI solutions. I had one CEO, he said, after he saw this, he said, 'Well, don't let us fumble around in the dark looking for value. You know, where it is, point us there.' And so they asked, how can I start adopting AI in my current environment? . . . How do I define AI success metrics and realize AI value? That's the key thing."

More from Gary Miller and Oracle: Connect with Gary Miller on LinkedIn or learn more about Oracle and AI. Visit Cloud Wars for more.

Oracle University Podcast
Oracle's AI Ecosystem

Oracle University Podcast

Play Episode Listen Later Sep 16, 2025 15:39


In this episode, Lois Houston and Nikita Abraham are joined by Principal Instructor Yunus Mohammed to explore Oracle's approach to enterprise AI. The conversation covers the essential components of the Oracle AI stack and how each part, from the foundational infrastructure to business-specific applications, can be leveraged to support AI-driven initiatives.   They also delve into Oracle's suite of AI services, including generative AI, language processing, and image recognition.     AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   -------------------------------------------------------------   Episode Transcript:  00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we discussed why the decision to buy or build matters in the world of AI deployment. Lois: That's right, Niki. Today is all about the Oracle AI stack and how it empowers not just developers and data scientists, but everyday business users as well. Then we'll spend some time exploring Oracle AI services in detail.  01:00 Nikita: Yunus Mohammed, our Principal Instructor, is back with us today. Hi Yunus! Can you talk about the different layers in Oracle's end-to-end AI approach? 
Yunus: The first base layer is the foundation of AI infrastructure, the powerful compute and storage layer that enables scalable model training and inference. Sitting above the infrastructure, we have the data platform. This is where data is stored, cleaned, and managed. Without a reliable data foundation, AI simply can't perform. So data is the base of AI, and reliable data gives the AI the support it needs to do its job. Then, we have AI and ML services. These provide ready-to-use tools for building, training, and deploying custom machine learning models. Next to the AI/ML services, we have generative AI services. This is where Oracle enables advanced language models and agentic AI tools that can generate content, summarize documents, or assist users through chat interfaces. Then, we have the top layer, the applications, things like Fusion applications or industry-specific solutions where AI is embedded directly into business workflows for recommendations, forecasting, or customer support. Finally, Oracle integrates with a growing ecosystem of AI partners, allowing organizations to extend and enhance their AI capabilities even further. In short, Oracle doesn't just offer AI as a feature. It delivers it as a full-stack capability, from the infrastructure up to the application layer. 02:59 Nikita: Ok, I want to get into the core AI services offered by Oracle Cloud Infrastructure. But before we get into the finer details, broadly speaking, how do these services help businesses? Yunus: These services make AI accessible, secure, and scalable, enabling businesses to embed intelligence into workflows, improve efficiency, and reduce human effort in repetitive or data-heavy tasks. And the best part is, Oracle makes it easy to consume these through application interfaces, APIs, software development kits like SDKs, and integration with Fusion Applications. So, you can add AI where it matters without needing a team of data scientists to do that work.  
03:52 Lois: So, let's get down to it. The first core service is Oracle's Generative AI service. What can you tell us about it?  Yunus: This is a fully managed service that allows businesses to tap into the power of large language models. You can work with these models at any stage, from scratch through to a well-developed, fine-tuned model. You can use these models for a wide range of use cases like summarizing text, generating content, answering questions, or building AI-powered chat interfaces.  04:27 Lois: So, what will I find on the OCI Generative AI Console? Yunus: The OCI Generative AI Console highlights three key components. The first one is the dedicated AI cluster. These are GPU-powered environments used to fine-tune and host your own custom models. They give you control and performance at scale. The second component is custom models. You can take a base language model and fine-tune it using your own data, for example, company manuals, HR policies, or customer interactions, which are your own proprietary data. You can use this to create a model that speaks your business language. And last but not least, the endpoints. These are the interfaces through which your applications connect to the model. Once deployed, your app can query the model securely and at scale, and you don't need to be a developer to get started. Oracle offers a playground, a no-code environment where you can try out models, adjust parameters, and test responses interactively. So overall, the Generative AI service is designed to make enterprise-grade AI accessible and customizable, fitting directly into business processes, whether you are building a smart assistant or automating content generation.  06:00 Lois: The next key service is OCI Generative AI Agents. Can you tell us more about it?  Yunus: OCI Generative AI Agents combines a natural language interface with generative AI models and enterprise data stores to answer questions and take actions.
The agent remembers the context, uses previous interactions, and retrieves deeper product-specific details. These aren't just static chatbots. They are context aware, grounded in business data, and able to handle multi-turn, follow-up queries with relevant, accurate responses, driving productivity and decision-making across departments like sales, support, or operations. 06:54 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 07:37 Nikita: Welcome back! Yunus, let's move on to the OCI Language service.  Yunus: OCI Language helps businesses understand and process natural language at scale. It uses pretrained models, which means they are already trained on large industry data sets and are ready to be used right away without requiring AI expertise. It detects over 100 languages, including English, Japanese, Spanish, and more. This is great for global businesses that receive multilingual inputs from customers. It identifies sentiment for different aspects of a sentence. For example, in a review like, “The food was great, but the service sucked,” OCI Language can tell that food has a positive sentiment while service has a negative one. This is called aspect-based sentiment analysis, and it is more insightful than just labeling the entire text as positive or negative. Then we have key phrase extraction, identifying the words or terms that represent important ideas or subjects and capture the core message. Key phrases help automate tagging, summarizing, or even routing of content like support tickets or emails.  
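The aspect-based sentiment analysis Yunus describes can be illustrated with a tiny hand-rolled sketch. To be clear, this is a toy lexicon-and-clause approach for intuition only, not the OCI Language service or its API; the word lists and the split on "but" are made-up assumptions.

```python
# Toy aspect-based sentiment analysis, mirroring the "food was great,
# but the service sucked" example from the episode.

POSITIVE = {"great", "good", "excellent", "friendly"}
NEGATIVE = {"sucked", "bad", "terrible", "slow"}

def aspect_sentiments(text, aspects):
    """Assign each aspect the sentiment of the clause that mentions it."""
    results = {}
    # Split on "but" so contrasting clauses are scored separately.
    for clause in text.lower().replace(",", "").split(" but "):
        words = set(clause.split())
        for aspect in aspects:
            if aspect in words:
                if POSITIVE & words:
                    results[aspect] = "positive"
                elif NEGATIVE & words:
                    results[aspect] = "negative"
                else:
                    results[aspect] = "neutral"
    return results

review = "The food was great, but the service sucked"
print(aspect_sentiments(review, ["food", "service"]))
# {'food': 'positive', 'service': 'negative'}
```

A production system would use a trained model rather than word lists, but the output shape is the same: one sentiment per aspect instead of one label for the whole review.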
In real life, businesses are using this for customer feedback analysis, support ticket routing, social media monitoring, and even regulatory compliance.  09:21 Nikita: That's fantastic. And what about the OCI Speech service?  Yunus: OCI Speech is an AI service that transcribes speech to text. Think of it as an AI-powered transcription engine that listens to spoken English, whether in audio or video files, and turns it into usable, searchable, readable text. It provides timestamps, so you know exactly when something was said, a valuable feature for reviewing legal discussions, media footage, or compliance audits. OCI Speech even understands different speakers. You don't need to train this from scratch. It is a pretrained model hosted behind an API. Just send your audio to the service, and you get accurate, timestamped text back in return. 10:17 Lois: I know we also have a service for object detection… called OCI Vision?  Yunus: OCI Vision uses pretrained deep learning models to understand and analyze visual content, just like a human might. You can upload images or videos, and the AI can tell you what is in them and where. There are two primary use cases for OCI Vision. The first is object detection. Take a red car: OCI Vision doesn't just identify that it's a car. It detects and labels parts of the car too, like the bumper, the wheels, the design components. This is critical in industries like manufacturing, retail, or logistics. For example, in quality control, OCI Vision can scan product images to detect missing or defective parts automatically.  Then we have image classification. This is useful in scenarios like automated tagging of photos, managing digital assets, and classifying the scene or context of an image. 
So basically, OCI Vision is fully managed: no complex model training is required for this service, and it's available via API. You can also define your own custom models for your environment. 11:51 Nikita: And the final service is related to text and called OCI Document Understanding, right? Yunus: So OCI Document Understanding allows businesses to automatically extract structured insights from unstructured documents like invoices, contracts, receipts, resumes, or other business documents. 12:13 Nikita: And how does it work? Yunus: OCI reads the content from the scanned document. The OCR is smart: it recognizes both printed and handwritten text. It then determines what type of document it is, so document classification is done. Text recognition recognizes the text, then classification determines, for example, whether this is a purchase order, a bank statement, or a medical report. If your business handles documents in multiple languages, the AI can help with language detection too, which helps you route or translate the document. Many documents contain structured data in table format, think pricing tables or line items. OCI helps you extract these with high accuracy for reporting or for feeding into ERP systems. And finally, I would say key value extraction. It pulls out critical business values like invoice numbers, payment amounts, or customer names from fields that may not always follow a fixed format. So, this service reduces the need for manual review, cuts down processing time, and ensures high accuracy for your systems. 13:36 Lois: What are the key takeaways our listeners should walk away with after this episode? Yunus: The first one: Oracle doesn't treat AI as just a standalone tool. Instead, AI is integrated from the ground up. 
Whether you're talking about infrastructure, data platforms, machine learning services, or applications like HCM, ERP, or CX. In the real world, the Oracle AI services prioritize data management, security, and governance, all essential for enterprise AI use cases. So, it is about trust. Can your AI handle sensitive data? Can it comply with regulations? Oracle builds its AI services on a strong foundation of data governance, robust security measures, and tight control over data residency and access. This makes Oracle AI especially well suited for industries like health care, finance, logistics, and government, where compliance and control aren't optional. They are critical.   14:44 Nikita: Thank you for another great conversation, Yunus. If you're interested in learning more about the topics we discussed today, head on over to mylearn.oracle.com and search for the AI for You course.  Lois: In our next episode, we'll get into Predictive AI, Generative AI, and Agentic AI, all with respect to Oracle Fusion Applications. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 15:10 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.  

Irish Tech News Audio Articles
The Game Changing Potential of GenAI and Innovative Data Storage

Irish Tech News Audio Articles

Play Episode Listen Later Sep 15, 2025 6:26


Artificial intelligence (AI) is no longer just a buzzword, but a pivotal force driving unprecedented business transformation and growth. The technology is fundamentally reshaping how businesses in Ireland operate, innovate, and compete. According to the Dell Innovation Catalyst Study, 76% of organisations based in Ireland already consider AI and GenAI a key part of their business strategy, with 84% reporting substantial ROI and productivity gains from adopting these technologies. Moreover, 66% of Irish organisations are at an early to mid-stage in their AI and GenAI adoption journey, while 90% see strong opportunities to leverage Agentic AI within their business operations. However, there are complexities involved in fully harnessing the power of GenAI. To build and train GenAI models, organisations need vast amounts of information. In turn, these same models also generate vast quantities of data to feed back into the business. So, the question each business leader must ask before embracing AI and GenAI is: are our storage solutions up to the task? A scalable, secure, and economically sound data architecture is what will separate the organisations simply running in the AI race from those leading it. Storage solutions for the GenAI age For GenAI to be successfully deployed, organisations must rethink, rearchitect and optimise their storage to effectively manage GenAI's hefty data management requirements. By doing so, organisations will avoid a potential slowdown in processes due to inadequate or improperly designed storage. The reality is that traditional storage systems are already struggling to keep pace with the explosion of data, and as GenAI systems advance and tackle new, more complex tasks, the requirements will only increase. In other words, storage platforms must be aligned with the more complex realities of unstructured data, also known as qualitative data, and the emerging needs of GenAI. 
In fact, unstructured data accounts for over 90% of the data created each year - largely due to a rise in human-generated data - meaning much of it is cluttered, muddled, and hard to analyse. Enterprises need new ways to cost-effectively store data of this scale and complexity, while still providing easy and quick access to it and protecting it against cyber criminals. Unstructured data is of particular interest to hackers due to its value and sheer volume. Organisations are seeking to enhance how they manage data - whether it's moving, accessing, scaling, or safeguarding it. In the pursuit of rapid improvement, many have adopted solutions that store data across several public cloud platforms. While these public cloud environments can deliver immediate benefits, such as increased flexibility and availability, they often introduce longer-term complications. Over time, organisations may face rising costs associated with moving data into and out of different clouds, heightened security risks, and challenges when attempting to optimise their data across these disparate environments. For generative AI to reach its full potential, it requires straightforward, reliable access to quality data; unfortunately, strategies that prioritise public cloud-only adoption above all else frequently struggle to meet these requirements. Organisations should instead look to adopt a multicloud-by-design approach. This will help them unlock the full potential of multicloud in the short and long term, without being constrained by siloed ecosystems of proprietary tools and services. Multicloud by design brings management consistency to storing, protecting and securing data in multicloud environments. Investing in new storage technologies Businesses need new, novel approaches that cater to GenAI's specific requirements and vast, diverse data sets. Some of these cutting-edge technologies include distributed storage, data compression and data indexing. 
Distributed storage enhances the scalability and reliability of GenAI systems by...

KuppingerCole Analysts
Analyst Chat #268: Interoperability by Design - Making IAM Work Across Legacy, SaaS, and Multi-Cloud

KuppingerCole Analysts

Play Episode Listen Later Sep 8, 2025 27:52


Identity and Access Management (IAM) is no longer a one-off project; it's an ongoing journey. In this episode of the KuppingerCole Analyst Chat, Matthias Reinwarth is joined by Christopher (CISO & Lead Advisor) and Deniz Algin (Advisor) to explore how organizations can successfully apply the Identity Fabric concept. How can you evolve from legacy systems to a future-proof IAM strategy without breaking existing operations? Why does interoperability matter? What are the most common pitfalls organizations face when trying to modernize IAM? Find the answers to these questions and more in this episode! Key Topics Covered: Identity Fabric explained through a powerful “airport” analogy ✈️ How to design IAM programs in brownfield environments (no rip & replace) Capability-driven approach vs. tool-driven decisions Risk-based prioritization: quick wins, big wins & roadmaps Common pitfalls to avoid when modernizing IAM

KuppingerCole Analysts Videos
Analyst Chat #268: Interoperability by Design - Making IAM Work Across Legacy, SaaS, and Multi-Cloud

KuppingerCole Analysts Videos

Play Episode Listen Later Sep 8, 2025 27:52


Identity and Access Management (IAM) is no longer a one-off project; it's an ongoing journey. In this episode of the KuppingerCole Analyst Chat, Matthias Reinwarth is joined by Christopher (CISO & Lead Advisor) and Deniz Algin (Advisor) to explore how organizations can successfully apply the Identity Fabric concept. How can you evolve from legacy systems to a future-proof IAM strategy without breaking existing operations? Why does interoperability matter? What are the most common pitfalls organizations face when trying to modernize IAM? Find the answers to these questions and more in this episode! Key Topics Covered: Identity Fabric explained through a powerful “airport” analogy ✈️ How to design IAM programs in brownfield environments (no rip & replace) Capability-driven approach vs. tool-driven decisions Risk-based prioritization: quick wins, big wins & roadmaps Common pitfalls to avoid when modernizing IAM

MY DATA IS BETTER THAN YOURS
Digitale Souveränität – Warum Datenhoheit über unsere Zukunft entscheidet, mit Nina-Sophie S. von leitzcloud by vBoxx

MY DATA IS BETTER THAN YOURS

Play Episode Listen Later Sep 4, 2025 40:04 Transcription Available


How safe is our data really, and who ultimately has access to it? In this episode of MY DATA IS BETTER THAN YOURS, host Jonas Rashedi talks with Nina-Sophie Sczepurek, Co-Founder & COO at leitzcloud by vBoxx, about data sovereignty, cybersecurity, and the strategic importance of cloud architectures. Nina explains how US laws such as the Cloud Act reach even servers located in Europe, why Schleswig-Holstein and Denmark want to switch to European cloud solutions, and how new EU laws like the Cyber Resilience Act oblige companies to strengthen security. It gets especially practical when she describes projects in which companies deliberately move sensitive data, such as HR or financial information, into separate European clouds, or build entire private cloud infrastructures of their own. The conversation shows how closely technological decisions are interwoven with geopolitical developments. It covers trust in technology, the role of multi-cloud strategies, and why awareness and transparency are crucial. MY DATA IS BETTER THAN YOURS is a project by BETTER THAN YOURS, the brand for really good podcasts. Nina's LinkedIn profile: https://www.linkedin.com/in/nina-sophie-sczepurek/?locale=de_DE The leitzcloud by vBoxx website: https://leitzcloud.eu/ All the important links about Jonas and the podcast: https://linktr.ee/jonas.rashedi 00:00 Intro and welcome 02:02 Introducing Nina 04:32 What does data sovereignty mean? 06:47 Political framework and the Cloud Act 10:12 Risks of non-European providers 14:18 Europe's potential and first steps 15:47 New cybersecurity laws 18:56 B2B awareness 22:16 Strategic data management 27:21 Multi-cloud strategies 30:37 Trust in technology 33:00 Practical examples from projects 37:25 Looking ahead 38:19 A personal approach to data

Oracle University Podcast
The AI Workflow

Oracle University Podcast

Play Episode Listen Later Sep 2, 2025 22:08


Join Lois Houston and Nikita Abraham as they chat with Yunus Mohammed, a Principal Instructor at Oracle University, about the key stages of AI model development. From gathering and preparing data to selecting, training, and deploying models, learn how each phase impacts AI's real-world effectiveness. The discussion also highlights why monitoring AI performance and addressing evolving challenges are critical for long-term success.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we spoke about generative AI and gen AI agents. Today, we're going to look at the key stages in a typical AI workflow. We'll also discuss how data quality, feedback loops, and business goals influence AI success. With us today is Yunus Mohammed, a Principal Instructor at Oracle University.  01:00 Lois: Hi Yunus! We're excited to have you here! Can you walk us through the various steps in developing and deploying an AI model?  Yunus: The first step is to collect data. We gather relevant data, either historical or real time, like customer transactions, support tickets, survey feedback, or sensor logs. 
A travel company, for example, can collect past booking data to predict future demand. So, data is the most crucial component for building your AI models. But it's not just about having the data. You need to prepare it. In the data preparation step, we clean, organize, and label the data. AI can't learn from messy spreadsheets. We make the data more understandable and organized, removing duplicates, filling missing values with sensible defaults, and formatting dates consistently. All of this comes under organizing the data, along with labeling it so that it can support supervised learning. After preparing the data, we select the model to train. We pick what type of model fits your goals: a traditional ML model, a deep learning network, or a generative model. The model is chosen based on the business problem and the data we have. Then we train the model using the prepared data, so it can learn the patterns in the data. After the model is trained, we need to evaluate it. You check how well the model performs. Is it accurate? Is it fair? The evaluation metrics will vary based on the goal you're trying to reach. If your model frequently misclassifies emails as spam, then it is not ready, and it needs further training until it accurately identifies official mail as official and spam as spam.  After evaluating the model and making sure it fits well, you move to the next step: deploying the model. Once we are happy, we put it into the real world, into a CRM, a web application, or behind an API. So, I can expose it through an API, which is an application programming interface, or add it to a CRM, a Customer Relationship Management system, or to a web application that I've got. 
For example, a chatbot becomes available on your company's website, and that chatbot might be using a generative AI model. Once I have deployed the model and it is working fine, I need to keep track of how it is performing and improve it whenever needed. So we move to a stage called monitor and improve. AI isn't set it and forget it. Over time, a lot changes in the data, so we monitor performance and retrain when needed. An e-commerce recommendation model, for instance, needs updates as trends shift.  The end user finally sees the results after all these processes: a better product, a smarter service, or faster decision-making, if we do this right. If the flow works perfectly, they may not even realize AI is behind the accurate results they're getting.  04:59 Nikita: Got it. So, everything in AI begins with data. But what are the different types of data used in AI development?  Yunus: We work with three main types of data: structured, unstructured, and semi-structured. Structured data is like a clean set of tables in Excel or a database, consisting of rows and columns with clear, consistent information. Unstructured data is messy: emails, customer call recordings, videos, or social media posts all come under unstructured data.  Semi-structured data is things like logs, XML files, or JSON files. Not quite neat but not entirely messy either, so they are termed semi-structured. So you have structured, unstructured, and then semi-structured. 05:58 Nikita: Ok… and how do the data needs vary for different AI approaches?  Yunus: Machine learning often needs labeled data. A bank might feed it past transactions labeled as fraud or not fraud to train a fraud detection model. But machine learning also includes unsupervised learning, like clustering customer spending behavior. Here, no labels are needed. 
Deep learning needs a lot of data, usually unstructured, like thousands of loan documents, call recordings, or scanned checks. These are fed into neural networks to detect complex patterns. Data science focuses on insights rather than predictions. A data scientist at the bank might use customer relationship management exports and customer demographics to analyze which age group prefers credit cards over loans. Then we have generative AI, which thrives on diverse, unstructured, internet-scale data: books, code, images, chat logs. These models, like ChatGPT, are trained to generate responses, mimic styles, and synthesize content. Generative AI can power a banking virtual assistant trained on chat logs and frequently asked questions to answer customer queries 24/7. 07:35 Lois: What are the challenges when dealing with data?  Yunus: Data isn't just about having enough. We must also think about quality. Is it accurate and relevant? Volume. Do we have enough for the model to learn from? Bias. Does my data contain unfair patterns, like rejecting more loan applications from a certain zip code, which skews the data? And also privacy. Are we handling personal data responsibly? Especially data that is critical or regulated, like banking data or patient health data. Before building anything smart, we must start smart.  08:23 Lois: So, we've established that collecting the right data is non-negotiable for success. Then comes preparing it, right?  Yunus: This is arguably the most important part of any AI or data science project. Clean data leads to reliable predictions. Imagine you have a column for age, and someone accidentally entered an age like 999. That's likely a data entry error. Or maybe a few rows have missing ages. So we either fix, remove, or impute such issues. 
This step ensures our model isn't misled by incorrect values. Dates are often stored in different formats. For instance, a date can be stored month-first in some places and day-first in others. We want to bring everything into a consistent, usable format. This process is called transformation. Machine learning models can get confused if one feature, for example income, ranges from 10,000 to 100,000, while another, like the number of kids, ranges from 0 to 5. So we normalize or scale values to bring them into a similar range, say 0 to 1. Similarly, binary fields can be encoded as simple yes/no values. Models don't understand words like small, medium, or large, so we convert them into numbers using encoding. One simple way is assigning 1, 2, and 3 respectively. For text, we remove stop words, punctuation, and so on, and break sentences into smaller meaningful units called tokens. This is used for generative AI tasks. In deep learning, especially for Gen AI, image or audio inputs must be of uniform size and format.  10:31 Lois: And does each AI system have a different way of preparing data?  Yunus: For machine learning (ML), the focus is on cleaning, encoding, and scaling. Deep learning needs resizing and normalization for text and images. Data science is about reshaping, aggregating, and getting data ready for insights. Generative AI needs special preparation like chunking and tokenizing large documents, or compressing images. 11:06 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 
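The preparation steps described in this episode, fixing an implausible age like 999, imputing missing values, scaling income into a 0-to-1 range, and encoding small/medium/large as numbers, can be sketched in plain Python. The sample records are invented for illustration; a real project would typically reach for pandas or scikit-learn:

```python
# Invented sample records illustrating the cleanup steps discussed above.
records = [
    {"age": 34,   "income": 40000,  "size": "small"},
    {"age": 999,  "income": 100000, "size": "large"},   # data-entry error
    {"age": None, "income": 10000,  "size": "medium"},  # missing value
]

def prepare(records):
    # Impute implausible or missing ages with the median of the valid ones.
    valid_ages = sorted(r["age"] for r in records
                        if r["age"] is not None and r["age"] < 120)
    median_age = valid_ages[len(valid_ages) // 2]
    # Min-max scale income into the 0-to-1 range.
    lo = min(r["income"] for r in records)
    hi = max(r["income"] for r in records)
    # Encode the ordinal category as a number.
    size_code = {"small": 1, "medium": 2, "large": 3}
    cleaned = []
    for r in records:
        age = r["age"]
        if age is None or age >= 120:
            age = median_age
        cleaned.append({
            "age": age,
            "income_scaled": (r["income"] - lo) / (hi - lo),
            "size": size_code[r["size"]],
        })
    return cleaned

for row in prepare(records):
    print(row)
```

After this pass, every record has a plausible age, an income on a common scale, and a numeric category, exactly the kind of input a model can learn from.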
11:50 Nikita: Welcome back! Yunus, how does a user choose the right model to solve their business problem?  Yunus: Just like a business uses different dashboards for marketing versus finance, in AI we use different model types depending on what we are trying to solve. Classification is choosing a category, for example whether an email is spam or not. It's used in fraud detection, medical diagnosis, and so on: you classify the data and then assess how accurate that classification is. Regression is used for predicting a number, like what the price of a house will be next month. It's commonly used in forecasting sales demand or costs. Clustering groups things without labels. A real-world example is segmenting customers based on behavior for targeted marketing. It helps discover hidden patterns in large data sets.  Generation is creating new content. AI writing product descriptions or generating images are real-world examples, powered by generative AI models like ChatGPT or DALL-E, which operate on generative AI principles. 13:16 Nikita: And how do you train a model? Yunus: We feed it data in small chunks or batches, then compare its guesses to the correct values, adjusting its internal weights to improve next time, and the cycle repeats until the model gets good at making predictions. So if you're building a fraud detection system, ML may be enough. If you want to analyze medical images, you will need deep learning. If you're building a chatbot, go for a generative model like an LLM. And for all of these use cases, you need to select and train the appropriate model. 14:04 Lois: OK, now that the model's been trained, what else needs to happen before it can be deployed? Yunus: Evaluate the model. We assess a model's accuracy, reliability, and real-world usefulness before it's put to work. 
That is, how often is the model right? Does it consistently perform well? Is it practical to use this model in the real world? Bad predictions don't just look bad; they can lead to costly business mistakes. Think of recommending the wrong product to a customer or misidentifying a financial risk.  So what we do here is start by splitting the data into two parts: the training data, which is like teaching the model, and the testing data, which is used to check how well the model has learned. Once trained, the model makes predictions. We compare the predictions to the actual answers, just like checking your answers after a quiz. Evaluation is tailored to the AI type. In machine learning, we care about prediction accuracy. Deep learning is about fitting complex data like voice or images, where the model repeatedly sees examples and tunes itself to reduce errors. In data science, we look for patterns and insights, such as which features matter. In generative AI, we judge by output quality. Is it coherent, useful, and natural?  The model improves with training: accuracy typically rises with the number of epochs it has been trained for.  15:59 Nikita: So, after all that, we finally come to deploying the model… Yunus: Deploying a model means we are integrating it into our actual business systems, so it can start making decisions, automating tasks, or supporting customer experiences in real time. Think of it like this. Training is teaching the model. Evaluating is testing it. And deployment is giving it a job.  The model needs a home, either in the cloud or on your company's own servers. Think of it like putting the AI in a place where it can be reached by other tools. Exposed via API or embedded in an application, this is how the AI becomes usable.  Finally, the model receives live data and returns predictions. 
In practice, receiving live data and returning predictions means the model listens to real-time inputs, like a user typing, searching, clicking, or making a transaction, and instantly responds with a recommendation, decision, or result. Deploying the model isn't the end of the story. It is just the beginning of the AI's real-world journey. Models may work well on day one, but things change. Customer behavior might shift. New products get introduced in the market. Economic conditions might evolve, as in the COVID era, when demand shifted and economic conditions changed. 17:48 Lois: Then it's about monitoring and improving the model to keep things reliable over time. Yunus: The monitor-and-improve loop is a continuous process that ensures an AI model remains accurate, fair, and effective after deployment. Live predictions: the model runs in real time, making decisions or recommendations. Monitor performance: are those predictions still accurate and helpful? Is latency acceptable? This is where we track metrics, user feedback, and operational impact. Then we detect issues: is accuracy declining, are responses biased, are customers dropping off due to long response times? The next step is to retrain or update the model. We add fresh data, tweak the logic, or even use a better architecture, then deploy the updated model. The new version replaces the old one, and the cycle continues. 18:58 Lois: And are there challenges during this step? Yunus: The common issues related to monitor and improve are model drift, bias, and latency or failures. In model drift, the model becomes less accurate as the environment changes. With bias, the model may favor or penalize certain groups unfairly. With latency or failures, if the model is too slow or fails unpredictably, it disrupts the user experience. Let's take loan approvals. 
In loan approvals, if we notice an unusually high rejection rate due to model bias, we might retrain the model with more diverse or balanced data. For a chatbot, we watch for drops in customer satisfaction, which might arise from model failures, and fine-tune the model's responses. And in demand forecasting, if the predictions no longer match real trends, say post-pandemic, due to model drift, we update the model with fresh data.  20:11 Nikita: Thanks for that, Yunus. Any final thoughts before we let you go? Yunus: No matter how advanced your model is, its effectiveness depends on the quality of the data you feed it. That means the data needs to be clean, structured, and relevant, and it should map to the problem you're solving. If the foundation is weak, the results will be too. So data preparation is not just a technical step; it is a business-critical stage. Once deployed, AI systems must be monitored continuously. You need to watch for drops in performance, emerging bias, or outdated logic, and improve the model with new data or refinements. That's what makes AI reliable, ethical, and sustainable in the long run. 21:09 Nikita: Yunus, thank you for this really insightful session. If you're interested in learning more about the topics we discussed today, go to mylearn.oracle.com and search for the AI for You course.  Lois: That's right. You'll find skill checks to help you assess your understanding of these concepts. In our next episode, we'll discuss the idea of buy versus build in the context of AI. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 21:39 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Oracle University Podcast
Core AI Concepts – Part 2

Oracle University Podcast

Play Episode Listen Later Aug 19, 2025 12:42


In this episode, Lois Houston and Nikita Abraham continue their discussion on AI fundamentals, diving into Data Science with Principal AI/ML Instructor Himanshu Raj. They explore key concepts like data collection, cleaning, and analysis, and talk about how quality data drives impactful insights.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services.  Nikita: Hi everyone! Last week, we began our exploration of core AI concepts, specifically machine learning and deep learning. I'd really encourage you to go back and listen to the episode if you missed it.   00:52 Lois: Yeah, today we're continuing that discussion, focusing on data science, with our Principal AI/ML Instructor Himanshu Raj.  Nikita: Hi Himanshu! Thanks for joining us again. So, let's get cracking! What is data science?  01:06 Himanshu: It's about collecting, organizing, analyzing, and interpreting data to uncover valuable insights that help us make better business decisions. Think of data science as the engine that transforms raw information into strategic action.  You can think of a data scientist as a detective. They gather clues, which is our data. 
Connect the dots between those clues and ultimately solve mysteries, meaning they find hidden patterns that can drive value.  01:33 Nikita: Ok, and how does this happen exactly?  Himanshu: Just like a detective relies on both instincts and evidence, data science blends domain expertise and analytical techniques. First, we collect raw data. Then we prepare and clean it because messy data leads to messy conclusions. Next, we analyze to find meaningful patterns in that data. And finally, we turn those patterns into actionable insights that businesses can trust.  02:00 Lois: So what you're saying is, data science is not just about technology; it's about turning information into intelligence that organizations can act on. Can you walk us through the typical steps a data scientist follows in a real-world project?  Himanshu: So it all begins with business understanding. Identifying the real problem we are trying to solve. It's not about collecting data blindly. It's about asking the right business questions first. And once we know the problem, we move to data collection, which is gathering the relevant data from available sources, whether internal or external.  Next one is data cleaning. Probably the least glamorous but one of the most important steps. And this is where we fix missing values, remove errors, and ensure that the data is usable. Then we perform data analysis or what we call exploratory data analysis.  Here we look for patterns, trends, and initial signals hidden inside the data. After that comes the modeling and evaluation, where we apply machine learning or deep learning techniques to predict, classify, or forecast outcomes. Machine learning and deep learning are like specialized equipment in a data science detective's toolkit. Powerful but not the whole investigation.  We also check how good the models are in terms of accuracy, relevance, and business usefulness. 
Finally, if the model meets expectations, we move to deployment and monitoring, putting the model into real world use and continuously watching how it performs over time.  03:34 Nikita: So, it's a linear process?  Himanshu: It's not linear. That's because in real world data science projects, the process does not stop after deployment. Once the model is live, business needs may evolve, new data may become available, or unexpected patterns may emerge.  And that's why we come back to business understanding again, redefining the questions, the strategy, and sometimes even the goals based on what we have learned. In a way, a good data science project behaves like a living system which grows, adapts, and improves over time. Continuous improvement keeps it aligned with business value.   Now, think of it like adjusting your GPS while driving. The route you plan initially might change as new traffic data comes in. Similarly, in data science, new information constantly helps refine our course. The quality of our data determines the quality of our results.   If the data we feed into our models is messy, inaccurate, or incomplete, the outputs, no matter how sophisticated the technology, will also be unreliable. And this concept is often called garbage in, garbage out. Bad input leads to bad output.  Now, think of it like cooking. Even the world's best Michelin star chef can't create a masterpiece with spoiled or poor-quality ingredients. In the same way, even the most advanced AI models can't perform well if the data they are trained on is flawed.  05:05 Lois: Yeah, that's why high-quality data is not just nice to have, it's absolutely essential. But Himanshu, what makes data good?   Himanshu: Good data has a few essential qualities. The first one is complete. Make sure we aren't missing any critical field. For example, every customer record must have a phone number and an email. It should be accurate. The data should reflect reality. 
If a customer's address has changed, it must be updated, not outdated. Third, it should be consistent. Similar data must follow the same format. Imagine if the dates are written differently, like 2024/04/28 versus April 28, 2024. We must standardize them.   Fourth one. Good data should be relevant. We collect only the data that actually helps solve our business question, not unnecessary noise. And last one, it should be timely. So data should be up to date. Using last year's purchase data for a real time recommendation engine wouldn't be helpful.  06:13 Nikita: Ok, so ideally, we should use good data. But that's a bit difficult in reality, right? Because what comes to us is often pretty messy. So, how do we convert bad data into good data? I'm sure there are processes we use to do this.  Himanshu: First one is cleaning. So this is about correcting simple mistakes, like fixing typos in city names or standardizing dates.  The second one is imputation. So if some values are missing, we fill them intelligently, for instance, using the average income for a missing salary field. Third one is filtering. In this, we remove irrelevant or noisy records, like discarding fake email signups from marketing data. The fourth one is enriching. We can even enhance our data by adding trusted external sources, like appending credit scores from a verified bureau.  And the last one is transformation. Here, we finally reshape data formats to be consistent, for example, converting all units to the same currency. So even messy data can become usable, but it takes deliberate effort, structured process, and attention to quality at every step.  07:26 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest technology. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! 
Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 08:10 Nikita: Welcome back! Himanshu, we spoke about how to clean data. Now, once we get high-quality data, how do we analyze it?  Himanshu: In data science, there are four primary types of analysis we typically apply depending on the business goal we are trying to achieve.  The first one is descriptive analysis. It helps summarize and report what has happened. So often using averages, totals, or percentages. For example, retailers use descriptive analysis to understand things like what was the average customer spend last quarter? How did store foot traffic trend across months?  The second one is diagnostic analysis. Diagnostic analysis digs deeper into why something happened. For example, hospitals use this type of analysis to find out, for example, why a certain department has higher patient readmission rates. Was it due to staffing, post-treatment care, or patient demographics?  The third one is predictive analysis. Predictive analysis looks forward, trying to forecast future outcomes based on historical patterns. For example, energy companies predict future electricity demand, so they can better manage resources and avoid shortages. And the last one is prescriptive analysis. So it does not just predict. It recommends specific actions to take.  So logistics and supply chain companies use prescriptive analytics to suggest the most efficient delivery routes or warehouse stocking strategies based on traffic patterns, order volume, and delivery deadlines.   09:42 Lois: So really, we're using data science to solve everyday problems. Can you walk us through some practical examples of how it's being applied?  Himanshu: The first one is predictive maintenance. It is done in manufacturing a lot. A factory collects real time sensor data from machines. 
Data scientists first clean and organize this massive data stream, explore patterns of past failures, and design predictive models.  The goal is not just to predict breakdowns but to optimize maintenance schedules, reducing downtime and saving millions. The second one is a recommendation system. It's prevalent in retail and entertainment industries. Companies like Netflix or Amazon gather massive user interaction data such as views, purchases, likes.  Data scientists structure and analyze this behavioral data to find meaningful patterns of preferences and build models that suggest relevant content, eventually driving more engagement and loyalty. The third one is fraud detection. It's applied in the finance and banking sector.  Banks store vast amounts of transaction records. Data scientists clean and prepare this data, understand typical spending behaviors, and then use statistical techniques and machine learning to spot unusual patterns, catching fraud faster than manual checks could ever achieve.  The last one is customer segmentation, which is often applied in marketing. Businesses collect demographics and behavioral data about their customers. Instead of treating all customers the same, data scientists use clustering techniques to find natural groupings, and this insight helps businesses tailor their marketing efforts, offers, and communication for each of those individual groups, making them far more effective.  Across all these examples, notice that data science isn't just building a model. Again, it's understanding the business need, reviewing the data, analyzing it thoughtfully, and building the right solution while helping the business act smarter.  11:44 Lois: Thank you, Himanshu, for joining us on this episode of the Oracle University Podcast. We can't wait to have you back next week for part 3 of this conversation on core AI concepts, where we'll talk about generative AI and gen AI agents. 
Nikita: And if you want to learn more about data science, visit mylearn.oracle.com and search for the AI for You course. Until next time, this is Nikita Abraham…  Lois: And Lois Houston signing off!  12:13 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
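The cleaning, imputation, filtering, and transformation steps Himanshu walks through in this episode can be sketched in a few lines of plain Python. Everything below (the records, the date formats, and the fill-with-the-average rule) is invented purely for illustration, not taken from the episode:

```python
from datetime import datetime
from statistics import mean

# Hypothetical messy customer records: a duplicate row, a missing salary,
# and three different date formats -- the problems described in the episode.
raw = [
    {"customer": "Ava",  "salary": 52000, "signup": "2024/04/28"},
    {"customer": "Ben",  "salary": None,  "signup": "April 28, 2024"},
    {"customer": "Ben",  "salary": None,  "signup": "April 28, 2024"},
    {"customer": "Cleo", "salary": 61000, "signup": "2024-05-03"},
]

def standardize_date(text):
    """Transformation: normalize the known date formats to ISO (YYYY-MM-DD)."""
    for fmt in ("%Y/%m/%d", "%B %d, %Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(text, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {text!r}")

# Filtering: drop exact duplicate records while preserving order.
seen, clean = set(), []
for row in raw:
    key = tuple(sorted(row.items()))
    if key not in seen:
        seen.add(key)
        clean.append(dict(row))

# Imputation: fill missing salaries with the average of the known values.
known = [r["salary"] for r in clean if r["salary"] is not None]
for r in clean:
    if r["salary"] is None:
        r["salary"] = mean(known)
    r["signup"] = standardize_date(r["signup"])

for r in clean:
    print(r)
```

A real project would lean on a library like pandas for this, but the logic is the same: filter duplicates, impute gaps, and standardize formats before any analysis.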

CarahCast: Podcasts on Technology in the Public Sector
Improving Database Management Across MultiCloud Environments

CarahCast: Podcasts on Technology in the Public Sector

Play Episode Listen Later Aug 19, 2025 45:51


Access the Verge Technologies podcast to hear IT experts discuss how SentientDB, Verge's AI-powered cloud convergence platform, unifies Federal information systems at scale. Learn how organizations are leveraging automated cloud management systems to enhance database mobility, reliability and compliance.

Oracle University Podcast

In this episode, hosts Lois Houston and Nikita Abraham, together with Senior Cloud Engineer Nick Commisso, break down the basics of artificial intelligence (AI). They discuss the differences between Artificial General Intelligence (AGI) and Artificial Narrow Intelligence (ANI), and explore the concepts of machine learning, deep learning, and generative AI. Nick also shares examples of how AI is used in everyday life, from navigation apps to spam filters, and explains how AI can help businesses cut costs and boost revenue.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Welcome to a new season of the podcast. I'm so excited about this one because we're going to dive into the world of artificial intelligence, speaking to many experts in the field. Nikita: If you've been listening to us for a while, you probably know we've covered AI from a bunch of different angles. But this time, we're dialing it all the way back to basics. We wanted to create something for the absolute beginner, so no jargon, no assumptions, just simple conversations that anyone can follow. 01:08 Lois: That's right, Niki. 
You don't need to have a technical background or prior experience with AI to get the most out of these episodes. In our upcoming conversations, we'll break down the basics of AI, explore how it's shaping the world around us, and understand its impact on your business. Nikita: The idea is to give you a practical understanding of AI that you can use in your work, especially if you're in sales, marketing, operations, HR, or even customer service.  01:37 Lois: Today, we'll talk about the basics of AI with Senior Cloud Engineer Nick Commisso. Hi Nick! Welcome back to the podcast. Can you tell us about human intelligence and how it relates to artificial intelligence? And within AI, I know we have Artificial General Intelligence, or AGI, and Artificial Narrow Intelligence, or ANI. What's the difference between the two? Nick: Human intelligence is the intellectual capability of humans that allows us to learn new skills through observation and mental digestion, to think through and understand abstract concepts and apply reasoning, to communicate using language and understand non-verbal cues, such as facial expressions, tone variation, body language. We can handle objections and situations in real time, even in a complex setting. We can plan for short and long-term situations or projects. And we can create music, art, or invent something new or have original ideas. If machines can replicate a wide range of human cognitive abilities, such as learning, reasoning, or problem solving, we call it artificial general intelligence.  Now, AGI is hypothetical for now, but when we apply AI to solve problems with specific, narrow objectives, we call it artificial narrow intelligence, or ANI. AGI is a hypothetical AI that thinks like a human. It represents the ultimate goal of artificial intelligence, which is a system capable of chatting, learning, and even arguing like us. 
If AGI existed, it would take the form of a robot doctor that accurately diagnoses and comforts patients, or an AI teacher that customizes lessons in real time based on each student's mood, pace, and learning style, or an AI therapist that comprehends complex emotions and provides empathetic, personalized support. ANI, on the other hand, focuses on doing one thing really well. It's designed to perform specific tasks by recognizing patterns and following rules, but it doesn't truly understand or think beyond its narrow scope. Think of ANI as a specialist. Your phone's face ID can recognize you instantly, but it can't carry on a conversation. Google Maps finds the best route, but it can't write you a poem. And spam filters catch junk mail, but they can't make you coffee. So, most of the AI you interact with today is ANI. It's smart, efficient, and practical, but limited to specific functions without general reasoning or creativity. 04:22 Nikita: Ok then what about Generative AI?  Nick: Generative AI is a type of AI that can produce content such as audio, text, code, video, and images. ChatGPT can write essays, but it can't fact check itself. DALL-E creates art, but it doesn't actually know if it's good. Or AI song covers can create deepfakes like Drake singing "Baby Shark."  04:47 Lois: Why should I care about AI? Why is it important? Nick: AI is already part of your everyday life, often working quietly in the background. ANI powers things like navigation apps, voice assistants, and spam filters. Generative AI helps create everything from custom playlists to smart writing tools. And while AGI isn't here yet, it's shaping ideas about what the future might look like. Now, AI is not just a buzzword, it's a tool that's changing how we live, work, and interact with the world. So, whether you're using it or learning about it or just curious, it's worth knowing what's behind the tech that's becoming part of everyday life.  
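The spam filter Nick cites is a classic ANI example: one narrow task, done well. A toy sketch of the idea in Python (the keywords, weights, and threshold below are invented for illustration; a real filter learns its rules from labeled mail rather than a hand-written list):

```python
# A toy "narrow AI" spam scorer: it does exactly one job and nothing else.
# The keyword weights are made up for illustration, not learned from data.
SPAM_WEIGHTS = {"winner": 2, "prize": 2, "urgent": 1, "free": 1}

def spam_score(message: str) -> int:
    """Sum the weights of any suspicious words found in the message."""
    return sum(SPAM_WEIGHTS.get(word.strip(".,!?"), 0)
               for word in message.lower().split())

def is_spam(message: str, threshold: int = 3) -> bool:
    """Flag the message once its score reaches the threshold."""
    return spam_score(message) >= threshold

print(is_spam("URGENT winner! Claim your free prize"))
print(is_spam("Lunch at noon?"))
```

The narrowness is the point: this scorer can flag junk mail and nothing else, which is exactly the specialist pattern Nick describes.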
05:32 Lois: Nick, whenever people talk about AI, they also throw around terms like machine learning and deep learning. What are they and how do they relate to AI? Nick: As we shared earlier, AI is the ability of machines to imitate human intelligence. And Machine Learning, or ML, is a subset of AI where the algorithms are used to learn from past data and predict outcomes on new data or to identify trends from the past. Deep Learning, or DL, is a subset of machine learning that uses neural networks to learn patterns from complex data and make predictions or classifications. And Generative AI, or GenAI, on the other hand, is a specific application of DL focused on creating new content, such as text, images, and audio, by learning the underlying structure of the training data.  06:24 Nikita: AI is often associated with key domains like language, speech, and vision, right? So, could you walk us through some of the specific tasks or applications within each of these areas? Nick: Language-related AI tasks can be text related or generative AI. Text-related AI tasks use text as input, and the output can vary depending on the task. Some examples include detecting language, extracting entities in a text, extracting key phrases, and so on.  06:54 Lois: Ok, I get you. That's like translating text, where you can use a text translation tool, type your text in the box, choose your source and target language, and then click Translate. That would be an example of a text-related AI task. What about generative AI language tasks? Nick: These are generative, which means the output text is generated by the model. Some examples are creating text, like stories or poems, summarizing texts, and answering questions, and so on. 07:25 Nikita: What about speech and vision? Nick: Speech-related AI tasks can be audio related or generative AI. Speech-related AI tasks use audio or speech as input, and the output can vary depending on the task. 
For example, speech to text conversion, speaker recognition, or voice conversion, and so on. Generative AI tasks are generative, i.e., the output audio is generated by the model (for example, music composition or speech synthesis). Vision-related AI tasks can be image related or generative AI. Image-related AI tasks use an image as the input, and the output depends on the task. Some examples are classifying images or identifying objects in an image. Facial recognition is one of the most popular image-related tasks that's often used for surveillance and tracking people in real time. It's used in a lot of different fields, like security and biometrics, law enforcement, entertainment, and social media. For generative AI tasks, the output image is generated by the model. For example, creating an image from a textual description or generating images of specific style or high resolution, and so on. It can create extremely realistic new images and videos by generating original 3D models of objects, such as machine, buildings, medications, people and landscapes, and so much more. 08:58 Lois: This is so fascinating. So, now we know what AI is capable of. But Nick, what is AI good at? Nick: AI frees you to focus on creativity and more challenging parts of your work. Now, AI isn't magic. It's just very good at certain tasks. It handles work that's repetitive, time consuming, or too complex for humans, like processing data or spotting patterns in large data sets.  AI can take over routine tasks that are essential but monotonous. Examples include entering data into spreadsheets, processing invoices, or even scheduling meetings, freeing up time for more meaningful work. AI can support professionals by extending their abilities. Now, this includes tools like AI-assisted coding for developers, real-time language translation for travelers or global teams, and advanced image analysis to help doctors interpret medical scans much more accurately. 
10:00 Nikita: And what would you say is AI's sweet spot? Nick: That would be tasks that are both doable and valuable. A few examples of tasks that are feasible technically and have business value are things like predicting equipment failure. This saves downtime and the loss of business. Call center automation, like the routing of calls to the right person. This saves time and improves customer satisfaction. Document summarization and review. This helps save time for busy professionals. Or inspecting power lines. Now, this task is dangerous. By automating it, it protects human life and saves time. 10:48 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 11:30 Nikita: Welcome back! Now one big way AI is helping businesses today is by cutting costs, right? Can you give us some examples of this?  Nick: Now, AI can contribute to cost reduction in several key areas. For instance, chatbots are capable of managing up to 50% of customer queries. This significantly reduces the need for manual support, thereby lowering operational costs. AI can streamline workflows, for example, reducing invoice processing time from 10 days to just 1 hour. This leads to substantial savings in both time and resources. In addition to cost savings, AI can also support revenue growth. One way is enabling personalization and upselling. Platforms like Netflix use AI-driven recommendation systems to influence user choices. This not only enhances the user experience, but it also increases the engagement and the subscription revenue. Or unlocking new revenue streams. 
AI technologies, such as generative video tools and virtual influencers, are creating entirely new avenues for advertising and branded content, expanding business opportunities in emerging markets. 12:50 Lois: Wow, saving money and boosting bottom lines. That's a real win! But Nick, how is AI able to do this?  Nick: Now, data is what teaches AI. Just like we learn from experience, so does AI. It learns from good examples, bad examples, and sometimes even the absence of examples. The quality and variety of data shape how smart, accurate, and useful AI becomes. Imagine teaching a kid to recognize animals using only pictures of squirrels that are labeled dogs. That would be very confusing at the dog park. AI works the exact same way, where bad data leads to bad decisions. With the right data, AI can be powerful and accurate. But with poor or biased data, it can become unreliable and even misleading.  AI amplifies whatever you feed it. So, give it gourmet data, not data junk food. AI is like a chef. It needs the right ingredients. It needs numbers for predictions, like will this product sell? It needs images for cool tricks like detecting tumors, and text for chatting, or generating excuses for why you'd be late. Variety keeps AI from being a one-trick pony. Some examples: numbers, used in machine learning for predicting things like the weather; text, used in generative AI, where chatbots write emails or bad poetry; images, used in deep learning to identify defective parts on an assembly line; and audio, used to transcribe a doctor's dictation to text. 14:35 Lois: With so much data available, things can get pretty confusing, which is why we have the concept of labeled and unlabeled data. Can you help us understand what that is? Nick: Labeled data are like flashcards, where everything has an answer. Spam filters learn from emails that are already marked as junk, and X-rays are marked either normal or pneumonia. 
Let's say we're training AI to tell cats from dogs, and we show it a hundred labeled pictures. Cat, dog, cat, dog, etc. Over time, it learns, hmm fluffy and pointy ears? That's probably a cat. And then we test it with new pictures to verify. Unlabeled data is like a mystery box, where AI has to figure it out itself. Social media posts, or product reviews, have no labels. So, AI clusters them by similarity. AI finding trends in unlabeled data is like a kid sorting through LEGOs without instructions. No one tells them which blocks will go together.  15:36 Nikita: With all the data that's being used to train AI, I'm sure there are issues that can crop up too. What are some common problems, Nick? Nick: AI's performance depends heavily on the quality of its data. Poor or biased data leads to unreliable and unfair outcomes. Dirty data includes errors like typos, missing values, or duplicates. For example, an age recorded as 250, or as NA, can confuse the AI. A variety of data cleaning techniques are available: missing data can be filled in, and duplicates can be removed. AI can inherit human prejudices if the data is unbalanced. For example, a hiring AI may favor one gender if the past three hires were mostly male. Ensuring diverse and representative data helps promote fairness. Good data is required to train better AI. Raw data is often messy and needs to be processed before training. 16:39 Nikita: Thank you, Nick, for sharing your expertise with us. To learn more about AI, go to mylearn.oracle.com and search for the AI for You course. As you complete the course, you'll find skill checks that you can attempt to solidify your learning.  Lois: In our next episode, we'll dive deep into fundamental AI concepts and terminologies. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 17:05 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. 
We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

The InfoQ Podcast
Understanding Event-Driven Architecture in a Multicloud Environment

The InfoQ Podcast

Play Episode Listen Later Jul 21, 2025 36:38


Teena Idnani, senior solutions architect at Microsoft, shares her experience on how and when to use event-driven architectures to improve the experience of your customers. She touches on when to use and not use this approach, as well as how to design your system, implement observability, and when to consider using more than one cloud vendor. Read a transcript of this interview: https://bit.ly/4n1gV3U Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter Upcoming Events: InfoQ Dev Summit Munich (October 15-16, 2025) Essential insights on critical software development priorities. https://devsummit.infoq.com/conference/munich2025 QCon San Francisco 2025 (November 17-21, 2025) Get practical inspiration and best practices on emerging software trends directly from senior software developers at early adopter companies. https://qconsf.com/ QCon AI New York 2025 (December 16-17, 2025) https://ai.qconferences.com/ The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts: - The InfoQ Podcast https://www.infoq.com/podcasts/ - Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture - Generally AI: https://www.infoq.com/generally-ai-podcast/ Follow InfoQ: - Mastodon: https://techhub.social/@infoq - X: https://x.com/InfoQ?from=@ - LinkedIn: www.linkedin.com/company/infoq - Facebook: bit.ly/2jmlyG8 - Instagram: @infoqdotcom - Youtube: www.youtube.com/infoq - Bluesky: https://bsky.app/profile/infoq.com Write for InfoQ: Learn and share the changes and innovations in professional software development. - Join a community of experts. - Increase your visibility. - Grow your career. https://www.infoq.com/write-for-infoq

Business of Tech
Microsoft Cuts 9,000 Jobs, Boosts AI Partner Incentives; OpenAI Expands Multi-Cloud E-Commerce Tools

Business of Tech

Play Episode Listen Later Jul 17, 2025 15:36


Microsoft is undergoing a significant restructuring, placing artificial intelligence (AI) at the forefront of its strategy. The company has announced the layoff of approximately 9,000 employees, primarily targeting generalist sales roles, as it shifts towards a model that prioritizes technical expertise over traditional relationship-building in sales. This move is part of a broader initiative to enhance its AI offerings, particularly through its Copilot program, which has seen a 50% increase in funding and a 70% rise in partner incentives. Microsoft aims to eliminate product silos and align its go-to-market strategy with customer priorities, emphasizing the importance of AI integration in sales and service delivery.OpenAI is also making waves by diversifying its cloud infrastructure, now utilizing Google Cloud alongside Microsoft, CoreWeave, and Oracle. This strategic shift comes as OpenAI prepares to introduce new features in its ChatGPT platform, including a checkout function for e-commerce, which will allow users to make purchases directly through the chatbot. The company is positioning itself to compete more directly with Microsoft's Office suite by enhancing productivity tools and integrating e-commerce capabilities, signaling a move from being a model provider to an end-user platform.Amazon Web Services (AWS) has launched a new platform called Amazon Bedrock Agent Core, designed to facilitate collaboration among AI agents across organizations. This platform aims to address concerns about job security in the face of AI advancements, as it allows for the construction of interconnected AI agents capable of performing various tasks. Unlike competitors, AWS's offering is designed to be flexible and support multiple AI frameworks, positioning it as a neutral infrastructure provider in the AI landscape.In a rapid-fire segment, several companies have announced new partnerships and product updates. 
Aryaka has teamed up with TD Cynics to extend its secure access services, while C-Gen.AI has launched a platform to streamline AI workloads. Nutrient has improved its Document AI software, and Cohesity has integrated its data management platform with Microsoft 365 Copilot, enabling users to leverage backup data for informed decision-making. These developments highlight a trend towards enabling service providers to evolve from mere technical support to delivering measurable business outcomes. Four things to know today 00:00 Microsoft Shakes Up Partner Strategy with AI Funding Boost and Workforce Realignment 05:42 OpenAI's Cloud Diversification and Agent Ambitions Could Upend SMB Workflows 08:35 AWS Launches AgentCore to Build Networks of Interconnected AI Agents 11:15 Aryaka, C-Gen.AI, Nutrient, Cohesity Roll Out Innovations Targeting Business Value This is the Business of Tech.     Supported by:  https://timezest.com/mspradio/  All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on:LinkedIn: https://www.linkedin.com/company/28908079/YouTube: https://youtube.com/mspradio/Facebook: https://www.facebook.com/mspradionews/Instagram: https://www.instagram.com/mspradio/TikTok: https://www.tiktok.com/@businessoftechBluesky: https://bsky.app/profile/businessof.tech

Executives at the Edge
Internet-First Infrastructure: Architecting Multi-Cloud Environments

Executives at the Edge

Play Episode Listen Later Jul 10, 2025 23:05


ENTERPRISE EDITION Accenture's Chief Architect for Cloud Network and AI Infrastructure, Amo Mann, reveals how his global enterprise reinvented its network to embrace an internet-first, cloud-native, and AI-ready approach that slashed costs, boosted agility, and hardened security. What does it take to architect cloud, AI, and zero trust to work together? In this Executives at... Read More The post Internet-First Infrastructure: Architecting Multi-Cloud Environments appeared first on Mplify.

EM360 Podcast
Multi-Cloud & AI: Are You Ready for the Next Frontier?

EM360 Podcast

Play Episode Listen Later Jul 8, 2025 23:45


"AI may be both the driver and the remedy for multi-cloud adoption," says Dmitry Panenkov, Founder & CEO of emma, alluding to the vast potential Artificial Intelligence (AI) and multi-cloud strategies offer. In this episode of the Tech Transformed podcast, Tom Croll, a Cybersecurity Industry Analyst and Tech Advisor at Lionfish, speaks to Panenkov. They talk about the intricacies of powering multi-cloud systems with AI, offering valuable insights for businesses aiming to tap into the full potential of both. They also discuss data fragmentation, interoperability issues, and security concerns.

AI Adoption in Multi-Cloud
Addressing the key challenges of AI adoption in multi-cloud environments, Panenkov spotlights one of the most prominent issues – data fragmentation. "AI thrives on unified data sets. But multi-cloud setups often lead to data silos across the different platforms," explained the founder of emma, the cloud management platform. Data silos create a disconnect that makes it harder for AI models to access and process the huge amounts of data they need to function efficiently. At the same time, Panenkov stresses the potential of AI to drive multi-cloud adoption by optimising workloads and automating policies.

In addition to data fragmentation, the lack of interoperability and tooling presents another challenge when integrating AI with multi-cloud. This is where inconsistent APIs, a lack of standardisation, and variations in cloud-native tools create major friction, and the difference is evident when building AI pipelines across diverse environments. Panenkov also pointed out the impact of latency and performance. He says, "Even Kubernetes is sensitive to latency. When we talk about AI and inference, and I'm not even talking about the training, I'm saying that inference is also sensitive." Without proper networking solutions, running AI workloads effectively in multi-cloud environments becomes next to impossible.

Of course, security and compliance remain a looming challenge for enterprises across industries: managing data protection in different jurisdictions and environments adds layers of legal and operational complexity. Despite these challenges, the advantages AI brings to multi-cloud systems far outweigh them.

Intelligent Orchestration is the Key to Successful Multi-Cloud Adoption
The main topic of the conversation was how AI can actually help overcome the complexities of multi-cloud adoption. As the...
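Panenkov's point that AI can drive multi-cloud adoption by "optimising workloads" can be made concrete with a small sketch. Everything below is an illustrative assumption: the provider names, metrics, thresholds, and selection rule are invented for the example and are not taken from the episode or from emma's platform. The idea is simply that latency-sensitive inference placement reduces to constrained selection over per-cloud telemetry.

```python
# Hypothetical per-cloud telemetry; the numbers are illustrative only.
CLOUDS = {
    "aws":   {"latency_ms": 40, "cost_per_hour": 3.2, "gpu_available": True},
    "azure": {"latency_ms": 55, "cost_per_hour": 2.9, "gpu_available": True},
    "gcp":   {"latency_ms": 35, "cost_per_hour": 3.5, "gpu_available": False},
}

def place_inference_workload(clouds, max_latency_ms=50, needs_gpu=True):
    """Pick the cheapest cloud that meets the latency SLO and hardware needs.

    Returns the cloud name, or None if no provider satisfies the constraints
    (a real orchestrator would then queue the job or relax the SLO).
    """
    candidates = {
        name: metrics for name, metrics in clouds.items()
        if metrics["latency_ms"] <= max_latency_ms
        and (metrics["gpu_available"] or not needs_gpu)
    }
    if not candidates:
        return None
    # Among feasible clouds, minimize hourly cost.
    return min(candidates, key=lambda name: candidates[name]["cost_per_hour"])
```

A production system would refresh these metrics continuously and learn its own weights; the sketch only shows the filter-then-optimize shape such placement decisions tend to take.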

Cloud Security Podcast
Migrating from “Tick Box" Compliance to Automating GRC in a Multi-Cloud World

Cloud Security Podcast

Play Episode Listen Later Jun 17, 2025 28:48


In many organizations, security exception management is a manual process, often treated as a simple compliance checkbox. While necessary, this approach can lead to unmonitored configurations that drift from their approved state, creating inconsistencies in an organization's security posture over time. How can teams evolve this process to support modern development without compromising on security?

In this episode, Ashish Rajan sits down with security expert Santosh Bompally, Cloud Security Engineering Team Lead at Humana, to discuss a practical framework for automating exception management. Drawing on his journey from a young tech enthusiast to a security leader at Humana, Santosh explains how to transform this process from a manual task into a scalable, continuously monitored system that enables developer velocity.

Learn how to build a robust program from the ground up, starting with establishing a security baseline and leveraging policy-as-code, certified components, and continuous monitoring to create a consistent and secure cloud environment.

Guest Socials - Santosh's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter
- Cloud Security BootCamp

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Cybersecurity Podcast

Questions asked:
(00:00) Introduction
(00:39) From Young Hacker to Cybersecurity Pro
(02:14) The "Tick Box" Problem with Exception Management
(03:17) Exposing Your Threat Landscape: The Risk of Not Automating
(05:43) Where Do You Even Start? The First Steps
(08:26) VMs vs Containers vs Serverless: Is It Different?
(11:15) Building Your Program: Start with a Security Baseline
(14:44) What Standard to Follow? (CIS, PCI, HIPAA)
(17:20) The Lifecycle of a Control: When Should You Retire One?
(19:42) The 3 Levels of Security Automation Maturity
(23:25) Do You Need to Be a Coder for GRC Automation?
(26:16) Fun Questions: Home Automation, Family & Food
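The baseline-plus-exceptions pattern described above is typically implemented as policy-as-code. As a minimal sketch (the control names, waiver fields, and expiry rule are illustrative assumptions, not Humana's actual framework), exceptions can be modelled as time-boxed waivers that are re-evaluated continuously, so an expired waiver surfaces as drift instead of lingering unmonitored:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative baseline: control IDs every cloud account must satisfy.
BASELINE = {"encrypt-at-rest", "no-public-buckets", "mfa-required"}

@dataclass(frozen=True)
class Waiver:
    control: str   # baseline control being excepted
    account: str   # cloud account the exception applies to
    expires: date  # hard expiry: no open-ended exceptions

def expired_waivers(waivers, today):
    """Waivers past expiry; these are drift candidates to flag, not honour."""
    return [w for w in waivers if w.expires < today]

def effective_controls(account, waivers, today):
    """Controls still enforced for an account once live waivers are applied."""
    live = {w.control for w in waivers
            if w.account == account and w.expires >= today}
    return BASELINE - live
```

Run on a schedule (for example in CI or a nightly job), this turns the exception register from a static checkbox into a monitored surface: `expired_waivers` drives alerts, and `effective_controls` is what the policy engine actually enforces per account.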

Getup Kubicast
#171 - Career, Studies, and Women in Tech with Giulia Bordignon #Spacecoding

Getup Kubicast

Play Episode Listen Later Jun 5, 2025 75:46


In this episode of Kubicast, we welcome Giulia Bordignon, better known as SpaceCoding, for an inspiring and thought-provoking conversation about the journey of women in technology. Giulia is a backend developer, content creator, holds a master's degree in Computer Engineering, and is one of the most active voices on female representation in IT. The conversation goes well beyond the clichés and digs into structural issues such as academic training, barriers to entry, and the subtleties of prejudice.

From an undergraduate degree in the countryside to a master's in AI
Giulia shares her trajectory from her first contact with technology, back in the countryside, to her decision to pursue an academic career. Her choice of degree was driven by a search for financial stability and by cultural notions of "respected" professions. Over the course of the conversation, she reveals how subjects like accounting and business administration felt limiting until she found in technology a way to combine creativity, intellectual challenge, and real impact.

Barriers, blocks, and turning points
The episode also exposes how traumatic a first contact with technical content can be for people without role models. Giulia recounts how her first technical IT course, focused on networking, pushed her away from the field for a while. Later, her undergraduate experience and her contact with AI completely changed her perception of technology.

Master's degree: education or ego?
One of the most provocative moments comes when Giulia jokes that she will do a PhD just to be called "doctor." The quip pokes fun at the gap between personal motivations and market value, showing how academic titles are often not recognized to the same degree outside the university environment.

Technology, body, and well-being
Another highlight of the episode is the discussion about an active lifestyle and ergonomics. Giulia describes how sports have always been part of her life, including during the pandemic, when she found in weight training a new way to keep her body active. This attention to physical health also extends to the remote work environment, such as using adjustable desks, proper chairs, and stretching breaks.

Content as a tool for representation
Finally, the podcast gets into topics such as exposure on social media, the impact of haters, and the responsibility (and weight) of being an active voice for more diversity in tech. Giulia speaks frankly about the attacks she has suffered and how they only reinforce the need to keep occupying these spaces.

For anyone looking for honest reflections on technology, education, and diversity, this episode is a masterclass.

Kubicast is produced by Getup, a company specializing in Kubernetes and open source projects for Kubernetes. Podcast episodes are available on all major digital audio platforms and at YouTube.com/@getupcloud.

Getup Kubicast
#170 - Challenges of Geodistributed Kubernetes - With Guilherme Oki

Getup Kubicast

Play Episode Listen Later May 29, 2025 50:33


Today's conversation was with Guilherme Oki, a true SRE and cloud veteran who has navigated infrastructure environments in fintech and gaming and is now at a stealth startup (yes, the kind of mystery that keeps you curious until the end). We talked about Kubernetes at large scale, networking challenges, geodistribution, and the eternal multi-cloud dilemma: embrace it or run away?

We explored everything from what it really means to work at "large scale" (no, your 10-node EKS cluster doesn't count) to thornier questions such as Federation, eBPF, Cilium, and how to deal with the real pains of scalability in critical environments.

All of this with a technical slant, without losing the sense of humor. Join us for an episode that is simply unmissable for anyone who lives in, or wants to live in, the world of Kubernetes and modern infrastructure.

Main chapters of the episode:
00:00 - Opening
03:00 - What large scale really means
07:30 - Geodistribution
11:00 - Is multi-cloud worth it?
14:40 - Networking challenges
19:30 - Cluster federation
24:10 - Cilium and eBPF
30:00 - Infrastructure for games
34:20 - Standardization at scale
38:10 - Limits of Kubernetes
42:00 - Control with Cilium
46:30 - Bugs and UDP
50:40 - Managed vs autonomy

Important links:
- Guilherme Oki - https://www.linkedin.com/in/guilherme-oki-1a649b115/
- João Brito - https://www.linkedin.com/in/juniorjbn

Join our early access program and get a more secure environment in no time!
https://getup.io/zerocve

Kubicast is produced by Getup, a company specializing in Kubernetes and open source projects for Kubernetes. Podcast episodes are available on all major digital audio platforms and at YouTube.com/@getupcloud.

Packet Pushers - Full Podcast Feed
PP064: How Aviatrix Tackles Multi-Cloud Security Challenges (Sponsored)

Packet Pushers - Full Podcast Feed

Play Episode Listen Later May 27, 2025 42:51


Aviatrix is a cloud network security company that helps you secure connectivity to and among public and private clouds. On today’s Packet Protector, sponsored by Aviatrix, we get details on how Aviatrix works, and dive into a new feature called the Secure Network Supervisor Agent. This tool uses AI to help you monitor and troubleshoot... Read more »

Packet Pushers - Fat Pipe
PP064: How Aviatrix Tackles Multi-Cloud Security Challenges (Sponsored)

Packet Pushers - Fat Pipe

Play Episode Listen Later May 27, 2025 42:51


Aviatrix is a cloud network security company that helps you secure connectivity to and among public and private clouds. On today’s Packet Protector, sponsored by Aviatrix, we get details on how Aviatrix works, and dive into a new feature called the Secure Network Supervisor Agent. This tool uses AI to help you monitor and troubleshoot... Read more »

Audience 1st
A Deep Dive Into The Multi-Cloud Mess & How AlgoSec Connects the Dots

Audience 1st

Play Episode Listen Later May 9, 2025 41:48


What does it really take to secure applications across a hybrid, multi-cloud environment?

In this episode of Audience 1st, I sit down with Adolfo Lopez, Sales Engineer at AlgoSec, who brings a practitioner's lens to the cloud security conversation. From his experience as a network engineer to helping organizations operationalize cloud security today, Adolfo walks us through what most teams overlook, and how to get it right.

We cover:
- Why visibility into application flows is foundational for multi-cloud security
- What enterprises miss when they treat the cloud like a lift-and-shift extension of on-prem
- Why security must be application-centric, not infrastructure-led
- The critical role of policy discovery, orchestration, and automation
- How AlgoSec ACE helps teams answer the question: "What will break if I make this change?"

If your team is working across AWS, Azure, GCP, and on-prem, and struggling to manage risk, connectivity, and policy alignment, this episode breaks it down practically and tactically.

To get a demo of AlgoSec, visit: https://www.algosec.com/lp/request-a-demo

The MongoDB Podcast
EP. 264 Beyond the Database: Mastering Multi-Cloud Data, AI Automation & Integration (feat. Peter Ngai, SnapLogic)

The MongoDB Podcast

Play Episode Listen Later May 1, 2025 58:31


✨ Heads up! This episode features a demonstration of the SnapLogic UI and its AI Agent Creator towards the end. For the full visual experience, check out the video version on the Spotify app! ✨

(Episode Summary)
Tired of tangled data spread across multiple clouds, on-premise systems, and the edge? In this episode, MongoDB's Shane McAllister sits down with Peter Ngai, Principal Architect at SnapLogic, to explore the future of data integration and management in today's complex tech landscape.

Dive into the challenges and solutions surrounding modern data architecture, including:
- Navigating the complexities of multi-cloud and hybrid cloud environments.
- The secrets to building flexible, resilient data ecosystems that avoid vendor lock-in.
- Strategies for seamless data integration and connecting disparate applications using low-code/no-code platforms like SnapLogic.
- Meeting critical data compliance, security, and sovereignty demands (think GDPR, HIPAA, etc.).
- How AI is revolutionizing data automation and providing faster access to insights (featuring SnapLogic's Agent Creator).
- The powerful synergy between SnapLogic and MongoDB, leveraging MongoDB both internally and for customer integrations.
- Real-world applications, from IoT data processing to simplifying enterprise workflows.

Whether you're an IT leader, data engineer, business analyst, or simply curious about cloud strategy, iPaaS solutions, AI in business, or simplifying your data stack, Peter offers invaluable insights into making data connectivity a driver, not a barrier, for innovation.

Keywords: Data Integration, Multi-Cloud, Hybrid Cloud, Edge Computing, SnapLogic, MongoDB, AI, Artificial Intelligence, Data Automation, iPaaS, Low-Code, No-Code, Data Architecture, Data Management, Cloud Data, Enterprise Data, API Integration, Data Compliance, Data Sovereignty, Data Security, Business Automation, ETL, ELT, Tech Stack Simplification, Peter Ngai, Shane McAllister.

Feds At The Edge by FedInsider
Ep. 198 Identity Governance and Zero Trust - Best Practices for a Multi-Cloud Environment

Feds At The Edge by FedInsider

Play Episode Listen Later Apr 30, 2025 58:26


Today's modern networks have pushed identity management to the forefront, as agencies manage a plethora of landscapes – on- and off-prem, public and private, hybrid, and the new kid on the block, alt-clouds.

This week on Feds At The Edge, we explore how the Defense Information Systems Agency (DISA) is moving identity management, once a backwater concept, to center stage with its ambitious Thunderdome program. Chris Pymm, Portfolio Manager, Zero Trust & Division Chief for ID7 at DISA, shares how Thunderdome spans 50 sites and 12,000 users, automating identity controls to outpace threats like lateral movement. We also hear from Quest Software Public Sector cybersecurity expert Chris Roberts, who breaks identity management down to its core: know the user, know the device, know the behavior.

Tune in on your favorite podcasting platform today to hear how DISA is redefining identity for today's distributed networks, and what your agency can take from their playbook.

Irish Tech News Audio Articles
Simplifying IT for the AI and Multicloud Era

Irish Tech News Audio Articles

Play Episode Listen Later Apr 28, 2025 5:36


Guest post by Brian O'Toole, Consumption and Software Sales Leader at Dell Technologies

AI is rapidly reshaping the business landscape, making digital transformation not just a priority but a necessity for Irish organisations. Yet as companies look to harness its potential, they often find themselves navigating increasingly complex IT environments - a challenge that can feel overwhelming for businesses of all sizes. Whether it's navigating cloud migration, staying secure, scaling AI projects, or simply managing day-to-day IT workloads with limited resources, one thing we keep hearing from businesses and organisations alike is: 'we need to simplify'. At Dell Technologies, we've seen these challenges firsthand - and that's why we're helping organisations embrace technology as-a-Service. Adopting this approach can help simplify operations, modernise IT infrastructure, and give businesses the agility they need to innovate at speed in the AI era.

A Fresh Approach to IT Management
Today, IT teams face a perfect storm of priorities from business leaders responding to external challenges. These priorities pressure IT leaders to do more with less, asking operations teams to innovate while addressing expanding regulatory frameworks around data. All these pressures and potentially competing priorities increase the risk of IT decision sprawl that solves problems in one area while adding complexity in others. To help IT and business leaders navigate this environment and shift IT costs from capital expenditure (CapEx) to operational expenditure (OpEx), Dell APEX Cloud Platforms provide integrated infrastructure management that reduces multicloud complexity while strengthening security and governance. APEX is a portfolio of fully integrated, turnkey systems that combine Dell infrastructure, software and cloud operating stacks to deliver consistent multicloud operations. By extending cloud operating models to on-premises and edge environments, Dell APEX Cloud Platforms bridge the cloud divide by delivering consistent cloud operations everywhere.

With Dell APEX Cloud Platforms, you can:
- Minimize multicloud costs and complexity in the cloud ecosystem of your choice.
- Increase application value by accelerating productivity with familiar experiences that enable you to develop anywhere and deploy everywhere.
- Improve security and governance by enforcing consistent cloud ecosystem management from cloud to edge and enhancing control with layered security.

The shift to an as-a-Service approach gives businesses control without the chaos. Whether a scaling startup or an established large business planning to advance its multicloud solutions or leverage AI-driven applications, organisations can access the latest technology - storage, servers, devices and cloud services - on demand, paying only for what they use.

Enabling organisations to innovate in an AI and multicloud era
For organisations, the shift to an as-a-service model is not just about simplifying IT systems; it's about ensuring they can unlock innovation and growth. Businesses pay for what they use, which aligns technology investment with actual value and usage. This approach is especially critical for costly infrastructure such as GPUs, servers, and storage, all of which require substantial investment. By spreading costs over time, organisations in Ireland can forge a cost-effective pathway to cutting-edge AI capabilities without being locked into long-term technology commitments. In Ireland, we're seeing a growing appetite for more agile, scalable IT models, especially among businesses embracing AI, hybrid work, and multicloud strategies. As the debate between public and private clouds fades, multicloud ecosystems are the future, and Dell APEX is leading the charge. With partnerships spanning hyperscalers like Microsoft, Red Hat, VMware, and Google Cloud, Dell APEX delivers simplified IT management across environments. Dell APEX innovatio...

Audience 1st
5 Mindset Shifts Security Teams Must Adopt to Master Multi-Cloud Security

Audience 1st

Play Episode Listen Later Apr 4, 2025 30:38


Multi-cloud security isn't just a technology challenge—it's an organizational mindset problem. Security teams are juggling AWS, Azure, and GCP, each with different security models, policies, and rules. The result? Silos, misconfigurations, and security gaps big enough to drive an exploit through.

In this episode, I sat down with Gal Yosef from AlgoSec to break down:
- Why multi-cloud security is so complex (and what security teams are getting wrong)
- How to bridge the gap between network security and cloud security teams
- How large enterprises manage cloud security policy enforcement across business units
- The shift from one-size-fits-all security policies to flexible, risk-based guardrails
- Why automation and visibility are critical for securing multi-cloud environments

If you want to secure application connectivity across your hybrid environment, visit algosec.com.

Connected Social Media
Validating and Evolving Intel IT's Multicloud Strategy

Connected Social Media

Play Episode Listen Later Apr 1, 2025


Intel IT adopted a “right workload, right place” multicloud strategy nearly 10 years ago. This strategy has accelerated application development…

Data Breach Today Podcast
Nir Zuk: Google's Multi-Cloud Security Strategy Won't Work

Data Breach Today Podcast

Play Episode Listen Later Mar 28, 2025



This Week in Health IT
Interview In Action: Harnessing the Power of the Multi-Cloud Environment with Amar Maletira

This Week in Health IT

Play Episode Listen Later Feb 26, 2025 17:50 Transcription Available


February 26, 2025: Amar Maletira, CEO of Rackspace, explores the evolving role of multi-cloud environments—why are CIOs now rethinking their cloud strategies after years of rapid migration? As AI continues to weave itself into every facet of IT, how can healthcare organizations effectively harness its power while navigating workforce gaps and security risks? And in a world of increasing cyber threats, what are the real challenges of securing critical healthcare workloads across hybrid infrastructures? This conversation unpacks the complexity of modern IT strategy, from cloud optimization to AI-driven automation.

Key Points:
02:27 The Evolution of Cloud Computing
07:08 AI in Cloud Management
12:03 Rackspace Security Solutions

Subscribe: This Week Health
Twitter: This Week Health
LinkedIn: This Week Health
Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer

Cloud Security Podcast
Cloud Security Detection & Response Strategies That Actually Work

Cloud Security Podcast

Play Episode Listen Later Feb 4, 2025 57:58


We spoke to Will Bengtson (VP of Security Operations at HashiCorp) about the realities of cloud incident response and detection. From root credentials to event-based threats, this conversation dives deep into:
- Why cloud security is NOT like on-prem – and how that affects incident response
- How attackers exploit APIs in seconds (yes, seconds—not hours!)
- The secret to building a cloud detection program that actually works
- The biggest detection blind spots in AWS, Azure, and multi-cloud environments
- What most SOC teams get WRONG about cloud security

Guest Socials: Will's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter
- Cloud Security BootCamp

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Cybersecurity Podcast

Questions asked:
(00:00) Introduction
(00:38) A bit about Will Bengtson
(05:41) Is there more awareness of Incident Response in Cloud
(07:05) Native Solutions for Incident Response in Cloud
(08:40) Incident Response and Threat Detection in the Cloud
(11:53) Getting started with Incident Response in Cloud
(20:45) Maturity in Incident Response in Cloud
(24:38) When to start doing Threat Hunting?
(27:44) Threat hunting and detection in MultiCloud
(31:09) Will talks about his BlackHat training with Rich Mogull
(39:19) Secret Detection for Detection Capability
(43:13) Building a career in Cloud Detection and Response
(51:27) The Fun Section

Screaming in the Cloud
Replay - Multi-Cloud is the Future with Tobi Knaup

Screaming in the Cloud

Play Episode Listen Later Dec 12, 2024 31:02


On this Screaming in the Cloud Replay, we're revisiting our conversation with Tobi Knaup, the current VP & General Manager of Cloud Native at Nutanix. At the time this first aired, Tobi was the co-founder and CTO of D2iQ, before the company was acquired by Nutanix. In this blast from the past, Corey and Tobi discuss why Mesosphere rebranded as D2iQ and why the Kubernetes community deserves the credit for the widespread adoption of the container orchestration platform. Many people assume Kubernetes is all they need, but that's a mistake, and Tobi explains what other tools they end up having to use. We'll also hear why Tobi thinks that multi-cloud is the future (it is the title of the episode after all).

Show Highlights
(0:00) Intro
(0:28) The Duckbill Group sponsor read
(1:01) Mesosphere rebranding to D2iQ
(4:34) The strength of the Kubernetes community
(7:43) Is open-source a bad business model?
(10:19) Why you need more than just Kubernetes
(13:13) The Duckbill Group sponsor read
(13:55) Is multi-cloud the best practice?
(17:31) Creating a consistent experience between two providers
(19:05) Tobi's background story
(24:24) Memories of the days of physical data centers
(28:00) How long will Kubernetes be relevant
(30:18) Where you can find more from Tobi

About Tobi Knaup
Tobi Knaup is the VP & General Manager of Cloud Native at Nutanix. Previously, he was the Co-Founder and CTO of the D2iQ Kubernetes Platform before Nutanix acquired the company. Knaup is an experienced software engineer focusing on large-scale systems and machine learning. Tobi's research work is on Internet-scale sentiment analysis using online knowledge, linguistic analysis, and machine learning. Outside of his tech work, he enjoys making cocktails and has collected his favorite recipes on his cocktail website.

Links
Tobi's Twitter: https://twitter.com/superguenter
LinkedIn: https://www.linkedin.com/in/tobiasknaup/
Personal site: https://tobi.knaup.me/
Original Episode: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/multi-cloud-is-the-future-with-tobi-knaup/

Sponsor
The Duckbill Group: duckbillgroup.com

Packet Pushers - Heavy Networking
HN756: Alkira Enhances Its Multi-Cloud Networking With ZTNA and Security (Sponsored)

Packet Pushers - Heavy Networking

Play Episode Listen Later Nov 1, 2024 47:04


Alkira provides a Multi-Cloud Networking Service (MCNS) that lets you connect public cloud and on-prem locations using a cloud-delivered, as-a-service approach. But Alkira offers more than just multi-cloud connectivity. On today’s sponsored episode of Heavy Networking, we dig into Alkira’s full set of offerings, which include networking, visibility, governance, and security controls such as firewalls... Read more »