If you've ever wondered how Oracle Database really works inside AWS, this episode will finally turn the lights on. Join Senior Principal OCI Instructor Susan Jang as she explains the two database services available (Exadata Database Service and Autonomous Database), how Oracle and AWS share responsibilities behind the scenes, and which essential tasks still land on your plate after deployment. You'll discover how automation, scaling, and security actually work, and which model best fits your needs, whether you want hands-off simplicity or deeper control. Oracle Database@AWS Architect Professional: https://mylearn.oracle.com/ou/course/oracle-databaseaws-architect-professional/155574 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone! In our last episode, we began the discussion on Oracle Database@AWS. Today, we're diving deeper into the database services that are available in this environment. Susan Jang, our Senior Principal OCI Instructor, joins us once again. 00:56 Lois: Hi Susan! Thanks for being here today. In our last conversation, we compared Oracle Autonomous Database and Exadata Database Service. 
Can you elaborate on the fundamental differences between these two services? Susan: The primary difference between the two services is really the management model. Autonomous Database is fully managed by Oracle, while Exadata Database Service gives you the flexibility to customize your database environment while still having the infrastructure managed by Oracle. 01:30 Nikita: When it comes to running Oracle Database@AWS, how do Oracle and AWS each chip in? Could you break down what each provider is responsible for in this setup? Susan: Oracle Database@AWS is a collaboration between Oracle and AWS. It allows customers to deploy and run Oracle Database services, including Oracle Autonomous Database and Oracle Exadata Database Service, directly in AWS data centers. Oracle provides the Oracle Exadata Database Service on dedicated infrastructure. This service delivers the full capabilities of the Oracle Exadata Database on Oracle Exadata hardware. It offers high performance and high security for demanding workloads, and it has cloud automation, resource scaling, and performance optimization to simplify management of the service. Oracle Autonomous Database on dedicated Exadata infrastructure provides a fully autonomous database on this dedicated infrastructure within AWS. It automates database management tasks, including patching, backups, and tuning, and it has built-in AI capabilities for developing AI-powered applications and interacting with data using natural language. Oracle Database@AWS integrates those core database services with various AWS services for a comprehensive, unified experience. AWS provides cloud-based object storage through Amazon S3. You also have other services, such as Amazon CloudWatch, which monitors database metrics and performance. You also have Amazon Bedrock.
It provides a development environment for generative AI applications. And last but not least, among the many other services, you also have Amazon SageMaker, a cloud-based platform for developing machine learning models, a wonderful fit for our AI application development needs. 03:54 Lois: How has the work involved in setting up and managing databases changed over time? Susan: When we look at how our systems have evolved through the years, we realize that responsibility has migrated more and more from the customer, or human interaction, to the services. As database technology evolved from the traditional on-premises system to the Exadata engineered system, and finally to the Autonomous Database, tasks that previously required significant manual intervention have become increasingly automated and optimized. 04:34 Lois: How so? Susan: A more traditional database environment requires manual configuration of the hardware, the operating system, and the database software, along with initial database creation. As we evolve into the Exadata environment, the Exadata Database, specifically the Exadata Cloud Service, simplifies provisioning through a web-based wizard, making it faster and easier to deploy Oracle Database on optimized hardware. When we move to an Autonomous environment, the entire provisioning process is automated, allowing users to rapidly deploy mission-critical databases without manual intervention or DBA involvement. So as customers move toward Autonomous Database through Exadata, there are fewer components the customer needs to manage in the database stack, which gives them more time to focus on the important parts of the business. The Exadata Database Service provides co-management of backup and restore, patching and upgrades, monitoring, and tuning.
It allows the administrator to customize the configuration to meet very specific business needs. With Autonomous Database, management is now fully automated, and a greater share of the responsibility shifts toward the service. With Autonomous Database on dedicated infrastructure, Oracle performs that fine-grained tuning to help you with the task. 06:15 Nikita: If we narrow it down just to Oracle and AWS for a moment, which parts of the infrastructure or day-to-day ops are handled by each company behind the scenes? Susan: Oracle Database@AWS operates under a shared responsibility model, dividing the service responsibilities among AWS, Oracle, and you, the customer. AWS has the data center. Remember, this is where everything is running. The Oracle Database infrastructure may be managed by Oracle and run in OCI, but it is physically located within AWS regions, availability zones, and AWS data centers. The AWS infrastructure, in this case, is AWS's responsibility to secure, including the physical security of the data center, the network infrastructure, and foundational services like compute, storage, and networking within AWS. The next party in the shared responsibility model is Oracle, and its responsibility is the hardware. We provide the hardware. While the hardware may physically reside in the AWS data center, Oracle Cloud Infrastructure's operations team manages this infrastructure, including software patching, infrastructure updates, and other operations, through a connection to OCI. This means Oracle handles the provisioning and maintenance of the underlying Exadata infrastructure hardware. Besides the Exadata infrastructure, Oracle is also responsible for the next layer it manages.
Oracle also manages the hardware and its environment through the database control plane. So Oracle manages the administration and operations for the Oracle Database@AWS service, which resides in OCI. This includes the capabilities for management, upgrades, and operational features. 08:37 Nikita: And what are the key things that still remain on the customer's plate? Susan: Whether you are in an Exadata environment or an Autonomous environment, you, the customer, are responsible for most database administration operations, as well as for managing users and their privileges to access the database. No one knows the database, and who should be accessing the data, better than you. You are responsible for securing the applications and the data in the database, which means you define who has access to it, control the data encryption, and secure the applications that interact with Oracle Database@AWS. 09:29 Lois: Susan, we've talked about both Autonomous Database and Exadata Database Service being available on Oracle Database@AWS, but what's different about how each works in this environment, and why might someone pick one over the other? Susan: Even though both services run on the same Exadata cloud infrastructure, both can be deployed in the public cloud as well as in the customer's data center, which is Oracle Cloud@Customer. The Autonomous Database is a fully managed, completely automated environment, and it provides a fully Autonomous Database service running on dedicated Oracle Exadata infrastructure within your AWS data center.
Exadata Database Service is provided and managed by Oracle and physically runs in the AWS data center. It is designed for mission-critical workloads and includes Oracle Real Application Clusters (RAC), offering high performance, high availability, and full feature capability similar to other Exadata environments, such as those running in our customers' data centers. Now for the primary difference between the two services. With Autonomous Database, the customer pays only for the compute resources that are used. Autoscaling can be used for variable workloads to automatically scale the compute resources up or down when required. The Autonomous Database also has automatic optimization for data warehousing, transaction processing, and JSON workloads. With the Exadata service, the customer again pays for the compute resources they allocate, but, and this is the key thing, the customer initiates the scaling, because it is very specific to the workload that is needed. So when you look at the two database services, one gives you the ability to let Oracle fully manage everything, including the scaling capability. The other, Exadata, gives you an environment where the infrastructure is managed by Oracle, while you, as the database administrator, may wish to have a little more granular control of how you want the database to scale and how you wish to customize how the database runs. 12:10 Nikita: Focusing on Autonomous Database for a moment, what should teams know about how it actually runs within AWS? Susan: The Autonomous Database on Oracle Database@AWS brings the power of Oracle's self-managing, self-securing, and self-repairing database into your AWS environment.
It automates many of the traditional, complex, and time-consuming database management tasks, such as database provisioning, patching, backup, scaling, and performance tuning, reducing the need for manual intervention by the database administrator. Running the Autonomous Database in your AWS region enables low-latency access for the AWS applications and services deployed there, improving performance and response times. The Autonomous Database also supports integration with various AWS services, such as IAM, and in addition, CloudFormation, CloudWatch for monitoring, and S3 for storage. You can easily migrate existing Exadata workloads, including those running on Oracle RAC, to AWS with minimal or no changes to your databases or applications. In addition, there's a really powerful capability of the database called zero-ETL, that's zero extract, transform, and load. It's an integration capability with services like Amazon Redshift, enabling near-real-time analytics and machine learning on the transactional data stored within the Autonomous Database in your AWS environment. So the Autonomous Database checks off many of the boxes: automatically provisioning, securing, tuning, and scaling the database. With Autonomous Database on dedicated Exadata infrastructure, the Exadata cloud infrastructure resource represents the physical system, which can be expanded with storage servers as well as compute hosts. This provides the ability to have an isolated zone for the highest protection from other tenants. The data is stored on a dedicated server for only one customer. That would be you. 14:56 Lois: Could you explain the role of the Autonomous VM?
What are its primary benefits? Susan: The Autonomous VM cluster, as we refer to it, includes the grid infrastructure and provides private network isolation. It gives you the capability of custom memory, core, and storage allocation. The Oracle Grid Infrastructure includes Oracle Clusterware, which manages the cluster and its servers and ensures that the database can fail over to another server in case of any failure. 15:34 Be a part of something big by joining the Oracle University Learning Community! Connect with over 3 million members, including Oracle experts and fellow learners. Engage in topical forums, share your knowledge, and celebrate your achievements together. Discover the community today at mylearn.oracle.com. 15:55 Nikita: Welcome back! Susan, what is the Autonomous Container Database? Susan: You need an Autonomous Container Database if you're going to create an Autonomous Database, and you provision it within your Autonomous Exadata VM cluster. It serves as a container to hold, or house, one or more Autonomous Databases. This allows multiple Autonomous Databases to coexist in the same infrastructure while still being logically separated, and it allows for the separation of databases based on their intended use. Think of a database for production, a database for development, a database for testing. You may have different database versions within the same infrastructure. This isolation makes it easier for you to meet your SLAs, your Service Level Agreements, any long-term backup requirements, and very specific encryption key needs, and it prevents issues in one database from impacting another. So you have the ability to keep everything isolated and secure while still grouping databases in a manner that meets your business needs. 17:08 Lois: Looking at Exadata Database Service specifically, what are some standout advantages for customers who deploy it on Oracle Database@AWS?
Is there anything in particular they should get excited about in terms of performance or integration with AWS? Susan: The Exadata Database Service runs on dedicated Exadata infrastructure deployed within your AWS data center. It delivers the same Exadata service experience and cloud control plane as Oracle Cloud Infrastructure, allowing you to leverage existing skills and processes across your multi-cloud environment. It addresses data residency, a scenario where many of our customers have a need: because of your security compliance requirements, you need the data local to you. By having the Exadata Database in your Oracle Database@AWS, it is running in your data center, so this addresses that very important need for data residency, to have the data close to you. It also allows for seamless integration with other AWS services and applications, so you now have a hybrid cloud architecture leveraging the benefits of both Oracle Exadata and your AWS systems. It has built-in high availability with Oracle Real Application Clusters (RAC), as well as Data Guard, which addresses disaster recovery. It also lets you scale your compute, storage, and I/O resources independently, so, as mentioned, with Exadata you have flexibility in how you want your database to run. Just like the Autonomous Database, the Exadata Database checks off many of the boxes for running mission-critical workloads, with highly available, highly redundant hardware and software features, along with extreme performance, scalability, and reliability. This allows you to run your AI, online transaction processing, and analytics workloads at any scale on the Exadata infrastructure running in Oracle Cloud, and in this case, running in your data center.
19:45 Nikita: If a business suddenly needs more capacity, how does scaling work with Exadata Database Service versus Autonomous Database on Oracle Database@AWS? Susan: With Exadata scaling, you can scale to meet expected demands. You know that at a certain point you will need more, and you ask it to scale at that point. As an example, say I assign three compute cores all the time. But there may be peak demands, think of end-of-quarter or end-of-year processing, when I need more. So you enable the compute cores to scale at the time you need them. And what's cool is that when the extra capacity is no longer needed, it scales back down to the original three cores you assigned. So you only pay for the enabled cores. What's very cool about the Autonomous Database is that it is real-time scaling. Since it is self-tuning and self-monitoring, the Autonomous Database actually monitors the workload and scales to match the demand. Once the minimum level of compute is defined and automatic scaling is enabled, the Autonomous Database adjusts consumption when needed and scales back down when it's not. So though Exadata scaling is pretty cool, scaling up and down on workload demand, the Autonomous Database is even more powerful: it is real-time scaling based on usage at that moment, with a built-in automatic increase to meet workload spikes and automatic scale-back when the capacity is not needed. It's a very powerful capability with all of our Oracle databases, even the traditional ones: the ability to define what you may need, with Exadata scaling for peak demands and Autonomous scaling for real-time consumption.
When you look at all of our options, one of the key things to bear in mind is a phrase that we use: performance scales as more servers are added. What this is really saying is that Oracle's automated scaling capability allows the database to maintain or improve its performance under increased workload by automatically adding computational resources when needed. This process is also known as horizontal scaling. It involves adding more servers, or compute instances, to a cluster to share the processing load, and Oracle has that capability built in. 22:53 Nikita: There's so much more we can discuss about Oracle Database@AWS, but let's pause here for today! Thank you so much, Susan, for joining us. Lois: Yeah, it's been really great to have you, Susan. If you want to dive deeper into the topics we covered today, go to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 23:23 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
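The two scaling models Susan contrasts in this episode, customer-initiated Exadata scaling versus real-time Autonomous scaling, can be sketched in a few lines of code. This is an illustrative model only: the class and method names below are invented for the sketch and are not part of any Oracle SDK or API.

```python
# Hypothetical sketch of the two scaling behaviours described above.
# Not an Oracle API: the names here are invented for illustration.

class ExadataScaling:
    """Customer-initiated scaling: the administrator enables extra cores
    for known peak periods (e.g. quarter-end) and scales back afterwards;
    billing follows the enabled cores."""

    def __init__(self, base_cores):
        self.base_cores = base_cores
        self.enabled_cores = base_cores

    def scale_to(self, cores):
        # The administrator decides when to scale and by how much.
        self.enabled_cores = cores

    def scale_back(self):
        # Back to the baseline once the peak has passed.
        self.enabled_cores = self.base_cores


class AutonomousScaling:
    """Real-time scaling: the service monitors the workload itself and
    adjusts consumption between a defined baseline and a ceiling."""

    def __init__(self, baseline_cores, max_cores):
        self.baseline_cores = baseline_cores
        self.max_cores = max_cores

    def cores_for(self, demand_cores):
        # Scales up for spikes, back down when demand falls,
        # never below the baseline or above the ceiling.
        return min(self.max_cores, max(self.baseline_cores, demand_cores))


exa = ExadataScaling(base_cores=3)
exa.scale_to(8)           # quarter-end: administrator enables more cores
print(exa.enabled_cores)  # 8
exa.scale_back()
print(exa.enabled_cores)  # 3

adb = AutonomousScaling(baseline_cores=2, max_cores=6)
print(adb.cores_for(5))   # spike: scales up to 5
print(adb.cores_for(1))   # quiet period: back to the baseline of 2
print(adb.cores_for(9))   # capped at the 6-core ceiling
```

The design difference the episode emphasizes is visible in the sketch: `ExadataScaling` changes state only when the administrator calls it, while `AutonomousScaling` computes capacity from the observed demand on every call.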
Rob Hughes — CISO at RSA and Champion of a Passwordless Future
No Password Required Season 7: Episode 1 - Rob Hughes
Rob Hughes, the CISO at RSA, has more than 25 years of experience leading security and cloud infrastructure teams. In this episode, he reflects on his unconventional career path, from co-founding the original Geek.com and serving as its Chief Technologist during the early days of the internet, to leading security and systems design at Philips Home Monitoring.
Jack Clabby of Carlton Fields, P.A. and Kayley Melton welcome Rob for a wide-ranging conversation on identity, leadership, and the realities of modern cybersecurity. Rob currently leads RSA's Security and Risk Office, overseeing cybersecurity, information security governance, and risk across both RSA's products and corporate environment.
Rob explains his dream for a passwordless future. He unpacks why passwords remain one of the largest sources of cyber risk, how real-world incidents and password-spraying attacks have accelerated change, and why phishing-resistant technologies like passkeys may finally be reaching a tipping point. The episode wraps with the Lifestyle Polygraph, where Rob lightens the conversation with stories about gaming with his kids, underrated horror films, and classic cars.
Follow Rob on LinkedIn: https://www.linkedin.com/in/robert-hughes-816067a4/
Chapters:
00:00 Introduction to No Password Required
01:43 Meet Rob Hughes, CISO at RSA
02:05 The Role of a CISO in a Security Company
05:09 Transitioning to the CISO Role
08:00 The Early Days of Geek.com
12:14 Launching a Startup During the Dot Com Boom
14:30 The Push for a Passwordless Future
18:21 Tipping Point for Passwordless Adoption
20:20 Ongoing Learning in Cybersecurity
26:09 Managing Stress in High-Pressure Environments
33:46 The Lifestyle Polygraph Begins
34:15 Career Insights in Cybersecurity
36:08 Dream Cars and Personal Preferences
39:58 Underrated Horror Films
41:19 Creating a Cybersecurity Monster
Cloud infrastructure provisioning can be a challenging task, often requiring a delicate balance between speed and accuracy. In our discussion today, we explore how AI can streamline this process. Our guest, Marcin Wyszynski, co-founder of Spacelift, introduces us to their solution, Spacelift Intent, which simplifies infrastructure management by removing the complexities of traditional tools like Terraform. Marcin explains how this platform allows users to express their needs directly, enabling quicker provisioning while maintaining the necessary controls and policies. Join us as we delve into the intricacies of AI-driven cloud provisioning and its potential to make infrastructure management more accessible for developers and data scientists alike. The discussion centers on the challenges of cloud infrastructure provisioning and how AI can provide solutions. Marcin outlines two extremes in the current landscape: the rapid but unrepeatable manual provisioning of cloud resources through console clicks, and the slow, complex processes involving tools like Terraform that require extensive knowledge and setup. He emphasizes that many users, such as developers and data scientists, do not need to become cloud experts; they simply want to provision resources effectively. Marcin introduces Spacelift Intent, a tool designed to simplify this process by allowing users to express their needs in natural language, which AI translates into API calls.
This approach shortens the lengthy deployment cycle typically associated with traditional tools, making it easier for users to manage infrastructure without deep technical expertise.
Takeaways:
Cloud infrastructure provisioning is challenging due to the extreme approaches we have adopted.
AI can facilitate the process of managing cloud infrastructure by streamlining resource provisioning.
Spacelift Intent provides a middle ground between fast-but-chaotic and slow-but-formal cloud setups.
Using AI in infrastructure management allows for quick prototyping without extensive learning curves.
The approach of Spacelift Intent helps users transition from initial setups to formal infrastructure management.
Historical data from previous configurations is preserved, allowing for easy migration and state management.
Links referenced in this episode:
spacelift.com
opentofu.org
softwarearchitectureinsights.com
Companies mentioned in this episode:
Spacelift
OpenTofu
Terraform
Pulumi
CloudFormation
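The flow this episode describes, a natural-language request that an AI layer turns into structured provisioning calls, with every resource recorded so it can later be migrated into formal state management, can be sketched roughly as follows. Everything here is hypothetical: the function and class names are invented for illustration, this is not Spacelift's actual product API, and the "AI" step is reduced to trivial keyword matching to keep the sketch self-contained.

```python
# Hypothetical sketch of intent-based provisioning. Not a real product API:
# the AI translation step is stubbed out with simple keyword matching.

def parse_intent(request: str) -> dict:
    """Crude stand-in for the AI step: turn a natural-language request
    into a structured resource description."""
    req = request.lower()
    if "bucket" in req:
        resource = {"type": "object_storage_bucket"}
    elif "vm" in req or "instance" in req:
        resource = {"type": "compute_instance"}
    else:
        raise ValueError(f"don't know how to provision: {request!r}")
    # Keep the original request so the configuration history is preserved.
    resource["intent"] = request
    return resource


class StateStore:
    """Records every provisioned resource, mirroring the point above that
    historical configuration data is kept for later migration into a
    formal state-management workflow."""

    def __init__(self):
        self.resources = []

    def provision(self, request: str) -> dict:
        resource = parse_intent(request)
        self.resources.append(resource)  # in reality: an API call + a state write
        return resource


store = StateStore()
store.provision("create a bucket for experiment data")
store.provision("spin up a small vm for the demo")
print(len(store.resources))        # 2
print(store.resources[0]["type"])  # object_storage_bucket
```

The point of keeping the recorded state is the "middle ground" from the takeaways: the quick, conversational setup still produces an auditable record that a formal tool can later adopt.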
Every system depends on reliable infrastructure behind the scenes. Oracle Cloud Infrastructure (OCI) delivers that reliability with speed, flexibility, and built-in security. Join Lois Houston and Nikita Abraham as they speak with Oracle Cloud experts David Mills and Tijo Thomas about what makes OCI different and how it drives real results for businesses of every size. Cloud Business Jumpstart https://mylearn.oracle.com/ou/course/cloud-business-jumpstart/152957 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone, and welcome to a brand-new season of the podcast! We're really excited about this one because we'll be diving into how Oracle Cloud Infrastructure is transforming the way businesses innovate, stay secure, and drive results. 00:55 Lois: And to help us with this, we've got two experts who know this space inside out—David Mills, Senior Principal PaaS Instructor, and Tijo Thomas, Principal OCI Instructor, both from Oracle University. Hi David! For those who might not be familiar, could you explain what Oracle Cloud Infrastructure is? 
David: OCI, as we call it, is Oracle's enterprise-grade cloud platform, built from the ground up to run the systems that matter most to business. It provides the infrastructure and platform services businesses need to build, run, and scale applications securely, globally, and cost-effectively. To provide more context, all of Oracle's SaaS applications, such as NetSuite, Customer Experience, Human Capital Management, Supply Chain Management, as well as Enterprise Resource Planning and Enterprise Performance Management, run on OCI. But OCI isn't just for Oracle's own apps. It's a full-featured cloud platform used by thousands of customers to run their own applications, data, and services. OCI includes platform services such as databases, integration, analytics, and many others, and of course the infrastructure services, such as compute, networking, and storage, which form the core of OCI. Bottom line: if something is running on Oracle Cloud, OCI is behind it. OCI includes over 100 services across numerous categories like compute, storage, networking, database, containers, AI, developer tools, integration, security, observability, and much more. So, whether you're lifting and shifting legacy workloads or building new apps in the cloud, OCI has the building blocks. 03:02 Lois: David, who was OCI designed for? David: OCI was built from scratch to address the limitations of first-generation clouds. No patchwork of legacy acquisitions, just a clean, modern, high-performance foundation designed for real enterprise workloads. OCI was designed for businesses that can't compromise: financial services, health care, retail, governments, customers with strict regulations, global scale, and mission-critical systems. These are the companies choosing OCI not just because it works, but because it works under pressure. 03:42 Nikita: What else makes OCI different from other cloud platforms? David: Oracle's network and storage architecture delivers low-latency results consistently.
Then there's pricing: simple, predictable, and often much lower than our competitors'. OCI was designed with governance and security in every layer. OCI supports all types of cloud strategies: public cloud, hybrid deployments, multi-cloud environments, and even a dedicated cloud we can install inside your own data center. We call all of that distributed cloud, and that's where OCI really shines. OCI gives you everything you need to modernize your technology stack, run securely at scale, and build for the future without giving up control or blowing your budget. 04:37 Lois: Now, Tijo, we've covered what OCI is, who it's for, and what makes it unique. Let's switch gears a bit and talk about cloud regions. For anyone who doesn't know, a cloud region is just a specific geographic location where Oracle, or any cloud provider, runs its own data centers. Why does the choice of region matter for businesses, and what should they think about when picking one? Tijo: Many businesses are required by law to keep their data within national borders. Whether it is GDPR in Europe or local privacy laws in Australia or Singapore, choosing the right region helps you stay compliant. The closer your applications are to your users, the faster they perform. Running in a nearby region means lower latency, faster response times, and a better customer experience. Then there is disaster recovery and high availability. Regions are the building blocks for setting up failover strategies. By deploying workloads in multiple regions, businesses can protect themselves from outages and keep their systems running. Some businesses also need to meet industry-specific compliance requirements. Think of sectors like health care, government, or finance. They often require that the infrastructure and the data stay within national or regional boundaries.
If your business is growing into new markets, regions allow you to deploy apps and services closer to your customers without needing to build new data centers. Regions also enable local integrations and partnerships, whether it is connecting with ISPs and local service providers or complying with in-country partner requirements. Having a region nearby makes those integrations and operations smoother. Regions are not just about geography. They are a critical part of how businesses stay compliant, resilient, and responsive across the globe. Oracle runs a fast-growing global network of cloud regions, and each OCI region is fully independent and fully isolated. You choose your regions, and your data stays there. 07:06 Nikita: And are there different types of cloud regions? Tijo: There are several: commercial regions, sovereign regions, government regions, and multi-cloud regions. Even with a wide range of cloud regions, some organizations cannot move their workloads and data to the public cloud. Those workloads may need to stay in their own on-premises data center, but at the same time, they still want to leverage the benefits of OCI. 07:42 Take your cloud skills to the next level with the new Oracle Database@AWS course. Master provisioning, migration, security, and high availability for Oracle Database on AWS. Then validate your experience with an industry-recognized certification. Stand out in the multicloud space and accelerate your career. Visit mylearn.oracle.com for more information. 08:09 Nikita: Welcome back! We were talking about workloads and how some companies may have to keep their workloads on-premises. Why would they need to do that, Tijo? Tijo: First, data sovereignty. There may not be a public cloud region in the location the organization is looking for, or the business may need to set up a disaster recovery strategy within that specific location. Then there is security and control.
Some industries have very strict regulations, and they require physical access to and oversight of their infrastructure. And finally, there are latency-sensitive workloads. These are applications that cannot afford the delay of going back and forth to a remote cloud region. They need cloud services right next to their physical data center. 08:59 Nikita: So, how does Oracle help with that? Tijo: To address these requirements, Oracle offers two solutions. The first is called Dedicated Region, and the second is called Cloud@Customer. Through both of these offerings, you get OCI services right in your data center, all behind your firewall, while still gaining the benefits of flexibility and automation. 09:24 Nikita: So, what's a dedicated region? Tijo: Dedicated Region is a completely managed cloud region that brings all the OCI services and Oracle Fusion SaaS applications into your data center. Along with deploying the full OCI stack, you receive support for Oracle Fusion SaaS applications and get a consistent experience with the same SLAs, APIs, and tools available in Oracle Cloud. 09:53 Lois: Ok, and what about Cloud@Customer? Tijo: While Dedicated Region is ideal for large-scale enterprise needs, with the full OCI stack and SaaS, some organizations just require a lighter footprint. And that's where Cloud@Customer comes in. To begin with, we'll talk about Compute Cloud@Customer. It is a fully managed rack-scale infrastructure that allows you to use the core OCI services, like OCI Compute, OCI Storage, and OCI Networking, on-premises. With Compute Cloud@Customer, you can run applications and middleware systems to provide a consistent user experience and simplify IT administration across your distributed cloud architecture. You can run the same application stack everywhere and centrally manage it without needing experts in every location.
10:52 Nikita: Is there a way to make running your Oracle databases easier and more cost-effective? Tijo: That's where Oracle Exadata Cloud@Customer comes in. Oracle Exadata Cloud@Customer combines the performance of Oracle Exadata with the simplicity, flexibility, and affordability of a managed database service delivered in your own data center. It is the simplest way to move your current Oracle databases to the cloud, because it provides full compatibility with existing Exadata systems and with Exadata Database Service in Oracle Cloud Infrastructure. You can also run the fully managed Oracle Autonomous Database on Exadata Cloud@Customer, combining all the benefits of Exadata with the simplicity of an autonomous cloud service. And when Compute Cloud@Customer is combined with Exadata Cloud@Customer, you can run full-stack applications completely in your own data center. Applications use the same high-performance OCI compute and database services you get in the cloud, so you don't have to change the way you architect or deploy them. 12:09 Nikita: So, what you're saying is that Oracle Dedicated Region and Cloud@Customer bring OCI services into your data center. Tijo: Yes. They enable you to run applications faster using the same high-performance capabilities and autonomous operations. You get all of this while maintaining complete control of your data, so you can address data residency, security, and connectivity concerns. 12:35 Lois: Ok. We've talked about where OCI runs. Now, David, let's get into what it actually does. David: OCI Compute lets you run business applications on demand without buying or managing physical servers. You choose the type and size of the virtual machine you want, and OCI handles the rest. Need more power for peak traffic? OCI can automatically add capacity and scale it back down afterwards.
In addition to virtual machines, bare metal servers are also available for ultra-high-performance jobs like simulations, AI, or high-speed trading. Every business stores data, but not all data needs the same kind of storage. OCI gives you options: fast block storage for your compute servers, which works just like the hard drive in your home computer; shared file storage for applications and microservices; large-scale object storage for backups, videos, or other data; and low-cost, long-term storage for object archives. The system even moves rarely used data to cheaper storage automatically. 13:51 Lois: Given Oracle's expertise in databases, what are some of the database options businesses can access with OCI? David: Oracle Autonomous Database automatically patches, tunes, and scales itself. Need raw power? Use Oracle Exadata, or go open source with MySQL HeatWave, which can be used for real-time analytics. With these and many other database options, you get high performance, automation, and reliability, all on demand. 14:24 Nikita: With so many database options, how is everything kept connected and running smoothly on OCI? David: Every cloud service relies on a fast, secure network. OCI's Virtual Cloud Network acts like your own private data highway. You control how traffic flows between your apps, your people, and your regions. Need private, direct connections to your data center or office? Use OCI FastConnect to bypass the public internet. OCI networking provides high-speed performance with enterprise-grade security designed for global business. 15:05 Lois: And what security services does Oracle provide? David: OCI doesn't treat security as an optional add-on. When you sign up for OCI, your environment is isolated, your data is encrypted, and admin actions are logged. And there are many security services.
Identity and Access Management for handling users and permissions, Cloud Guard to detect threats and misconfigurations, OCI Vault for managing your encryption keys, Data Safe to monitor sensitive data access, as well as many others you can leverage to meet any government or business compliance requirement. All of these are included in OCI, with no need to stitch together third-party tools. 15:55 Lois: What if I want to see what's going on in my environment? David: OCI has monitoring services for metrics, logging services for real-time insights, tracing for distributed applications, and alarms to notify you when things go sideways. All of these services are integrated, so you can see what matters when you need it, without all the noise. 16:23 Nikita: David, let's say someone wants to build and deploy an app. What services does OCI offer them? David: OCI provides numerous developer services for your teams to build apps or digital tools. OCI DevOps supports automated builds and deployments. OCI Container Engine for Kubernetes helps run microservices. OCI Functions supports serverless code that runs on demand. All of this works with familiar languages and frameworks. In short, OCI gives developers what they need to build, test, and deliver quickly without having to manage infrastructure. 17:03 Nikita: How does OCI make it easier for companies to bring their apps together and use AI, even if they don't have a dedicated AI team? David: Modern businesses run dozens of apps, and OCI helps you connect them with Oracle Integration Cloud. With OIC, you can integrate SaaS applications as well as on-premises apps and systems, automate business processes and workflows, route and transform messages, and even expose key services as APIs so partners and systems can interact securely. OCI integration is the glue that holds modern IT together. OCI also helps you turn data into decisions without needing an AI team.
Use ready-made AI tools for language translation, image recognition, document understanding, speech transcription, and more. Or build your own models with the Data Science and Data Flow services. It's all designed to bring machine learning into reach for every business. 18:10 Lois: Thank you, David and Tijo, for joining us on this episode of the Oracle University Podcast. If you want to learn more about OCI, visit mylearn.oracle.com and search for the Cloud Business Jumpstart course. Nikita: Next week, we'll look at why businesses choose OCI and how they're using OCI services to create real outcomes. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 18:38 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Rajeev Rajan (CTO @ Atlassian) shares the leadership playbook he used to transform Atlassian's engineering culture, and how that cultural foundation directly powered the build and launch of Rovo (Atlassian's new AI-powered app). We cover how they reduced ship time from 120 days to zero, why "developer joy" is the metric that matters, and how to create a community of developer productivity champions to scale DevEx transformation. Rajeev also breaks down his principles for systematizing autonomy and empowerment, including frameworks for giving direct reports more ownership. Plus, a look at the future of Atlassian's "Systems of Work"!

ABOUT RAJEEV RAJAN
Rajeev Rajan is the Chief Technology Officer (CTO) at Atlassian. Rajeev joined the company in May 2022 and is responsible for Atlassian Engineering, IT, Security and Trust, and the Engineering Operations teams. His focus areas include the company's continued transformation to Cloud, Developer Platform, and Product lines. Additionally, he is passionate about continuing to develop Atlassian's world-class engineering organization and making it a top choice for aspiring engineering talent worldwide.

A long-time resident of Washington state, Rajeev previously acted as the Vice President and Head of Engineering for Facebook and Head of Office for Meta in the Pacific Northwest Region. Prior to Meta, Rajeev spent more than two decades with Microsoft, first joining as an intern in 1994. During his time there, he worked on many products, culminating in Office 365, where he built and led the team responsible for all of the Cloud Infrastructure for Office 365.

Rajeev is married with two children and has a spunky yellow lab named Rayna. He is very involved in and passionate about a number of efforts that uplift the local community, ranging from the arts to STEM programs.
SHOW NOTES:
The "Listening Tour": Grounding leadership in reality and identifying friction points (3:52)
The Confluence Editor story: Reducing ship time from 120 days to 0 (6:26)
Moving beyond productivity: Why "Developer Joy" is the metric that matters (8:45)
Creating a community of Developer Productivity Champions and the power of a Productivity Summit (13:44)
Elevating productivity to a company-level OKR and measuring qualitative sentiment (17:12)
Leadership framework: Deciding when to "manage through people" vs. "manage through process" (19:05)
How to give more direct ownership / responsibility to a DRI (23:03)
Alignment conversations about prioritizing developer joy & productivity (24:22)
Challenges faced during Atlassian's developer joy transformation journey (26:23)
How the "Developer Joy" foundation enabled building Rovo in just 6 months (30:02)
The "System of Work": Expanding Jira's utility beyond engineering to finance, marketing, and legal (33:22)
Rapid Fire Questions (40:48)

This episode wouldn't have been possible without the help of our incredible production team:
Patrick Gallagher - Producer & Co-Host
Jerry Li - Co-Host
Noah Olberding - Associate Producer, Audio & Video Editor https://www.linkedin.com/in/noah-olberding/
Dan Overheim - Audio Engineer; Dan's also an avid 3D printer - https://www.bnd3d.com/
Ellie Coggins Angus - Copywriter; check out her other work at https://elliecoggins.com/about/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In today's Cloud Wars Minute, I look at how Google Cloud is reshaping the defense tech landscape.

Highlights
00:04 — Google Cloud has announced a multi-million dollar contract with the NATO Communication and Information Agency (NCIA) to provide critical sovereign cloud capabilities. This new strategic partnership aims to enhance NATO's digital infrastructure. The NCIA will utilize Google Distributed Cloud, or GDC, to support its Joint Analysis, Training, and Education Center, or JATEC.
00:39 — One of the key features it will employ is Google Distributed Cloud (GDC) Air-Gapped, an essential component of Google's sovereign cloud solutions. The feature allows the delivery of cloud services and AI capabilities to disconnected, fully secure environments.
00:56 — Tara Brady, President of Google Cloud EMEA, said the following: ". . . This partnership will enable NATO to decisively accelerate its digital modernization efforts while maintaining the highest levels of security and digital sovereignty."
01:38 — For Google Cloud, this development represents significant progress in expanding its presence within the defense industry, a sector long led by AWS and Microsoft. It also emphasizes growing confidence in Google's sovereign cloud offerings and highlights the increasingly complex and competitive nature of the cloud market. Visit Cloud Wars for more.
On this episode of The Six Five Pod, hosts Patrick Moorhead and Daniel Newman discuss the tech news stories that made headlines this week. The handpicked topics for this week are:
AMD Financial Analyst Day Breakdown: AMD presents long-term growth projections with over 35% revenue CAGR. Pat & Dan discuss AMD's 10-15% GPU market share projection, emphasizing Lisa Su's track record of execution and credibility.
SoftBank's Strategic Repositioning: SoftBank sold its entire stake in Nvidia for US$5.83 billion ($8.9 billion). Masayoshi Son, Chairman of Japan's SoftBank Group, plans to reallocate capital to OpenAI and other AI infrastructure investments. Hosts discuss the potential of ARM-based AI chip development.
Anthropic's Infrastructure Investment: New $50 billion data center construction commitment with FluidStack. Claude Code is driving significant revenue and a path to 2028 profitability. Comparison with OpenAI's infrastructure strategy and independence goals.
Cloud Infrastructure and Capacity Deals: Nebius secures $3 billion deal with Meta for GPU capacity. Meta's strategy of risk-sharing and outsourcing during demand peaks.
The Depreciation Debate: Patrick argues for a 6-year depreciation period for GPUs based on historical usage patterns, citing continued use of A-, V-, and H-series GPUs. Questions are raised about reticle limits and performance scaling sustainability.
Government Shutdown Resolution: Senate votes to reopen government after 43-day closure, leaving in its wake an estimated $11 billion permanent economic loss and $16 billion in missed wages. Hosts break down the market's mixed response, with AI sector concerns overshadowing the reopening.
Cisco Earnings Analysis: Beat on revenue and earnings with solid enterprise performance. AI infrastructure orders are expected to triple to $3 billion in 2026. Hyperscale AI orders are at $1.3 billion with a strong growth trajectory.
CoreWeave Market Position: Stock down 33% from three-month peak, but still up 16% over six months. Data center build-out delays appear to be impacting capacity and revenue projections. Applied Materials Performance: Beat expectations despite revenue decline from the China market loss. Future growth potential from TSMC, Intel, and Samsung US expansion. For a deeper dive into each topic, please click on the links above. Be sure to subscribe to The Six Five Pod so you never miss an episode.
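The depreciation debate above is, at bottom, straight-line accounting applied to GPU fleets. As a hedged illustration (the dollar figures below are invented for the example, not taken from the episode), here is how the assumed useful life changes the annual expense:

```python
def straight_line_schedule(cost: float, years: int) -> list[float]:
    """Book value at the end of each year under straight-line depreciation."""
    annual = cost / years  # equal expense recognized each year
    return [round(cost - annual * (y + 1), 2) for y in range(years)]

# Hypothetical $30,000 GPU server: a 6-year life halves the annual
# depreciation expense compared with a 3-year life.
six_year = straight_line_schedule(30_000, 6)    # $5,000/year expense
three_year = straight_line_schedule(30_000, 3)  # $10,000/year expense
print(six_year)    # → [25000.0, 20000.0, 15000.0, 10000.0, 5000.0, 0.0]
print(three_year)  # → [20000.0, 10000.0, 0.0]
```

A longer assumed life lowers reported annual cost, which is why the question of whether A-, V-, and H-series GPUs really keep earning revenue into year six matters so much for cloud providers' margins.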
In this conversation, Frank Verdeja shares his extensive experience in e-commerce and data management, discussing the importance of bridging the communication gap between technical teams and business stakeholders. He emphasizes the role of data integrity in e-commerce and the growing significance of data management systems in the age of AI. Frank expresses his enthusiasm for discovering new e-commerce companies and supporting startups in the Minneapolis area.

Takeaways
Frank has 13 years of experience in e-commerce and data management.
He emphasizes the importance of context in communication between technical and business teams.
Data integrity is crucial for businesses of all types.
Frank's company focuses on data governance and observability.
He believes that understanding the value of data helps prioritize tasks.
The rise of AI has made data management systems more important than ever.
Customers prefer to own their data and need suitable platforms.
Frank enjoys learning about new e-commerce companies in the Twin Cities.
He has a passion for supporting startups and new players in the market.
The conversation highlights the intersection of technology and business in e-commerce.

Chapters
00:00 Introduction to E-commerce and Data Management
02:57 Bridging the Gap: Technology and Business Communication
05:33 Data Integrity and Its Role in E-commerce
06:06 The Future of E-commerce and Data Management
06:06 Outtro
This week on The Data Stack Show, Alexander Patrushev joins John to share his journey from working on mainframes at IBM to leading AI infrastructure innovation at Nebius, with stops at VMware and AWS along the way. The discussion explores the evolution of AI and cloud infrastructure, the five pillars of successful machine learning projects, and the unique challenges of building and operating modern AI data centers—including energy consumption, cooling, and networking. Alexander also delves into the practicalities of infrastructure as code, the importance of data quality, and offers actionable advice for those looking to break into the AI field. Key takeaways include the need for strong data foundations, thoughtful project selection, and the value of leveraging existing skills and tools to succeed in the rapidly evolving AI landscape. Don't miss this great conversation.

Highlights from this week's conversation include:
Alexander's Background and Early Career at IBM (1:06)
Moving From Mainframes to Virtualization at VMware (4:09)
Transitioning to AWS and Machine Learning Projects (8:22)
What Was Missed From Mainframes and the Rise of Public Cloud (9:03)
Security, Performance, and Economics in Cloud Infrastructure (12:40)
The Five Pillars of Successful Machine Learning Projects (15:02)
Choosing the Right ML Project: Data, Impact, and Existing Solutions (18:01)
Real-World AI and ML Use Cases Across Industries (19:42)
Building Specialized AI Clouds Versus Hyperscalers (22:08)
Performance, Scalability, and Reliability in AI Infrastructure (25:18)
Data Center Energy Consumption and Power Challenges (28:41)
Cooling, Networking, and Supporting Systems in AI Data Centers (30:06)
Infrastructure as Code and Tooling in AI (31:50)
Lowering Complexity for AI Developers and the Role of Abstraction (34:08)
Startup Opportunities in the AI Stack (38:53)
When to Fine-Tune or Post-Train Foundation Models (43:41)
Comparing and Testing Models With Tool Use (47:49)
Skills and Advice for Entering the AI Field (49:18)
Final Thoughts and Encouragement for AI Newcomers (52:31)

The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences. Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack, visit rudderstack.com. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Subscribe to the SmartTechCheck newsletter:
LinkedIn: https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=6891547330575679488
Medium: https://markvena.medium.com/
Subscribe to @SmartTechCheck for weekly podcast upload reminders: https://www.youtube.com/SmartTechCheck
Follow Mark Vena on Twitter: https://twitter.com/MarkVenaTechGuy
Follow Rob Pegoraro on Twitter: https://twitter.com/RobPegoraro
Follow John Quain on Twitter: https://twitter.com/jqontech
Follow Stewart Wolpin on Twitter: https://twitter.com/stewartwolpin
https://open.spotify.com/show/7CxF4cT2AYCbzA8trCPnAl
Build and run everything from simple web apps to AI supercomputing by matching each workload to the right Azure VM in minutes. Know exactly what you're provisioning by understanding the naming format, which reveals CPU type, memory, storage, and features before deployment, so you can match what your app or workload needs. Use free tools like Azure Migrate to right-size and plan. Matt McSpirit, Microsoft Azure expert, shows how to choose, size, and deploy workloads such as burstable web apps, massive in-memory databases, GPU-driven AI training, and high-performance scientific modeling, all with automatic scaling and confidential computing when needed. ► QUICK LINKS: 00:00 - Azure Virtual Machines 01:12 - Decode Azure VM Names 01:28 - Right-Size with Azure Migrate 02:15 - B series 02:45 - D series 03:23 - E series 04:14 - F series 04:29 - L series 05:01 - M series 05:23 - Constrained vCPU VMs 05:49 - H series 06:20 - N series 06:55 - Azure Boost 07:24 - Confidential VMs & Deploying your VMs 08:28 - Wrap up ► Link References Get started at https://aka.ms/VMAzure Azure VM naming conventions at https://aka.ms/VMnames ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
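The naming convention the episode decodes can also be decoded mechanically. Below is a minimal sketch, assuming a simplified subset of the Azure VM size format (Standard_ prefix, family letters, vCPU count, lowercase additive-feature letters such as s for premium storage, and a version suffix); the feature meanings are illustrative rather than an authoritative list, and names with accelerator segments like Standard_NC24ads_A100_v4 are out of scope for this toy parser:

```python
import re

# Approximate meanings of common additive-feature letters
# (illustrative subset, not an exhaustive or official mapping).
FEATURE_FLAGS = {
    "s": "premium storage capable",
    "d": "local temp disk",
    "a": "AMD-based processor",
    "m": "memory intensive",
    "p": "ARM-based processor",
    "l": "low memory",
}

def parse_vm_name(name: str) -> dict:
    """Split a simple Azure VM size name into family, vCPUs, features, version."""
    m = re.match(r"^Standard_([A-Z]+)(\d+)([a-z]*)(?:_(v\d+))?$", name)
    if not m:
        raise ValueError(f"unrecognized VM size name: {name}")
    family, vcpus, feats, version = m.groups()
    return {
        "family": family,
        "vcpus": int(vcpus),
        "features": [FEATURE_FLAGS.get(f, f) for f in feats],
        "version": version or "v1",  # no suffix means the original version
    }

info = parse_vm_name("Standard_D4s_v5")
print(info)  # family D, 4 vCPUs, premium storage capable, v5
```

Used this way, a name like Standard_E16ds_v5 reads off as E family (memory optimized), 16 vCPUs, local temp disk plus premium storage, version 5, which is exactly the "know what you're provisioning before deployment" habit the episode recommends.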
• Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
• Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
• Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast

► Keep getting this insider knowledge, join us on social:
• Follow us on Twitter: https://twitter.com/MSFTMechanics
• Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
• Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
• Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
This interview was recorded for GOTO Unscripted.
https://gotopia.tech
Read the full transcription of this interview here

Brooklyn Zelenka - Author of Numerous Libraries Including Witchcraft & Founded the Vancouver Functional Programming Meetup
Julian Wood - Serverless Developer Advocate at AWS

RESOURCES
Brooklyn
https://bsky.app/profile/expede.wtf
https://octodon.social/@expede
@expede@types.pl
https://github.com/expede
https://www.linkedin.com/in/brooklynzelenka
https://notes.brooklynzelenka.com

Julian
https://bsky.app/profile/julianwood.com
https://twitter.com/julian_wood
http://www.wooditwork.com
https://www.linkedin.com/in/julianrwood

Links
https://automerge.org
https://discord.com/invite/zKGe4DCfgR
https://www.robinsloan.com/notes/home-cooked-app
https://github.com/ipvm-wg
https://www.localfirst.fm
https://localfirstweb.dev

DESCRIPTION
Distributed systems researcher Brooklyn Zelenka unpacks the paradigm shift of local-first computing, where applications primarily run on users' devices and synchronize seamlessly without central servers. In a conversation with Julian Wood, she explains how this approach reduces latency, enables offline functionality, improves privacy through encryption, and democratizes app development—all while using sophisticated data structures.
Perfect for collaborative tools and "cozy web" applications serving smaller communities, local-first software represents a fundamental rethinking of how we've built software for the past 30 years.

RECOMMENDED BOOKS
Ford, Parsons, Kua & Sadalage • Building Evolutionary Architectures, 2nd Edition
Ford, Richards, Sadalage & Dehghani • Software Architecture: The Hard Parts
Mark Richards & Neal Ford • Fundamentals of Software Architecture
Ford, Parsons & Kua • Building Evolutionary Architectures
Neal Ford • Functional Thinking
Michael Feathers • Working Effectively with Legacy Code

CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks:
https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket: gotopia.tech

SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
AI in software development sounds like a dream: faster coding, cleaner refactoring, and technical reports that actually make sense to stakeholders. But what's the bad news in the classic good-news/bad-news scenario? Poisoned training data, compliance risks, and systems that are brittle and will not scale. This week on Feds At The Edge, Alex Gromadzki, Assistant Director of Data Science at US GAO, and Steven Toy, Senior Director of Cloud Infrastructure for ICF, unpack the opportunities and pitfalls of generative AI in federal software development. From source-citing AI to data security in the software lifecycle, they reveal why small, testable use cases may be the smartest way forward. Listen now on your favorite podcast platform to hear how federal leaders can balance innovation with responsibility as AI reshapes the software development life cycle.
Railway is a software company that provides a popular platform for deploying and managing applications in the cloud. It automates tasks such as infrastructure provisioning, scaling, and deployment and is particularly known for having a developer-friendly interface. Jake Cooper is the Founder and CEO at Railway. He joins the show to talk about the company. The post Streamlining Cloud Infrastructure Deployments with Jake Cooper appeared first on Software Engineering Daily.
Oracle, Microsoft, AWS, and Google Cloud. Oracle is the newest member of that elite club and recently claimed that none of the others can match it in the cloud. This is a timely topic, as business leaders accelerate their move into the cloud to harness AI and innovation.
01:00 — Recently, Oracle Cloud Infrastructure President Clay Magouyrk stated that Oracle is the only hyperscaler able to deliver more than 200 cloud and AI services across every deployment model. Google Cloud said it's no longer about the number of services or deployment models; it's about using digital tech to help customers build their futures.
02:02 — In the next week or so, I'll be sharing Google Cloud's full point of view and how they believe customers should really be evaluating cloud infrastructure providers. It's a smart and healthy debate, one that gives customers visibility into how these tech giants are evolving. Which are truly keeping up with what customers value most today?
02:45 — I give Clay Magouyrk credit for opening this discussion and Google Cloud credit for engaging with it meaningfully. I'm leaving the door open for Microsoft and AWS if they'd like to join the conversation. These debates are good for everyone because they help clarify what matters. Visit Cloud Wars for more.
Chris Adams is joined by Adrian Cockcroft, former VP of Cloud Architecture Strategy at AWS, a pioneer of microservices at Netflix, and contributor to the Green Software Foundation's Real Time Cloud project. They explore the evolution of cloud sustainability—from monoliths to microservices to serverless—and what it really takes to track carbon emissions in real time. Adrian explains why GPUs offer rare transparency in energy data, how the Real Time Cloud dataset works, and what's holding cloud providers back from full carbon disclosure. Plus, he shares his latest obsession: building a generative AI-powered house automation system using agent swarms.
In this episode of the Sales Success Stories podcast, host Scott Ingram interviews Samir Dandekar, a seasoned sales professional from Oracle's Cloud Infrastructure team who recently closed a staggering $150 million+ deal. Learn more at Top1.FM
In this episode we interview Santosh Kaveti, CEO of ProArch, about his journey from India to Atlanta, his educational background, and his career in technology. Santosh shares insights into his family life, early education, and the challenges he faced along the way. He discusses his passion for mathematics and how it influenced his career choices, ultimately leading him to the field of software engineering and digital infrastructure.

00:00 Introduction
00:20 What is Santosh Doing Today?
06:00 First Memory of a Computer
09:00 Early Interests / Education
20:00 Moving to Utah
24:00 Pursuing a Masters
32:00 First Programming Job
42:00 Deciding to Start a Company
53:00 Formulating Strategy
58:00 Acquiring Companies / Expansion
1:20:00 Finding Clients While Growing
1:25:00 Contact Info

Connect with Santosh:
LinkedIn: https://www.linkedin.com/in/santoshkaveti
X: https://x.com/santoshkaveti

Mentioned in this Episode:
ProArch: https://www.proarch.com

Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
Github: https://github.com/ardanlabs
Discover how Rackspace Spot is democratizing cloud infrastructure with an open-market, transparent option for cloud servers. Kevin Carter, Product Director at Rackspace Technology, discusses Rackspace Spot's hypothesis and the impact of an open marketplace for cloud resources. Discover how this novel approach is transforming the industry.

TIMESTAMPS
[00:00:00] – Introduction & Kevin Carter's Background
[00:02:00] – Journey to Rackspace and Open Source
[00:04:00] – Engineering Culture and Pushing Boundaries
[00:06:00] – Rackspace Spot and Market-Based Compute
[00:08:00] – Cognitive vs. Technical Barriers in Cloud Adoption
[00:10:00] – Tying Spot to OpenStack and Resource Scheduling
[00:12:00] – Product Roadmap and Expansion of Spot
[00:16:00] – Hardware Constraints and Power Consumption
[00:18:00] – Scrappy Startups and Emerging Hardware Solutions
[00:20:00] – Programming Languages for Accelerators (e.g., Mojo)
[00:22:00] – Evolving Role of Software Engineers
[00:24:00] – Importance of Collaboration and Communication
[00:28:00] – Building Personal Networks Through Open Source
[00:30:00] – The Power of Asking and Offering Help
[00:34:00] – A Question No One Asks: Mentors
[00:38:00] – The Power of Educators and Mentorship
[00:40:00] – Rackspace's OpenStack and Spot Ecosystem Strategy
[00:42:00] – Open Source Communities to Join
[00:44:00] – Simplifying Complex Systems
[00:46:00] – Getting Started with Rackspace Spot and GitHub
[00:48:00] – Human Skills in the Age of GenAI - Post Interview Conversation
[00:54:00] – Processing Feedback with Emotional Intelligence
[00:56:00] – Encouraging Inclusive and Clear Collaboration

QUOTES
CHARNA PARKEY
“If you can't engage with this infrastructure in a way that's going to help you, then I guarantee you it's not up to par for the direction that we're going. [...] This democratization — if you don't know how to use it — it's not doing its job.”

KEVIN CARTER
“Those scrappy startups are going to be the ones that solve it.
They're going to figure out new and interesting ways to leverage instructions. [...] You're going to see a push from them into the hardware manufacturers to enhance workloads on FPGAs, leveraging AVX 512 instruction sets that are historically on CPU silicon, not on a GPU.”
We are pleased to announce the general availability of the NCI Compute product, enabling Nutanix customers to leverage external storage as part of their NCP deployments. The first supported solution at launch is the Nutanix Cloud Platform with Dell PowerFlex, a complete server-based infrastructure solution designed for large mission-critical environments where resiliency, security, scalability, and performance are essential.

Blog Post: https://www.nutanix.com/blog/nutanix-cloud-infrastructure-for-external-storage
Host: Phil Sellers, XenTegra
Co-Host: Chris Calhoun, XenTegra
Co-Host: Jirah Cox, Nutanix
Co-Host: Ben Rogers, Nutanix
Host Anne Currie is joined by the esteemed Charles Humble, a well-known figure in the world of sustainable technology. Charles is a writer, podcaster, and former CTO with a decade's experience helping technologists build better systems, both technically and ethically. Together, they discuss how developers and companies can make smarter, greener choices in the cloud and the trade-offs that should be considered, along with the road that led to the present state of generative AI, the effect it has had on the planet, and their hopes for a more sustainable future.
At Google Cloud Next 2025, Google Cloud VP and CTO Will Grannis joins Bob Evans to explore how AI is reshaping enterprise technology. Grannis shares how Google Cloud's OCTO team works with customers on complex challenges, using DeepMind research, next-gen TPUs, and AI-native infrastructure, while noting the fading line between B2B and B2C and the cultural changes needed to adapt.

Inside Google Cloud's AI Strategy

Google Cloud Is AI-Native at Its Core: Grannis says that Google Cloud's approach to AI is foundational. The organization's mindset, shaped by Google's long-standing leadership in AI, infuses every layer of its stack, from infrastructure to user interfaces. With a legacy of deploying machine learning at scale for over a decade, Google Cloud doesn't just offer AI tools; it helps customers reimagine their businesses through AI-native thinking, drawing on DeepMind research and innovations born across Google's consumer ecosystem.

The OCTO Team Solves the Hardest Problems with Customers: Grannis leads the Office of the CTO (OCTO), a team he jokingly calls “the nerdy Navy SEALs.” They tackle highly complex, unsolved customer challenges that can't be addressed by existing products. Rather than building solutions in isolation, they co-create alongside customers, starting with business outcomes and designing backward.

Multi-Modality and Multi-Agent Systems Are the Future: Looking ahead, Grannis predicts that multi-modal AI, i.e. models that process images, text, speech, and even scent, will become the standard. He also foresees a shift from single-function agents to “agentic workflows” powered by multiple orchestrated AI agents. Google is prototyping orchestration with projects like Astra that signal a future where AI is not only intelligent but contextually aware and collaborative.

The Big Quote: “People . . . spend a lot of time just trying to take a PDF and analyze it. It seems very true. It is a pain . . . I think that's one reason why a NotebookLM or a product like that has been so popular, because it really attacks the heart of what people hate doing at work. [AI] puts them in the driver's seat. They can ask questions, they can do analysis.”

Learn more: Check out OCTO, NotebookLM, and Google Cloud.
In this episode of the Tyler Tech Podcast, we explore how cloud technology helps governments build greater resilience, maintain continuity, and adapt to evolving risks. Russell Gainford, chief technology officer at Tyler, joins us to discuss how the cloud delivers the scalability, flexibility, and reliability agencies need to keep critical services running, even in the face of disruption. From cybersecurity threats to natural disasters and unexpected system demands, cloud-based infrastructure empowers governments to respond quickly and recover confidently.

Throughout the conversation, Russell shares insights on the limitations of traditional on-premises environments, the growing importance of proactive risk planning, and how cloud solutions help reduce technical debt while improving operational agility. He also offers best practices for building a roadmap to resilience, including how to prioritize critical systems, plan for dependencies, and make smart investments over time.

Tune in to learn how modern cloud strategies are helping government agencies strengthen resilience, improve service delivery, and prepare for the unexpected. This episode also highlights Digital Access and Accessibility in the Resident Experience, a new white paper exploring how public sector organizations can remove barriers and create more inclusive digital services. As governments continue to expand digital offerings, ensuring a seamless, user-friendly experience is more important than ever.

Download: Digital Access and Accessibility: Creating a Better Resident Experience

And learn more about the topics discussed in this episode with these resources:
Download: Building a Resilient Government
Download: A Digital Guide to Modernizing the Resident Experience
Download: Cloud-Smart Strategies for IT Infrastructure Modernization
Blog: How Cloud-Based Solutions Expand Access to State Services
Blog: Using Cloud-Based Solutions to Improve Access in Counties
Blog: The Cloud Experience: Improving Government Services
Blog: Future-Proofing Government Through Technology Modernization
Video: Tyler Talks: The Cloud is Now
Video: 30 Years of Data Moved to Cloud in 5 Days
Video: Increase Efficiency With the Cloud
Podcast: Cloud Adoption and Understanding the Risks of Legacy Systems

Listen to other episodes of the podcast. Let us know what you think about the Tyler Tech Podcast in this survey!
March 18, 2025: Today on TownHall, Sue Schade, Principal at StarBridge Advisors, talks with Sonney Sapra, SVP and CIO at Samaritan Health Services. Sonney discusses his nearly four-year journey at Samaritan Health, detailing the organization's focus on technological innovation, financial sustainability, and population health. Why move to a fully cloud-based infrastructure? How can AI and ambient listening technologies transform healthcare? Sonney shares insights into these questions and more, including integrating health plan data to improve patient outcomes. He also delves into workforce management in a remote and hybrid environment and emphasizes the importance of understanding organizational needs in vendor partnerships. Finally, Sonney discusses his future role as a TownHall moderator in 2025 and his passion for sharing knowledge within the CIO community.

Subscribe: This Week Health
Twitter: This Week Health
LinkedIn: This Week Health
Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer
In this Tech Barometer podcast interview with Induprakas Keri, senior vice president and general manager for hybrid multicloud at Nutanix,...[…]
In this episode, Anurag Goel, founder and CEO of Render, explores the challenges of competing with major cloud providers, the evolution of cloud infrastructure, and Render's mission to simplify DevOps complexities. He shares insights from his journey, including his early programming experiences in India and the importance of fostering a supportive environment for success. Anurag reflects on his time at various companies, including startups and his pivotal role in Stripe's early development. He also emphasizes the significance of developer relations and the need for flexible product offerings to accommodate diverse customer needs.

00:00 Introduction
00:30 What is Anurag Doing Today?
07:30 Cloud Infrastructure and Render
20:00 First Memory of a Computer
22:00 Education in India
34:00 Early Career and Growth
44:30 The Rise of Stripe
1:00:00 Building Render
1:11:30 The Importance of Pricing in Cloud Services
1:14:15 Streamlined Deployment and Ops
1:27:25 Contact Info

Connect with Anurag:
Email: anurag@render.com
LinkedIn: https://www.linkedin.com/in/anuragoel/
X: https://x.com/anuraggoel

Mentioned in this Episode:
Render: https://render.com/
Stripe: https://stripe.com/

Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
GitHub: https://github.com/ardanlabs
The rapid expansion of data centers is reshaping the industry, requiring new approaches to design, safety, and leadership. We're excited to have Doug Mouton, former Senior Eng Lead, Datacenter Design Engineering and Construction at Meta, as a guest on this latest episode of the “Data Center Revolution” podcast. Doug joins us with key insights into leadership, adaptability, and the evolution of hyperscale data-center construction. He also shares his journey from military service to leading large-scale infrastructure projects in the data center industry, highlighting key transferable skills along the way.

Key Takeaways:
(07:54) Military mindset builds strong leaders.
(14:25) Veterans thrive in high-pressure environments.
(25:32) Katrina exposed disaster preparedness gaps.
(35:16) Microsoft shifted to cost-effective data center designs.
(43:56) Data centers face growing energy challenges.
(54:26) Safety-first culture boosts efficiency and morale.
(01:21:43) Data centers must transition to hybrid cooling solutions.
(01:42:09) AI needs ethical guardrails.

Resources Mentioned:
Fidelis New Energy | Website - https://www.fidelisinfra.com
Microsoft Azure - https://azure.microsoft.com/en-us/
Meta - https://about.meta.com/
Jacobs - https://www.jacobs.com/
National Guard - https://nationalguard.com/
Jones Lang LaSalle - https://www.us.jll.com/

Thank you for listening to “Data Center Revolution.” Don't forget to leave us a review and subscribe so you don't miss an episode. To learn more about Overwatch, visit us at https://linktr.ee/overwatchmissioncritical

#DataCenterIndustry #NuclearEnergy #FutureOfDataCenters #AI
In this episode of Automox's CISO IT podcast, host Jason Kikta delves into the world of servers, exploring their critical role in modern internet operations, the evolution of server technology, and the significance of Linux in the server landscape. He emphasizes the complexity of server management and the importance of reliability, highlighting how servers are often under-appreciated despite their foundational role in business operations.
On this month's episode of Unscripted Leadership, Comcast Business VP Heather Orrico is joined by Greg Hassler, VP of Network Technology and Cloud Infrastructure with the PGA Tour. Greg shares his journey that brought him to the PGA Tour almost 30 years ago and his philosophy about strong and effective management. He also explains how the […]
Join Sly Gittens for another episode of Women in Technology as we explore the career journey of Denise Holland, a Senior Technical Program Manager with 26 years of experience in IT. From network acceleration to cloud infrastructure, Denise discusses how her technical expertise and passion for innovation have shaped her success. Tune in for insights on leadership, continuous improvement, and what it takes to thrive in a male-dominated industry.
In this episode of ChainLeak, we explore "StorX Network: DePIN vs. Big Tech" with special guest Murphy John. Dive into the world of StorX, a decentralized cloud storage network powered by XDC, challenging Big Tech's cloud monopoly through DePIN (Decentralized Physical Infrastructure Networks).

Episode Highlights:
StorX's decentralized cloud model
Data privacy benefits in StorX
StorX vs. Big Tech cloud storage providers
Community-operated nodes in StorX
StorX and XDC blockchain
Scalability in decentralized storage
StorX's open-source ecosystem
DePIN's impact on tech infrastructure
The future of DePIN in cloud storage

StorX Network is a decentralized cloud storage platform designed to offer secure, private, and efficient data storage by leveraging blockchain technology. Unlike traditional cloud providers that store data on centralized servers, StorX distributes encrypted data fragments across a network of community-operated nodes. This decentralized approach enhances data security, prevents single points of failure, and protects user privacy by ensuring that no single entity has full control over stored information. Built on the XDC blockchain, StorX provides a trustless and censorship-resistant storage solution, empowering users to store and retrieve data seamlessly while maintaining full ownership and control.

StorX is also a part of the emerging DePIN (Decentralized Physical Infrastructure Networks) movement, which aims to disrupt centralized tech monopolies by enabling users to contribute resources to a distributed network. Node operators on StorX can earn incentives for hosting data, making it a community-driven alternative to traditional cloud giants like AWS, Google Cloud, and Microsoft Azure. With a strong emphasis on transparency, scalability, and open-source collaboration, StorX is paving the way for the future of decentralized cloud storage, where users have more autonomy over their digital assets.

Connect with StorX Network:
Twitter: https://twitter.com/StorxNetwork
Website: https://StorX.Tech

Connect with ChainLeak:
Twitter: https://twitter.com/ChainLeak
Telegram: https://t.me/ChainLeak
Website: https://ChainLeak.com

Connect with Joshuwa:
Twitter: https://twitter.com/JoshRoomsburg
Telegram: https://t.me/JoshRoomsburg
TikTok: https://tiktok.com/@joshroomsburg

Disclosure: This episode is presented by StorX Network.
https://disclosure.ChainLeak.com
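The encrypt-then-shard idea described above can be sketched in a few lines. This is a conceptual illustration only, not StorX's actual protocol: the XOR "cipher" is a placeholder for real authenticated encryption (e.g. AES-GCM), and the node names and fragment counts are invented for the example.

```python
import secrets

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Placeholder XOR cipher for illustration only; a production
    # system would use authenticated encryption instead.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def shard(data: bytes, n_fragments: int) -> list[bytes]:
    # Split the ciphertext into roughly equal fragments.
    size = -(-len(data) // n_fragments)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def distribute(fragments, nodes):
    # Round-robin placement: no single node ever holds the whole file.
    placement = {node: [] for node in nodes}
    for i, frag in enumerate(fragments):
        placement[nodes[i % len(nodes)]].append((i, frag))
    return placement

key = secrets.token_bytes(32)
ciphertext = toy_encrypt(b"user file contents", key)
placement = distribute(shard(ciphertext, 6), ["node-a", "node-b", "node-c"])

# Reassembly: fetch fragments, restore order, then decrypt with the key
# that only the data owner holds.
ordered = sorted((i, f) for frags in placement.values() for i, f in frags)
recovered = toy_encrypt(b"".join(f for _, f in ordered), key)
assert recovered == b"user file contents"
```

Because each node stores only encrypted fragments, an individual operator can neither read nor reconstruct the file, which is the privacy property the episode highlights.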
Just as a master chef transforms raw ingredients into culinary masterpieces, navigating the cloud requires expertise, precision, and a guiding hand. In this episode, Paul Kocan, VP of Managed Hosting and Craig Briars, Solution Consultant and Sales Director at Americaneagle.com break down the intricacies of cloud and managed hosting from their 2024 Americaneagle.com Customer Forum event presentation. They highlight creative analogies and real-world examples to make cloud hosting both actionable and approachable. This podcast is brought to you by Americaneagle.com Studios. Modern Marketing Messages: Americaneagle.com // Twitter // Instagram // Facebook // YouTube Taylor Karg: LinkedIn Paul Kocan: LinkedIn Craig Briars: LinkedIn Resources: Americaneagle.com Hosting & Managed Cloud Services
See the latest innovations in silicon design from AMD, with new system-on-a-chip high-bandwidth memory breakthroughs delivering up to 7 terabytes of memory bandwidth in a single virtual machine, and see how it's possible to get more than 8x speed-ups in HBv5 without sacrificing compatibility with the previous generation. These use AMD EPYC™ 9004 Processors with AMD 3D V-Cache™ Technology. And find out how Microsoft's own silicon, including custom ARM-based Cobalt CPUs and Maia AI accelerators, delivers performance and power efficiency. Mark Russinovich, Azure CTO, Deputy CISO, Technical Fellow, and Microsoft Mechanics lead contributor, shows how, with workloads spanning Databricks, Siemens, Snowflake, or Microsoft Teams, Azure provides the tools to improve efficiency and performance in your datacenter at hyperscale.

► QUICK LINKS:
00:00 - 7TB memory bandwidth in a single VM
00:51 - Efficiency and optimization
02:33 - Choose the right hardware for workloads
04:52 - Microsoft Cobalt CPUs and Maia AI accelerators
06:14 - Hardware innovation for diverse workloads
07:53 - Speedups with HBv5 VMs
09:04 - Compatibility moving from HBv4 to HBv5
11:29 - Future of HPC
12:01 - Wrap up

► Link References
Check out https://aka.ms/AzureHPC
For more about HBv5 go to https://aka.ms/AzureHBv5

► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
• Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Brad and Amy dive into their year-end tech reflections, discussing goal-setting strategies and Amy's ambitious Build 12 project for 2025. The hosts explore various database hosting solutions, share their favorite hardware purchases including cameras and peripherals, and examine how AI tools are reshaping development workflows. The episode concludes with insights into emerging tech trends and anticipated developments for 2025.

Chapter Marks
00:00 Episode introduction and host intros
00:41 Year-end goals discussion and 12-week planning
02:02 Amy's Build 12 project announcement
03:01 Goal setting strategies and focus
04:25 Brad's 2024 goals review
05:35 Travel plans and New York City trips
06:58 More 2024 goals: fitness, career, and finances
08:21 Technical stack discussion
13:22 AI tools and development workflows
17:19 Database hosting options comparison
25:45 Tech gear and hardware updates
33:47 Notable tech purchases review
43:29 AI tools and future tech discussion

Links
Build Twelve, by Brian P. Moran - Amy's upcoming project
The 12 Week Year (book)
Atomic Habits, by James Clear (book)
The Power of Habit, by Charles Duhigg (book)
Supabase
Neon database
Digital Ocean
Turso
Cursor IDE
Remarkable Tablet (v2)
Oura Ring
Razer Basilisk V3 Pro mouse
Swish app for Mac
Nuphy Air 75 keyboard
Drop keyboard
Insta360 One camera
Insta360 Go 3 camera
Nikon ZFC camera
Ray Deck - Episode 182: Low-Code as a Medium For High-Speed Developers
Marc Lou
Pieter Levels
WorkOS
The Best Way to Add Authentication to Your Astro Website (Amy's YouTube Video)
Comparing Frameworks - Amy's project
GitHub Copilot
Claude
convertkit.com
loops.so
Prisma
Today we're talking about the cloud! In this episode, we discuss Oracle's cloud service, multicloud, what the market needs in the AI era, and, of course, Oracle Next Education, a program that has been changing the lives of tens of thousands of people across Latin America. Here's who joined the conversation:
Paulo Silveira, the host who counts on you, the listener
Amanda Gelumbauskas, Latam Head of Oracle Next Education
Lucas Leung, Latam Marketing Director at Oracle
Paulo Alves, Coordinator of Alura's DevOps school
Matteo Collina and Luca Maraschi join the podcast to talk about Platformatic. Learn about Platformatic's $4.3 million seed round, its robust features and modular approach, and how it addresses the unique challenges faced by devs and enterprises.

Links
https://platformatic.dev/docs/getting-started/quick-start-watt
Matteo Collina: https://nodeland.dev | https://x.com/matteocollina | https://fosstodon.org/@mcollina | https://github.com/mcollina | https://www.linkedin.com/in/matteocollina | https://www.youtube.com/@adventuresinnodeland
Luca Maraschi: https://www.linkedin.com/in/lucamaraschi | https://x.com/lucamaraschi

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com (https://logrocket.com/signup/?pdr).

Special Guests: Luca Maraschi and Matteo Collina.
In this episode of the Ecommerce Toolbox: Expert Perspectives podcast, host Kailin Noivo interviews Chris Meystrik, CTO of Jewelry Television (JTV). Chris draws insights from his extensive career, from early work at WebMD and Oracle to leading JTV's tech-driven ecommerce strategy. He discusses the challenges of integrating live TV broadcasts with ecommerce, emphasizing real-time data flows that support a unique customer experience. Chris also delves into JTV's recent re-platforming efforts and the decision-making process behind taking that step. How do we decide as technology experts when is the right time to re-platform? Tune in to find out the answer to this question and more, as Chris and Kailin investigate the intersection of technology, live TV, and ecommerce, and how JTV leverages these to drive innovation.
From our Sponsors at Simmer: Go to TeamSimmer and use the coupon code DEVIATE for 10% off individual course purchases. The Technical Marketing Handbook provides a comprehensive journey through technical marketing principles. A new course is out now: Chrome DevTools for Digital Marketers.

Latest content from Juliana & Simo
Article: Cookie Access With Shopify Checkout And SGTM by Simo Ahava
Article: Unlocking Real-Time Insights: How does Piwik PRO's Real-Time Dashboarding Feature work? by Juliana Jackson

Also mentioned in the Episode
Stape's blog: https://stape.io/blog
Stape website: https://stape.io
Measure Slack: https://www.measure.chat/
Connect with Denis Golubovskyi

This podcast is brought to you by Juliana Jackson and Simo Ahava. Intro jingle by Jason Packer and Josh Silverbauer.
MRKT Matrix - Wednesday, October 2nd

S&P 500 is little changed as shaky October start continues on escalating Middle East tensions (CNBC)
Saudi Minister Warns of $50 Oil as OPEC+ Members Flout Production Curbs (WSJ)
Tesla's First Quarterly Sales Gain This Year Comes Up Short (Bloomberg)
Bank of America's top ideas for the fourth quarter including Starbucks and Walmart (CNBC)
Ozempic Goes From Threat to Opportunity for Packaged-Food Makers (Bloomberg)
AI Can Only Do 5% of Jobs, Says MIT Economist Who Fears Crash (Bloomberg)
OpenAI closes funding at $157 billion valuation, as Microsoft, Nvidia, SoftBank join round (CNBC)
Equinix signs $15 bln joint venture to build U.S. data center infrastructure (Reuters)
Oracle to Invest $6.5 Billion in AI and Cloud Infrastructure in Malaysia (WSJ)
Microsoft Pitches AI for Consumers Against Crowded Field of Rivals (Bloomberg)

Subscribe to our newsletter: https://riskreversalmedia.beehiiv.com/subscribe

MRKT Matrix by RiskReversal Media is a daily AI powered podcast bringing you the top stories moving financial markets. Story curation by RiskReversal, scripts by Perplexity Pro, voice by ElevenLabs.
In this episode of Crazy Wisdom, Stewart Alsop chats with Ian Mason, who works on architecture and delivery of AI and ML solutions, including LLMs and retrieval-augmented generation (RAG). They explore topics like the evolution of knowledge graphs, how AI models like BERT and newer foundational models function, and the challenges of integrating deterministic systems with language models. Ian explains his process of creating solutions for clients, particularly using RAG and LLMs to support automated tasks, and discusses the future potential of AI, contrasting the hype with practical use cases. You can find more about Ian on his LinkedIn profile.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Guest Welcome
00:32 Understanding Knowledge Graphs
02:03 Hybrid Systems and AI Models
03:39 Philosophical Insights on AI
05:01 RAG and Knowledge Graph Integration
07:11 Challenges in AI and Knowledge Graphs
11:40 Multimodal AI and Future Prospects
13:44 Artificial Intelligence vs. Artificial Linear Algebra
17:50 Silicon Valley and AI Hype
30:44 Defining AGI and Embodied Intelligence
32:29 Potential Risks and Mistakes of AI Agents
35:04 The Role of Human Oversight in AI
38:00 Understanding Vector Databases
43:28 Building Solutions with Modern Tools
46:52 The Future of Solution Development
47:43 Personal Journey into Coding
57:25 The Importance of Practical Learning
59:44 Conclusion and Contact Information

Key Insights

The evolution of AI models: Ian Mason discusses how foundational models like BERT have been overtaken by newer, more capable language models, which can perform tasks that once required multiple models. He highlights that while earlier models like BERT still have their uses, foundational models have simplified and expanded AI's capabilities.

The role of knowledge graphs: Knowledge graphs provide structured, deterministic ways of handling data, which can complement language models. Ian explains that while LLMs are great at articulating responses based on large datasets, they lack the ability to handle logical and architectural connections between pieces of information, which knowledge graphs can provide.

RAG (Retrieval-Augmented Generation) systems: Ian delves into how RAG systems refine AI output by feeding language models relevant data from a pre-searched database, reducing hallucinations. By narrowing down the possible answers and focusing the LLM on high-quality data, RAG ensures more accurate and contextually appropriate responses.

Limitations of language models: While LLMs can generate plausible-sounding responses, they lack deep architectural understanding and can easily hallucinate or provide inaccurate results without carefully curated input. Ian points out the importance of combining LLMs with structured data systems like knowledge graphs or vector databases to ground the output.

Vector databases and embeddings: Ian explains how vector databases, which use embeddings and cosine similarity, are crucial for narrowing down the most relevant data in a RAG system. This modern approach outperforms traditional keyword searches by considering semantic meaning rather than just text similarity.

AI's impact on business solutions: The conversation highlights how AI, particularly through tools like RAG and LLMs, can streamline business processes. For instance, Ian uses AI to automate customer service email drafting, breaking down complex customer queries and retrieving the most relevant answers, significantly improving operational efficiency.

The future of AI in business: Ian believes AI's real-world impact will come from its integration into larger systems rather than revolutionary standalone changes. While there is significant hype around AGI and other speculative technologies, the focus for the near future should be on practical applications like automating business workflows, where AI can create measurable value without over-promising its capabilities.
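The retrieval step Ian describes can be illustrated with a toy sketch: embed documents as vectors, rank them by cosine similarity to the query vector, and hand the top hits to the LLM as context. The three-dimensional "embeddings" and document names below are invented for illustration; a real system would use a learned embedding model and a vector database.

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "embeddings"; names and values are illustrative.
documents = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api authentication": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    # Rank documents by similarity to the query; keep the top k.
    ranked = sorted(documents.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# A query vector "near" the refund topic pulls back refund-related
# context first; that context is then prepended to the LLM prompt.
context = retrieve([0.85, 0.15, 0.05])
```

This is why semantic search beats keyword matching in the RAG setting: two texts with no words in common can still land close together in embedding space.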
Mark Rostick is a Vice President & Senior Managing Director located in Raleigh, NC. He is a voting member of Intel Capital's investment committee. He joined Intel Capital in 1999. Mark also co-manages our Cloud domain investment activities and portfolio. He has deep investment experience in cloud applications, infrastructure hardware and software, as well as AI/ML. As a member of Intel Capital's Investment Committee, he is responsible for approving investments proposed by Intel Capital investors, as well as managing the group's personnel and operations. Mark currently serves as a director or observer on the boards of Beep, RunPod, Hypersonic, Immuta, Lilt, MinIO, Opaque Systems, Tetrate, and Verta. Prior to Intel, Mark worked as a practicing attorney and in banking. You can learn more about: How to invest in the top AI/ML companies How to build a successful career in corporate venture The evolving landscape of enterprise software investments #IntelCapital #VentureCapital #TechInvestment #CloudComputing #AI #ML ===================== YouTube: @GraceGongCEO Newsletter: @SmartVenture LinkedIn: @GraceGong TikTok: @GraceGongCEO IG: @GraceGongCEO Twitter: @GraceGongGG ===================== Join the SVP fam with your host Grace Gong. In each episode, we are going to have conversations with some of the top investors, superstar founders, as well as well-known tech executives in silicon valley. We will have a coffee chat with them to learn their ways of thinking and actionable tips on how to build or invest in a successful company.
Saurabh Baji discusses Cohere's approach to developing and deploying large language models (LLMs) for enterprise use.

* Cohere focuses on pragmatic, efficient models tailored for business applications rather than pursuing the largest possible models.
* They offer flexible deployment options, from cloud services to on-premises installations, to meet diverse enterprise needs.
* Retrieval-augmented generation (RAG) is highlighted as a critical capability, allowing models to leverage enterprise data securely.
* Cohere emphasizes model customization, fine-tuning, and tools like reranking to optimize performance for specific use cases.
* The company has seen significant growth, transitioning from developer-focused to enterprise-oriented services.
* Major customers like Oracle, Fujitsu, and TD Bank are using Cohere's models across various applications, from HR to finance.
* Baji predicts a surge in enterprise AI adoption over the next 12-18 months as more companies move from experimentation to production.
* He emphasizes the importance of trust, security, and verifiability in enterprise AI applications.

The interview provides insights into Cohere's strategy, technology, and vision for the future of enterprise AI adoption.

https://www.linkedin.com/in/saurabhbaji/
https://x.com/sbaji
https://cohere.com/
https://cohere.com/business

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

TOC (* are best bits)
00:00:00 1. Introduction and Background
00:04:24 2. Cloud Infrastructure and LLM Optimization
00:06:43 2.1 Model deployment and fine-tuning strategies *
00:09:37 3. Enterprise AI Deployment Strategies
00:11:10 3.1 Retrieval-augmented generation in enterprise environments *
00:13:40 3.2 Standardization vs. customization in cloud services *
00:18:20 4. AI Model Evaluation and Deployment
00:18:20 4.1 Comprehensive evaluation frameworks *
00:21:20 4.2 Key components of AI model stacks *
00:25:50 5. Retrieval Augmented Generation (RAG) in Enterprise
00:32:10 5.1 Pragmatic approach to RAG implementation *
00:33:45 6. AI Agents and Tool Integration
00:33:45 6.1 Leveraging tools for AI insights *
00:35:30 6.2 Agent-based AI systems and diagnostics *
00:42:55 7. AI Transparency and Reasoning Capabilities
00:49:10 8. AI Model Training and Customization
00:57:10 9. Enterprise AI Model Management
01:02:10 9.1 Managing AI model versions for enterprise customers *
01:04:30 9.2 Future of language model programming *
01:06:10 10. AI-Driven Software Development
01:06:10 10.1 AI bridging human expression and task achievement *
01:08:00 10.2 AI-driven virtual app fabrics in enterprise *
01:13:33 11. Future of AI and Enterprise Applications
01:21:55 12. Cohere's Customers and Use Cases
01:21:55 12.1 Cohere's growth and enterprise partnerships *
01:27:14 12.2 Diverse customers using generative AI *
01:27:50 12.3 Industry adaptation to generative AI *
01:29:00 13. Technical Advantages of Cohere Models
01:29:00 13.1 Handling large context windows *
01:29:40 13.2 Low latency impact on developer productivity *

Disclaimer: This is the fifth video from our Cohere partnership. We were not told what to say in the interview, and we didn't edit anything out of the interview. Filmed in Seattle in August 2024.
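The reranking idea mentioned above (retrieve a broad candidate set cheaply, then re-score a short list with a more expensive model) can be sketched as follows. Both scoring functions are toy stand-ins: the word-overlap count mimics a fast first-stage vector search, and the phrase-match bonus mimics a learned cross-encoder; neither reflects Cohere's actual implementation or API.

```python
def first_pass(query, docs, k=3):
    # Cheap first-stage score: bag-of-words overlap, standing in for
    # an approximate vector search over a large corpus.
    q = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def rerank(query, candidates):
    # "Expensive" second-stage score: favor exact phrase matches over
    # mere word overlap, standing in for a learned cross-encoder that
    # reads query and document together.
    q = query.lower()
    return sorted(candidates,
                  key=lambda d: (q in d.lower(),
                                 len(set(q.split()) & set(d.lower().split()))),
                  reverse=True)

docs = [
    "reset your password via email",
    "password strength rules",
    "email reset password link expired",
    "billing and invoices",
]
top = rerank("reset your password", first_pass("reset your password", docs))
```

The two-stage split is the key design choice: the precise scorer is too slow to run over everything, so it only sees the handful of candidates the cheap scorer surfaces.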
In this episode of the Virtually Speaking Podcast, we're joined by Paul Lembo, CTO at VMware by Broadcom, to discuss how bringing all internal product teams under one organization has strengthened VMware Cloud Foundation (VCF). By aligning the teams on common goals, VCF has evolved into a more cohesive platform, enhancing integration, streamlining updates, and improving overall stability. Paul also shares insights from his daily conversations with customers, who often ask how changes since the acquisition will impact VMware's product offerings. Tune in to hear how this new approach is driving innovation and delivering a more reliable cloud platform.
Links Mentioned
Introducing VMware Cloud Foundation 9
Session: 3 Transformations for the Smarter Way to Cloud
Playlist: VMware Explore Las Vegas 2024
The Virtually Speaking Podcast
The Virtually Speaking Podcast is a technical podcast dedicated to discussing VMware topics related to private and hybrid cloud. Each week Pete Flecha and John Nicholson bring in various subject matter experts from within the industry to discuss their respective areas of expertise. If you're new to the Virtually Speaking Podcast, check out all episodes on vspeakingpodcast.com and follow on X @VirtSpeaking.
Learn about the origins of Oracle's Roving Edge Device, and how the next-gen iteration with updated Intel processors is leading to breakthrough advancements in defense, agriculture, and healthcare. Recorded live at Oracle CloudWorld 2024 in Las Vegas with Matt Leonard, VP of OCI Edge and Cloud Infrastructure, Peter Guerra, Global VP of Data and AI, and guest host Andy Morris from Intel Enterprise AI. The conversation also highlights Oracle and Intel's 31-year partnership and innovations in AI deployment at the edge.
Guests include:
Matt Leonard is Vice President of OCI Edge Cloud product management at Oracle, leading product strategy and vision for bringing the power of cloud computing to the edge. Matt's goal is to enable customers to deploy and manage applications anywhere. With over 20 years of experience in product management, integration, and IT advisory, Matt has a proven track record of delivering successful products and solutions for leading tech companies such as Google, Microsoft, and Amazon.
Peter Guerra, Global Vice President, Data & AI at Oracle, is a proven Data & AI executive with over 20 years of experience with commercial and public sector customers. Prior to Oracle, he led AI teams at Microsoft, AWS, Accenture, and Booz Allen Hamilton. His career has focused on data & AI solutions for customers in defense, public sector, health, energy, and retail. He is a technical expert in AI and data platforms, having led numerous deployments and algorithm development solutions, including contributing to the Apache Accumulo and Apache NiFi projects. He has written thought pieces for O'Reilly, published papers in IEEE, and has spoken at industry events such as NVIDIA's GTC, Oracle CloudWorld, Black Hat, and more. Peter holds a Bachelor of Science in Computer Science, a Bachelor of Arts in English from the University of Maryland, and an MBA with an Information Systems concentration from Loyola University.
In this episode of PodRocket, Joel Hooks, creator of egghead.io, talks about the power of durable, event-driven workflows, the practicalities and benefits of serverless as a billing model, the intricacies of distributed systems, and more.
Links
https://joelhooks.com
https://x.com/jhooks
https://www.linkedin.com/in/joelhooks
https://egghead.io
https://www.coursebuilder.dev/tips/using-inngest-to-add-email-automation-feature-to-pro-next-js-adt43
We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).
Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!
What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr).
Special Guest: Joel Hooks.
Just because the AWS Cloud hangs above our heads, doesn't mean your bill needs to be just as sky-high. In this Screaming in the Cloud Summer Replay, Corey is joined by Airbnb Staff Software Engineer Melanie Cebula. Her job is to ensure they keep their monthly cloud bill low, and that the cost isn't just there for a temporary stay. Hear Melanie and Corey chat about the vital role engineers play in helping balance the company books, tricks to optimizing your organization's cloud spending, how inexperience can have a dangerous effect on cost-cutting, and the growing pains facing today's world of data infrastructure. We hope you enjoy this trip down memory lane (just be sure you check out on time to avoid any fees).
Show Highlights:
(0:00) Intro to episode
(0:27) Backblaze sponsor read
(0:54) The role of a Staff Engineer
(2:09) Working for a large company reliant on the cloud
(3:59) Melanie's Area of Expertise
(5:58) Efficiently Managing AWS Bills
(11:33) Optimizing cloud spend
(14:50) The harmful hesitancy to turn things off
(18:17) Inexperience and cost-saving measures
(21:17) Firefly sponsor read
(21:53) How to avoid snowballing cloud bills
(23:40) Kubernetes and cloud billing
(27:12) The perks of compounding microservices
(29:19) Misconceptions about Kubernetes
(31:10) Growing pains of data infrastructure
(34:44) Where you can find Melanie
About Melanie Cebula
Melanie Cebula is an expert in Cloud Infrastructure, where she is recognized worldwide for explaining radically new ways of thinking about cloud efficiency and usability. She is an international keynote speaker, presenting complex technical topics to a broad range of audiences, both international and domestic. Melanie is a staff engineer at Airbnb, where she has experience building a scalable modern architecture on top of cloud-native technologies. Besides her expertise in the online world, Melanie spends her time offline on the "sharp end" of rock climbing. An adventure athlete setting new personal records in challenging conditions, she appreciates all aspects of the journey, including the triumph of reaching ever higher destinations. On and off the wall, Melanie focuses on building reliability into critical systems, and making informed decisions in difficult situations. In her personal time, Melanie hand whisks matcha tea, enjoys costuming and dancing at EDM festivals, and she is a triplet.
Links Referenced:
Twitter: https://twitter.com/melaniecebula
Melanie Cebula's website: https://melaniecebula.com/
Sponsors
Backblaze: https://www.backblaze.com/
Firefly: https://www.firefly.ai/
On this week's episode of Screaming in the Cloud, Corey is joined by Stanford computer science student Aditya Saligrama, who recently taught a Stanford course on cloud infrastructure. Aditya shares his unique perspective on various topics, including how higher education approaches teaching computer science in a rapidly evolving landscape, why he chose to build in cloud security from the beginning instead of tacking it on at the end, and what his plans are for the rest of school and beyond. Corey and Aditya lament the lack of real-world skills taught by universities. Aditya shares with the audience just how much work goes into being an effective undergraduate-level teacher while being an undergraduate student himself.
Show Highlights:
(00:00) - Introduction
(01:57) - Exploring CS40: cloud infrastructure and scalable application deployment
(03:46) - The evolution of computer science education
(05:09) - Bridging the gap between academia and industry
(09:05) - Aditya's journey into security and cloud infrastructure
(13:09) - The Stanford security clinic: red teaming for startups
(14:09) - Internship insights and Cloudflare's upcoming role
(16:06) - The challenge of cloud account management for students
(17:59) - Improving cloud education and accessibility
(22:10) - The technical and educational challenges of CS40
(29:29) - Final thoughts and where to find Aditya
About Aditya Saligrama:
Aditya Saligrama is an undergraduate and graduate student at Stanford University studying computer science, focusing on systems and security. In the Winter of 2024, Aditya taught CS 40 (Cloud Infrastructure and Scalable Application Deployment) at Stanford, the first university course ever to teach the fundamentals of deploying apps on the cloud hands-on using infrastructure as code. Aditya also leads the Applied Cyber student group at Stanford, winning first place in a national cyber defense competition in 2023 and second place in a global penetration testing competition in 2024, and advises early-stage startups on their security needs and posture through the Stanford Security Clinic. Aditya enjoys hiking, photography, and ping pong in his free time.
Links referenced:
Aditya's Twitter: @saligrama_a
Aditya's Website: https://saligrama.io
Sponsor
Prowler: https://prowler.com
Boomer Living TV - Podcast for Baby Boomers, Their Families & Professionals in Senior Living
In this podcast episode, "Shaping the Future of AI Investments," join us for an insightful discussion with Debarghya (Deedy) Das from Menlo Ventures. We'll explore transformative technologies and sectors, including Generative AI, Enterprise SaaS, Cloud Infrastructure, and AI/ML investing. Discover the opportunities, risks, and ethical considerations of large language models and generative AI systems that can create text, code, and images. Learn about the trends and innovations powering the modern enterprise tech stack, and what's next for SaaS, data infrastructure, and development tools. Dive into the world of VC investment, with an inside look at theses, diligence, and working with AI/ML startups from idea to scale. Hear stories from the trenches about building stellar product and engineering teams, finding product-market fit, and scaling startups sustainably. Whether you're a founder, investor, developer, or just passionate about emerging tech, this is a discussion you can't miss!
In the realm of startups, few narratives are as compelling and instructive as Kris Bliesner's journey. With a career spanning early-stage ventures, hypergrowth, and transition periods, Kris offers a wealth of insights into the dynamics of building and scaling businesses.