If you've ever wondered how Oracle Database really works inside AWS, this episode will finally turn the lights on. Join Senior Principal OCI Instructor Susan Jang as she explains the two database services available (Exadata Database Service and Autonomous Database), how Oracle and AWS share responsibilities behind the scenes, and which essential tasks still land on your plate after deployment. You'll discover how automation, scaling, and security actually work, and which model best fits your needs, whether you want hands-off simplicity or deeper control. Oracle Database@AWS Architect Professional: https://mylearn.oracle.com/ou/course/oracle-databaseaws-architect-professional/155574 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone! In our last episode, we began the discussion on Oracle Database@AWS. Today, we're diving deeper into the database services that are available in this environment. Susan Jang, our Senior Principal OCI Instructor, joins us once again. 00:56 Lois: Hi Susan! Thanks for being here today. In our last conversation, we compared Oracle Autonomous Database and Exadata Database Service. 
Can you elaborate on the fundamental differences between these two services? Susan: The primary difference between the two services is really the management model. Autonomous Database is fully managed by Oracle, while Exadata Database Service gives you the flexibility to customize your database environment while still having the infrastructure managed by Oracle. 01:30 Nikita: When it comes to running Oracle Database@AWS, how do Oracle and AWS each chip in? Could you break down what each provider is responsible for in this setup? Susan: Oracle Database@AWS is a collaboration between Oracle and AWS. It allows customers to deploy and run Oracle Database services, including Oracle Autonomous Database and Oracle Exadata Database Service, directly in AWS data centers. Oracle provides Oracle Exadata Database Service on dedicated infrastructure. This service delivers the full capabilities of Oracle Exadata Database on Oracle Exadata hardware. It offers high performance and high security for demanding workloads, with cloud automation, resource scaling, and performance optimization to simplify management of the service. Oracle Autonomous Database on Dedicated Exadata Infrastructure provides a fully autonomous database on dedicated infrastructure within AWS. It automates database management tasks, including patching, backups, and tuning, and has built-in AI capabilities for developing AI-powered applications and interacting with data using natural language. Oracle Database@AWS integrates these core database services with various AWS services for a comprehensive, unified experience. AWS provides cloud-based object storage with Amazon S3. You also have other services, such as Amazon CloudWatch, which monitors database metrics and performance. You also have Amazon Bedrock.
It provides a development environment for generative AI applications. And last but not least, among many other services, you also have Amazon SageMaker. This is a cloud-based platform for developing machine learning models, a wonderful fit for our AI application development needs. 03:54 Lois: How has the work involved in setting up and managing databases changed over time? Susan: When we look at how things have evolved through the years in our systems, we realize that responsibility has gradually shifted away from the customer, from human interaction, to the services. As database technology evolves from the traditional on-premises system to the Exadata engineered system, and finally to the Autonomous Database, tasks that previously required significant manual intervention have become increasingly automated and optimized. 04:34 Lois: How so? Susan: A traditional database environment requires manual configuration of the hardware, the operating system, and the database software, along with the initial database creation. As we evolve into the Exadata environment, the Exadata Database, specifically the Exadata cloud service, simplifies provisioning through a web-based wizard, making it faster and easier to deploy Oracle Database on optimized hardware. When we move to an Autonomous environment, the entire provisioning process is automated, allowing users to rapidly deploy mission-critical databases without manual intervention or DBA involvement. So as customers move toward Autonomous Database through Exadata, there are fewer components the customer needs to manage in the database stack, which gives them more time to focus on the important parts of the business. The Exadata Database provides co-management of backup and restore, patching and upgrades, monitoring, and tuning.
And it allows the administrator to customize the configuration to meet very specific business needs. With Autonomous Database, everything is fully automated, and a greater share of the responsibility shifts toward the service. With Autonomous Database on dedicated infrastructure, Oracle performs that fine-grained tuning for you. 06:15 Nikita: If we narrow it down just to Oracle and AWS for a moment, which parts of the infrastructure or day-to-day ops are handled by each company behind the scenes? Susan: Oracle Database@AWS operates under a shared responsibility model, dividing the service responsibilities among AWS, Oracle, and you, the customer. AWS has the data center. Remember, this is where everything is running. The Oracle Database infrastructure may be managed by Oracle and run through OCI, but it is physically located within AWS regions, availability zones, and AWS data centers. In this case, it is AWS's responsibility to secure the environment, including the physical security of the data center, the network infrastructure, and foundational services like compute, storage, and networking within AWS. Next in the shared responsibility model is Oracle, and that would be the hardware. Oracle provides the hardware. While the hardware may physically reside in the AWS data center, Oracle Cloud Infrastructure's operations team manages this infrastructure, including software patching, infrastructure updates, and other operations, through a connection to OCI. This means Oracle handles the provisioning and maintenance of the underlying Exadata infrastructure hardware. Beyond the Exadata infrastructure itself, there is more that Oracle is responsible for.
Oracle also manages the hardware environment through the database control plane. Oracle manages the administration and operations for the Oracle Database@AWS service, which resides in OCI. This includes capabilities for management, upgrades, and operational features. 08:37 Nikita: And what are the key things that still remain on the customer's plate? Susan: Whether you are in an Exadata environment or in an Autonomous environment, it is you, the customer, who is responsible for most database administration operations, as well as managing the users and their privileges to access the database. No one knows the database, and who should be accessing the data, better than you. You are responsible for securing the applications and the data in the database, which means defining who has access to it, controlling data encryption, and securing the applications that interact with Oracle Database@AWS. 09:29 Lois: Susan, we've talked about both Autonomous Database and Exadata Database Service being available on Oracle Database@AWS, but what's different about how each works in this environment, and why might someone pick one over the other? Susan: Even though both databases run on the same Exadata Cloud Infrastructure, both can be deployed in the public cloud as well as in the customer data center, which is Oracle Cloud@Customer. The Autonomous Database is a fully managed, completely automated environment, and this provides the capability of having a fully Autonomous Database Service running on dedicated Oracle Exadata Infrastructure within your AWS data center.
The Exadata Database Service is provided and managed by Oracle and physically runs in the AWS data center. It is designed for mission-critical workloads and includes Oracle Real Application Clusters (RAC), offering the high performance, high availability, and full feature capability of other Exadata environments, such as those running in our customers' data centers. Now, the primary difference between the two services: with the Autonomous Database, the customer only pays for the compute resources that are used. Autoscaling can be used for variable workloads to automatically scale the compute resources up or down when required. The Autonomous Database also has automatic optimization for data warehousing, transaction processing, and JSON workloads. With the Exadata service, the customer pays for the compute resources that they allocate. And that's the key thing: the customer initiates the scaling, because it is very specific to the workload that is needed. So when you look at the two database services, one gives you the ability to let Oracle fully manage it, including the scaling capability. The other, Exadata, gives you an environment where the infrastructure is managed by Oracle while you, as the database administrator, keep more granular control over not only how the database scales but also how you wish to customize how the database runs. 12:10 Nikita: Focusing on Autonomous Database for a moment, what should teams know about how it actually runs within AWS? Susan: The Autonomous Database on Oracle Database@AWS brings the power of Oracle's self-managing, self-securing, and self-repairing database into your AWS environment.
It automates many of the traditional, complex, and time-consuming database management tasks, such as provisioning the database, patching, backing up, scaling, and performance tuning, reducing the need for manual intervention by the database administrator. Running the Autonomous Database in your AWS region enables low-latency access for your AWS applications and services that are deployed within AWS, thus improving performance and response times. With the Autonomous Database, many of the traditional tasks are now done automatically by Oracle. It also supports integration with various AWS services, such as IAM, as well as CloudFormation, CloudWatch for monitoring, and S3 for storage. You can easily migrate existing Exadata workloads, including those running on Oracle RAC, to AWS with minimal or no changes to any of your databases or applications. In addition, there's a really powerful capability and feature of the database called zero-ETL, that's zero extract, transform, and load. It's an integration capability with services like Amazon Redshift, enabling near real-time analytics and machine learning on the transactional data stored within the Autonomous Database in your AWS environment. So the Autonomous Database checks off many of the boxes for automation, security, tuning, and scaling of the database. With the Autonomous Database on Dedicated Exadata Infrastructure, the Exadata Cloud Infrastructure resource represents the physical system, which can be expanded with storage as well as compute hosts. This provides the ability to have an isolated zone for the highest protection from other tenants. The data is stored on a dedicated server for only one customer. That would be you. 14:56 Lois: Could you explain the role of Autonomous VM?
What are its primary benefits? Susan: The virtual machines, or as we refer to them, the VM cluster, include the grid infrastructure and provide private network isolation. This gives you the capability of custom memory, core, and storage allocation. The Oracle Grid Infrastructure includes Oracle Clusterware, which manages the cluster and the servers and ensures that the database can fail over to another server in case of any failure. 15:34 Be a part of something big by joining the Oracle University Learning Community! Connect with over 3 million members, including Oracle experts and fellow learners. Engage in topical forums, share your knowledge, and celebrate your achievements together. Discover the community today at mylearn.oracle.com. 15:55 Nikita: Welcome back! Susan, what is the Autonomous Container Database? Susan: You need an Autonomous Container Database if you're going to create an Autonomous Database, and you provision it within your Autonomous Exadata VM Cluster. It serves as a container to hold, or to house, one or more Autonomous Databases. This allows multiple Autonomous Databases to coexist in the same infrastructure while still being logically separated, and it allows for the separation of databases based on their intended use. Think of a database for production. Think of a database for development. Think of a database for testing. You may have different database versions within the same infrastructure. This isolation makes it easier for you to meet your SLAs, your service level agreements, any long-term backup requirements, and very specific encryption key needs, and it prevents issues in one database from impacting another. So you have the ability to keep everything isolated and secure while still grouping it in a manner that meets your business needs. 17:08 Lois: Looking at Exadata Database Service specifically, what are some standout advantages for customers who deploy it on Oracle Database@AWS?
Is there anything in particular they should get excited about in terms of performance or integration with AWS? Susan: The Exadata Database Service runs on dedicated Exadata Infrastructure that's deployed within your AWS data center. It delivers the same Exadata service experience and cloud control plane as Oracle Cloud Infrastructure, allowing you to leverage existing skills and processes across your multi-cloud environment. It addresses data residency. That's a scenario many of our customers face: because of security compliance requirements, you need the data local to you. By having the Exadata Database in your Oracle Database@AWS, it is running in your data center. So this addresses that very important need, data residency, to have the data close to you. It also allows for seamless integration with other AWS services and applications, so now you have the capability of a hybrid cloud architecture, leveraging the benefits of both Oracle Exadata and your AWS systems. It has built-in high availability with Real Application Clusters (RAC), as well as Data Guard for addressing disaster recovery. It also provides the ability for you to scale your compute, storage, and I/O resources independently. So, as mentioned, with Exadata you have flexibility in how you want your database to run. Just like the Autonomous, the Exadata Database checks off many of the boxes for running mission-critical workloads with high availability, highly redundant hardware and software features, along with extreme performance, scalability, and reliability. This allows you to run your AI environment, your online transaction processing, and your analytic workloads at any scale on the Exadata Infrastructure running in the Oracle Cloud, and in this case, running in your data center.
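The dedicated deployment pieces described in this conversation (the Exadata infrastructure, the Autonomous VM cluster, the container database, and the Autonomous Databases inside it) nest into a simple hierarchy. A minimal sketch in Python, with illustrative names and fields rather than the actual OCI resource definitions:

```python
from dataclasses import dataclass, field

# Illustrative model of the dedicated deployment hierarchy; the class and
# field names here are assumptions for the sketch, not the OCI API.

@dataclass
class AutonomousDatabase:
    name: str
    intended_use: str        # e.g. "production", "development", "testing"

@dataclass
class AutonomousContainerDatabase:
    # Groups Autonomous Databases that share SLA, backup, and
    # encryption-key requirements, isolated from other containers.
    name: str
    databases: list[AutonomousDatabase] = field(default_factory=list)

@dataclass
class AutonomousVMCluster:
    # Carries the grid infrastructure and private network isolation.
    name: str
    container_dbs: list[AutonomousContainerDatabase] = field(default_factory=list)

@dataclass
class ExadataInfrastructure:
    # The dedicated physical system inside the AWS data center.
    name: str
    vm_clusters: list[AutonomousVMCluster] = field(default_factory=list)

infra = ExadataInfrastructure("exa-aws-1", vm_clusters=[
    AutonomousVMCluster("avmc-1", container_dbs=[
        AutonomousContainerDatabase("acd-prod",
            [AutonomousDatabase("sales", "production")]),
        AutonomousContainerDatabase("acd-dev",
            [AutonomousDatabase("sandbox", "development")]),
    ]),
])
```

Keeping production and development in separate container databases mirrors the isolation discussed above: each container can carry its own SLA, backup, and encryption-key policies without one impacting the other.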
19:45 Nikita: If a business suddenly needs more capacity, how does scaling work with Exadata Database Service versus Autonomous Database on Oracle Database@AWS? Susan: With Exadata scaling, you can scale to meet expected demands. You know that at a certain point you will need more, and you ask it to scale at that point. Using an example, say I assign three compute cores all the time. But there may be demands, think of your end-of-quarter or end-of-year processing, when you need more. So you enable the compute cores to scale at the time you need them. And what's cool is that when they're no longer needed, it scales back down to the original three cores you assigned. So you only pay for the enabled cores. What's very cool about the Autonomous is that it is real-time scaling. With Autonomous, since the database is self-tuning and self-monitoring, the Autonomous Database actually monitors the workload requirements and scales to match the workload demand. Once the minimum level of compute is defined and automatic scaling is enabled, Autonomous Database adjusts to consumption when needed, and it scales back down when it's not. So though Exadata scaling is pretty cool, scaling up and down on workload demand, the Autonomous is even more powerful: it is real-time scaling based on usage at that moment, with a built-in automatic increase to meet workload demand when it spikes and automatic scale-back when it's not needed. A very powerful capability with all of our Oracle databases: the ability, even with traditional databases, to define what you may need, with Exadata scaling for peak demands and Autonomous scaling for real-time consumption.
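The contrast between customer-initiated Exadata scaling and real-time Autonomous scaling can be sketched as a toy model. The numbers are assumptions for illustration: the three-core base from Susan's example, a hypothetical quarter-end peak of eight cores, and the up-to-three-times-base ceiling described for Autonomous Database autoscaling elsewhere in this feed:

```python
from datetime import datetime

BASE_CORES = 3   # the "three compute cores all the time" example
PEAK_CORES = 8   # hypothetical quarter-end allocation

def exadata_cores(now: datetime) -> int:
    """Exadata model: the customer initiates scaling for known peaks
    (end of quarter here), then drops back to the base allocation."""
    quarter_end = now.month in (3, 6, 9, 12) and now.day >= 25
    return PEAK_CORES if quarter_end else BASE_CORES

def autonomous_cores(measured_load: float, base: int = BASE_CORES) -> int:
    """Autonomous model: the service follows real-time load with no
    customer action, capped at three times the provisioned base."""
    needed = max(base, round(measured_load))
    return min(needed, 3 * base)
```

In both models you pay only for cores that are actually enabled; the difference is who decides when to enable them, the administrator on a schedule or the service in real time.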
When you look at all of our options, one of the key things to bear in mind is a phrase that we use: performance scales as more servers are added. What this is really saying is that Oracle's automated scaling gives the database the ability to maintain or improve its performance under increased workload by automatically adding computational resources when needed. This process is also known as horizontal scaling. It involves adding more servers, or compute instances, to a cluster to share the processing load, and it has that capability automatically. 22:53 Nikita: There's so much more we can discuss about Oracle Database@AWS, but let's pause here for today! Thank you so much, Susan, for joining us. Lois: Yeah, it's been really great to have you, Susan. If you want to dive deeper into the topics we covered today, go to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 23:23 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
In this episode, we talk with Abdel Sghiouar and Mofi Rahman, Developer Advocates at Google and (guest) hosts of the Kubernetes Podcast from Google. Together, we dive into one central question: can you truly run LLMs reliably and at scale on Kubernetes? It quickly becomes clear that LLM workloads behave nothing like traditional web applications:
- GPUs are scarce, expensive, and difficult to schedule.
- Models are massive, some reaching 700GB, making load times, storage throughput, and caching critical.
- Containers become huge, making "build small containers" nearly impossible.
- Autoscaling on CPU or RAM doesn't work; new signals like GPU cache pressure, queue depth, and model latency take over.
- LLM requests don't parallelize like stateless web traffic, so batching and routing through the Inference Gateway API become essential.
- Device Management and Dynamic Resource Allocation (DRA) are forming the new foundation for GPU/TPU orchestration.
- Security shifts, as rootless containers often no longer work with hardware accelerators.
- Guardrails (input/output filtering) become a built-in part of the inference path.
And then there's the occasional request from customers who want deterministic LLM output, to which Mofi dryly responds: "You don't need a model — you need a database."
Powered by: ACC ICT. Send us a message. ACC ICT, specialist in IT continuity: business-critical applications and data securely available, independent of third parties, anytime and anywhere. Support the show. Like and subscribe! It helps out a lot. You can also find us on: De Nederlandse Kubernetes Podcast - YouTube; Nederlandse Kubernetes Podcast (@k8spodcast.nl) | TikTok; De Nederlandse Kubernetes Podcast. Where can you meet us: Events. This podcast is powered by: ACC ICT - IT continuity for business-critical applications | ACC ICT
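The episode's point about autoscaling signals can be made concrete with the proportional formula the Kubernetes Horizontal Pod Autoscaler uses, just fed with an inference-specific metric such as average queue depth per replica instead of CPU utilization. A minimal sketch; the target value and replica cap are invented numbers:

```python
import math

def desired_replicas(current_replicas: int, metric_value: float,
                     metric_target: float, max_replicas: int = 16) -> int:
    """Standard HPA proportion: desired = ceil(current * value / target).
    Here metric_value is an inference signal (e.g. average queue depth
    per replica) rather than CPU utilization."""
    if metric_target <= 0:
        raise ValueError("metric target must be positive")
    desired = math.ceil(current_replicas * metric_value / metric_target)
    return max(1, min(desired, max_replicas))

# Four replicas each holding ~30 queued requests against a target of 10
# per replica suggests scaling out to 12 replicas.
print(desired_replicas(4, 30, 10))  # 12
```

The formula itself is unchanged from classic HPA behavior; what the episode argues is that for LLM serving the *signal* must change, because CPU and RAM stay flat while the GPU queue backs up.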
Unpredictable traffic is every online business's nightmare, causing 408 and 429 errors and knocked-out servers. But what if we told you that you can scale your application massively, only when you need it, and without wasting money on idle resources when there is no traffic? In this video, we talk with Jorge Turrado (Principal SRE at Lidl, with more than 110 million users, and lead maintainer of KEDA) to reveal the key tool for intelligent autoscaling in the Cloud Native ecosystem with Kubernetes (k8s): KEDA (Kubernetes Event-driven Autoscaling). Discover why traditional CPU-based scaling is too slow and reactive for asynchronous workloads. We analyze how KEDA extends the capabilities of the Kubernetes HPA (Horizontal Pod Autoscaler) so you can scale ★PROACTIVELY★ based on events, message queues (Kafka, Service Bus), databases, or many other external sources.
You'll learn:
- What Kubernetes is and why its CPU- and memory-based autoscaling is not enough.
- What KEDA is and why it has become the de facto standard for scaling in Kubernetes.
- How event-driven autoscaling works (not just CPU or RAM).
- What "scale to zero" is and when it can be used to drastically optimize cloud costs and guarantee the resilience of your applications.
- The real impact on cloud costs of optimizing the use of your pods.
- Insider tips and tricks from a KEDA maintainer to avoid common mistakes.
- Plus, we share development best practices every programmer should apply so their code works correctly in ephemeral, highly scalable environments.
If you're a developer, DevOps engineer, SRE, or IT manager, this video is for you. What do you think? Tell us your worst server-outage story in the comments!
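The scale-to-zero behavior discussed in the episode can be illustrated with a small sketch of KEDA-style arithmetic: the replica count follows the event backlog rather than CPU, and drops to zero when the queue is empty. The queue sizes and per-replica target below are invented for the example:

```python
import math

def event_driven_replicas(queue_length: int, msgs_per_replica: int,
                          min_replicas: int = 0, max_replicas: int = 20) -> int:
    """Scale on backlog, not CPU: no messages means no pods (scale to
    zero); otherwise run enough pods to drain the queue at the target
    per-pod rate, bounded by the configured ceiling."""
    if queue_length == 0:
        return min_replicas          # "scale to zero" when idle
    desired = math.ceil(queue_length / msgs_per_replica)
    return max(1, min(desired, max_replicas))
```

An empty queue costs nothing, while a burst of 1,000 messages against a 5-message-per-pod target immediately hits the 20-pod ceiling; that proactive reaction to the queue itself is what CPU-based scaling, which only reacts after pods are already saturated, cannot do.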
In this episode of the TestGuild DevOps Toolchain Podcast, host Joe Colantonio sits down with Jennifer Rahmani, Co-founder and COO of Thoras.ai, a company redefining how infrastructure scales with AI-driven predictive technology. Drawing from her years as a DevOps engineer in the defense tech sector, Jennifer shares how she and her twin sister turned real-world frustrations into a reliability-first platform that eliminates the guesswork from scaling. We discuss how Thoras.ai integrates with Kubernetes to predict workload demand minutes—or even hours—in advance, allowing teams to maintain high availability without overspending. Jennifer explains why they use the right AI for the right use case, how their predictive autoscaling works in multi-cloud and hybrid environments, and how it helps SREs avoid downtime during unpredictable events like Black Friday or major product launches. Whether you're dealing with noisy data, high cloud bills, or sleepless nights worrying about reliability, this episode delivers practical insights for making smarter scaling decisions.
Red Robin recently rolled out a massively successful promotion for unlimited burgers for a month for $20... but with success comes huge demand... how did they scale up? What could they have done? What should they have done? Follow Us Frank: Twitter, Blog, GitHub James: Twitter, Blog, GitHub Merge Conflict: Twitter, Facebook, Website, Chat on Discord Music: Amethyst Seer - Citrine by Adventureface ⭐⭐ Review Us (https://itunes.apple.com/us/podcast/merge-conflict/id1133064277?mt=2&ls=1) ⭐⭐ Machine transcription available on http://mergeconflict.fm
Want to quickly provision your autonomous database? Then look no further than Oracle Autonomous Database Serverless, one of the two deployment choices offered by Oracle Autonomous Database. Autonomous Database Serverless delegates all operational decisions to Oracle, providing you with a completely autonomous experience. Join hosts Lois Houston and Nikita Abraham, along with Oracle Database experts, as they discuss how serverless infrastructure eliminates the need to configure any hardware or install any software because Autonomous Database handles provisioning the database, backing it up, patching and upgrading it, and growing or shrinking it for you. Survey: https://customersurveys.oracle.com/ords/surveys/t/oracle-university-gtm/survey?k=focus-group-2-link-share-5 Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Rajeev Grover, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! We hope you've been enjoying these last few weeks as we've been revisiting our most popular episodes of the year. Lois: Today's episode is the last one in this series and is a throwback to a conversation on Autonomous Databases on Serverless Infrastructure with three experts in the field: Hannah Nguyen, Sean Stacey, and Kay Malcolm. 
Hannah is a Staff Cloud Engineer, Sean is the Director of Platform Technology Solutions, and Kay is Vice President of Database Product Management. For this episode, we'll be sharing portions of our conversations with them. 01:14 Nikita: We began by asking Hannah how Oracle Cloud handles the process of provisioning an Autonomous Database. So, let's jump right in! Hannah: The Oracle Cloud automates the process of provisioning an Autonomous Database, and it automatically provisions for you a highly scalable, highly secure, and a highly available database very simply out of the box. 01:35 Lois: Hannah, what are the components and architecture involved when provisioning an Autonomous Database in Oracle Cloud? Hannah: Provisioning the database involves very few steps. But it's important to understand the components that are part of the provisioned environment. When provisioning a database, the number of CPUs in increments of 1 for serverless, storage in increments of 1 terabyte, and backup are automatically provisioned and enabled in the database. In the background, an Oracle 19c pluggable database is being added to the container database that manages all the user's Autonomous Databases. Because this Autonomous Database runs on Exadata systems, Real Application Clusters is also provisioned in the background to support the on-demand CPU scalability of the service. This is transparent to the user and administrator of the service. But be aware it is there. 02:28 Nikita: Ok…So, what sort of flexibility does the Autonomous Database provide when it comes to managing resource usage and costs, you know… especially in terms of starting, stopping, and scaling instances? Hannah: The Autonomous Database allows you to start your instance very rapidly on demand. It also allows you to stop your instance on demand as well to conserve resources and to pause billing. 
Do be aware that when you pause billing, you will not be charged for any CPU cycles because your instance is stopped. However, you will still incur monthly charges for your storage. In addition to allowing you to start and stop your instance on demand, it's also possible to scale your database instance on demand. All of this can be done very easily using the Database Cloud Console. 03:15 Lois: What about scaling in the Autonomous Database? Hannah: So you can scale up your OCPUs without touching your storage and scale them back down, and you can do the same with your storage. In addition to that, you can also set up autoscaling. So the database, whenever it detects the need, will automatically scale up to three times the base level number of OCPUs that you have allocated or provisioned for the Autonomous Database. 03:38 Nikita: Is autoscaling available for all tiers? Hannah: Autoscaling is not available for an Always Free database, but it is enabled by default for other tiered environments. Changing the setting does not require downtime, so this can also be set dynamically. One of the advantages of autoscaling is cost, because you're billed based on the average number of OCPUs consumed during an hour. 04:01 Lois: Thanks, Hannah! Now, let's bring Sean into the conversation. Hey Sean, I want to talk about moving an autonomous database resource. When or why would I need to move an autonomous database resource from one compartment to another? Sean: There may be a business requirement where you need to move an autonomous database resource, a serverless resource, from one compartment to another. Perhaps there's a different subnet that you would like to move that autonomous database to, or perhaps there are business applications accessible or available in that other compartment that you wish to take advantage of.
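Hannah's cost point, being billed on the average number of OCPUs consumed during the hour rather than the peak, is easy to see with a toy calculation (the sample values below are invented for illustration):

```python
def billed_ocpus_for_hour(ocpu_samples: list[float]) -> float:
    """With autoscaling, the hour is billed at the average consumption,
    so a brief spike costs far less than provisioning for the peak."""
    return sum(ocpu_samples) / len(ocpu_samples)

# A base of 2 OCPUs with one 15-minute spike to 6 OCPUs
# (four quarter-hour samples): billed at 3.0 OCPUs for the
# hour, not the peak of 6.
print(billed_ocpus_for_hour([2.0, 2.0, 6.0, 2.0]))  # 3.0
```

Compare that with statically provisioning 6 OCPUs all hour to survive the same spike: autoscaling halves the compute bill in this made-up case while still absorbing the burst.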
04:36 Nikita: And how simple is this process of moving an autonomous database from one compartment to another? What happens to the backups during this transition? Sean: The way you can do this is simply to take an autonomous database and move it from compartment A to compartment B. And when you do so, the backups, or the automatic backups that are associated with that autonomous database, will be moved with it as well. 05:00 Lois: Is there anything that I need to keep in mind when I'm moving an autonomous database between compartments? Sean: A couple of things to be aware of when doing this: first of all, you must have the appropriate privileges in both the source compartment and the target compartment in order to move that autonomous database. In addition to that, once the autonomous database is moved to the new compartment, any policies defined in that compartment to govern the authorization and privileges of users in that compartment will be applied immediately to the autonomous database that has been moved into it. 05:38 Nikita: Sean, I want to ask you about cloning in Autonomous Database. What are the different types of clones that can be created? Sean: It's possible to create a new Autonomous Database as a clone of an existing Autonomous Database. This can be done as a full copy of that existing Autonomous Database, or it can be done as a metadata copy, where the objects and tables are cloned but they are empty, so there are no rows in the tables. And this clone can be taken from a live running Autonomous Database or even from a backup. So you can take a backup and clone that to a completely new database. 06:13 Lois: But why would you clone in the first place? What are the benefits of this? Sean: When creating a clone, it can be created in a completely new compartment from where the source Autonomous Database was originally located.
So it's a nice way of moving one database to another compartment to allow developers or another community of users to have access to that environment. 06:36 Nikita: I know that along with having a full clone, you can also have a refreshable clone. Can you tell us more about that? Who is responsible for this? Sean: It's possible to create a refreshable clone from an Autonomous Database. And this is one that would be synced with that source database up to so many days. The task of keeping that refreshable clone in sync with that source database rests upon the shoulders of the administrator. The administrator is the person who is responsible for performing that sync operation. Now, actually performing the operation is very simple, it's point and click. And it's an automated process from the database console. And also be aware that refreshable clones can trail the source database or source Autonomous Database up to seven days. After that period of time, the refreshable clone, if it has not been refreshed or kept in sync with that source database, it will become a standalone, read-only copy of that original source database. 07:38 Nikita: Ok Sean, so if you had to give us the key takeaways on cloning an Autonomous Database, what would they be? Sean: It's very easy and a lot of flexibility when it comes to cloning an Autonomous Database. We have different models that you can take from a live running database instance with zero impact on your workload or from a backup. It can be a full copy, or it can be a metadata copy, as well as a refreshable, read-only clone of a source database. 08:12 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You'll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all of which is available free to subscribers. So, get going! 
Pick a course of your choice, get certified, join the Oracle University Learning Community, and network with your peers. If you're already an Oracle MyLearn user, go to MyLearn to begin your journey. If you have not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started. 08:50 Nikita: Welcome back! Thank you, Sean, and hi Kay! I want to ask you about events and notifications in Autonomous Database. Where do they really come in handy? Kay: Events can be used for a variety of notifications, including admin password expiration, ADB services going down, and wallet expiration warnings. There's this service, and it's called the notifications service. It's part of OCI. And this service provides you with the ability to broadcast messages to distributed components using a publish and subscribe model. These notifications can be used to notify you when event rules or alarms are triggered or simply to directly publish a message. In addition to this, there's also something that's called a topic. This is a communication channel for sending messages to subscribers in the topic. You can manage these topics and their subscriptions really easily. It's not hard to do at all. 09:52 Lois: Kay, I want to ask you about backing up Autonomous Databases. How does Autonomous Database handle backups? Kay: Autonomous Database automatically backs up your database for you. The retention period for backups is 60 days. You can restore and recover your database to any point in time during this retention period. You can initiate recovery for your Autonomous Database by using the cloud console or an API call. Autonomous Database automatically restores and recovers your database to the point in time that you specify. In addition to a point in time recovery, we can also perform a restore from a specific backup set. 10:37 Lois: Kay, you spoke about automatic backups, but what about manual backups?
Kay: You can do manual backups using the cloud console, for example, if you want to take a backup say before a major change to make restoring and recovery faster. These manual backups are put in your cloud object storage bucket. 10:58 Nikita: Are there any special instructions that we need to follow when configuring a manual backup? Kay: The manual backup configuration tasks are a one-time operation. Once this is configured, you can go ahead and trigger your manual backup any time you wish after that. When creating the object storage bucket for the manual backups, it is really important-- so I don't want you to forget-- that the name of the bucket follows this naming convention. It should be backup underscore database name, and by database name I mean the actual database name, not the display name. In addition to that, the bucket name has to be all lowercase. So three rules: the format is backup underscore database name, the database name is the actual name and not the display name, and it all has to be in lowercase. Once you've created your object storage bucket to meet these rules, you then go ahead and set a database property, default_backup_bucket. This points to the object storage URL, and it uses the Swift protocol. Once you've got your object storage bucket mapped to the object storage location, you then need to go ahead and create a database credential inside your database. You may already have this in place for other purposes, like maybe you were loading data or you were using Data Pump, et cetera. If you don't, you would need to create one specifically for your manual backups. Once you've done so, you can then go ahead and set your property to that default credential that you created. So once you follow these steps as I pointed out, you only have to do it one time. Once it's configured, you can go ahead and use it from then on for your manual backups.
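Kay's three naming rules for the manual-backup bucket can be captured in a small helper (the function name and the example database name are hypothetical; the rest of the configuration — the default_backup_bucket property and the credential — happens inside the database, as she describes):

```python
def manual_backup_bucket_name(database_name: str) -> str:
    """Return the object storage bucket name for ADB manual backups,
    enforcing the three rules from the episode: the format is
    backup_<database name>, the name used is the actual database name
    (not the display name), and it must be all lowercase."""
    if database_name != database_name.lower():
        raise ValueError(f"database name must be all lowercase: {database_name}")
    return f"backup_{database_name}"
```

So a database named adwdb1 would use a bucket named backup_adwdb1, while an uppercase name is rejected rather than silently accepted.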
13:00 Lois: Kay, the last topic I want to talk about before we let you go is Autonomous Data Guard. Can you tell us about it? Kay: Autonomous Data Guard monitors the primary database, in other words, the database that you're using right now. 13:14 Lois: So, if ADB goes down… Kay: Then the standby instance will automatically become the primary instance. There's no manual intervention required. So failover from the primary database to that standby database I mentioned is completely seamless, and it doesn't require any additional wallets to be downloaded or any new URLs to access APEX or Oracle Machine Learning. Even Oracle REST Data Services. All the URLs and all the wallets, everything that you need to authenticate and connect to your database, they all remain the same for you if you have to fail over to your standby database. 13:58 Lois: And what happens after a failover occurs? Kay: After performing a failover, a new standby for your primary will automatically be provisioned. So in other words, in performing a failover your standby does become your new primary, and a new standby is then created for that primary. I know, it's kind of interesting. Currently, the standby database is created in the same region as the primary database. For better resilience, if your primary database is provisioned in AD1, or Availability Domain 1, your standby would be provisioned in a different availability domain. 14:49 Nikita: But there's also the possibility of manual failover, right? What are the differences between automatic and manual failover scenarios? When would you recommend using each? Kay: So in the case of the automatic failover scenario following a disastrous situation, if the primary ADB becomes completely unavailable, the switchover button will turn into a failover button. Because remember, this is a disaster. Automatic failover is automatically triggered. There's no user action required.
So if you're asleep and something happens, you're protected. There's no user action required, but automatic failover is allowed to succeed only when no data loss will occur. For manual failover scenarios, in the rare case when an automatic failover is unsuccessful, the switchover button will become a failover button and the user can trigger a manual failover should they wish to do so. The system automatically recovers as much data as possible, minimizing any potential data loss, but you could see anywhere from a few seconds to a few minutes of data loss. Now, you should only perform a manual failover in a true disaster scenario, accepting that a few minutes of potential data loss could occur, to ensure that your database is back online as soon as possible. 16:23 Lois: We hope you've enjoyed revisiting some of our most popular episodes over these past few weeks. We always appreciate your feedback and suggestions so remember to take that quick survey we've put out. You'll find it in the show notes for today's episode. Thanks a lot for your support. We're taking a break for the next two weeks and will be back with a brand-new season of the Oracle University Podcast in January. Happy holidays, everyone! Nikita: Happy holidays! Until next time, this is Nikita Abraham... Lois: And Lois Houston, signing off! 16:56 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Matteo Collina and Luca Maraschi join the podcast to talk about Platformatic. Learn about Platformatic's incredible 4.3 million dollar seed round, its robust features and modular approach, and how it addresses the unique challenges faced by devs and enterprises. Links https://platformatic.dev/docs/getting-started/quick-start-watt Matteo Collina: https://nodeland.dev https://x.com/matteocollina https://fosstodon.org/@mcollina https://github.com/mcollina https://www.linkedin.com/in/matteocollina https://www.youtube.com/@adventuresinnodeland Luca Maraschi: https://www.linkedin.com/in/lucamaraschi https://x.com/lucamaraschi We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guests: Luca Maraschi and Matteo Collina.
Copilot, .io domains, and patent trolls? Sounds like a recipe for an IT rollercoaster! In this episode, the Patoarchitekci serve up a mix of hot topics, from AI to Kubernetes. Is Copilot a programmer's friend or foe? How do custom metrics affect autoscaling in Kubernetes? Why is Cloudflare fighting patent trolls? Find the answers and dive into the world of DevOps, cloud computing, and AI. Want to stay on top of IT trends? Listen to this episode and join the discussion on the Patoarchitekci Discord. Who knows, maybe your code will start passing tests like Volkswagen's?
In this episode of Spring Office Hours, hosts Dan Vega and DeShaun Carter interview Chris Bono, a Spring team member who works on Spring Cloud Dataflow and Spring Pulsar. They discuss streaming data, comparing Apache Kafka and Apache Pulsar, and explore the features and use cases of Spring Cloud Stream applications. Chris provides insights into the architecture of streaming applications, explains key concepts, and highlights the benefits of using Spring's abstraction layers for working with messaging systems.

Show Notes:
Introduction to Chris Bono and his work on Spring Cloud Dataflow and Spring Pulsar
Comparison between Apache Kafka and Apache Pulsar
Overview of Spring Cloud Stream and its binders
Explanation of source, processor, and sink concepts in streaming applications
Introduction to the Spring Cloud Stream Applications project
Discussion on Change Data Capture (CDC) and its importance in streaming
Exploration of various sources, processors, and sinks available in Spring Cloud Stream Applications
Mention of KEDA (Kubernetes Event-driven Autoscaling) and its potential use with Spring Cloud applications
Upcoming features in the Spring Pulsar 1.2 release
Importance of community feedback and using GitHub discussions for feature requests and issue reporting

The podcast provides a comprehensive overview of streaming data concepts and how Spring projects can be used to build efficient streaming applications.
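The source, processor, and sink concepts Chris describes can be sketched language-agnostically; this plain-Python generator pipeline mirrors the shape of a Spring Cloud Stream application, but is not Spring's actual API:

```python
def source():
    """Source: emits events into the stream."""
    for i in range(5):
        yield {"id": i, "value": i * 10}

def processor(events):
    """Processor: transforms each event as it flows through."""
    for event in events:
        yield {**event, "value": event["value"] + 1}

def sink(events):
    """Sink: terminal step that materializes the stream."""
    return [event["value"] for event in events]

# Compose the stages the same way a binder wires source -> processor -> sink.
result = sink(processor(source()))  # [1, 11, 21, 31, 41]
```

In the real framework, the binder abstraction plays the role of the function composition here, connecting each stage to a broker such as Kafka or Pulsar instead of an in-process generator.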
Deep Dive into Serverless Databases with Neon: Featuring Heikki Linnakangas In this episode of the Geek Narrator podcast, host Kaivalya Apte is joined by Heikki Linnakangas, co-founder of Neon, to explore the innovative world of serverless databases. They discuss Neon's unique approach to separating compute and storage, the benefits of serverless architecture for modern applications, and dive into various compelling use cases. They also cover Neon's architectural features like branching, auto-scaling, and auto-suspend, making it a powerful tool for both developers and enterprises. Whether you're curious about multi-tenancy, fault tolerance, or developer productivity, this episode offers insightful knowledge about leveraging Neon's capabilities for your next project. 00:00 Introduction 00:53 The Birth of Neon: Why It Was Created 02:16 Understanding Serverless Databases 07:06 Neon's Architecture: Separation of Compute and Storage 09:59 Exploring Branching in Neon 18:21 Auto Scaling and Handling Spikes in Traffic 20:17 The Challenge of Multiple Writers in Distributed Systems 22:51 Auto Suspend: Cost-Effective Database Management 26:02 Optimizing Cold Start Times 27:14 Balancing Cost and Performance 28:52 Replication and Durability 30:32 Understanding the Storage Layer 34:02 Custom LSM Tree Implementation 36:21 Fault Tolerance and Failover 07:00 Developer Productivity and Use Cases 42:56 Migration and Tooling 48:35 Future Roadmap and User Experience 50:28 Conclusion and Final Thoughts Neon website: https://neon.tech/ Follow me on Linkedin and Twitter: https://www.linkedin.com/in/kaivalyaapte/ and https://twitter.com/thegeeknarrator If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet. 
Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning! #PostgreSQL #SQL #RDBMS #NEON
Highlights from this week's conversation include:
Jeff's Background and Transition to Independent Consulting (0:03)
Working at Keurig and Business Model Changes (2:16)
Tech Stack Evolution and SAP HANA Implementation (7:33)
Adoption of Tableau and Data Pipelines (11:21)
Supply Chain Analytics and Timeless Data Modeling (15:49)
Impact of Cloud Computing on Cost Optimization (18:35)
Challenges of Managing Variable Costs (20:59)
Democratization of Data and Cost Impact (23:52)
Quality of Fivetran Connectors (27:29)
Data Ingestion and Cost Awareness (29:44)
Virtual Warehouse Cost Management (31:22)
Auto-Scaling and Performance Optimization (33:09)
Cost-Saving Frameworks for Business Problems (38:19)
Dashboard Frameworks (40:53)
Increasing Dashboards (43:29)
Final thoughts and takeaways (46:28)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data. RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
edX ✨I build courses: https://insight.paiml.com/d69
In episode 52, we talk with Jorge Turrado Ferrero, SRE Expert and KEDA maintainer, and Zbyněk Roubalík, founder of Kedify and also a KEDA maintainer, to explore the fascinating world of KEDA (Kubernetes Event Driven Autoscaler). KEDA is a powerful tool that lets Kubernetes applications scale automatically based on different kinds of events. This includes not only traditional metrics such as CPU and memory usage, but also custom metrics and external event sources, making it highly flexible and adaptable. During our conversation we explore the ins and outs of KEDA, discussing its key features, benefits, and real-world applications. We also dig into the challenges and opportunities that come with event-based scaling in Kubernetes environments. Whether you're new to KEDA or want to deepen your understanding of this innovative technology, this episode offers valuable insights and practical knowledge straight from the experts themselves. Listen now to learn more about KEDA. ACC ICT: Specialist in IT continuity. Business-critical applications and data securely available, independent of third parties, anytime and anywhere. Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
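The core idea of KEDA discussed in this episode — sizing a workload from an external event metric, including scaling to zero when there is no work — can be sketched as a toy scaling function (a simplified model for illustration, not KEDA's actual algorithm):

```python
import math

def desired_replicas(queue_length, events_per_replica, max_replicas):
    """Event-driven scaling in the spirit of KEDA: derive the replica
    count from an external metric such as queue backlog rather than
    CPU, and scale to zero when there is no work. (Simplified model,
    not KEDA's actual algorithm.)"""
    if queue_length == 0:
        return 0  # no pending events: scale the deployment down to zero
    return min(max_replicas, math.ceil(queue_length / events_per_replica))
```

With a target of 5 events per replica and a cap of 10 replicas, a backlog of 12 events yields 3 replicas, while an empty queue releases all pods.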
Welcome to part four in the AWS Certification Exam Prep Mini-Series! Whether you're an aspiring cloud enthusiast or a seasoned developer looking to deepen your architectural acumen, you've landed in the perfect spot. In this six-part saga, we're demystifying the pivotal role of a Solutions Architect in the AWS cloud computing cosmos. In this fourth episode, Caroline and Dave chat again with Anya Derbakova, a Senior Startup Solutions Architect at AWS, known for weaving social media magic, and Ted Trentler, a Senior AWS Technical Instructor with a knack for simplifying the complex. Together, we will step into the realm of performance, where we untangle the complexities of designing high-performing architectures in the cloud. We dissect the essentials of high-performing storage solutions, dive deep into elastic compute services for scaling and cost efficiency, and unravel the intricacies of optimizing database solutions for unparalleled performance. Expect to uncover: • The spectrum of AWS storage services and their optimal use cases, from Amazon S3's versatility to the shared capabilities of Amazon EFS. • How to leverage Amazon EC2, Auto Scaling, and Load Balancing to create elastic compute solutions that adapt to your needs. • Insights into serverless computing paradigms with AWS Lambda and Fargate, highlighting the shift towards de-coupled architectures. • Strategies for selecting high-performing database solutions, including the transition from on-premise databases to AWS-managed services like RDS and the benefits of caching with Amazon ElastiCache. • A real-world scenario where we'll navigate the challenge of processing hundreds of thousands of online votes in minutes, testing your understanding and application of high-performing AWS architectures. Whether you're dealing with vast amounts of data, requiring robust compute power, or ensuring your architecture can handle peak loads without a hitch, we've got you covered! 
Anya on LinkedIn: https://www.linkedin.com/in/annadderbakova/ Ted on Twitter: https://twitter.com/ttrentler Ted on LinkedIn: https://www.linkedin.com/in/tedtrentler Caroline on Twitter: https://twitter.com/carolinegluck Caroline on LinkedIn: https://www.linkedin.com/in/cgluck/ Dave on Twitter: https://twitter.com/thedavedev Dave on LinkedIn: https://www.linkedin.com/in/davidisbitski AWS SAA Exam Guide - https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate_Exam-Guide.pdf Party Rock for Exam Study - https://partyrock.aws/u/tedtrent/KQtYIhbJb/Solutions-Architect-Study-Buddy All Things AWS Training - Links to Self-paced and Instructor Led https://aws.amazon.com/training/ AWS Skill Builder – Free CPE Course - https://explore.skillbuilder.aws/learn/course/134/aws-cloud-practitioner-essentials AWS Skill Builder – Learning Badges - https://explore.skillbuilder.aws/learn/public/learning_plan/view/1044/solutions-architect-knowledge-badge-readiness-path AWS Usergroup Communities: https://aws.amazon.com/developer/community/usergroups Subscribe: Spotify: https://open.spotify.com/show/7rQjgnBvuyr18K03tnEHBI Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-developers-podcast/id1574162669 Stitcher: https://www.stitcher.com/show/1065378 Pandora: https://www.pandora.com/podcast/aws-developers-podcast/PC:1001065378 TuneIn: https://tunein.com/podcasts/Technology-Podcasts/AWS-Developers-Podcast-p1461814/ Amazon Music: https://music.amazon.com/podcasts/f8bf7630-2521-4b40-be90-c46a9222c159/aws-developers-podcast Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zb3VuZGNsb3VkLmNvbS91c2Vycy9zb3VuZGNsb3VkOnVzZXJzOjk5NDM2MzU0OS9zb3VuZHMucnNz RSS Feed: https://feeds.soundcloud.com/users/soundcloud:users:994363549/sounds.rss
Whether it's GitOps, DevOps, Platform Engineering, Observability as a Service, or other terms, we all have our definitions, but rarely do we have a consensus on what those terms really mean! To get some clarity we invited Roberth Strand, CNCF Ambassador and Azure MVP, who has been passionately advocating for GitOps as it was initially defined and explained by Alexis Richardson of Weaveworks in his blog "What is GitOps Really". Tune in and learn about Desired State Management, Continuous Pull vs Pushing from Pipelines, how Progressive Delivery or Auto-Scaling fits into declaring everything in Git, what OpenGitOps is, and why this podcast will help you get your GitOps certification (coming soon). As we had a lot to talk about, we also touched on Platform Engineering and various other topics. Here are all the links we discussed:
Alexis' GitOps blog post: https://medium.com/weaveworks/what-is-gitops-really-e77329f23416
OpenGitOps: https://opengitops.dev/
Flux Image Reflector: https://fluxcd.io/flux/components/image/
CNCF White Paper on Platform Engineering: https://tag-app-delivery.cncf.io/whitepapers/platforms/
Platform Engineering Maturity Model: https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/
Platform Engineering Working Group as part of TAG App Delivery: https://tag-app-delivery.cncf.io/wgs/platforms/
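The desired-state management and continuous-pull model discussed in this episode boils down to a reconcile loop: diff the desired state pulled from Git against the actual state and converge them. A minimal sketch of one reconcile pass (real controllers such as Flux operate on Kubernetes objects, not dicts):

```python
def reconcile(desired, actual):
    """One pass of a GitOps-style reconcile loop: compare the desired
    state (as pulled from Git) with the actual cluster state and return
    the actions needed to converge them. Sketch only -- a real
    controller would apply these changes and loop continuously."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))  # prune drift not declared in Git
    return actions
```

The key contrast with push-based pipelines: nothing external imperatively applies changes; the loop continuously pulls the declared state and corrects any drift it finds.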
In this episode we explore new ways to gain insight into the usage of Entra licenses. We also cover updates on Azure Firewall, Key Vaults, and External Users in Teams. Entra License Utilization Insights: Discover how to gain insight into the usage of your Entra licenses. https://techcommunity.microsoft.com/t5/microsoft-entra-blog/introducing-microsoft-entra-license-utilization-insights/ba-p/3796393 Azure Firewall Parallel IP Groups: Learn more about support for Parallel IP Groups in Azure Firewall, now available in public preview. https://azure.microsoft.com/en-us/updates/azure-firewall-parallel-ip-group-update-support-is-now-in-public-preview/ Azure Firewall Autoscaling and Flow Trace Logs: Discover the general availability of Autoscaling and Flow Trace Logs in Azure Firewall. https://azure.microsoft.com/en-us/updates/azure-firewall-flow-trace-logs-and-autoscaling-based-on-number-of-connections-and-are-now-generally-available/ Retirement of RBAC Application Impersonation: Learn about the retirement of RBAC Application Impersonation in Exchange Online. https://techcommunity.microsoft.com/t5/exchange-team-blog/retirement-of-rbac-application-impersonation-in-exchange-online/ba-p/4062671 Key Vault Updates: Discover the general availability improvements in Azure Key Vault. https://azure.microsoft.com/en-us/updates/general-availability-improvements-in-azure-key-vault/ Microsoft Teams Labels for External Users: Explore the updates on Microsoft Teams labels for external users. https://app.cloudscout.one/evergreen-item/mc716008/ Microsoft Edge Data and Consent Changes: Learn more about the data and consent changes in Microsoft Edge. https://app.cloudscout.one/evergreen-item/mc715685/ Support for Azure VMs with Ultra or Premium SSD V2 in Azure Backup: Discover the support for Ultra Disk Backup and Premium SSD V2 Backup in Azure Backup.
https://azure.microsoft.com/en-us/updates/ultra-disk-backup-support-ga/ https://azure.microsoft.com/en-us/updates/premium-ssd-v2-backup-support-ga/
Covering the purpose and use cases of KEDA, best practices, common pitfalls, and wrapping up with a scale-from-zero KEDA demo.
Want to quickly provision your autonomous database? Then look no further than Oracle Autonomous Database Serverless, one of the two deployment choices offered by Oracle Autonomous Database. Autonomous Database Serverless delegates all operational decisions to Oracle, providing you with a completely autonomous experience. Join hosts Lois Houston and Nikita Abraham, along with Oracle Database experts, as they discuss how serverless infrastructure eliminates the need to configure any hardware or install any software because Autonomous Database handles provisioning the database, backing it up, patching and upgrading it, and growing or shrinking it for you. Oracle Autonomous Database Episode: https://oracleuniversitypodcast.libsyn.com/oracle-autonomous-database Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Rajeev Grover, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. 00:26 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! Welcome back to a new season of the Oracle University Podcast. This time, our focus is going to be on Oracle Autonomous Database. We've got a jam-packed season planned with some very special guests joining us. 00:52 Lois: If you're a regular listener of the podcast, you'll remember that we'd spoken a bit about Autonomous Database last year. 
That was a really good introductory episode so if you missed it, you might want to check it out. Nikita: Yeah, we'll post a link to the episode in today's show notes so you can find it easily. 01:07 Lois: Right, Niki. So, for today's episode, we wanted to focus on Autonomous Database on Serverless Infrastructure and we reached out to three experts in the field: Hannah Nguyen, Sean Stacey, and Kay Malcolm. Hannah is an Associate Cloud Engineer, Sean, a Director of Platform Technology Solutions, and Kay, who's been on the podcast before, is Senior Director of Database Product Management. For this episode, we'll be sharing portions of our conversations with them. So, let's get started. 01:38 Nikita: Hi Hannah! How does Oracle Cloud handle the process of provisioning an Autonomous Database? Hannah: The Oracle Cloud automates the process of provisioning an Autonomous Database, and it automatically provisions for you a highly scalable, highly secure, and a highly available database very simply out of the box. 01:56 Lois: Hannah, what are the components and architecture involved when provisioning an Autonomous Database in Oracle Cloud? Hannah: Provisioning the database involves very few steps. But it's important to understand the components that are part of the provisioned environment. When provisioning a database, the number of CPUs in increments of 1 for serverless, storage in increments of 1 terabyte, and backup are automatically provisioned and enabled in the database. In the background, an Oracle 19c pluggable database is being added to the container database that manages all the user's Autonomous Databases. Because this Autonomous Database runs on Exadata systems, Real Application Clusters is also provisioned in the background to support the on-demand CPU scalability of the service. This is transparent to the user and administrator of the service. But be aware it is there. 
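Hannah's provisioning increments — CPUs in steps of 1 for serverless, storage in steps of 1 terabyte, with backup enabled automatically — can be expressed as a small validation helper (a hypothetical sketch for illustration, not the OCI SDK):

```python
def provision_request(ocpus: int, storage_tb: int) -> dict:
    """Validate a serverless ADB sizing request against the increments
    described in the episode: CPUs in increments of 1, storage in
    increments of 1 terabyte, and backup provisioned automatically.
    (Hypothetical helper, not the OCI SDK.)"""
    if not (isinstance(ocpus, int) and ocpus >= 1):
        raise ValueError("CPUs are provisioned in whole increments of 1")
    if not (isinstance(storage_tb, int) and storage_tb >= 1):
        raise ValueError("storage is provisioned in whole terabytes")
    return {"cpu_count": ocpus, "storage_tb": storage_tb, "auto_backup": True}
```

The pluggable database, RAC configuration, and Exadata placement Hannah mentions all happen behind this simple surface, which is why the request needs so few inputs.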
02:49 Nikita: Ok…So, what sort of flexibility does the Autonomous Database provide when it comes to managing resource usage and costs, you know… especially in terms of starting, stopping, and scaling instances? Hannah: The Autonomous Database allows you to start your instance very rapidly on demand. It also allows you to stop your instance on demand as well to conserve resources and to pause billing. Do be aware that when you do pause billing, you will not be charged for any CPU cycles because your instance will be stopped. However, you'll still be incurring charges for your monthly billing for your storage. In addition to allowing you to start and stop your instance on demand, it's also possible to scale your database instance on demand as well. All of this can be done very easily using the Database Cloud Console. 03:36 Lois: What about scaling in the Autonomous Database? Hannah: So you can scale up your OCPUs without touching your storage and scale it back down, and you can do the same with your storage. In addition to that, you can also set up autoscaling. So the database, whenever it detects the need, will automatically scale up to three times the base level number of OCPUs that you have allocated or provisioned for the Autonomous Database. 04:00 Nikita: Is autoscaling available for all tiers? Hannah: Autoscaling is not available for an always free database, but it is enabled by default for other tiered environments. Changing the setting does not require downtime. So this can also be set dynamically. One of the advantages of autoscaling is cost because you're billed based on the average number of OCPUs consumed during an hour. 04:23 Lois: Thanks, Hannah! Now, let's bring Sean into the conversation. Hey Sean, I want to talk about moving an autonomous database resource. When or why would I need to move an autonomous database resource from one compartment to another? 
Sean: There may be a business requirement where you need to move an autonomous database resource, serverless resource, from one compartment to another. Perhaps, there's a different subnet that you would like to move that autonomous database to, or perhaps there's some business applications that are within or accessible or available in that other compartment that you wish to move your autonomous database to take advantage of. 04:58 Nikita: And how simple is this process of moving an autonomous database from one compartment to another? What happens to the backups during this transition? Sean: The way you can do this is simply to take an autonomous database and move it from compartment A to compartment B. And when you do so, the backups, or the automatic backups that are associated with that autonomous database, will be moved with that autonomous database as well. 05:21 Lois: Is there anything that I need to keep in mind when I'm moving an autonomous database between compartments? Sean: A couple of things to be aware of when doing this is, first of all, you must have the appropriate privileges in that compartment in order to move that autonomous database both from the source compartment to the target compartment. In addition to that, once the autonomous database is moved to this new compartment, any policies or anything that's defined in that compartment to govern the authorization and privileges of that said user in that compartment will be applied immediately to that new autonomous database that has been moved into that new compartment. 05:59 Nikita: Sean, I want to ask you about cloning in Autonomous Database. What are the different types of clones that can be created? Sean: It's possible to create a new Autonomous Database as a clone of an existing Autonomous Database. This can be done as a full copy of that existing Autonomous Database, or it can be done as a metadata copy, where the objects and tables are cloned, but they are empty. 
So there are no rows in the tables. And this clone can be taken from a live running Autonomous Database or even from a backup. So you can take a backup and clone that to a completely new database. 06:35 Lois: But why would you clone in the first place? What are the benefits of this? Sean: When creating this clone, it can be created in a completely different compartment from where the source Autonomous Database was originally located. So it's a nice way of moving one database to another compartment to allow developers or another community of users to have access to that environment. 06:58 Nikita: I know that along with having a full clone, you can also have a refreshable clone. Can you tell us more about that? Who is responsible for this? Sean: It's possible to create a refreshable clone from an Autonomous Database. And this is one that is kept in sync with that source database for up to a set number of days. The task of keeping that refreshable clone in sync with that source database rests upon the shoulders of the administrator. The administrator is the person who is responsible for performing that sync operation. Now, actually performing the operation is very simple; it's point and click, and it's an automated process from the database console. And also be aware that refreshable clones can trail the source Autonomous Database by up to seven days. After that period of time, if the refreshable clone has not been refreshed or kept in sync with that source database, it will become a standalone, read-only copy of that original source database. 08:00 Nikita: Ok Sean, so if you had to give us the key takeaways on cloning an Autonomous Database, what would they be? Sean: It's very easy, and there's a lot of flexibility, when it comes to cloning an Autonomous Database. There are different models: you can clone from a live running database instance, with zero impact on your workload, or from a backup.
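The seven-day rule Sean describes for refreshable clones can be captured in a small, purely illustrative helper (the dates here are hypothetical, and the status names are invented for the sketch):

```python
# Illustrative sketch of the refreshable-clone rule: a clone may trail
# its source for up to 7 days; past that, if it has not been refreshed,
# it becomes a standalone, read-only copy.
from datetime import datetime, timedelta

MAX_TRAIL = timedelta(days=7)

def clone_status(last_refresh, now):
    if now - last_refresh <= MAX_TRAIL:
        return "REFRESHABLE"          # still inside the sync window
    return "STANDALONE_READ_ONLY"     # disconnected from the source

now = datetime(2024, 1, 10)
print(clone_status(datetime(2024, 1, 5), now))  # -> REFRESHABLE
print(clone_status(datetime(2024, 1, 1), now))  # -> STANDALONE_READ_ONLY
```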
It can be a full copy, or it can be a metadata copy, as well as a refreshable, read-only clone of a source database. 08:33 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You'll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all of which is available free to subscribers. So, get going! Pick a course of your choice, get certified, join the Oracle University Learning Community, and network with your peers. If you are already an Oracle MyLearn user, go to MyLearn to begin your journey. If you have not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started. 09:12 Nikita: Welcome back! Thank you, Sean, and hi Kay! I want to ask you about events and notifications in Autonomous Database. Where do they really come in handy? Kay: Events can be used for a variety of notifications, including admin password expiration, ADB services going down, and wallet expiration warnings. There's a service called the Notifications service, and it's part of OCI. This service provides you with the ability to broadcast messages to distributed components using a publish-and-subscribe model. These notifications can be used to notify you when event rules or alarms are triggered, or simply to directly publish a message. In addition to this, there's also something called a topic. This is a communication channel for sending messages to subscribers in the topic. You can manage these topics and their subscriptions really easily. It's not hard to do at all. 10:14 Lois: Kay, I want to ask you about backing up Autonomous Databases. How does Autonomous Database handle backups? Kay: Autonomous Database automatically backs up your database for you. The retention period for backups is 60 days. You can restore and recover your database to any point in time during this retention period.
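Kay's 60-day retention window can likewise be sketched as a simple check. Again, this is an illustrative model, not an Oracle API, and the dates are hypothetical:

```python
# Illustrative sketch of the backup-retention rule: automatic backups
# are kept for 60 days, and point-in-time recovery is only possible
# for a target time inside that window.
from datetime import datetime, timedelta

RETENTION = timedelta(days=60)

def can_restore_to(target_time, now):
    """True if target_time falls within the 60-day retention window."""
    return now - RETENTION <= target_time <= now

now = datetime(2024, 6, 1)
print(can_restore_to(datetime(2024, 5, 1), now))  # -> True  (31 days back)
print(can_restore_to(datetime(2024, 3, 1), now))  # -> False (92 days back)
```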
You can initiate recovery for your Autonomous Database by using the cloud console or an API call. Autonomous Database automatically restores and recovers your database to the point in time that you specify. In addition to a point-in-time recovery, we can also perform a restore from a specific backup set. 10:59 Lois: Kay, you spoke about automatic backups, but what about manual backups? Kay: You can do manual backups using the cloud console, for example, if you want to take a backup, say, before a major change, to make restoring and recovery faster. These manual backups are put in your cloud object storage bucket. 11:20 Nikita: Are there any special instructions that we need to follow when configuring a manual backup? Kay: The manual backup configuration tasks are a one-time operation. Once this is configured, you can go ahead and trigger your manual backup any time you wish after that. When creating the object storage bucket for the manual backups, it is really important-- so I don't want you to forget-- that the name format for the bucket and the object storage follows this naming convention. It should be backup underscore database name. And it's not the display name here when I say database name. 12:00 Kay: In addition to that, the object name has to be all lowercase. So three rules. Backup underscore database name, and the specific database name is not the display name. It has to be in lowercase. Once you've created your object storage bucket to meet these rules, you then go ahead and set a database property, default_backup_bucket. This points to the object storage URL, using the Swift protocol. Once you've got your object storage bucket mapped to that object storage location, you then need to go ahead and create a database credential inside your database. You may have already had this in place for other purposes, like maybe you were loading data, you were using Data Pump, et cetera.
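Kay's three naming rules for the manual-backup bucket lend themselves to a small validator. This is an illustrative sketch, not part of any Oracle tooling, and the database name SALESDB is invented:

```python
# Illustrative check of the manual-backup bucket naming rules above:
# the bucket must be named "backup_" plus the database name (the actual
# database name, not the display name), and it must be all lowercase.

def manual_backup_bucket_name(db_name):
    """Build the expected bucket name for a given database name."""
    return f"backup_{db_name.lower()}"

def is_valid_bucket_name(bucket, db_name):
    return bucket == f"backup_{db_name}" and bucket.islower()

print(manual_backup_bucket_name("SALESDB"))               # -> backup_salesdb
print(is_valid_bucket_name("backup_salesdb", "salesdb"))  # -> True
print(is_valid_bucket_name("Backup_SalesDB", "salesdb"))  # -> False
```

This snippet only checks the naming rules; the `default_backup_bucket` property and the database credential Kay mentions are separate, real configuration steps.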
If you don't already have one, you would need to create a credential specifically for your manual backups. Once you've done so, you can then go ahead and set your property to that default credential that you created. So once you follow these steps as I pointed out, you only have to do it one time. Once it's configured, you can go ahead and use it from now on for your manual backups. 13:21 Lois: Kay, the last topic I want to talk about before we let you go is Autonomous Data Guard. Can you tell us about it? Kay: Autonomous Data Guard monitors the primary database, in other words, the database that you're using right now. Lois: So, if ADB goes down… Kay: Then the standby instance will automatically become the primary instance. There's no manual intervention required. So failover from the primary database to that standby database I mentioned is completely seamless, and it doesn't require any additional wallets to be downloaded or any new URLs to access APEX or Oracle Machine Learning. Even Oracle REST Data Services. All the URLs and all the wallets, everything that you need to authenticate and to connect to your database, they all remain the same for you if you have to fail over to your standby database. 14:19 Lois: And what happens after a failover occurs? Kay: After performing a failover, a new standby for your primary will automatically be provisioned. So in other words, in performing a failover, your standby becomes your new primary, and a new standby is then created for that new primary. I know, it's kind of interesting. So currently, the standby database is created in the same region as the primary database. For better resilience, if your primary database is provisioned in AD1, or Availability Domain 1, your standby would be provisioned in a different availability domain. 15:10 Nikita: But there's also the possibility of manual failover, right? What are the differences between automatic and manual failover scenarios? When would you recommend using each?
Kay: So in the case of the automatic failover scenario following a disastrous situation, if the primary ADB becomes completely unavailable, the switchover button will turn into a failover button. Because remember, this is a disaster. Automatic failover is automatically triggered. There's no user action required. So if you're asleep and something happens, you're protected. But automatic failover is allowed to succeed only when no data loss will occur. 15:57 Kay: For manual failover scenarios, in the rare case when an automatic failover is unsuccessful, the switchover button will become a failover button and the user can trigger a manual failover should they wish to do so. The system automatically recovers as much data as possible, minimizing any potential data loss. But you can see anywhere from a few seconds to a few minutes of data loss. Now, you should only perform a manual failover in a true disaster scenario, accepting that a few minutes of potential data loss could occur, to ensure that your database is back online as soon as possible. 16:44 Lois: Thank you so much, Kay. This conversation has been so educational for us. And thank you once again to Hannah and Sean. To learn more about Autonomous Database, head over to mylearn.oracle.com and search for the Oracle Autonomous Database Administration Workshop. Nikita: Thanks for joining us today. In our next episode, we will discuss Autonomous Database on Dedicated Infrastructure. Until then, this is Nikita Abraham… Lois: …and Lois Houston signing off. 17:12 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
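As a closing illustration, the automatic-versus-manual failover logic Kay walked through can be summarized as a small decision function. This is a schematic sketch of the rules as described in the episode, not Oracle code; the state names are invented:

```python
# Schematic of the Autonomous Data Guard failover rules discussed above:
# automatic failover triggers on its own, and only succeeds when zero
# data loss is guaranteed; otherwise the user may trigger a manual
# failover, accepting a few seconds to minutes of potential data loss.

def failover_decision(primary_available, zero_data_loss_possible):
    if primary_available:
        return "NO_ACTION"
    if zero_data_loss_possible:
        return "AUTOMATIC_FAILOVER"       # no user action required
    return "MANUAL_FAILOVER_OFFERED"      # user triggers it deliberately

print(failover_decision(True, True))    # -> NO_ACTION
print(failover_decision(False, True))   # -> AUTOMATIC_FAILOVER
print(failover_decision(False, False))  # -> MANUAL_FAILOVER_OFFERED
```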
In this sponsored episode of the Kubernetes Unpacked Podcast, we dive into the importance of cost and resource optimization with CAST AI. The truth is, it's not just about saving money. The goal is ensuring that your apps are performing the way they should be. This not only saves customer frustration, but engineering frustration. We... Read more »
In this sponsored episode of Kubernetes Unpacked, we dive into the importance of cost and resource optimization with CAST AI. The truth is, it's not just about saving money. The goal is ensuring that your apps are performing the way they should. This saves both customer and engineering frustration. We also explore from an engineering perspective how CAST AI uses AI in the background and how AI teams are building integrations into the product. The post KU040: Kubernetes Autoscaling Magic – Cost Control In Gen AI And LLMs With CAST AI (Sponsored) appeared first on Packet Pushers.
In this sponsored episode of the Kubernetes Unpacked Podcast, we dive into the importance of cost and resource optimization with CAST AI. The truth is, it's not just about saving money. The goal is ensuring that your apps are performing the way they should be. This not only saves customer frustration, but engineering frustration. We... Read more »
In this sponsored episode of Kubernetes Unpacked, we dive into the importance of cost and resource optimization with CAST AI. The truth is, it's not just about saving money. The goal is ensuring that your apps are performing the way they should. This saves both customer and engineering frustration. We also explore from an engineering perspective how CAST AI uses AI in the background and how AI teams are building integrations into the product. The post KU040: Kubernetes Autoscaling Magic – Cost Control In Gen AI And LLMs With CAST AI (Sponsored) appeared first on Packet Pushers.
In this sponsored episode of the Kubernetes Unpacked Podcast, we dive into the importance of cost and resource optimization with CAST AI. The truth is, it's not just about saving money. The goal is ensuring that your apps are performing the way they should be. This not only saves customer frustration, but engineering frustration. We... Read more »
In this episode, Adam McCrea joins us to talk about building and growing Judoscale (previously, Rails Autoscale) an autoscaler originally released in the Heroku Marketplace, and now available for Render, with other platforms to come. Adam shares about building a product on the side, launching on the Heroku platform, the process of a rebrand, and making the transition to full-time indie business owner.Adam McCrea:TwitterJudoscale:WebsiteMentioned in the episode:HerokuRenderFly.ioNate BerkopecTinySeed
Configure Azure Virtual Desktop infrastructure to run efficiently. Drive up utilization with scaling plans. Scale session host VMs in a host pool up or down automatically, without paying for idle resources. Ensure your VM images are up-to-date with required software, configurations, and Windows updates. Gain an end-to-end view of your services' performance by utilizing Azure Virtual Desktop Insights at Scale, which provides comprehensive visibility into how your services are running, and access a consolidated view of all your host pools and subscriptions for easy monitoring. Azure Expert, Matt McSpirit, walks through part four in our series on Azure Virtual Desktop. ► QUICK LINKS: 00:00 - Introduction 00:23 - Scaling plans 03:29 - Personal scaling plans 04:30 - Up-to-date VM images 07:13 - Azure Virtual Desktop Insights at Scale 09:02 - Session History chart 10:06 - Wrap up ► Link References: Check out our complete playlist at https://aka.ms/AVDMechanicsSeries ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Bret and Matt welcome Jake Warner back to the show to talk about LowOps. What does LowOps mean? What can Cycle offer us as an alternative to Swarm and Kubernetes?Jake Warner is the CEO and founder of Cycle.io. And I had him on the show a few years ago when I first heard about Cycle and I wanted to get an update on their platform offering. On this show we generally talk about Docker and Kubernetes but I'm also interested in any container tooling that can help us deploy and manage container based applications. Cycles' platform is an alternative container orchestrator as a service. In fact, they go beyond what you would provide normally with a container orchestrator and they provide OS updates, networking, the container runtime, and the orchestrator all in a single offering as a way to reduce the complexity that we're typically faced with when we're deploying Kubernetes. While I'm a fan of Docker swarm due to its simplicity, it still requires you to manage the OS underneath, to configure networking sometimes, and the feature releases have slowed down in recent years. But I still have a soft spot for those solutions that are removing the grunt work of OS and update management and helping smaller teams get more work done. I think Cycle has the potential to do that for a lot of teams that aren't all in on the Kubernetes way, but still value the container abstraction as the way to deploy software to servers.Live recording of the complete show from May 18, 2023 is on YouTube (Ep. #217). 
Includes demos.★Topics★Cycle.io website@cycleplatform on YouTube Support this show and get exclusive benefits on Patreon, YouTube, or bretfisher.com!★Join my Community★Get on the waitlist for my next live course on CI automation and gitops deploymentsBest coupons for my Docker and Kubernetes coursesChat with us and fellow students on our Discord Server DevOps FansGrab some merch at Bret's Loot BoxHomepage bretfisher.comCreators & Guests Bret Fisher - Host Cristi Cotovan - Editor Beth Fisher - Producer Matt Williams - Host Jake Warner @ Cycle.io - Guest (00:00) - Intro (02:25) - Introducing the guests (03:17) - What is Cycle? (12:33) - Deploying and staying up to date with Cycle (14:21) - Cycle's own OS and updates (17:12) - Core OS vs Cycle (22:10) - Use multiple providers with Cycle (22:52) - Run Cycle anywhere with infrastructure abstraction layer (24:33) - No latency requirement for the nodes (28:28) - DNS for container-to-container resolution (29:54) - Migration from one cloud provider to another? (31:17) - Roll back and telemetry (32:48) - Full-featured API (37:12) - Cycle data volumes (38:35) - Backups (40:24) - Autoscaling (43:00) - Getting started (44:40) - Control plane and self-hosting (44:58) - Question about moving to Reno (45:59) - Built from revenue and angels; no VC funding
What if you could significantly reduce the amount of time spent managing your database while still being confident that it is secure? Well, you can! With Oracle Autonomous Database (ADB), you can enjoy the highest levels of performance, scalability, and availability without the complexity of operating and securing your database. In this episode, Lois Houston and Nikita Abraham speak to William Masdon about how you can use the features of ADB to securely integrate, store, and analyze your data. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Twitter: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Ranbir Singh, and the OU Studio Team for helping us create this episode.
In deze aflevering praten Jan en Ronald je bij over een aantal belangrijke functionaliteiten en features binnen Kubernetes. We beginnen met Resource Planning; hoe weet Kubernetes of er genoeg resources beschikbaar zijn wanneer een pod op de node wordt gezet? Vervolgens wordt Horizontal Pod Autoscaling toegelicht; het automatisch opstarten van pods bij een bepaalde load. De heren gaan ook in op Node Affinity; pods over verschillende regio's inzetten voor een gemixt cluster.Handige links:https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
Change is the only constant, and for businesses these days, being able to match demand at any given time is of crucial importance. In this episode, Lois Houston and Nikita Abraham, along with special guest Rohit Rahi, look at how Oracle Cloud Infrastructure's scaling capabilities allow organizations to add and remove resources as needed, both manually and automatically, enabling them to fulfill their goals. They also discuss the OS Management service and how it can be used to manage and monitor updates and patches in operating system environments. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community Twitter: https://twitter.com/Oracle_Edu LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Special thanks to Arijit Ghosh, Kiran BR, David Wright, the OU Podcast Team, and the OU Studio Team for helping us create this episode.
O que é AWS EC2? Pra que serve e como extrair o máximo da computação em nuvem?Vamos ver todos os recursos que a computação em nuvem da AWS oferece para seu negócio ou projeto, e como fazer para economizar muito com as opções de instância reservada (saving plans) e spot instance.Como usar o EC2 com seus recursos de Elastic Load Balancer, Auto-Scaling, Elastic IP, Security Group, e muito mais.O curso AWS 2.0 está sendo preparado com muito cuidado e dedicação para atender às principais demandas de mercado para profissionais e empreendedores de tecnologia.Inscreva-se agora para aproveitar todas as vantagens do pré-lançamento:https://www.uminventorqualquer.com.br/curso-aws/Inscreva-se no Canal Wesley Milan para acompanhar os Reviews de serviços AWS:https://bit.ly/3LqiYwgReview do EC2: https://youtu.be/bGwZqR2dTOwInscreva-se no Canal Wesley Milan em Inglês e recomende a seus amigos gringos:https://bit.ly/3LqFjtAMe siga no Instagram: https://bit.ly/3tfzAj0LinkedIn: https://www.linkedin.com/in/wesleymilan/Podcast: https://bit.ly/3qa5JH1
Two brothers discussing all things AWS every week. Hosted by Andreas and Michael Wittig presented by cloudonaut.
Links: Ben Kehoe has left iRobot. And where's he going next? Presumably to re:Invent! I am too, with my re:Quinnvent nonsense Amazon Athena announces Query Result Reuse to accelerate queries Amazon EC2 enables you to opt out of directly shared Amazon Machine Images Amazon EC2 placement groups can now be shared across multiple AWS accounts Amazon EC2 now supports specifying list of instance types to use in attribute-based instance type selection for Auto Scaling groups, EC2 Fleet, and Spot Fleet Amazon Lightsail announces support for domain registration and DNS autoconfiguration Amazon RDS now supports new General Purpose gp3 storage volumes Announcing recurring custom line items for AWS Billing Conductor AWS Lambda announces Telemetry API, further enriching monitoring and observability capabilities of Lambda Extensions AWS Cost Explorer's New Look and Common Use Cases A New AWS Region Opens in Switzerland - eu-central-2 is now available. Introducing AWS Resource Explorer – Quickly Find Resources in Your AWS Account Overview of building resilient applications with Amazon DynamoDB global tables Publish Amazon DevOps Guru Insights to Slack Channel Uncompressed Media over IP on AWS: Read the whitepaper Enable cross-account queries on AWS CloudTrail lake using delegated administration from AWS Organizations NASA and ASDI announce no-cost access to important climate dataset on the AWS Cloud
In this episode we're talking about all things sizing - clusters, sharing, indexing and we're joined by Jan Srniček of Global Logic. We first came across Jan via his talk from MongoDB world, where he spoke about the journey he and his team took in understanding how to reduce usage & cost - all the while keeping performance and responsiveness high. In this episode, Jan talks about his journey with MongoDB, moving from Cosmo DB to MongoDB. Initially they stood up their own on-prem MongoDB database to learn more, but then soon realised that moving to Atlas was key, particularly as they could host on Azure.For Global Logic's client, Catalina, Jan manages everything from small clusters with a few hundred records all the way up to a system of 7 clusters with over 5Billion data records!So he know's all about scaling - and teaches us some lessons he's learnt along the way. He illustrates that if you only scale one large cluster (e.g with Autoscaling on), your database never gets a break! However, if you have smaller clusters, all autoscaling, along with predictable traffic patterns visualised by using Atlas metrics UI and performance advisor, you can analyse usage and re-organise structure appropriately.Ultimately, understanding the workloads in your application and dividing those across clusters all working together is key to application performance.Ján Srniček - https://www.linkedin.com/in/j%C3%A1n-srni%C4%8Dek-4a3b826bGlobal Logic - https://www.globallogic.com/Catalina - https://www.catalina.com/ MongoDB - https://www.mongodb.com/
Com alguns de seus companheiros de Getup, João Brito comenta as mudanças mais relevantes da versão 1.25 do Kubernetes. Algumas delas são a remoção definitiva do PSP (PodSecurityPolicy), a depreciação do suporte para GlusterFS e a morte do Autoscaling v.2 beta 1. Tem também a entrada, ainda em estágio alfa, do recurso de namespace de Linux (não o do Kubernetes, heim!?) e o avanço para estável das features Pod Security Admission e Local Ephemeral Storage Capacity Isolation.Outra novidade é que o PDB (Pod Disruption Budget) vai para versão default, por isso recomendamos que mantenham os deploys produtivos com pelo menos duas réplicas para não ter dor de cabeça na hora de uma atualização, por exemplo.Em meio às observações da nova versão, a turma falou sobre os prós e contras de trabalhar com um cluster gerenciado vs um cluster no On-Premise; e se tem alguma future gate que faz falta num cluster de produção.LINKS do que foi comentado no programa:Artigo da Karol Valencia da Aqua Security: https://blog.aquasec.com/kubernetes-version-1.25KubiLab - Vídeo tutorial do Adonai Costa sobre o KEDA: https://gtup.me/KubilabKeda RECOMENDAÇÕES dos participantes:One Punch Man (livro de mangá)Attack on Titan (série de mangá)The Sandman (filme) Narradores de Javé (filme)Five Days at Memorial (série na Apple TV+)CONVITE! Estamos perto do Kubicast #100 e vamos comemorar esse marco de um jeito muito especial! No formato “ASK ME ANYTHING”, a audiência vai poder tirar todas as suas dúvidas sobre Kubernetes e afins! Inscreva-se para participar: https://getup.io/participe-do-kubicast-100. O evento acontece no dia 15/9 às 19h no Zoom.SOBRE O KUBICASTO Kubicast é uma produção da Getup, especialista em Kubernetes. Todos os episódios do podcast estão no site da Getup e nas principais plataformas de áudio digital. Alguns deles estão registrados no YT. #DevOps #Kubernetes #Containers #Kubicast
Bret is joined by Nirmal Mehta, a Principal Specialist Solution Architect at AWS, and a Docker Captain, to discuss Karpenter, an autoscaling solution launched by AWS in 2021. Karpenter simplifies Kubernetes infrastructure by automating node scaling up and down, giving you "the right nodes at the right time."Autoscaling, particularly for Kubernetes, can be quite a complex project when you first start. Bret and Nirmal discuss how Karpenter works, how it can help or complement your existing setup, and how autoscaling generally works.Streamed live on YouTube on June 9, 2022.Unedited live recording of this show on YouTube (Ep #173). Includes demos.★Topics★Starship Shell PromptBret's favorite shell setupKarpenterKarpenter release blogK8s Scheduling ConceptsOther types of autoscalers:Horizontal Pod AutoscalerVertical Pod AutoscalerCluster Autoscaler★Nirmal Mehta★Nirmal on TwitterNirmal on LinkedIn★Join my Community★Best coupons for my Docker and Kubernetes coursesChat with us on our Discord Server Vital DevOpsHomepage bretfisher.com ★ Support this podcast on Patreon ★
We chat with Nir Mashkowski, Principal PM Manager, about the evolution of PaaS since its inception 10+ years ago, some of the usage patterns that he has observed, the benefits and reasoning of using PaaS services, and the future of where Azure PaaS services is headed. Media File: https://azpodcast.blob.core.windows.net/episodes/Episode431.mp3 YouTube: https://youtu.be/b3zUtaCnhTE Resources: Overview - Azure App Service | Microsoft Docs Azure Functions Overview | Microsoft Docs What is Azure Static Web Apps? | Microsoft Docs Azure Container Apps documentation | Microsoft Docs Dapr - Distributed Application Runtime KEDA | Kubernetes Event-driven Autoscaling
James and I talked to Michael Pytel, co-founder and CTO of Fulfilld, a cloud SaaS warehouse management startup. As a fellow SAP nerd, we have shared background for commiseration. Michael has done some amazing things with the technology powering Fulfilld, and he and the rest of his team have given a lot of thought to matching that technology up to business needs for several warehouse worker personas. It was a blast geeking out.HighlightsTechie with CNC machining in his blood — grandfathers, uncle, fatherCo-founded NIMBLGeeks out about digital twins — even got the chance to do a digital twin of 49ers stadiumThe new face of breadMastered breadmaking during COVIDOne way to look at goals: “how do we become the Uber and Waze of the warehouse?”Why SaaS? IT departments are hampered by operations and maintenance. It's HARD to get that innovation done with those burdens.It's freeing to come out of the limitations of the ERP systems to really build what Fulfilld wantedIt's not vapor, customers are signed.The cost to develop in new, open technologies is much lower than the traditional ERP realm.Autoscaling infrastructure allows them to spend much less on infra and much more on engineeringIt all boils down to: cost of license, cost of maintenance. Everything else is details.Michael sees augmented reality (AR) as a natural next step for the future of the warehouse.Money QuotesMichaelWe walk into the warehouse, we see a bunch of mobile devices on the charger…no one using them.Building open, building flexible. That's the key.If enough employees are trained on Fulfilld, could we create an on-demand workforce for warehouses?Paul[Choosing your own stack] It's a breath of fresh air!
https://go.dok.community/slack https://dok.community/ From the DoK Day EU 2022 (https://youtu.be/Xi-h4XNd5tE) Managing stateful workloads in a containerized environment has always been a concern. However, as Kubernetes developed, the whole community worked hard to bring stateful workloads to meet the needs of their enterprise users. As a result, Kubernetes introduced StatefulSets which supports stateful workloads since Kubernetes version 1.9. Users of Kubernetes now can use stateful applications like databases, AI workloads, and big data. Kubernetes support for stateful workloads comes in the form of StatefulSets. And as we all know, Kubernetes lets us automate many administration tasks along with provisioning and scaling. Rather than manually allocating resources, we can generate automated procedures that save time, it lets us respond faster when peaks in demand, and reduce costs by scaling this down when resources are not required. So, it's really important to capture autoscaling in terms of stateful workloads in Kubernetes for better fault tolerance, high availability, and cost management. There are still a few challenges regarding Autoscaling Stateful Workloads in Kubernetes. They are related to horizontal/vertical scaling and automating the scaling process. In Horizontal Scaling when we are scaling up the workloads, we need to make sure that the infant workloads join the existing workloads in terms of collaboration, integration, load-sharing, etc. And make sure that no data is lost, also the ongoing tasks have to be completed/transferred/aborted while scaling down the workloads. If the workloads are in primary-standby architecture, we need to make sure that scale-up or scale-down happens on standby workloads first, so that the failovers are minimized. While scaling down some workloads, we also need to ensure that the targeted workloads are excluded from the voting to prevent quorum loss. 
Similarly, while scaling up some workloads, we need to ensure that new workloads join the voting. When new resources are required, we have to make the tradeoff between vertical scaling and horizontal scaling. And when it comes to Automation, we have to determine how to generate resource (CPU/memory) recommendations for the workloads. Also, when to trigger the autoscaling? Let's say, a group of workloads may need to be autoscaled together. For example, In sharded databases, each shard is represented by one StatefulSet. But, all the shards are treated similarly by the database operator. Each shard may have its own recommendations. So, we have to find a way to scale them with the same recommendations. Also, we need to determine what happens when an autoscaling operation fails and what will happen to the future recommendations after the failure? There can be some workloads that may need a managed restart. For example, in a database, secondary nodes may need to be restarted before the primary. In this case, how to do a managed restart while autoscaling? Also, we need to figure out what happens when the workloads are going through maintenance? We will try to answer some of those questions throughout our session. ----- Fahim is a Software Engineer, working at AppsCode Inc. He has been involved with Kubernetes project since 2018 and is very enthusiastic about Kubernetes and open source in general. ----- MD Kamol Hasan is a Professional Software Developer with expertise in Kubernetes and backend development in Go. One of the lead engineers of KubeDB and KubeVault projects. Competitive contest programmer participated in different national and international programming contests including ACM ICPC, NCPC, etc
Kaslin Fields and Mark Mirchandani learn how GKE manages its releases and how customers can take advantage of GKE release channels for smooth transitions. Guests Abdelfettah Sghiouar and Kobi Magnezi of the Google Cloud GKE team are here to explain. With a Kubernetes release every four months or so, Kobi tells us that two pieces have to be managed with each release: the control plane and the nodes. Both are managed for the customer in GKE. The addition of release channels allows flexibility in release updating, so customers can adjust to their specific project needs. Each channel offers a different mix of update speed and stability, and clients choose the channel that's right for their project. The idea of release channels isn't a new one, Kobi explains: Google's frequent releases keep things secure and running well, and other Google offerings like Chrome already let users choose from an assortment of channels. Our guests talk us through the process of releasing through channels and how each release marinates in the Rapid channel until the version is proven supported and secure before being pushed to customers through other channels. We hear how release channels differ from no-channel releases, the benefits of specialized channels, and recommendations on which channels to use with different development environments. Abdel describes real-world use cases for the Rapid, Regular, and Stable channels, the Surge Upgrade feature, and how GKE notifications with Pub/Sub help in the updating process. Kobi talks about maintenance and exclusion windows, which help customers further customize when and how their projects will update. Kobi and Abdel wrap up with a discussion of the future of GKE release channels. Kobi Magnezi: Kobi is the Product Manager for GKE at Google Cloud. Abdelfettah Sghiouar: Abdel is a Cloud Dev Advocate with a focus on Cloud native, GKE, and Service Mesh technologies.
Cool things of the week GKE Essentials videos KubeCon EU 2023 site KubeCon Call for Proposals site Kubernetes 1.24: Stargazer site GCP Podcast Episode 292: Pulumi and Kubernetes Releases with Kat Cosgrove podcast Optimize and scale your startup on Google Cloud: Introducing the Build Series blog Interview Kubernetes site GKE site Autoscaling with GKE: Overview and pods video GKE release schedule docs Release channels docs Upgrade-scope maintenance windows docs Configure cluster notifications for third-party services docs Cluster notifications docs Pub/Sub site Agones site What's something cool you're working on? Kaslin is working on KubeCon and new episodes of GKE Essentials. Hosts Mark Mirchandani and Kaslin Fields
[00:01:10] Adam tells us a little bit about himself and how he got into this field.
[00:03:48] We learn more about Adam's career path from edge case to Rails Autoscale.
[00:05:09] Adam gives us a rundown of what Rails Autoscale is and the problem it solves.
[00:06:41] Andrew wonders if Rails Autoscale will help if you don't have enough memory, and Adam tells us the solution for this.
[00:09:39] Adam fills us in on the support load he gets and the kind of support he gives.
[00:10:39] Find out how Rails Autoscale is different compared to other autoscalers Adam tried.
[00:16:05] If you're wondering when Rails Autoscale is right for you, Adam tells us. Also, he announces that he's working on a new autoscaler that's going to be language-agnostic on Heroku.
[00:17:41] Andrew wonders what prompted Adam to do this for other languages, and he tells us how the development has been so far.
[00:20:28] We learn how the experience has been for Adam building an app within the Heroku marketplace.
[00:22:37] Andrew asks Adam if he ever thought of making a bunch of fake accounts. ☺
[00:23:50] Is YNAB a Rails app? Adam explains more about it and the team there.
[00:26:26] Adam's been in the Ruby community for a long time, so we find out what he's currently excited about, and where you can find him online.
Panelists: Jason Charnes, Andrew Mason
Guest: Adam McCrea
Sponsor: Honeybadger
Links: Ruby Radar Newsletter | Ruby Radar Twitter | Adam McCrea Twitter | Rails Autoscale | YNAB | YNAB API Ruby Library (GitHub)
A lot of things happened in October, and we talked about them all in early November. In this episode Arjen, Guy, and JM discuss a whole bunch of cool things that were released and may be a bit harsh on everything Microsoft.
News
Finally in Sydney
- Amazon EC2 Mac instances are now available in seven additional AWS Regions
- Amazon MemoryDB for Redis is now available in 11 additional AWS Regions
Serverless
Lambda
- AWS Lambda now supports triggering Lambda functions from an Amazon SQS queue in a different account
- AWS Lambda now supports IAM authentication for Amazon MSK as an event source
Step Functions
- Now — AWS Step Functions Supports 200 AWS Services To Enable Easier Workflow Automation | AWS News Blog
- AWS Batch adds console support for visualizing AWS Step Functions workflows
Amplify
- Announcing General Availability of Amplify Geo for AWS Amplify
- AWS Amplify for JavaScript now supports resumable file uploads for Storage
Other
- Accelerating serverless development with AWS SAM Accelerate | AWS Compute Blog
Containers
- Amazon EKS Managed Node Groups adds native support for Bottlerocket
- AWS Fargate now supports Amazon ECS Windows containers
- Announcing the general availability of cdk8s and support for Go | Containers
- Monitoring clock accuracy on AWS Fargate with Amazon ECS
- Amazon ECS Anywhere now supports GPU-based workloads
- AWS Console Mobile Application adds support for Amazon Elastic Container Service
- AWS Load Balancer Controller version 2.3 now available with support for ALB IPv6 targets
- AWS App Mesh Metric Extension is now generally available
EC2 & VPC
- New – Amazon EC2 C6i Instances Powered by the Latest Generation Intel Xeon Scalable Processors | AWS News Blog
- Amazon EC2 now supports sharing Amazon Machine Images across AWS Organizations and Organizational Units
- Amazon EC2 Hibernation adds support for Ubuntu 20.04 LTS
- Announcing Amazon EC2 Capacity Reservation Fleet, a way to easily migrate Amazon EC2 Capacity Reservations across instance types
- Amazon EC2 Auto Scaling now supports describing Auto Scaling groups using tags
- Amazon EC2 now offers Microsoft SQL Server on Microsoft Windows Server 2022 AMIs
- AWS Elastic Beanstalk supports Database Decoupling in an Elastic Beanstalk Environment
- AWS FPGA developer kit now supports Jumbo frames in virtual ethernet frameworks for Amazon EC2 F1 instances
- Amazon VPC Flow Logs now supports Apache Parquet, Hive-compatible prefixes and Hourly partitioned files
- Network Load Balancer now supports TLS 1.3
- New – Attribute-Based Instance Type Selection for EC2 Auto Scaling and EC2 Fleet | AWS News Blog
- Amazon Lightsail now supports AWS CloudFormation for instances, disks and databases
Dev & Ops
CLI
- AWS Cloud Control API, a Uniform API to Access AWS & Third-Party Services | AWS News Blog
- Now programmatically manage alternate contacts on AWS accounts
CodeGuru
- Amazon CodeGuru now includes recommendations powered by Infer
- Amazon CodeGuru announces Security detectors for Python applications and security analysis powered by Bandit
- Amazon CodeGuru Reviewer adds detectors for AWS Java SDK v2's best practices and features
IaC
- AWS CDK releases v1.121.0 - v1.125.0 with features for faster development cycles using hotswap deployments and rollback control
- AWS CloudFormation customers can now manage their applications in AWS Systems Manager
Other
- NoSQL Workbench for Amazon DynamoDB now enables you to import and automatically populate sample data to help build and visualize your data models
- Amazon Corretto October Quarterly Updates
- Bulk Editing of OpsItems in AWS Systems Manager OpsCenter
- AWS Fault Injection Simulator now supports Spot Interruptions
- AWS Fault Injection Simulator now injects Spot Instance Interruptions
Security
Firewalls
- AWS Firewall Manager now supports centralized logging of AWS Network Firewall logs
- AWS Network Firewall Adds New Configuration Options for Rule Ordering and Default Drop
Backups
- AWS Backup Audit Manager adds compliance reports
- AWS Backup adds an additional layer for backup protection with the availability of AWS Backup Vault Lock
Other
- AWS Security Hub adds support for cross-Region aggregation of findings to simplify how you evaluate and improve your AWS security posture
- Amazon SES now supports 2048-bit DKIM keys
- AWS License Manager now supports Delegated Administrator for Managed entitlements
Data Storage & Processing
- Goodbye Microsoft SQL Server, Hello Babelfish | AWS News Blog
- Announcing availability of the Babelfish for PostgreSQL open source project
- Announcing Amazon RDS Custom for Oracle
- AWS announces AWS Snowcone SSD
- Amazon RDS Proxy now supports Amazon RDS for MySQL Version 8.0
- Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) announces support for Cross-Cluster Replication
- Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now comes with an improved management console
- AWS Transfer Family customers can now use Amazon S3 Access Point aliases for granular and simplified data access controls
- Amazon EMR now supports Apache Spark SQL to insert data into and update Apache Hive metadata tables when Apache Ranger integration is enabled
- Amazon Neptune now supports Auto Scaling for Read Replicas
- AWS Glue Crawlers support Amazon S3 event notifications
- Amazon Keyspaces (for Apache Cassandra) now supports automatic data expiration by using Time to Live (TTL) settings
- New – AWS Data Exchange for Amazon Redshift | AWS News Blog
AI & ML
SageMaker
- Announcing Fast File Mode for Amazon SageMaker
- Amazon SageMaker Projects now supports Image Building CI/CD templates
- Amazon SageMaker Data Wrangler now supports Amazon Athena Workgroups, feature correlation, and customer managed keys
Other
- Amazon Kendra launches support for 34 additional languages
- Amazon Fraud Detector now supports event datasets
- AWS announces a price reduction of up to 56% for Amazon Fraud Detector machine learning fraud predictions
- Amazon Fraud Detector launches new ML model for online transaction fraud detection
- Amazon Transcribe now supports custom language models for streaming transcription
- Amazon Textract launches TIFF support and adds asynchronous support for receipts and invoices processing
- Announcing Amazon EC2 DL1 instances for cost efficient training of deep learning models
Other Cool Stuff
- AWS IoT Core now makes it optional for customers to send the entire trust chain when provisioning devices using Just-in-Time Provisioning and Just-in-Time Registration
- AWS IoT SiteWise announces support for using the same asset models across different hierarchies
- VMware Cloud on AWS Outposts Brings VMware SDDC as a Fully Managed Service on Premises | AWS News Blog
- AWS Outposts adds new CloudWatch dimension for capacity monitoring
- Amazon Monitron launches iOS app
- Amazon Braket offers D-Wave's Advantage 4.1 system for quantum annealing
- Amazon QuickSight adds support for Pixel-Perfect dashboards
- Amazon WorkMail adds Mobile Device Access Override API and MDM integration capabilities
- Announcing Amazon WorkSpaces API to create new updated images with latest AWS drivers
- Computer Vision at the Edge with AWS Panorama | AWS News Blog
- Amazon Connect launches API to configure hours of operation programmatically
- New region availability and Graviton2 support now available for Amazon GameLift
Sponsors: CMD Solutions
Silver Sponsors: Cevo, Versent
About this podcast: Steve and Frank are back with the latest cloud FinOps news for October! This episode covers the following topics:
- AWS Fault Injection Simulator now injects Spot Instance Interruptions
- AWS Pricing Calculator now supports Amazon CloudFront
- Introducing Amazon EC2 C6i instances
- AWS Marketplace announces Purchase Order Management for SaaS contracts
- Amazon EC2 announces attribute-based instance type selection for Auto Scaling groups, EC2 Fleet, and Spot Fleet
- Introducing GKE image streaming for fast application startup and autoscaling
- Run your fault-tolerant workloads cost-effectively with Google Cloud Spot VMs
- N2D VMs with latest AMD EPYC CPUs enable on average over 30% better price-performance
- Find your GKE cost optimization opportunities right in the console
- Google Cloud billing tutorials: Because surprises are for home makeover shows, not your wallet
Find out more about Cloud FinOps by visiting our website.
Azure Virtual Machine Scale Sets lets you create and manage a group of virtual machines to run your app or workload, providing sophisticated load balancing, management, and automation. It is a critical service for creating and dynamically managing thousands of VMs in your environment. If you are new to the service, this show will get you up to speed; if you haven't looked at VM Scale Sets in a while, we'll show you how the service has significantly evolved to help you efficiently architect your apps for centralized configuration, high availability, autoscaling and performance, cost optimization, security, and more. ► QUICK LINKS: 00:00 - Virtual machine scale sets intro 00:32 - What is a virtual machine scale set? 00:47 - Centralized configuration options 02:30 - How do scale sets increase availability? 03:54 - How does autoscaling work? 04:58 - Keeping costs down with VM scale sets 05:47 - Building security into your scale set configurations 06:28 - Where you can learn more about VM scale sets ► Link References: To learn more, check out https://aka.ms/VMSSOverview Watch our episode about Azure Spot VMs at https://aka.ms/EssentialsSpotVMs ► Unfamiliar with Microsoft Mechanics? We are Microsoft's official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at #Microsoft. Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries?sub_confirmation=1 Join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog Watch or listen via podcast here: https://microsoftmechanics.libsyn.com/website ► Keep getting this insider knowledge, join us on social: Follow us on Twitter: https://twitter.com/MSFTMechanics Follow us on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
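The metric-threshold autoscaling the episode describes ("How does autoscaling work?") can be sketched as a tiny rule evaluator: scale out when average CPU runs hot, scale in when it idles, always clamped to configured minimum and maximum instance counts. The thresholds and function names below are illustrative assumptions, not Azure's actual autoscale API.

```python
# Minimal sketch of a threshold-style scale-set autoscale rule.
def desired_instances(current, avg_cpu, scale_out_at=75.0, scale_in_at=25.0,
                      step=1, minimum=2, maximum=10):
    if avg_cpu > scale_out_at:
        current += step      # demand spike: add capacity
    elif avg_cpu < scale_in_at:
        current -= step      # idle capacity: save cost
    # Never leave the configured bounds, whatever the metric says.
    return max(minimum, min(maximum, current))

print(desired_instances(4, 82.0))  # → 5 (CPU above the scale-out threshold)
print(desired_instances(4, 12.0))  # → 3 (CPU below the scale-in threshold)
print(desired_instances(2, 12.0))  # → 2 (already at the configured minimum)
```

Real autoscale settings add cooldown windows between actions so the fleet doesn't flap around the thresholds.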
Concept of the week: Event Stream abstractions and Pravega: 15:15
Demo of the week: Event Stream abstractions and Pravega: 1:11:00
PR of the week: Pravega presto-connector PR 49: 1:20:51
Question of the week: What is the point of Trino Forum and what is the relationship to Trino Slack?: 1:26:07
Show Notes: https://trino.io/episodes/28.html
Show Page: https://trino.io/broadcast/
Lucas Cardoso leads a great explanation of the concept of Auto Scaling and how to use it in our day-to-day work! Check it out! Join our Telegram group, Cloud Evangelists BR, to ask more questions: https://t.me/cloudevangelist Or visit: https://www.darede.com.br/
Do you want to scale your workloads on Kubernetes without having to worry about the details? Do you want to run Azure Functions anywhere and easily scale it yourself? Tom Kerkhove shows Scott Hanselman how Kubernetes Event-Driven Autoscaling (KEDA) makes application autoscaling dead simple.
[0:00:00] – Introduction
[0:01:10] – Presentation
[0:07:57] – Demo
[0:17:45] – Discussion and wrap-up
Links: Kubernetes Event-driven Autoscaling | KEDA on GitHub | Azure Functions on Kubernetes with KEDA | Azure Friday - Azure Serverless on Kubernetes with KEDA | Create a free account (Azure)
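The core of the event-driven idea demoed here can be approximated in a few lines: size the deployment so each replica handles about a target number of pending events, and scale to zero when there is no work. The function name and defaults below are mine, not KEDA's API; real KEDA scalers read queue depth from event sources such as Azure Service Bus and feed it to the HPA.

```python
# Rough sketch of queue-depth-driven replica calculation, KEDA-style.
import math

def desired_replicas(queue_length, target_per_replica, min_replicas=0, max_replicas=30):
    if queue_length == 0:
        return min_replicas          # no events: scale to zero
    # Enough replicas that each handles ~target_per_replica messages.
    wanted = math.ceil(queue_length / target_per_replica)
    return max(min_replicas, min(max_replicas, wanted))

print(desired_replicas(0, 5))    # → 0
print(desired_replicas(23, 5))   # → 5 (ceil(23 / 5))
print(desired_replicas(500, 5))  # → 30 (capped at max_replicas)
```

Scale-to-zero is the piece a plain CPU-based autoscaler can't give you, and it's what makes this model attractive for bursty function workloads.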
Sponsor: Circle CI. Episode on CI/CD with Circle CI.
Show Details
In this episode, we cover the following topics:
Pillars in depth
Performance Efficiency: "Ability to use resources efficiently to meet system requirements and maintain that efficiency as demand changes and technology evolves"
Design principles:
- Easy to try new advanced technologies (by letting AWS manage them, instead of standing them up yourself)
- Go global in minutes
- Use serverless architectures
- Experiment more often
- Mechanical sympathy (use the technology approach that aligns best to what you are trying to achieve)
Key service: CloudWatch
Focus areas:
- Selection. Services: EC2, EBS, RDS, DynamoDB, Auto Scaling, S3, VPC, Route53, DirectConnect
- Review. Services: AWS Blog, AWS What's New
- Monitoring. Services: CloudWatch, Lambda, Kinesis, SQS
- Tradeoffs. Services: CloudFront, ElastiCache, Snowball, RDS (read replicas)
Best practices:
- Selection: choose appropriate resource types (compute, storage, database, networking)
- Tradeoffs: proximity and caching
Cost Optimization: "Ability to run systems to deliver business value at the lowest price point"
Design principles:
- Adopt a consumption model (only pay for what you use)
- Measure overall efficiency
- Stop spending money on data center operations
- Analyze and attribute expenditures
- Use managed services to reduce TCO
Key service: AWS Cost Explorer (with cost allocation tags)
Focus areas:
- Expenditure awareness. Services: Cost Explorer, AWS Budgets, CloudWatch, SNS
- Cost-effective resources. Services: Reserved Instances, Spot Instances, Cost Explorer
- Matching supply and demand. Services: Auto Scaling
- Optimizing over time. Services: AWS Blog, AWS What's New, Trusted Advisor
Key points: Use Trusted Advisor to find ways to save $$$
The Well-Architected Review
Centered around the question "Are you well architected?" The Well-Architected review provides a consistent approach to review a workload against current AWS best practices and gives advice on how to architect for the cloud.
Benefits of the review:
- Build and deploy faster
- Lower or mitigate risks
- Make informed decisions
- Learn AWS best practices
The AWS Well-Architected Tool
A cloud-based service available from the AWS console. Provides a consistent process for you to review and measure your architecture using the AWS Well-Architected Framework. Helps you learn, measure, and improve.
Improvement plan: based on identified high and medium risk topics, with a canned list of suggested action items to address each risk topic.
Milestones: makes a read-only snapshot of completed questions and answers.
Best practices:
- Save a milestone after initially completing the workload review
- Then, whenever you make large changes to your workload architecture, perform a subsequent review and save it as a new milestone
Links: AWS Well-Architected | AWS Well-Architected Framework - Online/HTML version (includes drill-down pages for each review question, with recommended action items to address that issue) | AWS Well-Architected Tool | Enhanced Networking | Amazon EBS-optimized instance | VPC Endpoint | Amazon S3 Transfer Acceleration | AWS Billing and Cost Management
Whitepapers: AWS Well-Architected Framework | Operational Excellence Pillar | Security Pillar | Reliability Pillar | Performance Efficiency Pillar | Cost Optimization Pillar
End Song: The Shadow Gallery by Roy England
For a full transcription of this episode, please visit the episode webpage.
We'd love to hear from you! You can reach us at: Web: https://mobycast.fm Voicemail: 844-818-0993 Email: ask@mobycast.fm Twitter: https://twitter.com/hashtag/mobycast Reddit: https://reddit.com/r/mobycast
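The Cost Optimization focus area "matching supply and demand" comes down to simple arithmetic, which a back-of-the-envelope sketch can illustrate: compare paying for peak capacity all day against scaling alongside demand. The hourly rate and demand curve below are made-up numbers, not AWS pricing.

```python
# Illustrative cost comparison: static peak provisioning vs autoscaling.
HOURLY_RATE = 0.10                                   # assumed $/instance-hour
demand = [2, 2, 2, 3, 5, 8, 10, 10, 9, 6, 4, 2]      # instances needed per 2-hour slot

static_cost = max(demand) * len(demand) * 2 * HOURLY_RATE   # provision for peak all 24h
scaled_cost = sum(n * 2 * HOURLY_RATE for n in demand)      # pay only for what you use

print(f"static: ${static_cost:.2f}  autoscaled: ${scaled_cost:.2f}")
```

With this toy demand curve, autoscaling roughly halves the bill, which is the consumption-model design principle in miniature.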