Want to quickly provision your autonomous database? Then look no further than Oracle Autonomous Database Serverless, one of the two deployment choices offered by Oracle Autonomous Database. Autonomous Database Serverless delegates all operational decisions to Oracle, providing you with a completely autonomous experience. Join hosts Lois Houston and Nikita Abraham, along with Oracle Database experts, as they discuss how serverless infrastructure eliminates the need to configure any hardware or install any software because Autonomous Database handles provisioning the database, backing it up, patching and upgrading it, and growing or shrinking it for you. Survey: https://customersurveys.oracle.com/ords/surveys/t/oracle-university-gtm/survey?k=focus-group-2-link-share-5 Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Rajeev Grover, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! We hope you've been enjoying these last few weeks as we've been revisiting our most popular episodes of the year. Lois: Today's episode is the last one in this series and is a throwback to a conversation on Autonomous Databases on Serverless Infrastructure with three experts in the field: Hannah Nguyen, Sean Stacey, and Kay Malcolm. 
Hannah is a Staff Cloud Engineer, Sean is the Director of Platform Technology Solutions, and Kay is Vice President of Database Product Management. For this episode, we'll be sharing portions of our conversations with them. 01:14 Nikita: We began by asking Hannah how Oracle Cloud handles the process of provisioning an Autonomous Database. So, let's jump right in! Hannah: The Oracle Cloud automates the process of provisioning an Autonomous Database, and it automatically provisions for you a highly scalable, highly secure, and a highly available database very simply out of the box. 01:35 Lois: Hannah, what are the components and architecture involved when provisioning an Autonomous Database in Oracle Cloud? Hannah: Provisioning the database involves very few steps. But it's important to understand the components that are part of the provisioned environment. When provisioning a database, the number of CPUs in increments of 1 for serverless, storage in increments of 1 terabyte, and backup are automatically provisioned and enabled in the database. In the background, an Oracle 19c pluggable database is being added to the container database that manages all the user's Autonomous Databases. Because this Autonomous Database runs on Exadata systems, Real Application Clusters is also provisioned in the background to support the on-demand CPU scalability of the service. This is transparent to the user and administrator of the service. But be aware it is there. 02:28 Nikita: Ok…So, what sort of flexibility does the Autonomous Database provide when it comes to managing resource usage and costs, you know… especially in terms of starting, stopping, and scaling instances? Hannah: The Autonomous Database allows you to start your instance very rapidly on demand. It also allows you to stop your instance on demand as well to conserve resources and to pause billing. 
Do be aware that when you do pause billing, you will not be charged for any CPU cycles because your instance will be stopped. However, you'll still incur monthly charges for your storage. In addition to allowing you to start and stop your instance on demand, it's also possible to scale your database instance on demand as well. All of this can be done very easily using the Database Cloud Console. 03:15 Lois: What about scaling in the Autonomous Database? Hannah: So you can scale up your OCPUs without touching your storage and scale it back down, and you can do the same with your storage. In addition to that, you can also set up autoscaling. So the database, whenever it detects the need, will automatically scale up to three times the base level number of OCPUs that you have allocated or provisioned for the Autonomous Database. 03:38 Nikita: Is autoscaling available for all tiers? Hannah: Autoscaling is not available for an Always Free database, but it is enabled by default for other tiered environments. Changing the setting does not require downtime, so this can also be set dynamically. One of the advantages of autoscaling is cost, because you're billed based on the average number of OCPUs consumed during an hour. 04:01 Lois: Thanks, Hannah! Now, let's bring Sean into the conversation. Hey Sean, I want to talk about moving an autonomous database resource. When or why would I need to move an autonomous database resource from one compartment to another? Sean: There may be a business requirement where you need to move an autonomous database resource, a serverless resource, from one compartment to another. Perhaps there's a different subnet that you would like to move that autonomous database to, or perhaps there are some business applications that are accessible or available in that other compartment that you wish to move your autonomous database to take advantage of.
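Hannah's point about autoscaling cost (you're billed based on the average number of OCPUs consumed during an hour) can be illustrated with a small sketch. The per-minute sampling and the hourly rate below are assumptions for illustration only, not Oracle's actual metering granularity or pricing:

```python
def hourly_ocpu_bill(ocpu_samples, rate_per_ocpu_hour):
    """Bill one hour of usage on the average OCPU count, as with
    autoscaling: brief bursts above the base level only cost their
    time-weighted share of the hour."""
    avg_ocpus = sum(ocpu_samples) / len(ocpu_samples)
    return avg_ocpus * rate_per_ocpu_hour

# A 2-OCPU base that autoscaled to 6 OCPUs for 15 of 60 minutes:
samples = [6] * 15 + [2] * 45
print(hourly_ocpu_bill(samples, rate_per_ocpu_hour=1.0))  # → 3.0
```

So even though the database burst to three times its base level, the hour is billed as if three OCPUs had run the whole time.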
04:36 Nikita: And how simple is this process of moving an autonomous database from one compartment to another? What happens to the backups during this transition? Sean: The way you can do this is simply to take an autonomous database and move it from compartment A to compartment B. And when you do so, the backups, or the automatic backups that are associated with that autonomous database, will be moved with that autonomous database as well. 05:00 Lois: Is there anything that I need to keep in mind when I'm moving an autonomous database between compartments? Sean: A couple of things to be aware of when doing this: first of all, you must have the appropriate privileges in both the source compartment and the target compartment in order to move that autonomous database. In addition to that, once the autonomous database is moved to this new compartment, any policies defined in that compartment to govern the authorization and privileges of users in that compartment will be applied immediately to that new autonomous database that has been moved into that new compartment. 05:38 Nikita: Sean, I want to ask you about cloning in Autonomous Database. What are the different types of clones that can be created? Sean: It's possible to create a new Autonomous Database as a clone of an existing Autonomous Database. This can be done as a full copy of that existing Autonomous Database, or it can be done as a metadata copy, where the objects and tables are cloned, but they are empty. So there are no rows in the tables. And this clone can be taken from a live running Autonomous Database or even from a backup. So you can take a backup and clone that to a completely new database. 06:13 Lois: But why would you clone in the first place? What are the benefits of this? Sean: When cloning or when creating this clone, it can be created in a completely new compartment from where the source Autonomous Database was originally located.
So it's a nice way of moving one database to another compartment to allow developers or another community of users to have access to that environment. 06:36 Nikita: I know that along with having a full clone, you can also have a refreshable clone. Can you tell us more about that? Who is responsible for this? Sean: It's possible to create a refreshable clone from an Autonomous Database. And this is one that would be synced with that source database up to so many days. The task of keeping that refreshable clone in sync with that source database rests upon the shoulders of the administrator. The administrator is the person who is responsible for performing that sync operation. Now, actually performing the operation is very simple: it's point and click, and it's an automated process from the database console. And also be aware that refreshable clones can trail the source database or source Autonomous Database up to seven days. After that period of time, if the refreshable clone has not been refreshed or kept in sync with that source database, it will become a standalone, read-only copy of that original source database. 07:38 Nikita: Ok Sean, so if you had to give us the key takeaways on cloning an Autonomous Database, what would they be? Sean: It's very easy, and there's a lot of flexibility, when it comes to cloning an Autonomous Database. We have different models: you can clone from a live running database instance with zero impact on your workload, or from a backup. It can be a full copy, or it can be a metadata copy, as well as a refreshable, read-only clone of a source database. 08:12 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You'll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all of which is available free to subscribers. So, get going!
Pick a course of your choice, get certified, join the Oracle University Learning Community, and network with your peers. If you're already an Oracle MyLearn user, go to MyLearn to begin your journey. If you have not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started. 08:50 Nikita: Welcome back! Thank you, Sean, and hi Kay! I want to ask you about events and notifications in Autonomous Database. Where do they really come in handy? Kay: Events can be used for a variety of notifications, including admin password expiration, ADB services going down, and wallet expiration warnings. There's this service, and it's called the notifications service. It's part of OCI. And this service provides you with the ability to broadcast messages to distributed components using a publish and subscribe model. These notifications can be used to notify you when event rules or alarms are triggered or simply to directly publish a message. In addition to this, there's also something that's called a topic. This is a communication channel for sending messages to subscribers in the topic. You can manage these topics and their subscriptions really easily. It's not hard to do at all. 09:52 Lois: Kay, I want to ask you about backing up Autonomous Databases. How does Autonomous Database handle backups? Kay: Autonomous Database automatically backs up your database for you. The retention period for backups is 60 days. You can restore and recover your database to any point in time during this retention period. You can initiate recovery for your Autonomous Database by using the cloud console or an API call. Autonomous Database automatically restores and recovers your database to the point in time that you specify. In addition to a point-in-time recovery, we can also perform a restore from a specific backup set. 10:37 Lois: Kay, you spoke about automatic backups, but what about manual backups?
Kay: You can do manual backups using the cloud console, for example, if you want to take a backup say before a major change to make restoring and recovery faster. These manual backups are put in your cloud object storage bucket. 10:58 Nikita: Are there any special instructions that we need to follow when configuring a manual backup? Kay: The manual backup configuration tasks are a one-time operation. Once this is configured, you can go ahead, trigger your manual backup any time you wish after that. When creating the object storage bucket for the manual backups, it is really important-- so I don't want you to forget-- that the name format for the bucket and the object storage follows this naming convention. It should be backup underscore database name. And it's not the display name here when I say database name. In addition to that, the object name has to be all lowercase. So three rules. Backup underscore database name, and the specific database name is not the display name. It has to be in lowercase. Once you've created your object storage bucket to meet these rules, you then go ahead and set a database property. Default_backup_bucket. This points to the object storage URL and it's using the Swift protocol. Once you've got your object storage bucket mapped and you've created your mapping to the object storage location, you then need to go ahead and create a database credential inside your database. You may have already had this in place for other purposes, like maybe you were loading data, you were using Data Pump, et cetera. If you don't, you would need to create this specifically for your manual backups. Once you've done so, you can then go ahead and set your property to that default credential that you created. So once you follow these steps as I pointed out, you only have to do it one time. Once it's configured, you can go ahead and use it from now on for your manual backups. 
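Kay's naming rules can be captured in a tiny helper. The tenancy namespace, region, and the exact Swift-style URL host below are illustrative assumptions for this sketch, not values from the episode:

```python
def manual_backup_bucket_name(database_name):
    """Apply the naming rules from the episode: the bucket must be
    'backup_' plus the database name (not the display name), all
    lowercase."""
    return "backup_" + database_name.lower()

def default_backup_bucket_url(region, namespace, database_name):
    """Build an illustrative Swift-style object storage URL of the kind
    the default_backup_bucket property points at (host format assumed)."""
    return ("https://swiftobjectstorage." + region + ".oraclecloud.com"
            + "/v1/" + namespace + "/"
            + manual_backup_bucket_name(database_name))

print(manual_backup_bucket_name("SalesDB"))  # → backup_salesdb
```

With the bucket named this way and the property pointed at its URL, the one-time configuration Kay describes is complete, apart from creating the database credential.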
13:00 Lois: Kay, the last topic I want to talk about before we let you go is Autonomous Data Guard. Can you tell us about it? Kay: Autonomous Data Guard monitors the primary database, in other words, the database that you're using right now. 13:14 Lois: So, if ADB goes down… Kay: Then the standby instance will automatically become the primary instance. There's no manual intervention required. So failover from the primary database to that standby database I mentioned, it's completely seamless and it doesn't require any additional wallets to be downloaded or any new URLs to access APEX or Oracle Machine Learning. Even Oracle REST Data Services. All the URLs and all the wallets, everything that you need to authenticate, to connect to your database, they all remain the same for you if you have to fail over to your standby database. 13:58 Lois: And what happens after a failover occurs? Kay: After performing a failover, a new standby for your primary will automatically be provisioned. So in other words, in performing a failover your standby becomes your new primary, and a new standby is then created for that new primary. I know, it's kind of interesting. So currently, the standby database is created in the same region as the primary database. For better resilience, if your primary database is provisioned in AD1, or Availability Domain 1, your standby would be provisioned in a different availability domain. 14:49 Nikita: But there's also the possibility of manual failover, right? What are the differences between automatic and manual failover scenarios? When would you recommend using each? Kay: So in the case of the automatic failover scenario following a disastrous situation, if the primary ADB becomes completely unavailable, the switchover button will turn to a failover button. Because remember, this is a disaster. Automatic failover is automatically triggered. There's no user action required.
So if you're asleep and something happens, you're protected. There's no user action required, but automatic failover is allowed to succeed only when no data loss will occur. For manual failover scenarios in the rare case when an automatic failover is unsuccessful, the switchover button will become a failover button and the user can trigger a manual failover should they wish to do so. The system automatically recovers as much data as possible, minimizing any potential data loss. But you could see anywhere from a few seconds to a few minutes of data loss. Now, you should only perform a manual failover in a true disaster scenario, accepting that a few minutes of potential data loss could occur, to ensure that your database is back online as soon as possible. 16:23 Lois: We hope you've enjoyed revisiting some of our most popular episodes over these past few weeks. We always appreciate your feedback and suggestions so remember to take that quick survey we've put out. You'll find it in the show notes for today's episode. Thanks a lot for your support. We're taking a break for the next two weeks and will be back with a brand-new season of the Oracle University Podcast in January. Happy holidays, everyone! Nikita: Happy holidays! Until next time, this is Nikita Abraham... Lois: And Lois Houston, signing off! 16:56 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Matteo Collina and Luca Maraschi join the podcast to talk about Platformatic. Learn about Platformatic's incredible 4.3 million dollar seed round, its robust features and modular approach, and how it addresses the unique challenges faced by devs and enterprises. Links https://platformatic.dev/docs/getting-started/quick-start-watt Matteo Collina: https://nodeland.dev https://x.com/matteocollina https://fosstodon.org/@mcollina https://github.com/mcollina https://www.linkedin.com/in/matteocollina https://www.youtube.com/@adventuresinnodeland Luca Maraschi: https://www.linkedin.com/in/lucamaraschi https://x.com/lucamaraschi We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guests: Luca Maraschi and Matteo Collina.
Copilot, .io domains, and patent trolls? Sounds like a recipe for an IT rollercoaster! In this episode, the Patoarchitekci serve up a mix of hot topics, from AI to Kubernetes. Is Copilot a programmer's friend or foe? How do custom metrics affect autoscaling in Kubernetes? Why is Cloudflare fighting patent trolls? Get the answers and dive into the world of DevOps, cloud computing, and AI. Want to stay on top of IT trends? Listen to this episode and join the discussion on the Patoarchitekci Discord. Who knows, maybe your code will start passing tests like Volkswagen's, too?
In this episode of Spring Office Hours, hosts Dan Vega and DaShaun Carter interview Chris Bono, a Spring team member who works on Spring Cloud Data Flow and Spring Pulsar. They discuss streaming data, comparing Apache Kafka and Apache Pulsar, and explore the features and use cases of Spring Cloud Stream applications. Chris provides insights into the architecture of streaming applications, explains key concepts, and highlights the benefits of using Spring's abstraction layers for working with messaging systems.

Show Notes:
Introduction to Chris Bono and his work on Spring Cloud Data Flow and Spring Pulsar
Comparison between Apache Kafka and Apache Pulsar
Overview of Spring Cloud Stream and its binders
Explanation of source, processor, and sink concepts in streaming applications
Introduction to the Spring Cloud Stream Applications project
Discussion on Change Data Capture (CDC) and its importance in streaming
Exploration of various sources, processors, and sinks available in Spring Cloud Stream Applications
Mention of KEDA (Kubernetes Event-driven Autoscaling) and its potential use with Spring Cloud applications
Upcoming features in the Spring Pulsar 1.2 release
Importance of community feedback and using GitHub discussions for feature requests and issue reporting

The podcast provides a comprehensive overview of streaming data concepts and how Spring projects can be used to build efficient streaming applications.
Deep Dive into Serverless Databases with Neon: Featuring Heikki Linnakangas In this episode of the Geek Narrator podcast, host Kaivalya Apte is joined by Heikki Linnakangas, co-founder of Neon, to explore the innovative world of serverless databases. They discuss Neon's unique approach to separating compute and storage, the benefits of serverless architecture for modern applications, and dive into various compelling use cases. They also cover Neon's architectural features like branching, auto-scaling, and auto-suspend, making it a powerful tool for both developers and enterprises. Whether you're curious about multi-tenancy, fault tolerance, or developer productivity, this episode offers insightful knowledge about leveraging Neon's capabilities for your next project. 00:00 Introduction 00:53 The Birth of Neon: Why It Was Created 02:16 Understanding Serverless Databases 07:06 Neon's Architecture: Separation of Compute and Storage 09:59 Exploring Branching in Neon 18:21 Auto Scaling and Handling Spikes in Traffic 20:17 The Challenge of Multiple Writers in Distributed Systems 22:51 Auto Suspend: Cost-Effective Database Management 26:02 Optimizing Cold Start Times 27:14 Balancing Cost and Performance 28:52 Replication and Durability 30:32 Understanding the Storage Layer 34:02 Custom LSM Tree Implementation 36:21 Fault Tolerance and Failover 07:00 Developer Productivity and Use Cases 42:56 Migration and Tooling 48:35 Future Roadmap and User Experience 50:28 Conclusion and Final Thoughts Neon website: https://neon.tech/ Follow me on Linkedin and Twitter: https://www.linkedin.com/in/kaivalyaapte/ and https://twitter.com/thegeeknarrator If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet. 
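The auto-suspend idea discussed in the episode (pause compute when idle so only storage accrues cost) can be sketched as a simple policy. This is a conceptual illustration of the behavior, not Neon's actual control plane; the timeout value and the fake clock are assumptions:

```python
import time

class AutoSuspendCompute:
    """Sketch of an auto-suspend policy: remember the last activity time
    and treat the compute endpoint as suspendable once an idle timeout
    elapses. Because storage is separated from compute, a suspended
    endpoint costs only storage."""

    def __init__(self, idle_timeout_s, clock=time.monotonic):
        self.idle_timeout_s = idle_timeout_s
        self.clock = clock
        self.last_activity = clock()

    def on_query(self):
        # Any incoming query counts as activity and resets the idle timer.
        self.last_activity = self.clock()

    def is_suspended(self):
        return self.clock() - self.last_activity >= self.idle_timeout_s

# Demo with a fake clock so the behavior is deterministic:
now = [0.0]
endpoint = AutoSuspendCompute(idle_timeout_s=300, clock=lambda: now[0])
now[0] = 301.0
print(endpoint.is_suspended())  # → True
endpoint.on_query()
print(endpoint.is_suspended())  # → False
```

The cold-start discussion in the episode is the flip side of this policy: the faster a suspended endpoint can resume, the more aggressive the idle timeout can be.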
Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning! #PostgreSQL #SQL #RDBMS #NEON
Highlights from this week's conversation include:
Jeff's Background and Transition to Independent Consulting (0:03)
Working at Keurig and Business Model Changes (2:16)
Tech Stack Evolution and SAP HANA Implementation (7:33)
Adoption of Tableau and Data Pipelines (11:21)
Supply Chain Analytics and Timeless Data Modeling (15:49)
Impact of Cloud Computing on Cost Optimization (18:35)
Challenges of Managing Variable Costs (20:59)
Democratization of Data and Cost Impact (23:52)
Quality of Fivetran Connectors (27:29)
Data Ingestion and Cost Awareness (29:44)
Virtual Warehouse Cost Management (31:22)
Auto-Scaling and Performance Optimization (33:09)
Cost-Saving Frameworks for Business Problems (38:19)
Dashboard Frameworks (40:53)
Increasing Dashboards (43:29)
Final thoughts and takeaways (46:28)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
edX ✨I build courses: https://insight.paiml.com/d69
In episode 52, we talk with Jorge Turrado Ferrero, SRE expert and KEDA maintainer, and Zbyněk Roubalík, founder of Kedify and also a KEDA maintainer, to explore the fascinating world of KEDA (Kubernetes Event Driven Autoscaler).

KEDA is a powerful tool that lets Kubernetes applications scale automatically based on various kinds of events. This includes not only traditional metrics such as CPU and memory usage, but also custom metrics and external event sources, which makes it very flexible and adaptable.

During our conversation we explore the ins and outs of KEDA, discussing its key features, benefits, and real-world applications. We also dig into the challenges and opportunities that come with event-based scaling in Kubernetes environments.

Whether you're new to KEDA or want to deepen your understanding of this innovative technology, this episode offers valuable insights and practical knowledge straight from the experts themselves. Listen now to learn more about KEDA.

ACC ICT, specialist in IT continuity: business-critical applications and data securely available, independent of third parties, anytime and anywhere.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
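The event-driven scaling KEDA performs can be sketched as a backlog-based replica calculation. The queue-length trigger and the numbers below are illustrative; KEDA's real scalers and the underlying HPA math are more involved:

```python
import math

def desired_replicas(queue_length, msgs_per_replica,
                     min_replicas=0, max_replicas=10):
    """Conceptual KEDA-style calculation: size the deployment to the
    event backlog, clamped to configured bounds, and allow
    scale-to-zero when the queue is empty."""
    if queue_length == 0:
        return min_replicas
    return max(min_replicas,
               min(max_replicas, math.ceil(queue_length / msgs_per_replica)))

print(desired_replicas(0, 5))     # → 0 (scale to zero when idle)
print(desired_replicas(23, 5))    # → 5
print(desired_replicas(1000, 5))  # → 10 (capped at max_replicas)
```

The point of the episode is that the trigger here need not be CPU or memory: any external event source a scaler can read (a queue, a topic, a custom metric) can drive this calculation.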
Welcome to part four in the AWS Certification Exam Prep Mini-Series! Whether you're an aspiring cloud enthusiast or a seasoned developer looking to deepen your architectural acumen, you've landed in the perfect spot. In this six-part saga, we're demystifying the pivotal role of a Solutions Architect in the AWS cloud computing cosmos. In this fourth episode, Caroline and Dave chat again with Anya Derbakova, a Senior Startup Solutions Architect at AWS, known for weaving social media magic, and Ted Trentler, a Senior AWS Technical Instructor with a knack for simplifying the complex. Together, we will step into the realm of performance, where we untangle the complexities of designing high-performing architectures in the cloud. We dissect the essentials of high-performing storage solutions, dive deep into elastic compute services for scaling and cost efficiency, and unravel the intricacies of optimizing database solutions for unparalleled performance. Expect to uncover: • The spectrum of AWS storage services and their optimal use cases, from Amazon S3's versatility to the shared capabilities of Amazon EFS. • How to leverage Amazon EC2, Auto Scaling, and Load Balancing to create elastic compute solutions that adapt to your needs. • Insights into serverless computing paradigms with AWS Lambda and Fargate, highlighting the shift towards de-coupled architectures. • Strategies for selecting high-performing database solutions, including the transition from on-premise databases to AWS-managed services like RDS and the benefits of caching with Amazon ElastiCache. • A real-world scenario where we'll navigate the challenge of processing hundreds of thousands of online votes in minutes, testing your understanding and application of high-performing AWS architectures. Whether you're dealing with vast amounts of data, requiring robust compute power, or ensuring your architecture can handle peak loads without a hitch, we've got you covered! 
Anya on LinkedIn: https://www.linkedin.com/in/annadderbakova/ Ted on Twitter: https://twitter.com/ttrentler Ted on LinkedIn: https://www.linkedin.com/in/tedtrentler Caroline on Twitter: https://twitter.com/carolinegluck Caroline on LinkedIn: https://www.linkedin.com/in/cgluck/ Dave on Twitter: https://twitter.com/thedavedev Dave on LinkedIn: https://www.linkedin.com/in/davidisbitski AWS SAA Exam Guide - https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate_Exam-Guide.pdf Party Rock for Exam Study - https://partyrock.aws/u/tedtrent/KQtYIhbJb/Solutions-Architect-Study-Buddy All Things AWS Training - Links to Self-paced and Instructor Led https://aws.amazon.com/training/ AWS Skill Builder – Free CPE Course - https://explore.skillbuilder.aws/learn/course/134/aws-cloud-practitioner-essentials AWS Skill Builder – Learning Badges - https://explore.skillbuilder.aws/learn/public/learning_plan/view/1044/solutions-architect-knowledge-badge-readiness-path AWS Usergroup Communities: https://aws.amazon.com/developer/community/usergroups Subscribe: Spotify: https://open.spotify.com/show/7rQjgnBvuyr18K03tnEHBI Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-developers-podcast/id1574162669 Stitcher: https://www.stitcher.com/show/1065378 Pandora: https://www.pandora.com/podcast/aws-developers-podcast/PC:1001065378 TuneIn: https://tunein.com/podcasts/Technology-Podcasts/AWS-Developers-Podcast-p1461814/ Amazon Music: https://music.amazon.com/podcasts/f8bf7630-2521-4b40-be90-c46a9222c159/aws-developers-podcast Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zb3VuZGNsb3VkLmNvbS91c2Vycy9zb3VuZGNsb3VkOnVzZXJzOjk5NDM2MzU0OS9zb3VuZHMucnNz RSS Feed: https://feeds.soundcloud.com/users/soundcloud:users:994363549/sounds.rss
Whether it's GitOps, DevOps, Platform Engineering, Observability as a Service, or other terms, we all have our definitions, but rarely do we have a consensus on what those terms really mean! To get some clarity we invited Roberth Strand, CNCF Ambassador and Azure MVP, who has been passionately advocating for GitOps as it was initially defined and explained by Alexis Richardson of Weaveworks in his blog post "What is GitOps Really". Tune in and learn about Desired State Management, Continuous Pull vs Pushing from Pipelines, how Progressive Delivery or Auto-Scaling fits into declaring everything in Git, what OpenGitOps is, and why this podcast will help you get your GitOps certification (coming soon). As we had a lot to talk about, we also touched on Platform Engineering and various other topics.

Here are all the links we discussed:
Alexis' GitOps blog post: https://medium.com/weaveworks/what-is-gitops-really-e77329f23416
OpenGitOps: https://opengitops.dev/
Flux Image Reflector: https://fluxcd.io/flux/components/image/
CNCF White Paper on Platform Engineering: https://tag-app-delivery.cncf.io/whitepapers/platforms/
Platform Engineering Maturity Model: https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/
Platform Engineering Working Group as part of TAG App Delivery: https://tag-app-delivery.cncf.io/wgs/platforms/
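The desired-state management discussed in the episode boils down to a reconcile step: compare what Git declares with what the cluster actually runs and compute the difference. A conceptual sketch, not the actual logic of Flux or any other GitOps agent:

```python
def reconcile(desired, actual):
    """Diff the desired state (e.g. manifests in Git) against the actual
    state (what's running) and return the actions needed to converge:
    create what's missing, update what drifted, delete what's extra."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

# Git declares v2 of "app" and a new "db"; the cluster runs v1 of "app"
# plus a leftover "old" deployment:
desired = {"app": {"image": "v2"}, "db": {"image": "v1"}}
actual = {"app": {"image": "v1"}, "old": {"image": "v1"}}
for action in reconcile(desired, actual):
    print(action)
```

A GitOps agent runs this loop continuously, pulling desired state from Git rather than having a pipeline push changes in, which is the continuous-pull model the episode contrasts with pushing from pipelines.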
In this episode we explore new ways to gain insight into the usage of Entra licenses. We also cover updates on Azure Firewall, Key Vault, and external users in Teams.

Entra License Utilization Insights: Discover how to gain insight into the usage of your Entra licenses. https://techcommunity.microsoft.com/t5/microsoft-entra-blog/introducing-microsoft-entra-license-utilization-insights/ba-p/3796393
Azure Firewall Parallel IP Groups: Learn more about support for parallel IP Groups in Azure Firewall, now available in public preview. https://azure.microsoft.com/en-us/updates/azure-firewall-parallel-ip-group-update-support-is-now-in-public-preview/
Azure Firewall Autoscaling and Flow Trace Logs: Discover the general availability of autoscaling and flow trace logs in Azure Firewall. https://azure.microsoft.com/en-us/updates/azure-firewall-flow-trace-logs-and-autoscaling-based-on-number-of-connections-and-are-now-generally-available/
Retirement of RBAC Application Impersonation: Learn about the retirement of RBAC Application Impersonation in Exchange Online. https://techcommunity.microsoft.com/t5/exchange-team-blog/retirement-of-rbac-application-impersonation-in-exchange-online/ba-p/4062671
Key Vault Updates: Discover the general improvements in Azure Key Vault. https://azure.microsoft.com/en-us/updates/general-availability-improvements-in-azure-key-vault/
Microsoft Teams Labels for External Users: Explore the updates on Microsoft Teams labels for external users. https://app.cloudscout.one/evergreen-item/mc716008/
Microsoft Edge Data and Consent Changes: Learn more about the changes to data and consent in Microsoft Edge. https://app.cloudscout.one/evergreen-item/mc715685/
Support for Azure VMs with Ultra or Premium SSD v2 in Azure Backup: Discover the support for Ultra Disk backup and Premium SSD v2 backup in Azure Backup.
https://azure.microsoft.com/en-us/updates/ultra-disk-backup-support-ga/ https://azure.microsoft.com/en-us/updates/premium-ssd-v2-backup-support-ga/
Covering the purpose and use cases of KEDA, best practices, common pitfalls, and wrapping up with a scale-from-zero KEDA demo.
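The scale-from-zero pattern the demo covers can be sketched with a KEDA ScaledObject: with minReplicaCount set to 0, the target workload is removed entirely when its event source is idle and re-created when events arrive. The Deployment, queue, and TriggerAuthentication names below are hypothetical:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker            # Deployment to scale (hypothetical)
  minReplicaCount: 0        # scale to zero when the queue is empty
  maxReplicaCount: 10
  cooldownPeriod: 60        # seconds of inactivity before scaling to zero
  triggers:
    - type: rabbitmq
      metadata:
        queueName: jobs
        mode: QueueLength
        value: "5"          # target messages per replica
      authenticationRef:
        name: rabbitmq-auth # TriggerAuthentication holding the connection string
```

A common pitfall worth noting alongside this sketch: scale-to-zero adds cold-start latency on the first event, so it suits queue-driven batch work better than latency-sensitive request paths.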
Want to quickly provision your autonomous database? Then look no further than Oracle Autonomous Database Serverless, one of the two deployment choices offered by Oracle Autonomous Database. Autonomous Database Serverless delegates all operational decisions to Oracle, providing you with a completely autonomous experience. Join hosts Lois Houston and Nikita Abraham, along with Oracle Database experts, as they discuss how serverless infrastructure eliminates the need to configure any hardware or install any software because Autonomous Database handles provisioning the database, backing it up, patching and upgrading it, and growing or shrinking it for you. Oracle Autonomous Database Episode: https://oracleuniversitypodcast.libsyn.com/oracle-autonomous-database Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Rajeev Grover, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. 00:26 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! Welcome back to a new season of the Oracle University Podcast. This time, our focus is going to be on Oracle Autonomous Database. We've got a jam-packed season planned with some very special guests joining us. 00:52 Lois: If you're a regular listener of the podcast, you'll remember that we'd spoken a bit about Autonomous Database last year. 
That was a really good introductory episode so if you missed it, you might want to check it out. Nikita: Yeah, we'll post a link to the episode in today's show notes so you can find it easily. 01:07 Lois: Right, Niki. So, for today's episode, we wanted to focus on Autonomous Database on Serverless Infrastructure and we reached out to three experts in the field: Hannah Nguyen, Sean Stacey, and Kay Malcolm. Hannah is an Associate Cloud Engineer, Sean, a Director of Platform Technology Solutions, and Kay, who's been on the podcast before, is Senior Director of Database Product Management. For this episode, we'll be sharing portions of our conversations with them. So, let's get started. 01:38 Nikita: Hi Hannah! How does Oracle Cloud handle the process of provisioning an Autonomous Database? Hannah: The Oracle Cloud automates the process of provisioning an Autonomous Database, and it automatically provisions for you a highly scalable, highly secure, and a highly available database very simply out of the box. 01:56 Lois: Hannah, what are the components and architecture involved when provisioning an Autonomous Database in Oracle Cloud? Hannah: Provisioning the database involves very few steps. But it's important to understand the components that are part of the provisioned environment. When provisioning a database, the number of CPUs in increments of 1 for serverless, storage in increments of 1 terabyte, and backup are automatically provisioned and enabled in the database. In the background, an Oracle 19c pluggable database is being added to the container database that manages all the user's Autonomous Databases. Because this Autonomous Database runs on Exadata systems, Real Application Clusters is also provisioned in the background to support the on-demand CPU scalability of the service. This is transparent to the user and administrator of the service. But be aware it is there. 
02:49 Nikita: Ok…So, what sort of flexibility does the Autonomous Database provide when it comes to managing resource usage and costs, you know… especially in terms of starting, stopping, and scaling instances? Hannah: The Autonomous Database allows you to start your instance very rapidly on demand. It also allows you to stop your instance on demand as well to conserve resources and to pause billing. Do be aware that when you do pause billing, you will not be charged for any CPU cycles because your instance will be stopped. However, you'll still be incurring charges for your monthly billing for your storage. In addition to allowing you to start and stop your instance on demand, it's also possible to scale your database instance on demand as well. All of this can be done very easily using the Database Cloud Console. 03:36 Lois: What about scaling in the Autonomous Database? Hannah: So you can scale up your OCPUs without touching your storage and scale it back down, and you can do the same with your storage. In addition to that, you can also set up autoscaling. So the database, whenever it detects the need, will automatically scale up to three times the base level number of OCPUs that you have allocated or provisioned for the Autonomous Database. 04:00 Nikita: Is autoscaling available for all tiers? Hannah: Autoscaling is not available for an always free database, but it is enabled by default for other tiered environments. Changing the setting does not require downtime. So this can also be set dynamically. One of the advantages of autoscaling is cost because you're billed based on the average number of OCPUs consumed during an hour. 04:23 Lois: Thanks, Hannah! Now, let's bring Sean into the conversation. Hey Sean, I want to talk about moving an autonomous database resource. When or why would I need to move an autonomous database resource from one compartment to another? 
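Hannah's point that autoscaled usage is billed on the average number of OCPUs consumed during an hour, with bursts capped at three times the base allocation, can be sketched as a toy calculation. The per-minute samples and the per-OCPU rate below are made up for illustration; this is not Oracle's billing code:

```python
def hourly_ocpu_bill(base_ocpus, per_minute_samples, rate_per_ocpu_hour):
    """Toy model of serverless autoscaling billing: charge for the
    average OCPUs consumed over the hour, capped at 3x the base."""
    cap = 3 * base_ocpus
    avg = sum(min(s, cap) for s in per_minute_samples) / len(per_minute_samples)
    return avg * rate_per_ocpu_hour

# A 2-OCPU base that bursts to 6 OCPUs for 15 minutes of the hour
# is billed for the hourly average (3 OCPUs), not the 6-OCPU peak:
print(hourly_ocpu_bill(2, [6] * 15 + [2] * 45, 1.0))  # 3.0
```

This is why autoscaling can be cheaper than provisioning for peak: short bursts only raise the average a little.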
Sean: There may be a business requirement where you need to move an autonomous database resource, serverless resource, from one compartment to another. Perhaps, there's a different subnet that you would like to move that autonomous database to, or perhaps there's some business applications that are within or accessible or available in that other compartment that you wish to move your autonomous database to take advantage of. 04:58 Nikita: And how simple is this process of moving an autonomous database from one compartment to another? What happens to the backups during this transition? Sean: The way you can do this is simply to take an autonomous database and move it from compartment A to compartment B. And when you do so, the backups, or the automatic backups that are associated with that autonomous database, will be moved with that autonomous database as well. 05:21 Lois: Is there anything that I need to keep in mind when I'm moving an autonomous database between compartments? Sean: A couple of things to be aware of when doing this is, first of all, you must have the appropriate privileges in that compartment in order to move that autonomous database both from the source compartment to the target compartment. In addition to that, once the autonomous database is moved to this new compartment, any policies or anything that's defined in that compartment to govern the authorization and privileges of that said user in that compartment will be applied immediately to that new autonomous database that has been moved into that new compartment. 05:59 Nikita: Sean, I want to ask you about cloning in Autonomous Database. What are the different types of clones that can be created? Sean: It's possible to create a new Autonomous Database as a clone of an existing Autonomous Database. This can be done as a full copy of that existing Autonomous Database, or it can be done as a metadata copy, where the objects and tables are cloned, but they are empty. 
So there's no rows in the tables. And this clone can be taken from a live running Autonomous Database or even from a backup. So you can take a backup and clone that to a completely new database. 06:35 Lois: But why would you clone in the first place? What are the benefits of this? Sean: When cloning or when creating this clone, it can be created in a completely new compartment from where the source Autonomous Database was originally located. So it's a nice way of moving one database to another compartment to allow developers or another community of users to have access to that environment. 06:58 Nikita: I know that along with having a full clone, you can also have a refreshable clone. Can you tell us more about that? Who is responsible for this? Sean: It's possible to create a refreshable clone from an Autonomous Database. And this is one that would be synced with that source database up to so many days. The task of keeping that refreshable clone in sync with that source database rests upon the shoulders of the administrator. The administrator is the person who is responsible for performing that sync operation. Now, actually performing the operation is very simple, it's point and click. And it's an automated process from the database console. And also be aware that refreshable clones can trail the source database or source Autonomous Database up to seven days. After that period of time, the refreshable clone, if it has not been refreshed or kept in sync with that source database, it will become a standalone, read-only copy of that original source database. 08:00 Nikita: Ok Sean, so if you had to give us the key takeaways on cloning an Autonomous Database, what would they be? Sean: It's very easy and a lot of flexibility when it comes to cloning an Autonomous Database. We have different models that you can take from a live running database instance with zero impact on your workload or from a backup. 
It can be a full copy, or it can be a metadata copy, as well as a refreshable, read-only clone of a source database. 08:33 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You'll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all of which is available free to subscribers. So, get going! Pick a course of your choice, get certified, join the Oracle University Learning Community, and network with your peers. If you are already an Oracle MyLearn user, go to MyLearn to begin your journey. If you have not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started. 09:12 Nikita: Welcome back! Thank you, Sean, and hi Kay! I want to ask you about events and notifications in Autonomous Database. Where do they really come in handy? Kay: Events can be used for a variety of notifications, including admin password expiration, ADB services going down, and wallet expiration warnings. There's this service, and it's called the notifications service. It's part of OCI. And this service provides you with the ability to broadcast messages to distributed components using a publish and subscribe model. These notifications can be used to notify you when event rules or alarms are triggered or simply to directly publish a message. In addition to this, there's also something that's called a topic. This is a communication channel for sending messages to subscribers in the topic. You can manage these topics and their subscriptions really easy. It's not hard to do at all. 10:14 Lois: Kay, I want to ask you about backing up Autonomous Databases. How does Autonomous Database handle backups? Kay: Autonomous Database automatically backs up your database for you. The retention period for backups is 60 days. You can restore and recover your database to any point in time during this retention period. 
You can initiate recovery for your Autonomous Database by using the cloud console or an API call. Autonomous Database automatically restores and recovers your database to the point in time that you specify. In addition to a point in time recovery, we can also perform a restore from a specific backup set. 10:59 Lois: Kay, you spoke about automatic backups, but what about manual backups? Kay: You can do manual backups using the cloud console, for example, if you want to take a backup say before a major change to make restoring and recovery faster. These manual backups are put in your cloud object storage bucket. 11:20 Nikita: Are there any special instructions that we need to follow when configuring a manual backup? Kay: The manual backup configuration tasks are a one-time operation. Once this is configured, you can go ahead, trigger your manual backup any time you wish after that. When creating the object storage bucket for the manual backups, it is really important-- so I don't want you to forget-- that the name format for the bucket and the object storage follows this naming convention. It should be backup underscore database name. And it's not the display name here when I say database name. 12:00 Kay: In addition to that, the object name has to be all lowercase. So three rules. Backup underscore database name, and the specific database name is not the display name. It has to be in lowercase. Once you've created your object storage bucket to meet these rules, you then go ahead and set a database property. Default_backup_bucket. This points to the object storage URL and it's using the Swift protocol. Once you've got your object storage bucket mapped and you've created your mapping to the object storage location, you then need to go ahead and create a database credential inside your database. You may have already had this in place for other purposes, like maybe you were loading data, you were using Data Pump, et cetera. 
If you don't, you would need to create this specifically for your manual backups. Once you've done so, you can then go ahead and set your property to that default credential that you created. So once you follow these steps as I pointed out, you only have to do it one time. Once it's configured, you can go ahead and use it from now on for your manual backups. 13:21 Lois: Kay, the last topic I want to talk about before we let you go is Autonomous Data Guard. Can you tell us about it? Kay: Autonomous Data Guard monitors the primary database, in other words, the database that you're using right now. Lois: So, if ADB goes down… Kay: Then the standby instance will automatically become the primary instance. There's no manual intervention required. So failover from the primary database to that standby database I mentioned, it's completely seamless and it doesn't require any additional wallets to be downloaded or any new URLs to access APEX or Oracle Machine Learning. Even Oracle REST Data Services. All the URLs and all the wallets, everything that you need to authenticate, to connect to your database, they all remain the same for you if you have to failover to your standby database. 14:19 Lois: And what happens after a failover occurs? Kay: After performing a failover, a new standby for your primary will automatically be provisioned. So in other words, in performing a failover your standby does become your new primary. Any new standby is made for that primary. I know, it's kind of interesting. So currently, the standby database is created in the same region as the primary database. For better resilience, if your database is provisioned, it would be available on AD1 or Availability Domain 1. My secondary, or my standby, would be provisioned on a different availability domain. 15:10 Nikita: But there's also the possibility of manual failover, right? What are the differences between automatic and manual failover scenarios? When would you recommend using each? 
Kay: So in the case of the automatic failover scenario following a disastrous situation, if the primary ADB becomes completely unavailable, the switchover button will turn to a failover button. Because remember, this is a disaster. Automatic failover is automatically triggered. There's no user action required. So if you're asleep and something happens, you're protected. There's no user action required, but automatic failover is allowed to succeed only when no data loss will occur. 15:57 Kay: For manual failover scenarios in the rare case when an automatic failover is unsuccessful, the switchover button will become a failover button and the user can trigger a manual failover should they wish to do so. The system automatically recovers as much data as possible, minimizing any potential data loss. But you can see anywhere from a few seconds or minutes of data loss. Now, you should only perform a manual failover in a true disaster scenario, accepting the fact that a few minutes of potential data loss could occur, to ensure that your database is back online as soon as possible. 16:44 Lois: Thank you so much, Kay. This conversation has been so educational for us. And thank you once again to Hannah and Sean. To learn more about Autonomous Database, head over to mylearn.oracle.com and search for the Oracle Autonomous Database Administration Workshop. Nikita: Thanks for joining us today. In our next episode, we will discuss Autonomous Database on Dedicated Infrastructure. Until then, this is Nikita Abraham… Lois: …and Lois Houston signing off. 17:12 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
In this sponsored episode of Kubernetes Unpacked, we dive into the importance of cost and resource optimization with CAST AI. The truth is, it's not just about saving money. The goal is ensuring that your apps are performing the way they should. This saves both customer and engineering frustration. We also explore from an engineering perspective how CAST AI uses AI in the background and how AI teams are building integrations into the product. The post KU040: Kubernetes Autoscaling Magic – Cost Control In Gen AI And LLMs With CAST AI (Sponsored) appeared first on Packet Pushers.
In this episode, Adam McCrea joins us to talk about building and growing Judoscale (previously Rails Autoscale), an autoscaler originally released in the Heroku Marketplace and now available for Render, with other platforms to come. Adam shares about building a product on the side, launching on the Heroku platform, the process of a rebrand, and making the transition to full-time indie business owner.
Adam McCrea: Twitter
Judoscale: Website
Mentioned in the episode:
Heroku
Render
Fly.io
Nate Berkopec
TinySeed
Configure Azure Virtual Desktop infrastructure to run efficiently. Drive up utilization with scaling plans. Scale session host VMs in a host pool up or down automatically, without paying for idle resources. Ensure your VM images are up-to-date with required software, configurations, and Windows updates. Gain an end-to-end view of your services' performance by utilizing Azure Virtual Desktop Insights at Scale, which provides comprehensive visibility into how your services are running, and access a consolidated view of all your host pools and subscriptions for easy monitoring. Azure Expert, Matt McSpirit, walks through part four in our series on Azure Virtual Desktop. ► QUICK LINKS: 00:00 - Introduction 00:23 - Scaling plans 03:29 - Personal scaling plans 04:30 - Up-to-date VM images 07:13 - Azure Virtual Desktop Insights at Scale 09:02 - Session History chart 10:06 - Wrap up ► Link References: Check out our complete playlist at https://aka.ms/AVDMechanicsSeries ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Bret and Matt welcome Jake Warner back to the show to talk about LowOps. What does LowOps mean? What can Cycle offer us as an alternative to Swarm and Kubernetes? Jake Warner is the CEO and founder of Cycle.io, and I had him on the show a few years ago when I first heard about Cycle and I wanted to get an update on their platform offering. On this show we generally talk about Docker and Kubernetes, but I'm also interested in any container tooling that can help us deploy and manage container-based applications. Cycle's platform is an alternative container orchestrator as a service. In fact, they go beyond what you would normally get with a container orchestrator: they provide OS updates, networking, the container runtime, and the orchestrator all in a single offering as a way to reduce the complexity that we're typically faced with when we're deploying Kubernetes. While I'm a fan of Docker Swarm due to its simplicity, it still requires you to manage the OS underneath, to configure networking sometimes, and the feature releases have slowed down in recent years. But I still have a soft spot for those solutions that are removing the grunt work of OS and update management and helping smaller teams get more work done. I think Cycle has the potential to do that for a lot of teams that aren't all in on the Kubernetes way, but still value the container abstraction as the way to deploy software to servers. Live recording of the complete show from May 18, 2023 is on YouTube (Ep. #217).
Includes demos.
★Topics★
Cycle.io website
@cycleplatform on YouTube
Support this show and get exclusive benefits on Patreon, YouTube, or bretfisher.com!
★Join my Community★
Get on the waitlist for my next live course on CI automation and gitops deployments
Best coupons for my Docker and Kubernetes courses
Chat with us and fellow students on our Discord Server DevOps Fans
Grab some merch at Bret's Loot Box
Homepage bretfisher.com
Creators & Guests
Bret Fisher - Host
Cristi Cotovan - Editor
Beth Fisher - Producer
Matt Williams - Host
Jake Warner @ Cycle.io - Guest
(00:00) - Intro
(02:25) - Introducing the guests
(03:17) - What is Cycle?
(12:33) - Deploying and staying up to date with Cycle
(14:21) - Cycle's own OS and updates
(17:12) - Core OS vs Cycle
(22:10) - Use multiple providers with Cycle
(22:52) - Run Cycle anywhere with infrastructure abstraction layer
(24:33) - No latency requirement for the nodes
(28:28) - DNS for container-to-container resolution
(29:54) - Migration from one cloud provider to another?
(31:17) - Roll back and telemetry
(32:48) - Full-featured API
(37:12) - Cycle data volumes
(38:35) - Backups
(40:24) - Autoscaling
(43:00) - Getting started
(44:40) - Control plane and self-hosting
(44:58) - Question about moving to Reno
(45:59) - Built from revenue and angels; no VC funding
What if you could significantly reduce the amount of time spent managing your database while still being confident that it is secure? Well, you can! With Oracle Autonomous Database (ADB), you can enjoy the highest levels of performance, scalability, and availability without the complexity of operating and securing your database. In this episode, Lois Houston and Nikita Abraham speak to William Masdon about how you can use the features of ADB to securely integrate, store, and analyze your data. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Twitter: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Ranbir Singh, and the OU Studio Team for helping us create this episode.
In this episode, Jan and Ronald bring you up to speed on a number of important functionalities and features within Kubernetes. We start with resource planning: how does Kubernetes know whether enough resources are available when a pod is placed on a node? Next, Horizontal Pod Autoscaling is explained: automatically starting pods under a given load. The two also dig into node affinity: deploying pods across different regions for a mixed cluster.
Useful links:
https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
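The two ideas discussed in the episode, resource planning and Horizontal Pod Autoscaling, meet in one place: the scheduler places a pod only on a node with enough unreserved capacity for its requests, and the HPA measures utilization relative to those same requests. A minimal sketch (the image and names are hypothetical):

```yaml
# Resource planning: requests tell the scheduler how much capacity
# to reserve on a node before placing the pod there.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: web
          image: nginx:1.25           # hypothetical image
          resources:
            requests: {cpu: 250m, memory: 128Mi}
---
# Horizontal Pod Autoscaling: add replicas when average CPU
# utilization (measured against the requests above) exceeds 80%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef: {apiVersion: apps/v1, kind: Deployment, name: web}
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target: {type: Utilization, averageUtilization: 80}
```

This is also why pods without requests break autoscaling: with no request set, CPU utilization percentages have no denominator.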
Change is the only constant, and for businesses these days, being able to match demand at any given time is of crucial importance. In this episode, Lois Houston and Nikita Abraham, along with special guest Rohit Rahi, look at how Oracle Cloud Infrastructure's scaling capabilities allow organizations to add and remove resources as needed, both manually and automatically, enabling them to fulfill their goals. They also discuss the OS Management service and how it can be used to manage and monitor updates and patches in operating system environments. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community Twitter: https://twitter.com/Oracle_Edu LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Special thanks to Arijit Ghosh, Kiran BR, David Wright, the OU Podcast Team, and the OU Studio Team for helping us create this episode.
What is AWS EC2? What is it for, and how do you get the most out of cloud computing? We'll look at all the resources that AWS cloud computing offers for your business or project, and how to save a lot with the reserved instance (Savings Plans) and spot instance options. How to use EC2 with its Elastic Load Balancer, Auto Scaling, Elastic IP, and Security Group features, and much more. The AWS 2.0 course is being prepared with great care and dedication to meet the main market demands for technology professionals and entrepreneurs. Sign up now to take advantage of all the pre-launch benefits: https://www.uminventorqualquer.com.br/curso-aws/ Subscribe to the Wesley Milan channel to follow reviews of AWS services: https://bit.ly/3LqiYwg EC2 review: https://youtu.be/bGwZqR2dTOw Subscribe to the Wesley Milan channel in English and recommend it to your English-speaking friends: https://bit.ly/3LqFjtA Follow me on Instagram: https://bit.ly/3tfzAj0 LinkedIn: https://www.linkedin.com/in/wesleymilan/ Podcast: https://bit.ly/3qa5JH1
Two brothers discussing all things AWS every week. Hosted by Andreas and Michael Wittig, presented by cloudonaut.
Links: Ben Kehoe has left iRobot. And where's he going next? Presumably to re:Invent! I am too, with my re:Quinnvent nonsense Amazon Athena announces Query Result Reuse to accelerate queries Amazon EC2 enables you to opt out of directly shared Amazon Machine Images Amazon EC2 placement groups can now be shared across multiple AWS accounts Amazon EC2 now supports specifying list of instance types to use in attribute-based instance type selection for Auto Scaling groups, EC2 Fleet, and Spot Fleet Amazon Lightsail announces support for domain registration and DNS autoconfiguration Amazon RDS now supports new General Purpose gp3 storage volumes Announcing recurring custom line items for AWS Billing Conductor AWS Lambda announces Telemetry API, further enriching monitoring and observability capabilities of Lambda Extensions AWS Cost Explorer's New Look and Common Use Cases A New AWS Region Opens in Switzerland - eu-central-2 is now available. Introducing AWS Resource Explorer – Quickly Find Resources in Your AWS Account Overview of building resilient applications with Amazon DynamoDB global tables Publish Amazon DevOps Guru Insights to Slack Channel Uncompressed Media over IP on AWS: Read the whitepaper Enable cross-account queries on AWS CloudTrail lake using delegated administration from AWS Organizations NASA and ASDI announce no-cost access to important climate dataset on the AWS Cloud
In this episode we're talking about all things sizing: clusters, sharding, and indexing. We're joined by Jan Srniček of Global Logic. We first came across Jan via his talk from MongoDB World, where he spoke about the journey he and his team took in understanding how to reduce usage and cost, all the while keeping performance and responsiveness high. In this episode, Jan talks about his journey with MongoDB, moving from Cosmos DB to MongoDB. Initially they stood up their own on-prem MongoDB database to learn more, but soon realised that moving to Atlas was key, particularly as they could host on Azure. For Global Logic's client, Catalina, Jan manages everything from small clusters with a few hundred records all the way up to a system of 7 clusters with over 5 billion data records! So he knows all about scaling, and he teaches us some lessons he's learnt along the way. He illustrates that if you only scale one large cluster (e.g. with autoscaling on), your database never gets a break! However, if you have smaller clusters, all autoscaling, along with predictable traffic patterns visualised by using the Atlas metrics UI and Performance Advisor, you can analyse usage and re-organise structure appropriately. Ultimately, understanding the workloads in your application and dividing those across clusters all working together is key to application performance. Ján Srniček - https://www.linkedin.com/in/j%C3%A1n-srni%C4%8Dek-4a3b826b Global Logic - https://www.globallogic.com/ Catalina - https://www.catalina.com/ MongoDB - https://www.mongodb.com/
With some of his Getup teammates, João Brito comments on the most relevant changes in Kubernetes version 1.25. Among them are the definitive removal of PSP (PodSecurityPolicy), the deprecation of GlusterFS support, and the death of Autoscaling v2beta1. There's also the arrival, still in alpha, of the Linux namespace feature (not the Kubernetes kind, mind you!), and the graduation to stable of the Pod Security Admission and Local Ephemeral Storage Capacity Isolation features. Another piece of news is that PDB (Pod Disruption Budget) moves to the default version, which is why we recommend keeping production deployments with at least two replicas so you don't get a headache during an update, for example. Amid the observations on the new version, the crew also discussed the pros and cons of working with a managed cluster vs. an on-premises cluster, and whether any feature gate is missed in a production cluster.
LINKS to what was mentioned in the show:
Article by Karol Valencia of Aqua Security: https://blog.aquasec.com/kubernetes-version-1.25
KubiLab - Adonai Costa's video tutorial on KEDA: https://gtup.me/KubilabKeda
RECOMMENDATIONS from the participants:
One Punch Man (manga)
Attack on Titan (manga series)
The Sandman (film)
Narradores de Javé (film)
Five Days at Memorial (series on Apple TV+)
INVITATION! We're getting close to Kubicast #100 and we're going to celebrate this milestone in a very special way! In an "ASK ME ANYTHING" format, the audience will be able to ask all their questions about Kubernetes and related topics! Sign up to take part: https://getup.io/participe-do-kubicast-100. The event takes place on September 15 at 7 p.m. on Zoom.
ABOUT KUBICAST
Kubicast is a production of Getup, a Kubernetes specialist. All podcast episodes are on the Getup website and on the main digital audio platforms. Some of them are recorded on YouTube. #DevOps #Kubernetes #Containers #Kubicast
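The two-replica advice pairs naturally with a PodDisruptionBudget. A minimal sketch (the app label is hypothetical): with two replicas and minAvailable: 1, a node drain during an upgrade can evict one pod at a time without ever taking the service fully down.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 1        # keep at least one pod up during voluntary disruptions
  selector:
    matchLabels:
      app: web           # must match the Deployment's pod labels
```

Note the flip side: with a single replica, the same PDB would block the drain entirely, which is exactly the upgrade headache the episode warns about.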
Bret is joined by Nirmal Mehta, a Principal Specialist Solution Architect at AWS and a Docker Captain, to discuss Karpenter, an autoscaling solution launched by AWS in 2021. Karpenter simplifies Kubernetes infrastructure by automating node scaling up and down, giving you "the right nodes at the right time." Autoscaling, particularly for Kubernetes, can be quite a complex project when you first start. Bret and Nirmal discuss how Karpenter works, how it can help or complement your existing setup, and how autoscaling generally works. Streamed live on YouTube on June 9, 2022. Unedited live recording of this show on YouTube (Ep #173). Includes demos.
★Topics★
Starship Shell Prompt
Bret's favorite shell setup
Karpenter
Karpenter release blog
K8s Scheduling Concepts
Other types of autoscalers:
Horizontal Pod Autoscaler
Vertical Pod Autoscaler
Cluster Autoscaler
★Nirmal Mehta★
Nirmal on Twitter
Nirmal on LinkedIn
★Join my Community★
Best coupons for my Docker and Kubernetes courses
Chat with us on our Discord Server Vital DevOps
Homepage bretfisher.com
★ Support this podcast on Patreon ★
We chat with Nir Mashkowski, Principal PM Manager, about the evolution of PaaS since its inception 10+ years ago, some of the usage patterns he has observed, the benefits of and reasoning behind using PaaS services, and where Azure PaaS services are headed. Media File: https://azpodcast.blob.core.windows.net/episodes/Episode431.mp3 YouTube: https://youtu.be/b3zUtaCnhTE Resources: Overview - Azure App Service | Microsoft Docs Azure Functions Overview | Microsoft Docs What is Azure Static Web Apps? | Microsoft Docs Azure Container Apps documentation | Microsoft Docs Dapr - Distributed Application Runtime KEDA | Kubernetes Event-driven Autoscaling
James and I talked to Michael Pytel, co-founder and CTO of Fulfilld, a cloud SaaS warehouse management startup. As a fellow SAP nerd, we have shared background for commiseration. Michael has done some amazing things with the technology powering Fulfilld, and he and the rest of his team have given a lot of thought to matching that technology up to business needs for several warehouse worker personas. It was a blast geeking out.
Highlights
Techie with CNC machining in his blood — grandfathers, uncle, father
Co-founded NIMBL
Geeks out about digital twins — even got the chance to do a digital twin of the 49ers stadium
The new face of bread: mastered breadmaking during COVID
One way to look at goals: "how do we become the Uber and Waze of the warehouse?"
Why SaaS? IT departments are hampered by operations and maintenance. It's HARD to get that innovation done with those burdens.
It's freeing to come out of the limitations of the ERP systems to really build what Fulfilld wanted
It's not vapor; customers are signed.
The cost to develop in new, open technologies is much lower than the traditional ERP realm.
Autoscaling infrastructure allows them to spend much less on infra and much more on engineering
It all boils down to: cost of license, cost of maintenance. Everything else is details.
Michael sees augmented reality (AR) as a natural next step for the future of the warehouse.
Money Quotes
Michael:
We walk into the warehouse, we see a bunch of mobile devices on the charger… no one using them.
Building open, building flexible. That's the key.
If enough employees are trained on Fulfilld, could we create an on-demand workforce for warehouses?
Paul:
[Choosing your own stack] It's a breath of fresh air!
https://go.dok.community/slack https://dok.community/ From DoK Day EU 2022 (https://youtu.be/Xi-h4XNd5tE) Managing stateful workloads in a containerized environment has always been a concern. However, as Kubernetes developed, the whole community worked hard to bring stateful workloads up to the needs of its enterprise users. Kubernetes introduced StatefulSets, which have supported stateful workloads since Kubernetes version 1.9, so users of Kubernetes can now run stateful applications like databases, AI workloads, and big data. As we all know, Kubernetes lets us automate many administration tasks along with provisioning and scaling. Rather than manually allocating resources, we can create automated procedures that save time, let us respond faster to peaks in demand, and reduce costs by scaling down when resources are not required. So it's really important to handle autoscaling of stateful workloads in Kubernetes for better fault tolerance, high availability, and cost management. There are still a few challenges around autoscaling stateful workloads in Kubernetes, related to horizontal/vertical scaling and to automating the scaling process. In horizontal scaling, when we scale up the workloads we need to make sure that the newly added workloads join the existing workloads in terms of collaboration, integration, load-sharing, etc. We must also make sure that no data is lost, and that ongoing tasks are completed, transferred, or aborted when scaling down the workloads. If the workloads are in a primary-standby architecture, we need to make sure that scale-up or scale-down happens on standby workloads first, so that failovers are minimized. While scaling down some workloads, we also need to ensure that the targeted workloads are excluded from voting, to prevent quorum loss.
Similarly, while scaling up some workloads, we need to ensure that new workloads join the voting. When new resources are required, we have to make the tradeoff between vertical scaling and horizontal scaling. And when it comes to automation, we have to determine how to generate resource (CPU/memory) recommendations for the workloads, and when to trigger the autoscaling. A group of workloads may need to be autoscaled together: in sharded databases, for example, each shard is represented by one StatefulSet, but all the shards are treated alike by the database operator. Each shard may have its own recommendations, so we have to find a way to scale them with the same recommendations. We also need to determine what happens when an autoscaling operation fails, and what happens to future recommendations after the failure. Some workloads may need a managed restart: in a database, secondary nodes may need to be restarted before the primary. In that case, how do we do a managed restart while autoscaling? And what happens when the workloads are going through maintenance? We will try to answer some of those questions throughout our session.
-----
Fahim is a Software Engineer working at AppsCode Inc. He has been involved with the Kubernetes project since 2018 and is very enthusiastic about Kubernetes and open source in general.
-----
MD Kamol Hasan is a professional software developer with expertise in Kubernetes and backend development in Go, and one of the lead engineers on the KubeDB and KubeVault projects. He is a competitive programmer who has participated in national and international contests including ACM ICPC and NCPC.
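The "scale down standbys first" rule described above can be sketched as a small decision function. This is an illustrative sketch only, not code from the session; the member structure, role names, and function name are all hypothetical:

```python
# Illustrative sketch: when scaling down a primary-standby StatefulSet,
# prefer removing standby members first so failovers are minimized.
# The `members` data shape here is made up for illustration.

def scale_down_order(members, count):
    """Return the `count` members to remove, standbys before the primary."""
    standbys = [m for m in members if m["role"] == "standby"]
    primaries = [m for m in members if m["role"] == "primary"]
    # Remove highest-ordinal standbys first (matching StatefulSet
    # reverse-ordinal termination), and only touch a primary when no
    # standby is left to remove.
    ordered = sorted(standbys, key=lambda m: m["ordinal"], reverse=True)
    ordered += sorted(primaries, key=lambda m: m["ordinal"], reverse=True)
    return ordered[:count]

members = [
    {"name": "db-0", "ordinal": 0, "role": "primary"},
    {"name": "db-1", "ordinal": 1, "role": "standby"},
    {"name": "db-2", "ordinal": 2, "role": "standby"},
]
print([m["name"] for m in scale_down_order(members, 2)])  # ['db-2', 'db-1']
```

A real operator would additionally remove the targeted members from the voting set before terminating them, as the talk notes, to avoid quorum loss.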
Kaslin Fields and Mark Mirchandani learn how GKE manages their releases and how customers can take advantage of the GKE release channels for smooth transitions. Guests Abdelfettah Sghiouar and Kobi Magnezi of the Google Cloud GKE team are here to explain. With releases every four months or so, Kobi tells us that Kubernetes requires two pieces to be managed with each release: the control plane and the nodes. Both are managed for the customer in GKE. The new addition of release channels allows flexibility with release updating so customers can adjust to their specific project needs. Each channel offers a different updating mix and speed, and clients choose the channel that's right for their project. The idea for release channels isn't a new one, Kobi explains. In fact, Google's frequent project releases, while keeping things secure and running well, also can be customized by choosing from an assortment of channels in other Google offerings like Chrome. Our guests talk us through the process of releasing through channels and how each release marinates in the Rapid channel to be sure the version is supported and secure before being pushed to customers through other channels. We hear how release channels differ from no-channel releases, the benefits of specialized channels, and recommendations for customers as far as which channels to use with different development environments. Abdel describes real-world use cases for the Rapid, Regular, and Stable channels, the Surge Upgrade feature, and how GKE notifications with Pub/Sub helps in the updating process. Kobi talks about maintenance and exclusion windows to help customers further customize when and how their projects will update. Kobi and Abdel wrap up with a discussion of the future of GKE release channels. Kobi Magnezi Kobi is the Product Manager for GKE at Google Cloud. Abdelfettah Sghiouar Abdel is a Cloud Dev Advocate with a focus on Cloud native, GKE, and Service Mesh technologies. 
Cool things of the week
GKE Essentials videos
KubeCon EU 2023 site
KubeCon Call for Proposals site
Kubernetes 1.24: Stargazer site
GCP Podcast Episode 292: Pulumi and Kubernetes Releases with Kat Cosgrove podcast
Optimize and scale your startup on Google Cloud: Introducing the Build Series blog
Interview
Kubernetes site
GKE site
Autoscaling with GKE: Overview and pods video
GKE release schedule docs
Release channels docs
Upgrade-scope maintenance windows docs
Configure cluster notifications for third-party services docs
Cluster notifications docs
Pub/Sub site
Agones site
What's something cool you're working on?
Kaslin is working on KubeCon and new episodes of GKE Essentials.
Hosts
Mark Mirchandani and Kaslin Fields
[00:01:10] Adam tells us a little bit about himself and how he got into this field.
[00:03:48] We learn more about Adam's career path from edge case to Rails Autoscale.
[00:05:09] Adam gives us a rundown of what Rails Autoscale is and the problem it solves.
[00:06:41] Andrew wonders if Rails Autoscale will help if you don't have enough memory, and Adam tells us the solution for this.
[00:09:39] Adam fills us in on the support load he gets and the kind of support he gives.
[00:10:39] Find out how Rails Autoscale is different compared to other autoscalers Adam tried.
[00:16:05] If you're wondering when Rails Autoscale is right for you, Adam tells us. Also, he announces that he's working on a new autoscaler that's going to be language-agnostic on Heroku.
[00:17:41] Andrew wonders what prompted Adam to do this for other languages, and he tells us how the development has been so far.
[00:20:28] We learn how the experience has been for Adam building an app within the Heroku marketplace.
[00:22:37] Andrew asks Adam if he ever thought of making a bunch of fake accounts. ☺
[00:23:50] Is YNAB a Rails app? Adam explains more about it and the team there.
[00:26:26] Adam's been in the Ruby community for a long time, so we find out what he's currently excited about, and where you can find him online.
Panelists:
Jason Charnes
Andrew Mason
Guest:
Adam McCrea
Sponsor:
Honeybadger
Links:
Ruby Radar Newsletter
Ruby Radar Twitter
Adam McCrea Twitter
Rails Autoscale
YNAB
YNAB API Ruby Library - GitHub
A lot of things happened in October, and we talked about them all in early November. In this episode Arjen, Guy, and JM discuss a whole bunch of cool things that were released and may be a bit harsh on everything Microsoft.
News
Finally in Sydney
Amazon EC2 Mac instances are now available in seven additional AWS Regions
Amazon MemoryDB for Redis is now available in 11 additional AWS Regions
Serverless
Lambda
AWS Lambda now supports triggering Lambda functions from an Amazon SQS queue in a different account
AWS Lambda now supports IAM authentication for Amazon MSK as an event source
Step Functions
Now — AWS Step Functions Supports 200 AWS Services To Enable Easier Workflow Automation | AWS News Blog
AWS Batch adds console support for visualizing AWS Step Functions workflows
Amplify
Announcing General Availability of Amplify Geo for AWS Amplify
AWS Amplify for JavaScript now supports resumable file uploads for Storage
Other
Accelerating serverless development with AWS SAM Accelerate | AWS Compute Blog
Containers
Amazon EKS Managed Node Groups adds native support for Bottlerocket
AWS Fargate now supports Amazon ECS Windows containers
Announcing the general availability of cdk8s and support for Go | Containers
Monitoring clock accuracy on AWS Fargate with Amazon ECS
Amazon ECS Anywhere now supports GPU-based workloads
AWS Console Mobile Application adds support for Amazon Elastic Container Service
AWS Load Balancer Controller version 2.3 now available with support for ALB IPv6 targets
AWS App Mesh Metric Extension is now generally available
EC2 & VPC
New – Amazon EC2 C6i Instances Powered by the Latest Generation Intel Xeon Scalable Processors | AWS News Blog
Amazon EC2 now supports sharing Amazon Machine Images across AWS Organizations and Organizational Units
Amazon EC2 Hibernation adds support for Ubuntu 20.04 LTS
Announcing Amazon EC2 Capacity Reservation Fleet, a way to easily migrate Amazon EC2 Capacity Reservations across instance types
Amazon EC2 Auto Scaling now supports describing Auto Scaling groups using tags
Amazon EC2 now offers Microsoft SQL Server on Microsoft Windows Server 2022 AMIs
AWS Elastic Beanstalk supports Database Decoupling in an Elastic Beanstalk Environment
AWS FPGA developer kit now supports Jumbo frames in virtual ethernet frameworks for Amazon EC2 F1 instances
Amazon VPC Flow Logs now supports Apache Parquet, Hive-compatible prefixes and Hourly partitioned files
Network Load Balancer now supports TLS 1.3
New – Attribute-Based Instance Type Selection for EC2 Auto Scaling and EC2 Fleet | AWS News Blog
Amazon Lightsail now supports AWS CloudFormation for instances, disks and databases
Dev & Ops
CLI
AWS Cloud Control API, a Uniform API to Access AWS & Third-Party Services | AWS News Blog
Now programmatically manage alternate contacts on AWS accounts
CodeGuru
Amazon CodeGuru now includes recommendations powered by Infer
Amazon CodeGuru announces Security detectors for Python applications and security analysis powered by Bandit
Amazon CodeGuru Reviewer adds detectors for AWS Java SDK v2's best practices and features
IaC
AWS CDK releases v1.121.0 - v1.125.0 with features for faster development cycles using hotswap deployments and rollback control
AWS CloudFormation customers can now manage their applications in AWS Systems Manager
Other
NoSQL Workbench for Amazon DynamoDB now enables you to import and automatically populate sample data to help build and visualize your data models
Amazon Corretto October Quarterly Updates
Bulk Editing of OpsItems in AWS Systems Manager OpsCenter
AWS Fault Injection Simulator now supports Spot Interruptions
AWS Fault Injection Simulator now injects Spot Instance Interruptions
Security
Firewalls
AWS Firewall Manager now supports centralized logging of AWS Network Firewall logs
AWS Network Firewall Adds New Configuration Options for Rule Ordering and Default Drop
Backups
AWS Backup Audit Manager adds compliance reports
AWS Backup adds an additional layer for backup protection with the availability of AWS Backup Vault Lock
Other
AWS Security Hub adds support for cross-Region aggregation of findings to simplify how you evaluate and improve your AWS security posture
Amazon SES now supports 2048-bit DKIM keys
AWS License Manager now supports Delegated Administrator for Managed entitlements
Data Storage & Processing
Goodbye Microsoft SQL Server, Hello Babelfish | AWS News Blog
Announcing availability of the Babelfish for PostgreSQL open source project
Announcing Amazon RDS Custom for Oracle
AWS announces AWS Snowcone SSD
Amazon RDS Proxy now supports Amazon RDS for MySQL Version 8.0
Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) announces support for Cross-Cluster Replication
Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now comes with an improved management console
AWS Transfer Family customers can now use Amazon S3 Access Point aliases for granular and simplified data access controls
Amazon EMR now supports Apache Spark SQL to insert data into and update Apache Hive metadata tables when Apache Ranger integration is enabled
Amazon Neptune now supports Auto Scaling for Read Replicas
AWS Glue Crawlers support Amazon S3 event notifications
Amazon Keyspaces (for Apache Cassandra) now supports automatic data expiration by using Time to Live (TTL) settings
New – AWS Data Exchange for Amazon Redshift | AWS News Blog
AI & ML
SageMaker
Announcing Fast File Mode for Amazon SageMaker
Amazon SageMaker Projects now supports Image Building CI/CD templates
Amazon SageMaker Data Wrangler now supports Amazon Athena Workgroups, feature correlation, and customer managed keys
Other
Amazon Kendra launches support for 34 additional languages
Amazon Fraud Detector now supports event datasets
AWS announces a price reduction of up to 56% for Amazon Fraud Detector machine learning fraud predictions
Amazon Fraud Detector launches new ML model for online transaction fraud detection
Amazon Transcribe now supports custom language models for streaming transcription
Amazon Textract launches TIFF support and adds asynchronous support for receipts and invoices processing
Announcing Amazon EC2 DL1 instances for cost efficient training of deep learning models
Other Cool Stuff
AWS IoT Core now makes it optional for customers to send the entire trust chain when provisioning devices using Just-in-Time Provisioning and Just-in-Time Registration
AWS IoT SiteWise announces support for using the same asset models across different hierarchies
VMware Cloud on AWS Outposts Brings VMware SDDC as a Fully Managed Service on Premises | AWS News Blog
AWS Outposts adds new CloudWatch dimension for capacity monitoring
Amazon Monitron launches iOS app
Amazon Braket offers D-Wave's Advantage 4.1 system for quantum annealing
Amazon QuickSight adds support for Pixel-Perfect dashboards
Amazon WorkMail adds Mobile Device Access Override API and MDM integration capabilities
Announcing Amazon WorkSpaces API to create new updated images with latest AWS drivers
Computer Vision at the Edge with AWS Panorama | AWS News Blog
Amazon Connect launches API to configure hours of operation programmatically
New region availability and Graviton2 support now available for Amazon GameLift
Sponsors
CMD Solutions
Silver Sponsors
Cevo
Versent
About this podcast
Steve and Frank are back with the latest cloud FinOps news for October! This episode covers the following topics:
AWS Fault Injection Simulator now injects Spot Instance Interruptions
AWS Pricing Calculator now supports Amazon CloudFront
Introducing Amazon EC2 C6i instances
AWS Marketplace announces Purchase Order Management for SaaS contracts
Amazon EC2 announces attribute-based instance type selection for Auto Scaling groups, EC2 Fleet, and Spot Fleet
Introducing GKE image streaming for fast application startup and autoscaling
Run your fault-tolerant workloads cost-effectively with Google Cloud Spot VMs
N2D VMs with latest AMD EPYC CPUs enable on average over 30% better price-performance
Find your GKE cost optimization opportunities right in the console
Google Cloud billing tutorials: Because surprises are for home makeover shows, not your wallet
Find out more about Cloud FinOps by visiting our website.
Azure Virtual Machine Scale Sets lets you create and manage a group of virtual machines to run your app or workload and provides sophisticated load-balancing, management, and automation. This is a critical service for creating and dynamically managing thousands of VMs in your environment. If you are new to the service this show will get you up to speed or if you haven't looked at VM Scale Sets in a while we'll show you how the service has significantly evolved to help you efficiently architect your apps for centralized configuration, high availability, auto-scaling and performance, cost optimization, security, and more. ► QUICK LINKS: 00:00 - Virtual machine scale sets intro 00:32 - What is a virtual machine scale set? 00:47 - Centralized configuration options 02:30 - How do scale sets increase availability? 03:54 - How does autoscaling work? 04:58 - Keeping costs down with VM scale sets 05:47 - Building security into your scale set configurations 06:28 - Where you can learn more about VM scale sets ► Link References: To learn more, check out https://aka.ms/VMSSOverview Watch our episode about Azure Spot VMs at https://aka.ms/EssentialsSpotVMs ► Unfamiliar with Microsoft Mechanics? We are Microsoft's official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at #Microsoft. Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries?sub_confirmation=1 Join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog Watch or listen via podcast here: https://microsoftmechanics.libsyn.com/website ► Keep getting this insider knowledge, join us on social: Follow us on Twitter: https://twitter.com/MSFTMechanics Follow us on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Concept of the week: Event Stream abstractions and Pravega: 15:15
Demo of the week: Event Stream abstractions and Pravega: 1:11:00
PR of the week: Pravega presto-connector PR 49: 1:20:51
Question of the week: What is the point of Trino Forum and what is the relationship to Trino Slack?: 1:26:07
Show Notes: https://trino.io/episodes/28.html
Show Page: https://trino.io/broadcast/
Lucas Cardoso leads a great explanation of the concept of Auto Scaling and how to use it in our day-to-day! Check it out! Join our Telegram group, Cloud Evangelists BR, to ask more questions: https://t.me/cloudevangelist Or visit: https://www.darede.com.br/
Once again Arjen, Jean-Manuel, and Guy discuss the latest and greatest announcements from AWS in this roundup of the news of May. Also once again, this was recorded 2 months before it went up, but luckily it's all still relevant. Even the comments about being in lockdown. News Finally in Sydney
In this video, I'll teach you how to configure Auto Scaling on AWS (Amazon Web Services). Auto Scaling is an essential feature for taking your application to the next level. After all, the biggest reason to use cloud computing resources is to guarantee that your API has scalability, redundancy, security, and better performance! You'll learn more about the AWS feature called Launch Templates ("Modelos de Execução"). I'll also guide you step by step through creating Auto Scaling Groups; using the ELB (Elastic Load Balancer) to make your Auto Scaling effective; creating an EC2 snapshot, that is, an image of the machine, and managing those images through AMIs (Amazon Machine Images); configuring scale-up and scale-down of machines (adding or removing them) to manage your cluster of instances; and incrementing launch template versions. After this video, you'll be able to create and configure Auto Scaling together with a Load Balancer on AWS, letting the service orchestrate the distribution of request load across the available machines. You'll also be able to run tests with your own application to verify that your machines are responding properly. Video index:
01:12 - Creating an AMI (machine image)
03:11 - Creating a launch template from the machine image
06:44 - Creating an auto-scaling group with the AMI
08:23 - Adding the ELB (Elastic Load Balancer) to our Auto Scaling group
16:26 - Verifying and testing the application created by Auto Scaling
17:49 - Changing the size of my cluster
22:14 - Creating a new launch template version
Did you enjoy the content of this video?
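The scale-up/scale-down behavior configured in the video can be summarized by the proportional rule that target-tracking-style policies follow: size the group so the average metric moves toward the target, clamped to the group's min/max. This is a simplified illustrative sketch, not AWS's implementation; the function and parameter names are made up:

```python
import math

def desired_capacity(current_capacity, current_cpu, target_cpu,
                     min_size, max_size):
    # Simplified proportional rule: if average CPU is above target,
    # grow the group; if below, shrink it. Result is clamped to the
    # Auto Scaling group's configured min/max size.
    desired = math.ceil(current_capacity * current_cpu / target_cpu)
    return max(min_size, min(desired, max_size))

print(desired_capacity(4, 80.0, 50.0, 2, 10))  # 7 (scale up)
print(desired_capacity(4, 20.0, 50.0, 2, 10))  # 2 (scale down, floor at min)
```

In practice the ELB in front of the group spreads requests across whatever instances the policy keeps running, which is why the video configures the load balancer and the Auto Scaling group together.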
On The Cloud Pod this week, Ryan was busy buying stuff on Amazon Prime Day and didn't want to talk about JEDI, so he arrived late to the recording. Also, long-time sponsor of The Cloud Pod, Foghorn Consulting, has been acquired by Evoque, so the team grilled Peter for the juicy details. A big thanks to this week's sponsors: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. JumpCloud, which offers a complete platform for identity, access, and device management — no matter where your users and devices are located. This week's highlights The $10 billion JEDI cloud contract has been canceled by the Pentagon. In its place, the DOD announced a new multi-vendor contract known as the “Joint Warfighter Cloud Capability.” Evoque Data Center Solutions has acquired cloud engineering experts Foghorn Consulting. This is a key part of the company's Multi-Generational Infrastructure (MGI) strategy, which it announced the same day as the acquisition. AWS released some incredible numbers from Amazon Prime Day. Jeff Barr gives his annual take on how AWS performed and the record-setting event. Top Quotes “The Pentagon has called off the $10 billion cloud contract [JEDI]. It was being dragged through the courts by Amazon and Microsoft, and this is sort of an admission that the Pentagon didn’t want Donald Trump to get subpoenaed and testify on what his involvement was in the whole contract.” “This is a big problem that almost every business has: how do you stop a deployment, especially a large deployment? Typically, we throw people at it, and we have them watch millions of dashboards, and hopefully, they catch it. But usually, it’s a problem somewhere that’s exposed to the customer that triggers that. 
So if we can have more tools like Gandalf that detect problems earlier, it's great.” General News: Some People Can't Take a Joke Evoque Data Center Solutions acquires Foghorn Consulting. Congratulations to Peter on this exciting news! The AWS Infinidash story has taken on a life of its own. What started as a joke has led to backlash from the community complaining about it being a form of technology gatekeeping. JEDI: We're Not Talking About This Anymore The Pentagon has canceled the $10 billion JEDI cloud contract. It's not really dead, they’ve just turned it into a joint multi-cloud offering, which is what we said they should do six months ago. Amazon Web Services: A Little Gooey Andy Jassy thanks AWS employees as he takes over as Amazon CEO. We wonder if Bezos is bored yet. Between the Blue Origin launch and The Washington Post, he's probably not. Jeff Barr is back with his annual take on how AWS did with Amazon Prime Day. It would be great to know how they manage the surge in workloads: maybe they have their own secret regions. The Bottlerocket AMI for Amazon ECS is now generally available. They've added functions to help automate clusters and troubleshoot, which is super cool. Amazon announces smaller units and a price drop for Amazon Kendra. It's still expensive, but if you're building internal tools to search your internal corporate internet, this is much more usable. Introducing AWS solution implementation Tag Tamer. If you’re embroiled in the tagging nightmare, this might be a better route than building it all in house. Google Cloud Platform: Looking To The Stars Rubin Observatory offers the first astronomy research platform in the cloud. It's great to see a partnership in the science field, rather than with another big corporation. Google introduces predictive autoscaling for managed instance groups (MIGs). Autoscaling can be difficult to manage, so anything that helps automate it is great. Google has made several updates to Google Cloud VMWare Engine. 
Allowing users to leverage policy-driven automation to scale the nodes needed to meet the compute demands of the VMware infrastructure is fantastic.
Azure: Big Fans of Lord of the Rings
Azure VPN NAT is now in public preview. One of the most impressive features will help avoid IP conflicts, especially with the transit gateway, which is awesome.
Azure announces the integration of New Relic One performance monitoring into Azure Spring Cloud. This is supported by VMware and Azure at the same time, which is not bad if you have a Spring Boot app: getting access to VMware engineers to troubleshoot your Java code is always a plus.
Azure builds a safe deployment service called 'Gandalf'. When the data center is on fire because of new code, it stops the problem from spreading.
TCP Lightning Round
In a controversial move, Peter claims that jokes that write themselves should be a point for him, so he awards himself this week's point, leaving scores at Justin (11), Ryan (5), Jonathan (8), Peter (2).
Other Headlines Mentioned:
Soft delete for blobs capability for Azure Data Lake Storage is now in limited public preview
Amazon EKS managed node groups now supports parallel node upgrades
AWS Glue Studio now provides data previews during visual job authoring
AWS Glue DataBrew adds support for backslash delimiter () in .csv datasets
AWS Glue DataBrew adds support for 14 new advanced data types for data preparation
AWS Amplify launches new full-stack CI/CD capabilities
AWS Amplify CLI adds support for storing environment variables and secrets accessed by AWS Lambda functions
AWS IQ now supports attachments
Amazon Athena adds parameterized queries to improve reusability and security
Things Coming Up:
Announcing Google Cloud 2021 Summits [frequently updated]
Security Summit — July 20th
Retail and Consumer Goods Summit — July 28th
Amazon re:Inforce — August 24–25 — Houston, TX
Google Cloud Next 2021 — October 12–14, 2021
AWS re:Invent — November 29–December 3 — Las Vegas
Oracle Open World (no details yet)
In this video, I'll teach you how to configure the ELB (Elastic Load Balancer) on AWS (Amazon Web Services). Load balancing is an essential feature for making your application scalable and redundant, since it's the first step toward configuring Auto Scaling for your application. You'll also learn more about information security in cloud computing so that external access doesn't reach your application in unintended ways, for example how to properly configure your AWS Security Groups to allow traffic into your applications through the correct ports. After this video, you'll be able to create and configure a Load Balancer on AWS, letting the service orchestrate the distribution of request load across the available machines. Beyond AWS, you'll also see that these same tasks can be performed with NGINX and the Kong API Gateway. Watch this video on YouTube: https://youtu.be/8vcrE0FojKY Video index:
01:45 - What is load balancing?
04:50 - Accessing the ELB
06:40 - Creating your first load balancer
08:35 - SSL certificates on the AWS ELB
10:58 - Security group for the load balancer
15:54 - Troubleshooting unhealthy instances in the ELB
20:58 - Testing the Load Balancer
Did you enjoy the content of this video?
In Episode #44 of the SAP on Azure video podcast we talk about SAPPHIRE NOW! The Keynote with Christian Klein, Julia White and Hasso Plattner, the related Innovation News Guide, The SAP Business Technology Platform and the announced Free tier, Event enablement for S/4HANA, Teams Development Resources, Fusion of Teams webinar and Office scripts. Then Santosh and Karan talk about how Microsoft Digital is using SAP auto-scaling to address different load requirements. https://youtu.be/AJOxyAySGmY hobru/SAPonAzure: Content related to the unofficial SAP on Azure YouTube Channel, http://youtube.com/SAPonAzure (github.com)
In this episode, we talked about Auto Scaling Group (ASG). #aws #ec2 #podcast #cloud #darija #morocco #asg #scalability #infra
Catch up on the latest news while you're on the go! Radio-style broadcast "Daily AWS". Good morning, this is Furukawa, in charge of Tuesdays. Today we pick up and introduce the updates released on 4/26. Share your impressions on Twitter with the hashtag "#サバワ"!
■ Talk script: https://blog.serverworks.co.jp/aws-update-2021-04-26
■ UPDATE PICKUP
- AWS CodeDeploy updates its support for EC2 deployments with Auto Scaling Groups
- AWS Service Catalog AppRegistry can now be created and managed from the Management Console
- AWS Cost Categories introduces a details page
- AWS Resource Access Manager is now available in the Osaka region
■ Serverworks SNS: Twitter / Facebook
■ Serverworks blog: Serverworks Engineer Blog
Do you want to scale your workloads on Kubernetes without having to worry about the details? Do you want to run Azure Functions anywhere and easily scale it yourself? Tom Kerkhove shows Scott Hanselman how Kubernetes Event-Driven Autoscaling (KEDA) makes application autoscaling dead simple.
[0:00:00] Introduction
[0:01:10] Presentation
[0:07:57] Demo
[0:17:45] Discussion and wrap-up
Links: Kubernetes Event-driven Autoscaling | KEDA on GitHub | Azure Functions on Kubernetes with KEDA | Azure Friday - Azure Serverless on Kubernetes with KEDA | Create a free account (Azure)
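The event-driven scaling the episode demos can be sketched with the rule KEDA's queue-based scalers feed to the Kubernetes HPA: roughly one replica per target batch of pending events, with scale-to-zero when the queue is empty. The replica bounds below are illustrative assumptions:

```python
import math

def keda_desired_replicas(queue_length, target_per_replica,
                          min_replicas=0, max_replicas=10):
    """Sketch of KEDA-style queue scaling: replicas ~ ceil(queue / target),
    dropping to zero when no events are pending."""
    if queue_length == 0:
        # Scale-to-zero is the headline feature of event-driven autoscaling.
        return min_replicas
    desired = math.ceil(queue_length / target_per_replica)
    return max(1, min(max_replicas, desired))

print(keda_desired_replicas(0, 5))    # idle queue: scaled to zero
print(keda_desired_replicas(23, 5))   # 23 queued messages: 5 replicas
```

In a real deployment this rule is declared in a KEDA ScaledObject rather than computed by hand; the sketch just shows the arithmetic that drives the demo.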
Sponsor: Circle CI
Episode on CI/CD with Circle CI
In this episode, we cover the following topics:

Pillars in depth

Performance Efficiency
"Ability to use resources efficiently to meet system requirements and maintain that efficiency as demand changes and technology evolves"
Design principles:
- Easy to try new advanced technologies (by letting AWS manage them, instead of standing them up yourself)
- Go global in minutes
- Use serverless architectures
- Experiment more often
- Mechanical sympathy (use the technology approach that aligns best to what you are trying to achieve)
Key service: CloudWatch
Focus areas:
- Selection. Services: EC2, EBS, RDS, DynamoDB, Auto Scaling, S3, VPC, Route53, DirectConnect
- Review. Services: AWS Blog, AWS What's New
- Monitoring. Services: CloudWatch, Lambda, Kinesis, SQS
- Tradeoffs. Services: CloudFront, ElastiCache, Snowball, RDS (read replicas)
Best practices:
- Selection: choose appropriate resource types (compute, storage, database, networking)
- Tradeoffs: proximity and caching

Cost Optimization
"Ability to run systems to deliver business value at the lowest price point"
Design principles:
- Adopt a consumption model (only pay for what you use)
- Measure overall efficiency
- Stop spending money on data center operations
- Analyze and attribute expenditures
- Use managed services to reduce TCO
Key service: AWS Cost Explorer (with cost allocation tags)
Focus areas:
- Expenditure awareness. Services: Cost Explorer, AWS Budgets, CloudWatch, SNS
- Cost-effective resources. Services: Reserved Instances, Spot Instances, Cost Explorer
- Matching supply and demand. Services: Auto Scaling
- Optimizing over time. Services: AWS Blog, AWS What's New, Trusted Advisor
Key points:
- Use Trusted Advisor to find ways to save $$$

The Well-Architected Review
Centered around the question "Are you well architected?"
The Well-Architected review provides a consistent approach to review a workload against current AWS best practices and gives advice on how to architect for the cloud.
Benefits of the review:
- Build and deploy faster
- Lower or mitigate risks
- Make informed decisions
- Learn AWS best practices

The AWS Well-Architected Tool
- Cloud-based service available from the AWS console
- Provides a consistent process for you to review and measure your architecture using the AWS Well-Architected Framework
- Helps you: Learn, Measure, Improve
Improvement plan:
- Based on identified high and medium risk topics
- Canned list of suggested action items to address each risk topic
Milestones:
- Makes a read-only snapshot of completed questions and answers
Best practices:
- Save a milestone after initially completing the workload review
- Then, whenever you make large changes to your workload architecture, perform a subsequent review and save it as a new milestone

Links:
- AWS Well-Architected
- AWS Well-Architected Framework - Online/HTML version (includes drill-down pages for each review question, with recommended action items to address that issue)
- AWS Well-Architected Tool
- Enhanced Networking
- Amazon EBS-optimized instances
- VPC Endpoints
- Amazon S3 Transfer Acceleration
- AWS Billing and Cost Management

Whitepapers:
- AWS Well-Architected Framework
- Operational Excellence Pillar
- Security Pillar
- Reliability Pillar
- Performance Efficiency Pillar
- Cost Optimization Pillar

End song: The Shadow Gallery by Roy England
For a full transcription of this episode, please visit the episode webpage.
We'd love to hear from you! You can reach us at:
Web: https://mobycast.fm
Voicemail: 844-818-0993
Email: ask@mobycast.fm
Twitter: https://twitter.com/hashtag/mobycast
Reddit: https://reddit.com/r/mobycast
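The "adopt a consumption model (only pay for what you use)" principle from the Cost Optimization pillar can be illustrated with back-of-the-envelope arithmetic. The rates and hours below are hypothetical placeholders, not real AWS prices:

```python
def monthly_cost(hourly_rate, hours_used):
    """Consumption model: you pay only for the hours you actually consume."""
    return hourly_rate * hours_used

# Hypothetical rate; real prices vary by region and instance type.
hourly_rate = 0.10            # $/hour, pay-as-you-go

# A dev/test workload running 8 hours a day, 22 days a month,
# versus an always-on server billed for all 730 hours in a month.
used_hours = 8 * 22           # 176 hours
pay_as_you_go = monthly_cost(hourly_rate, used_hours)
always_on = monthly_cost(hourly_rate, 730)

print(f"${pay_as_you_go:.2f} vs ${always_on:.2f} always-on")
```

Stopping instances outside working hours (or letting Auto Scaling match supply to demand, as the "Matching supply and demand" focus area suggests) is where this saving comes from.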