Welcome to episode 297 of The Cloud Pod – where the forecast is always cloudy! Justin, Ryan, and Matthew have beaten the black lung and are in the studio, ready to bring you all the latest and greatest in cloud and AI news! We've got Wiz buyouts (that security, it's so hot right now!), Gemma 3, Glue 5 (but not 3 or 4), and Gemini Robots – plus a look ahead to AI Skills Fest and Google Next. All this week on The Cloud Pod.

Titles we almost went with this week:
Google! Yer a WIZ-ard
Google Announces Network Security Integration… and that must include WIZ
Gemini Robots…. What could go wrong
AI Data Studios… So Hot Right Now
I Want 32 Billion Dollars
Azure Follows AWS in Bad Life Choices – mk
Wait, Glue Is More Than v2
What Happened to Glue 3 and 4?
5th Try and AWS Glue Still Sucks

A big thanks to this week's sponsor: We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our Slack channel for more info.

Follow Up

01:05 Microsoft quantum computing claim still lacks evidence: physicists are dubious

A Microsoft researcher presented the results behind the company's controversial claim to have created the first topological qubits – a long-sought goal of quantum computing. Theorists called it a beautiful talk on a hard problem, but said the claims come without evidence and that Microsoft has gone overboard. The Head of Quantum at Amazon was also highly skeptical: https://www.businessinsider.com/amazon-exec-casts-doubt-microsoft-quantum-claims-2025-3

02:09 Justin – "No one's really buying that Microsoft actually created a new topological qubit. 
There's some doubt… basically, what they showed is a microscopic H-shaped aluminum wire on top of indium arsenide, a superconductor at ultra-cold temperatures. The devices are designed to harness majoranas, previously undiscovered quasi-particles that are essential for topological qubits to work; the goal is for majoranas to appear at the four tips of the H-shaped wire, emerging from the behavior of reflected electrons. These majoranas could in theory be used to perform quantum computing that is resistant to information loss – but there's no proof, no evidence, and they think Microsoft's full of it."

General News

04:12 Google + Wiz: Strengthening Multicloud Security

Google has announced the signing of a definitive agreement to acquire Wiz. This will allow them to better provide businesses and governments with more choice in how they protect themselves. Google answers why now… and that they have seen their
AWS Morning Brief for the week of December 2, with Corey Quinn. Links:
Amazon CloudWatch adds context to observability data in service consoles, accelerating analysis
Amazon Cognito introduces Managed Login to support rich branding for end user journeys
Amazon Cognito now supports passwordless authentication for low-friction and secure logins
Amazon Connect Email is now generally available
Amazon EBS announces Time-based Copy for EBS Snapshots
Amazon EC2 Auto Scaling introduces highly responsive scaling policies
Amazon EC2 Capacity Blocks now supports instant start times and extensions
Amazon ECR announces 10x increase in repository limit to 100,000
Amazon EFS now supports up to 2.5 million IOPS per file system
Amazon S3 now supports enforcement of conditional write operations for S3 general purpose buckets
Application Signals provides OTEL support via X-Ray OTLP endpoint for traces
AWS delivers enhanced root cause insights to help explain cost anomalies
Enhanced Pricing Calculator now supports discounts and purchase commitments (in preview)
AWS PrivateLink now supports cross-region connectivity
Announcing the new AWS User Notifications SDK
Announcing new feature tiers: Essentials and Plus for Amazon Cognito
Announcing Savings Plans Purchase Analyzer
Data Exports for FOCUS 1.0 is now in general availability
Introducing a new experience for AWS Systems Manager
Introducing generative AI troubleshooting for Apache Spark in AWS Glue (preview)
Understanding how certain database parameters impact scaling in Amazon Aurora Serverless v2
Analyzing your AWS Cost Explorer data with Amazon Q Developer: Now Generally Available
Your guide to AWS for Advertising & Marketing at re:Invent 2024
AWS IoT Services alignment with US Cyber Trust Mark
Streamlining AWS Organizations Cleanup Strategies
Sponsor: Wiz: wiz.io/lastweek
Today is analytics engineering day. In this episode, we talk about how the evolution of the data career gave rise to the analytics field, and what day-to-day life is like for data professionals at Itaú Unibanco. Here's who joined the conversation: Paulo Silveira, the host who remembers terms that have since fallen out of use; André David, the co-host who wants to take Hipsters listeners inside Itaú; Rodrigo Lima, Head of Data Engineering and Analytics at Itaú Unibanco; Carlos "Cadu" Vaccaro, Analytics Engineering Manager at Itaú Unibanco; and Ana Hashimoto, Data Engineering Coordinator at Itaú Unibanco.
What are the challenges of building a fintech application natively in the cloud? How can you build a data management and analytics architecture using services like AWS Glue, Amazon Managed Workflows for Apache Airflow, and Karpenter? Can GenAI also help us write complex queries and validate business scenarios? In this episode we host Luca Soato, Data Engineering Manager at Cardo AI, to talk about how a complex securitization architecture was implemented using AWS services. Useful links: Cardo AI Amazon Bedrock Karpenter
AWS Morning Brief for the week of May 28th, 2024, with Corey Quinn. Links:
Introducing Amazon EC2 C7i-flex instances
Amazon RDS for Db2 introduces hourly licensing from IBM through AWS Marketplace
Amazon VPC Lattice now supports TLS Passthrough
AWS CloudFormation streamlines deployment troubleshooting with AWS CloudTrail integration
AWS CloudFormation accelerates dev-test cycle with a new parameter for DeleteStack API
AWS Lambda console now supports sharing test events between developers in additional regions
Mail Manager – Amazon SES introduces new email routing and archiving features
Join us at the AWS World IPv6 Day Celebration
5 ways to increase AWS Certified employees in your organization
Summary Data lakehouse architectures are gaining popularity due to the flexibility and cost effectiveness that they offer. The link that bridges the gap between data lake and warehouse capabilities is the catalog. The primary purpose of the catalog is to inform the query engine of what data exists and where, but the Nessie project aims to go beyond that simple utility. In this episode Alex Merced explains how the branching and merging functionality in Nessie allows you to use the same versioning semantics for your data lakehouse that you are used to from Git. Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster (https://www.dataengineeringpodcast.com/dagster) today to get started. Your first 30 days are free! Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. 
Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. Join us at the top event for the global data community, Data Council Austin. From March 26-28th, 2024, we'll play host to hundreds of attendees, 100 top speakers and dozens of startups that are advancing data science, engineering and AI. Data Council attendees are amazing founders, data scientists, lead engineers, CTOs, heads of data, investors and community organizers who are all working together to build the future of data and sharing their insights and learnings through deeply technical talks. As a listener to the Data Engineering Podcast you can get a special discount off regular-priced and late bird tickets by using the promo code dataengpod20. Don't miss out on our only event this year! Visit dataengineeringpodcast.com/data-council (https://www.dataengineeringpodcast.com/data-council) and use code dataengpod20 to register today! Your host is Tobias Macey and today I'm interviewing Alex Merced, developer advocate at Dremio and co-author of the upcoming book from O'Reilly, "Apache Iceberg: The Definitive Guide", about Nessie, a Git-like versioned catalog for data lakes using Apache Iceberg Interview Introduction How did you get involved in the area of data management? Can you describe what Nessie is and the story behind it? What are the core problems/complexities that Nessie is designed to solve? The closest analogue to Nessie that I've seen in the ecosystem is LakeFS. What are the features that would lead someone to choose one or the other for a given use case? Why would someone choose Nessie over native table-level branching in the Apache Iceberg spec? How do the versioning capabilities compare to/augment the data versioning in Iceberg? 
What are some of the sources of, and challenges in resolving, merge conflicts between table branches? Can you describe the architecture of Nessie? How have the design and goals of the project changed since it was first created? What is involved in integrating Nessie into a given data stack? For cases where a given query/compute engine doesn't natively support Nessie, what are the options for using it effectively? How does the inclusion of Nessie in a data lake influence the overall workflow of developing/deploying/evolving processing flows? What are the most interesting, innovative, or unexpected ways that you have seen Nessie used? What are the most interesting, unexpected, or challenging lessons that you have learned while working with Nessie? When is Nessie the wrong choice? What have you heard is planned for the future of Nessie? Contact Info LinkedIn (https://www.linkedin.com/in/alexmerced) Twitter (https://www.twitter.com/amdatalakehouse) Alex's Article on Dremio's Blog (https://www.dremio.com/authors/alex-merced/) Alex's Substack (https://amdatalakehouse.substack.com/) Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com)) with your story. 
Links Project Nessie (https://projectnessie.org/) Article: What is Nessie, Catalog Versioning and Git-for-Data? (https://www.dremio.com/blog/what-is-nessie-catalog-versioning-and-git-for-data/) Article: What is Lakehouse Management?: Git-for-Data, Automated Apache Iceberg Table Maintenance and more (https://www.dremio.com/blog/what-is-lakehouse-management-git-for-data-automated-apache-iceberg-table-maintenance-and-more/) Free Early Release Copy of "Apache Iceberg: The Definitive Guide" (https://hello.dremio.com/wp-apache-iceberg-the-definitive-guide-reg.html) Iceberg (https://iceberg.apache.org/) Podcast Episode (https://www.dataengineeringpodcast.com/iceberg-with-ryan-blue-episode-52/) Arrow (https://arrow.apache.org/) Podcast Episode (https://www.dataengineeringpodcast.com/voltron-data-apache-arrow-episode-346/) Data Lakehouse (https://www.forbes.com/sites/bernardmarr/2022/01/18/what-is-a-data-lakehouse-a-super-simple-explanation-for-anyone/?sh=6cc46c8c6088) LakeFS (https://lakefs.io/) Podcast Episode (https://www.dataengineeringpodcast.com/lakefs-data-lake-versioning-episode-157) AWS Glue (https://aws.amazon.com/glue/) Tabular (https://tabular.io/) Podcast Episode (https://www.dataengineeringpodcast.com/tabular-iceberg-lakehouse-tables-episode-363) Trino (https://trino.io/) Presto (https://prestodb.io/) Dremio (https://www.dremio.com/) Podcast Episode (https://www.dataengineeringpodcast.com/dremio-with-tomer-shiran-episode-58) RocksDB (https://rocksdb.org/) Delta Lake (https://delta.io/) Podcast Episode (https://www.dataengineeringpodcast.com/delta-lake-data-lake-episode-85/) Hive Metastore (https://cwiki.apache.org/confluence/display/hive/design#Design-Metastore) PyIceberg (https://py.iceberg.apache.org/) Optimistic Concurrency Control (https://en.wikipedia.org/wiki/Optimistic_concurrency_control) The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak 
Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
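The Git-for-data semantics Alex describes (a branch is just a named pointer to a commit, and a merge atomically publishes one branch's state to another) can be modeled in a few lines. This is a conceptual sketch of catalog versioning, not Nessie's actual API; every class and method name here is invented for illustration:

```python
# Conceptual model of a Nessie-style versioned catalog: a commit is an
# immutable snapshot of table-name -> metadata-location mappings, and a
# branch is a named pointer to a commit. (Illustrative only; not the
# real Nessie API.)
class VersionedCatalog:
    def __init__(self):
        self.commits = {0: {}}          # commit id -> {table: metadata location}
        self.next_id = 1
        self.branches = {"main": 0}     # branch name -> commit id

    def create_branch(self, name, from_branch="main"):
        # A new branch starts out pointing at the same commit; no data is copied.
        self.branches[name] = self.branches[from_branch]

    def commit(self, branch, table, metadata_location):
        # Commits never mutate old snapshots; they build a new one.
        parent = self.commits[self.branches[branch]]
        self.commits[self.next_id] = {**parent, table: metadata_location}
        self.branches[branch] = self.next_id
        self.next_id += 1

    def merge(self, source, target):
        # Fast-forward-style merge: the target pointer jumps to the source
        # commit, publishing all of its table versions in one atomic step.
        self.branches[target] = self.branches[source]

    def tables(self, branch):
        return self.commits[self.branches[branch]]


catalog = VersionedCatalog()
catalog.commit("main", "orders", "s3://lake/orders/v1.metadata.json")
catalog.create_branch("etl")                      # isolate in-progress work
catalog.commit("etl", "orders", "s3://lake/orders/v2.metadata.json")
assert catalog.tables("main")["orders"].endswith("v1.metadata.json")  # readers unaffected
catalog.merge("etl", "main")                      # publish atomically
assert catalog.tables("main")["orders"].endswith("v2.metadata.json")
```

The point of the model is the isolation guarantee: consumers reading `main` never see the half-finished `etl` work until the merge flips the pointer.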
In this episode we continue our tour of the analytics services in AWS. We focus on AWS Glue and its capabilities for hydrating the data lake and building a unified data catalog. Additional material: https://aws.amazon.com/glue/
Welcome to episode 221 of The Cloud Pod podcast - where the forecast is always cloudy! This week your hosts, Justin, Jonathan, Ryan, and Matthew look at some of the announcements from AWS Summit, as well as try to predict the future - probably incorrectly - about what's in store at Next 2023. Plus, we talk more about the storm attack, SFTP connectors (and no, that isn't how you get to the Moscone Center for Next), Llama 2, Google Cloud Deploy, and more!

Titles we almost went with this week:
Now You Too Can Get Ignored by Google Support via Mobile App
The Tech Sector Apparently Believes Multi-Cloud is Great… We Hate You All.
The cloud pod now wants all your HIPAA Data
The Meta Llama is Spreading Everywhere
The Cloud Pod Recursively Deploys Deploy

A big thanks to this week's sponsor: Foghorn Consulting, which provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.
AWS Glue for Ray makes it easier to scale Python code to process large scale data in AWS Glue. Learn how you can unlock Python at scale for data integration workloads of all sizes, and simplify and streamline your ability to use Ray, a new open source framework for distributing Python across clusters. AWS Glue website: https://go.aws/3pLUTtp AWS Glue Data Integration Engines: https://go.aws/3JYfiSP Blog: AWS Glue for Ray: https://go.aws/43piC04
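The "Python at scale" idea here is task fan-out: split a dataset into chunks, process the chunks on parallel workers, then gather the results. With Ray you would decorate the worker function with `@ray.remote` and gather futures with `ray.get`; this standard-library sketch shows the same shape without requiring a Ray cluster, and all function names in it are illustrative, not Glue or Ray APIs:

```python
from concurrent.futures import ThreadPoolExecutor

def clean_chunk(rows):
    # Stand-in for per-chunk data-integration work (parsing, filtering, etc.).
    return [r.strip().lower() for r in rows if r.strip()]

def process_in_parallel(rows, n_chunks=4):
    # Split the input, fan the chunks out to workers, gather results in order.
    size = max(1, len(rows) // n_chunks)
    chunks = [rows[i:i + size] for i in range(0, len(rows), size)]
    with ThreadPoolExecutor() as pool:
        results = pool.map(clean_chunk, chunks)   # map preserves chunk order
    return [row for chunk in results for row in chunk]

if __name__ == "__main__":
    data = ["  Alpha", "beta  ", "", "GAMMA "]
    print(process_in_parallel(data))  # ['alpha', 'beta', 'gamma']
```

What Glue for Ray adds over this toy version is the cluster part: the same chunk-level functions run across many machines with Ray handling scheduling and data movement.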
MLOps Coffee Sessions #166 with Roy Hasson & Santona Tuli, Eliminating Garbage In/Garbage Out for Analytics and ML. // Abstract Shift left data quality ownership and observability that makes it easy for users to catch bad data at the source and stop it from entering your analytics/ML stack. // Bio Santona Tuli Santona Tuli, Ph.D. began her data journey through fundamental physics—searching through massive event data from particle collisions at CERN to detect rare particles. She's since extended her machine learning engineering to natural language processing, before switching focus to product and data engineering for data workflow authoring frameworks. As a Python engineer, she started with the programmatic data orchestration tool, Airflow, helping improve its developer experience for data science and machine learning pipelines. Currently, at Upsolver, she leads data engineering and science, driving developer research and engagement for the declarative workflow authoring framework in SQL. Dr. Tuli is passionate about building, as well as empowering others to build, end-to-end data and ML pipelines, scalably. Roy Hasson Roy is the head of product at Upsolver helping companies deliver high-quality data to their analytics and ML tools. Previously, Roy led product management for AWS Glue and AWS Lake Formation. 
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links https://royondata.substack.com/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Roy on LinkedIn: https://www.linkedin.com/in/royhasson/ Connect with Santona on LinkedIn: https://www.linkedin.com/in/santona-tuli/ Timestamps: [00:00] Santona's and Roy's preferred coffee [01:05] Santona's and Roy's background [03:33] Takeaways [05:49] Please like, share, and subscribe to our MLOps channels! [06:42] Back story of having Santona and Roy on the podcast [09:51] Santona's story [11:37] Optimal tag teamwork [16:53] Dealing with stakeholder needs [26:25] Having mechanisms in place [27:30] Building for data Engineers vs building for data scientists [34:50] Creating solutions for users [38:55] User experience holistic point of view [41:11] Tooling sprawl is real [42:00] LLMs reliability [45:00] Things would have loved to learn five years ago [49:46] Wrap up
AWS Morning Brief for the week of June 12, 2023 with Corey Quinn. Links:
AWS CloudTrail Lake now supports selective start or stop ingestion of CloudTrail events
AWS Glue for Ray is now generally available
AWS Lambda adds support for Ruby 3.2
AWS Mainframe Modernization service is now HIPAA eligible
Announcing AWS Snowblade for U.S Department of Defense JWCC
AWS Trusted Advisor adds new checks for Amazon EFS
Announcing the general availability of AWS Database Migration Service Serverless
Announcing Live Tail in Amazon CloudWatch Logs, providing real-time exploration of logs
AWS announces scripts to bulk update policies per new AWS Billing and Cost Management permissions
Drug Analyzer on AWS Provides Analytics That Inform Treatment Decisions and Support New Therapies
Selecting cost effective capacity reservations for your business-critical workloads on Amazon EC2
Announcing Container Image Signing with AWS Signer and Amazon EKS
How to deploy workloads in a multicloud environment with AWS developer tools
How businesses can gain ecommerce capabilities to increase sales
A Guide to Maintaining a Healthy Email Database
Using Amazon IVS for turnkey town halls
AWS's long-term commitment to Virginia
How AWS data centers reuse retired hardware
Hundreds of thousands of customers build data lakes every day, and these data lakes can quickly become data swamps if they don't pay attention to data quality. Setting up data quality checks can be a time-consuming, tedious process. Shiv Narayanan, Product Manager for AWS Glue, chats with (Name) about the launch of AWS Glue Data Quality. AWS Glue Data Quality builds confidence in your data by ensuring high data quality. It automatically measures, monitors, and manages data quality in your data lakes and data pipelines, so it's easy to identify missing, stale, or bad data. Take the steps to make your data lake crystal clear. What's New: AWS Glue Data Quality: https://go.aws/3IWPXIw AWS Glue Data Quality: https://go.aws/3MUrk07 Blog: AWS Glue Data Quality: https://go.aws/42nq6jG
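The kinds of rules a data-quality service evaluates (completeness, allowed values, freshness) are simple to state. This sketch hand-rolls three such checks to show what "measuring" data quality means; it is a conceptual stand-in, not Glue Data Quality's DQDL rule language or API, and the rule names are illustrative:

```python
from datetime import date, timedelta

def check_quality(rows, today):
    """Evaluate simple quality rules over a dataset and return a
    {rule_name: passed} report, mimicking the pass/fail results a
    data-quality service would produce."""
    return {
        # Completeness: every row must carry an id.
        "IsComplete id": all(r.get("id") is not None for r in rows),
        # Allowed values: status must come from a known set.
        "ColumnValues status": all(r.get("status") in {"ok", "failed"} for r in rows),
        # Freshness: newest row no older than 2 days -> catches stale data.
        "Freshness updated": max(r["updated"] for r in rows) >= today - timedelta(days=2),
    }

rows = [
    {"id": 1, "status": "ok", "updated": date(2024, 5, 1)},
    {"id": None, "status": "ok", "updated": date(2024, 5, 2)},  # missing id
]
report = check_quality(rows, today=date(2024, 5, 3))
print(report)  # {'IsComplete id': False, 'ColumnValues status': True, 'Freshness updated': True}
```

What the managed service adds on top of this idea is the automation: profiling data to recommend rules, scoring them continuously, and alerting on regressions instead of requiring hand-written checks like these.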
Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan and Jonathan are your hosts this week as we discuss all the latest news and announcements in the world of the cloud and AI - including Amazon's new AI, Bedrock, as well as new AI tools from other developers. We also address the new updates to AWS's CodeWhisperer, and return to our Cloud Journey Series where we discuss *insert dramatic music* - Kubernetes! Titles we almost went with this week: ⭐I'm always Whispering to My Code as an Individual
Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan and Matthew are your hosts this week as we discuss all the latest news and announcements in the world of the cloud and AI. Do people really love Matt's Azure know-how? Can Google make Bard fit into literally everything they make? What's the latest with Azure AI and their space collaborations? Let's find out!

Titles we almost went with this week:
Clouds in Space, Fictional Realms of Oracles, Oh My.
The cloudpod streams lambda to the cloud

A big thanks to this week's sponsor: Foghorn Consulting, which provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.
On this episode of The Cloud Pod, the team talks about the new AWS region in Malaysia, the launch of AWS App Composer, the expansion of Spanner database capabilities, Microsoft's release of a vision AI (the Florence foundation model), and three techniques for migrating to the cloud. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
Summary Five years of hosting the Data Engineering Podcast has provided Tobias Macey with a wealth of insight into the work of building and operating data systems at a variety of scales and for myriad purposes. In order to condense that acquired knowledge into a format that is useful to everyone Scott Hirleman turns the tables in this episode and asks Tobias about the tactical and strategic aspects of his experiences applying those lessons to the work of building a data platform from scratch. Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode (https://www.dataengineeringpodcast.com/linode) today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don't forget to thank them for their continued support of this show! Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan's active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. 
Go to dataengineeringpodcast.com/atlan (https://www.dataengineeringpodcast.com/atlan) today to learn more about how Atlan's active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos. Struggling with broken pipelines? Stale dashboards? Missing data? If this resonates with you, you're not alone. Data engineers struggling with unreliable data need look no further than Monte Carlo, the leading end-to-end Data Observability Platform! Trusted by the data teams at Fox, JetBlue, and PagerDuty, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo monitors and alerts for data issues across your data warehouses, data lakes, dbt models, Airflow jobs, and business intelligence tools, reducing time to detection and resolution from weeks to just minutes. Monte Carlo also gives you a holistic picture of data health with automatic, end-to-end lineage from ingestion to the BI layer directly out of the box. Start trusting your data with Monte Carlo today! Visit dataengineeringpodcast.com/montecarlo (http://www.dataengineeringpodcast.com/montecarlo) to learn more. Your host is Tobias Macey and today I'm being interviewed by Scott Hirleman about my work on the podcasts and my experience building a data platform Interview Introduction How did you get involved in the area of data management? Data platform building journey Why are you building, who are the users/use cases How to focus on doing what matters over cool tools How to build a good UX Anything surprising or did you discover anything you didn't expect at the start How to build so it's modular and can be improved in the future General build vs buy and vendor selection process Obviously have a good BS detector - how can others build theirs So many tools, where do you start - capability need, vendor suite offering, etc. 
Anything surprising in doing much of this at once How do you think about TCO in build versus buy Any advice Guest call out Be brave, believe you are good enough to be on the show Look at past episodes and don't pitch the same as what's been on recently And vendors, be smart, work with your customers to come up with a good pitch for them as guests... Tobias' advice and learnings from building out a data platform: Advice: when considering a tool, start from what you are actually trying to do. Yes, everyone has tools they want to use because they are cool (or some resume-driven development). Once you have a potential tool, ask: is the capability you want to use an unloved feature or a main part of the product? If it's a feature, will they give it the care and attention it needs? Advice: lean heavily on open source. You can fix things yourself and better direct the community's work than just filing a ticket and hoping with a vendor. Learning: there are likely going to be some painful pieces missing, especially around metadata, as you build out your platform. Advice: build in a modular way and think: what is my escape hatch? Yes, you have to lock yourself in a bit, but build with the possibility of a vendor or a tool going away - whether that is your choice (e.g. too expensive) or it literally disappears (anyone remember FoundationDB?). Learning: be prepared for tools to connect with each other but for the connection to not be as robust as you want. Again, be prepared to have metadata challenges especially. Advice: build your foundation to be strong. This will limit pain as things evolve and change. You can't build a large building on a bad foundation - or at least it's a BAD idea... Advice: spend the time to work with your data consumers to figure out what questions they want to answer. Then abstract that to build for general challenges instead of point solutions. Learning: it's easy to put data in S3 but it can be painfully difficult to query it. 
There's a missing piece as to how to store it for easy querying, not just the metadata issues. Advice: it's okay to pay a vendor to lessen pain. But becoming wholly reliant on them can put you in a bad spot. Advice: look to create paved-path / easy-path approaches. If someone wants to follow the preset path, it's easy for them. If they want to go their own way, more power to them, but it's not the data platform team's problem if it isn't working well. Learning: there will be places you didn't expect to bend - again, that metadata layer for Tobias - to get things done sooner. It's okay to not have the end platform built at launch; move forward and get something going. Advice: "one of the perennial problems in technology is the bias towards speed and action without necessarily understanding the destination." Really consider the path and whether you are creating a scalable and maintainable solution instead of pushing for speed to deliver something. Advice: consider building a buffer layer between upstream sources so if there are changes, it doesn't automatically break things downstream. Tobias' data platform components: data lakehouse paradigm, Airbyte for data integration (chosen over Meltano), Trino/Starburst Galaxy for distributed querying, AWS S3 for the storage layer, AWS Glue for very basic metadata cataloguing, Dagster as the crucial orchestration layer, dbt Contact Info LinkedIn (https://www.linkedin.com/in/scotthirleman/) Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. 
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com)) with your story. To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers Links Data Mesh Community (https://datameshlearning.com/community/) Podcast (https://www.linkedin.com/company/80887002/admin/) OSI Model (https://en.wikipedia.org/wiki/OSI_model) Schemata (https://schemata.app/) Podcast Episode (https://www.dataengineeringpodcast.com/schemata-schema-compatibility-utility-episode-324/) Atlan (https://atlan.com/) Podcast Episode (https://www.dataengineeringpodcast.com/atlan-data-team-collaboration-episode-179/) OpenMetadata (https://open-metadata.org/) Podcast Episode (https://www.dataengineeringpodcast.com/openmetadata-universal-metadata-layer-episode-237/) Chris Riccomini (https://daappod.com/data-mesh-radio/devops-for-data-mesh-chris-riccomini/) The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
About Ben

Ben Whaley is a staff software engineer at Chime. Ben is co-author of the UNIX and Linux System Administration Handbook, the de facto standard text on Linux administration, and is the author of two educational videos: Linux Web Operations and Linux System Administration. He has been an AWS Community Hero since 2014. Ben has held Red Hat Certified Engineer (RHCE) and Certified Information Systems Security Professional (CISSP) certifications. He earned a B.S. in Computer Science from the University of Colorado, Boulder.

Links Referenced:
Chime Financial: https://www.chime.com/
alternat.cloud: https://alternat.cloud
Twitter: https://twitter.com/iamthewhaley
LinkedIn: https://www.linkedin.com/in/benwhaley/

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Forget everything you know about SSH and try Tailscale. Imagine if you didn't need to manage PKI or rotate SSH keys every time someone leaves. That'd be pretty sweet, wouldn't it? With Tailscale SSH, you can do exactly that. Tailscale gives each server and user device a node key to connect to its VPN, and it uses the same node key to authorize and authenticate SSH. Basically you're SSHing the same way you manage access to your app. What's the benefit here? Built-in key rotation, permissions as code, connectivity between any two devices, reduced latency, and there's a lot more, but there's a time limit here. You can also ask users to reauthenticate for that extra bit of security. Sounds expensive? Nope, I wish it were. Tailscale is completely free for personal use on up to 20 devices. To learn more, visit snark.cloud/tailscale. 
Again, that's snark.cloud/tailscale.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn and this is an episode unlike any other that has yet been released on this august podcast. Let's begin by introducing my first-time guest somehow because apparently an invitation got lost in the mail somewhere. Ben Whaley is a staff software engineer at Chime Financial and has been an AWS Community Hero since Andy Jassy was basically in diapers, to my level of understanding. Ben, welcome to the show.
Ben: Corey, so good to be here. Thanks for having me on.
Corey: I'm embarrassed that you haven't been on the show before. You're one of those people that slipped through the cracks and somehow I was very bad at following up slash hounding you into finally agreeing to be here. But you certainly waited until you had something auspicious to talk about.
Ben: Well, you know, I'm the one that really should be embarrassed here. You did extend the invitation and I guess I just didn't feel like I had something to drop. But I think today we have something that will interest most of the listeners without a doubt.
Corey: So, folks who have listened to this podcast before, or read my newsletter, or follow me on Twitter, or have shared an elevator with me, or at any point have passed me on the street, have heard me complain about the Managed NAT Gateway and its egregious data processing fee of four-and-a-half cents per gigabyte. And I have complained about this for small customers because they're in the free tier; why is this thing charging them 32 bucks a month? And I have complained about this on behalf of large customers who are paying the GDP of the nation of Belize in data processing fees as they wind up shoving very large workloads to and fro, which is, I think, part of the prerequisite requirements for having a data warehouse.
And you are no different than the rest of these people who have those challenges, with the singular exception that you have done something about it, and what you have done is so, in retrospect, blindingly obvious that I am embarrassed the rest of us never thought of it.
Ben: It's interesting because when you are doing engineering, it's often the simplest solution that is the best. I've seen this repeatedly. And it's a little surprising that it didn't come up before, but I think it's, in some way, just a matter of timing. But what we came up with—and is this the right time to get into it? Do you want to just kind of name the solution here?
Corey: Oh, by all means. I'm not going to steal your thunder. Please, tell us what you have wrought.
Ben: We're calling it AlterNAT, and it's an alternative high-availability NAT solution. As everybody knows, NAT Gateway is sort of the default choice; it certainly is what AWS pushes everybody towards. But there is, in fact, a legacy solution: NAT instances. These were around long before NAT Gateway made an appearance. And like I said, they're considered legacy, but with the help of lots of modern AWS innovations and technologies like Lambdas and auto-scaling groups with max instance lifetimes and the latest generation of network-enhanced instances, it turns out that we can maybe not quite get as effective as a NAT Gateway, but we can save a lot of money and skip those data processing charges entirely by having a NAT instance solution with a failover NAT Gateway, which I think is kind of the key point behind the solution. So, are you interested in diving into the technical details?
Corey: That is very much the missing piece right there. You're right. What we used to use was NAT instances. That was the thing that we used because we didn't really have another option.
And they had an interface in the public subnet where they lived and an interface hanging out in the private subnet, and they had to be configured to wind up passing traffic to and fro.
Well, okay, that's great and all, but isn't that kind of brittle and dangerous? I basically have a single instance as a single point of failure, and these are the days early on when individual instances did not have the level of availability and durability they do now. Yeah, it's kind of awful, but here you go. I mean, the most galling part of the Managed NAT Gateway service is not that it's expensive; it's that it's expensive, but also incredibly good at what it does. You don't have to think about this whole problem anymore, and as of recently, it also supports IPv4-to-IPv6 translation as well.
It's not that the service is bad. It's that the service is stonkingly expensive, particularly at scale. And everything that we've seen before is either oh, run your own NAT instances, or bend the knee and pay your money. And a number of folks have come up with different options where this is ridiculous. Just go ahead and run your own NAT instances.
Yeah, but what happens when I have to take it down for maintenance or replace it? It's like, well, I guess you're not going to the internet today. This has the, in hindsight, obvious solution: well, we just run the Managed NAT Gateway, because the 32 bucks a month in instance-hour charges don't actually matter at any point of scale when you're doing this, but you wind up using the instance for day in, day out traffic, and the failover mode is simply that you'll use the expensive Managed NAT Gateway until the instance is healthy again, and then automatically change the route table back.
Ben: Yep. That's exactly it. So, the auto-scaling NAT instance solution has been around for a long time, well before even NAT Gateway was released.
You could have NAT instances in an auto-scaling group where the size of the group was one, and if the NAT instance failed, it would just replace itself. But this left a period, while the NAT instance was swapped out, in which you'd have no internet connectivity.
So, the solution here is that when auto-scaling terminates an instance, it fails over the route table to a standby NAT Gateway, rerouting the traffic. So, there's never a point at which there's no internet connectivity, right? The NAT instance is running, processing traffic, and gets terminated after a configurable period of time: 14 days, 30 days, whatever makes sense for your security strategy. It could be never, right? You could choose that you want to have your own maintenance window in which to do it.
Corey: And let's face it, this thing is more or less sitting there as a network traffic router, for lack of a better term. There is no need to ever log into the thing and make changes to it until and unless there's a vulnerability that you can exploit via somehow just talking to the TCP stack when nothing's actually listening on the host.
Ben: You know, you can run your own AMI that has been pared down to almost nothing, and that instance doesn't do much. It's using just a Linux kernel to sit on two networks and pass traffic back and forth. It has a translation table that kind of keeps track of the state of connections, so you don't need to have any service running. To manage the system, we have SSM, so you can use Session Manager to log in, but frankly, you can just disable that. You almost never even need to get a shell. And that is, in fact, an option we have in the solution: to disable SSM entirely.
Corey: One of the things I love about this approach is that it is turnkey. You throw this thing in there and it's good to go.
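The route-table swap Ben describes can be sketched with a couple of boto3-style calls. This is a rough illustration of the idea, not the actual AlterNAT code; the `ec2` client object and every resource ID here are hypothetical placeholders.

```python
# Sketch of the AlterNAT failover idea, not the project's real implementation.
# The `ec2` argument is expected to behave like boto3.client("ec2"); all IDs
# below are made up for illustration.

def fail_over_to_nat_gateway(ec2, route_table_id, nat_gateway_id):
    """On NAT instance termination, point the private subnet's default
    route at the standby NAT Gateway so connectivity never drops."""
    ec2.replace_route(
        RouteTableId=route_table_id,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_gateway_id,
    )

def fail_back_to_nat_instance(ec2, route_table_id, nat_instance_eni_id):
    """Once the replacement NAT instance is healthy, route the default
    route back through its elastic network interface."""
    ec2.replace_route(
        RouteTableId=route_table_id,
        DestinationCidrBlock="0.0.0.0/0",
        NetworkInterfaceId=nat_instance_eni_id,
    )
```

In the real project this kind of logic runs in a Lambda function triggered by the auto-scaling group's lifecycle events; as the conversation goes on to note, established TCP connections are still dropped at the moment of the swap.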
And in the event that the instance becomes unhealthy, great, it fails traffic over to the Managed NAT Gateway while it terminates the old node and replaces it with a healthy one, and then fails traffic back. Now, I do need to ask, what is the story of network connections during that failover and failback scenario?
Ben: Right. The primary drawback of the solution, I would say, is that any established TCP connections that are on the NAT instance at the time of a route change will be lost. So, say you have—
Corey: TCP now terminates on the floor.
Ben: Pretty much. The connections are dropped. If you have an open SSH connection from a host in the private network to a host on the internet and the instance fails over to the NAT Gateway, the NAT Gateway doesn't have the translation table that the NAT instance had. And not to mention, the public IP address also changes, because you have an Elastic IP assigned to the NAT instance and a different Elastic IP assigned to the NAT Gateway, and because that upstream IP is different, the remote host is, like, tracking the wrong IP. So, those connections, they're going to be lost.
So, there are some use cases where this may not be suitable. We do have some ideas on how you might mitigate that, for example, with the use of a maintenance window to schedule the replacement, or replacing less often so it doesn't affect your workflow as much, but frankly, for many use cases, my belief is that it's actually fine. In our use case at Chime, we found that it's completely fine and we didn't actually experience any errors or failures. But there might be some use cases that are more sensitive or less resilient to failure in the first place.
Corey: I would also point out that a lot of how software is going to behave is going to be a reflection of the era in which it was moved to cloud. Back in the early days of EC2, you had no real sense of reliability around any individual instance, so everything was written in a very defensive manner.
These days, with instances automatically being able to flow among different hardware, so we don't get instance interrupt notifications the way we once did on a semi-constant basis, it more or less presents as bulletproof, so a lot of people are writing software that's a bit more brittle. But it's always been a best practice to ask: when a connection fails, what happens at failure? Do you just give up and throw your hands in the air and shriek for help, or do you attempt to retry a few times, ideally backing off exponentially?
In this scenario, those retries will work. So, it's a question of how well you have built your software. Okay, let's say that you made the worst decisions imaginable, and if that connection dies, the entire workload dies. Okay, you have the option to refactor it to be a little bit better behaved, or alternately, you can keep paying the Managed NAT Gateway tax of four-and-a-half cents per gigabyte in perpetuity forever. I'm not going to tell you what decision to make, but I know which one I'm making.
Ben: Yeah, exactly. The cost savings potential of it far outweighs the potential maintenance troubles, I guess, that you could encounter. But the fact is, if you're relying on Managed NAT Gateway and paying the price for doing so, it's not as if there's no chance for connection failure. NAT Gateway could also fail. I will admit that I think it's an extremely robust and resilient solution. I've been really impressed with it, especially so after having worked on this project, but it doesn't mean it can't fail.
And beyond that, upstream of the NAT Gateway, something could in fact go wrong. Like, internet connections are unreliable, kind of by design. So, if your system is not resilient to connection failures, like, there's a problem to solve there anyway; you're kind of relying on hope.
So, it's kind of a forcing function in some ways to build architectural best practices, in my view.
Corey: I can't stress enough that I have zero problem with the capabilities and the stability of the Managed NAT Gateway solution. My complaints about it start and stop entirely with the price. Back when you first showed me the blog post that is releasing at the same time as this podcast—and you can visit that at alternat.cloud—you sent me an early draft of this, and what I loved the most was that your math was off because of an incomplete understanding of the gloriousness that is just how egregious the NAT Gateway charges are.
Your initial analysis said, "All right, if you're throwing half a petabyte out to the internet, this has the potential of cutting the bill by"—I think it was $10,000 or something like that. It's, "Oh no, no. It has the potential to cut the bill by an entire twenty-two-and-a-half thousand dollars." Because this processing fee does not replace any egress fees whatsoever. It's purely additive. If you forget to have a free S3 Gateway endpoint in a private subnet, every time you put something into or take something out of S3, you're paying four-and-a-half cents per gigabyte on that, despite the fact that there's no internet transit involved; it's not crossing availability zones. It is simply a four-and-a-half cent fee to retrieve something that has only cost you—at most—2.3 cents per month to store in the first place. Flip that switch, that becomes completely free.
Ben: Yeah. I'm not embarrassed at all to talk about the lack of education I had around this topic. The fact is I'm an engineer primarily and I came across the cost stuff because it kind of seemed like a problem that needed to be solved within my organization. And if you don't mind, I might just linger on this point and kind of think back a few months. I looked at the AWS bill and I saw this egregious 'EC2 Other' category. It was taking up the majority of our bill.
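The corrected napkin math above is easy to check. The sketch below assumes the on-demand us-east-1 rate of $0.045 per gigabyte processed, and models only the processing fee, since egress charges are additive and apply either way.

```python
# NAT Gateway data-processing fee only. Data transfer (egress) charges are
# additive on top of this and are not modeled here. Rate assumes us-east-1
# on-demand pricing.
PROCESSING_PER_GB = 0.045  # USD

def processing_fee(gigabytes):
    """NAT Gateway data-processing charge for a given volume."""
    return gigabytes * PROCESSING_PER_GB

# Half a petabyte (500 TB, decimal units) pushed through the gateway:
half_petabyte_gb = 500 * 1000
print(processing_fee(half_petabyte_gb))  # 22500.0
```

That is the "twenty-two-and-a-half thousand dollars" figure: the fee applies even to traffic, like S3 access without a gateway endpoint, that would otherwise cost nothing to move.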
Like, the single biggest line item was EC2 Other. And I was like, "What could this be?"
Corey: I want to wind up flagging that because it bears repeating: I often get people pushing back of, "Well, how bad—it's one Managed NAT Gateway. How much could it possibly cost? $10?" No, it is the majority of your monthly bill. I cannot stress that enough.
And that's not because the people who work there are doing anything that they should not be doing or didn't understand all the nuances of this. It's because, for the security posture that is required for what you do—you are at Chime Financial, let's be clear here—putting everything in public subnets was not really a possibility for you folks.
Ben: Yeah. And not only that, but there are plenty of services that have to be on private subnets. For example, AWS Glue services must run in private VPC subnets if you want them to be able to talk to other systems in your VPC; like, they cannot live in a public subnet. So essentially, if you want to talk to the internet from those jobs, you're forced into some kind of NAT solution. So, I dug into the EC2 Other category and I started trying to figure out what was going on there.
There's no way—natively—to look at what traffic is transiting the NAT Gateway. There's not an interface that shows you what's going on or who the biggest talkers over that network are. Instead, you have to have flow logs enabled and have to parse those flow logs. So, I dug into that.
Corey: Well, you're missing a step first, because in a lot of environments, people have more than one of these things, so you get to first do the scavenger hunt of, okay, I have a whole bunch of Managed NAT Gateways, and first I need to go diving into CloudWatch metrics and figure out which are the heavy talkers. It's usually one or two followed by a whole bunch of small stuff, but not always, so figuring out which VPC you're even talking about is a necessary prerequisite.
Ben: Yeah, exactly.
The data around it is almost missing entirely. Coming to the conclusion that it is a particular NAT Gateway is a set of problems to solve on its own. But then you have to go to the flow logs; you have to figure out what the biggest upstream IPs it's talking to are. Once you have the IP, it still isn't apparent what that host is. In our case, we had all sorts of outside parties that we were talking to a lot, and it's a matter of sorting by volume and figuring out, well, this IP, what does it reverse to? Who is potentially the host there?
I actually had some wrong answers at first. I set up VPC endpoints to S3 and DynamoDB and SQS because those were some top talkers, and that was a nice way to gain some security and some resilience and save some money. And then I found, well, Datadog; that's another top talker for us, so I ended up creating a nice private link to Datadog, which they offer for free, by the way, which is more than I can say for some other vendors. But then I found some outside parties for which there wasn't a nice private link solution available to us, and yet, it was by far the largest volume. So, that's what kind of started me down this track: analyzing the NAT Gateway traffic myself by looking at VPC flow logs. Like, it's shocking that there isn't a better way to find that traffic.
Corey: It's worse than that, because VPC flow logs tell you where the traffic is going and in what volumes, sure, on an IP address and port basis, but okay, now you have a Kubernetes cluster that spans two availability zones. Okay, great. What is actually passing through that? So, you have one big application that just seems awfully chatty, you have multiple workloads running on the thing. What's the expensive thing talking back and forth? The only way that you can reliably get the answer to that, I found, is to talk to people about what those workloads are actually doing, and failing that, you're going code spelunking.
Ben: Yep. You're exactly right about that.
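A minimal version of the flow-log spelunking described here might look like the following. It assumes the default version-2 VPC flow log record format (space-separated fields, with the destination address fifth and the byte count tenth); real analysis would normally run against flow logs in S3 or CloudWatch Logs rather than an in-memory list.

```python
from collections import Counter

def top_talkers(flow_log_lines, n=5):
    """Sum bytes per destination address across VPC flow log records to
    find the biggest upstream talkers behind a NAT device. Assumes the
    default v2 record format; skips the header line and NODATA records."""
    totals = Counter()
    for line in flow_log_lines:
        fields = line.split()
        # Skip the header row, NODATA/SKIPDATA records ("-" byte counts),
        # and anything malformed.
        if len(fields) < 14 or not fields[9].isdigit():
            continue
        dstaddr, byte_count = fields[4], int(fields[9])
        totals[dstaddr] += byte_count
    return totals.most_common(n)
```

From there it is still on you to reverse-resolve each IP and work out which vendor or workload it belongs to, which is exactly the pain being described.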
In our case, it ended up being apparent because we have a set of subnets where only one particular project runs. And when I saw the source IP, I could immediately figure that part out. But if it's a K8s cluster in the private subnets, yeah, how are you going to find it out? You're going to have to ask everybody that has workloads running there.
Corey: And we're talking about, in some cases, millions of dollars a month. Yeah, it starts to feel a little bit predatory as far as how it's priced and the amount of work you have to put in to track this stuff down. I've done this a handful of times myself, and it's always painful unless you discover something pretty early on, like, oh, it's talking to S3, because that's pretty obvious when you see that. It's, yeah, flip the switch and this entire engagement just paid for itself a hundred times over. Now, let's see what else we can discover.
That is always one of those fun moments because, first, customers are super grateful to learn that, oh, my God, I flipped that switch. And I'm saving a whole bunch of money. Because it starts with gratitude. "Thank you so much. This is great." And it doesn't take a whole lot of time for that to alchemize into anger of, "Wait. You mean, I've been ridden like a pony for this long and no one bothered to mention that if I click a button, this whole thing just goes away?"
And when you mention this to your AWS account team, like, they're solicitous, but they either have to present as, "I didn't know that existed either," which is not a good look, or, "Yeah, you caught us," which is worse. There's no positive story on this. It just feels like a tax on not knowing trivia about AWS. I think that's what really winds me up about it so much.
Ben: Yeah, I think you're right on about that as well. My misunderstanding about the NAT pricing was that data processing is additive to data transfer.
I expected, when I replaced NAT Gateway with NAT instance, that I would be substituting data transfer costs for NAT Gateway data processing costs. But in fact, NAT Gateway incurs both data processing and data transfer. NAT instances only incur data transfer costs. And so, this is a big difference between the two solutions.
Not only that, but if you're egressing out of your, say, us-east-1 region and talking to another hosted service also within us-east-1—never leaving the AWS network—you don't actually even incur data transfer costs. So, if you're using a NAT Gateway, you're paying data processing.
Corey: To be clear, you do, but it is, in most cases, billed as cross-AZ traffic at one penny egressing, and on the other side, that hosted service generally pays one penny ingressing as well. Don't feel bad about that one. That was extraordinarily unclear, and the only reason I know the answer to that is that I got tired of getting stonewalled by people that later turned out didn't know the answer, so I ran a series of experiments designed explicitly to find this out.
Ben: Right. As opposed to the five cents to nine cents that is data transfer to the internet. Add that to data processing on a NAT Gateway and you're paying between nine-and-a-half and thirteen-and-a-half cents for every gigabyte egressed. And this is a phenomenal cost. At any kind of volume, if you're doing terabytes to petabytes, this becomes a significant portion of your bill. And this is why people hate the NAT Gateway so much.
Corey: I am going to short-circuit an angry comment I can already see coming on this where people are going to say, "Well, yes. But at multi-petabyte scale, nobody's paying on-demand retail price." And they're right. Most people who are transmitting that kind of data have a specific discount rate applied to what they're doing that varies depending upon usage and use case.
Sure, great.
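The per-gigabyte arithmetic above combines as follows. The rates are the on-demand numbers quoted in the conversation; actual figures vary by region and volume tier.

```python
# Internet egress at on-demand rates runs roughly $0.05-$0.09/GB depending
# on volume tier; NAT Gateway processing adds $0.045/GB on top of that.
# A NAT instance pays only the transfer portion.
PROCESSING_PER_GB = 0.045  # USD

def per_gb_egress_cost(transfer_rate, via_nat_gateway):
    """Total per-GB cost of egressing through a NAT device."""
    return transfer_rate + (PROCESSING_PER_GB if via_nat_gateway else 0.0)

print(round(per_gb_egress_cost(0.05, True), 3))   # 0.095
print(round(per_gb_egress_cost(0.09, True), 3))   # 0.135
print(round(per_gb_egress_cost(0.09, False), 3))  # 0.09
```

That spread, nine-and-a-half to thirteen-and-a-half cents per gigabyte versus egress alone, is the entire economic case for the NAT instance approach.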
But I'm more concerned with the people who are sitting around dreaming up ideas for a company where I want to wind up doing some sort of streaming service. I talked to one of those companies very early on in my tenure as a consultant around the billing piece, and they wanted me to check their napkin math because they thought that at their numbers, when they wound up scaling up, if their projections were right, they were going to be spending $65,000 a minute, and what did they not understand? And the answer was, well, you didn't understand this other thing, so it's going to be more than that, but no, you're directionally correct. So, that idea that started off on a napkin, of course, they didn't build it on top of AWS; they went elsewhere.
And last time I checked, they'd raised well over a quarter-billion dollars in funding. So, that's a business that AWS would love to have on a variety of different levels, but they're never going to even be considered, because by the time someone is at scale, they either have built this somewhere else or they went broke trying.
Ben: Yep, absolutely. And we might just make the point there that while you can get discounts on data transfer, it's very rare to get discounts on data processing for the NAT Gateway. So, any kind of savings you can get on data transfer would apply to a NAT instance solution, you know, saving you four-and-a-half cents per gigabyte inbound and outbound over the NAT Gateway equivalent solution. So, you're paying a lot for the benefit of a fully-managed service there. It's a very robust, nicely engineered fully-managed service, as we've already acknowledged, but an extremely expensive solution for what it is, which is really just a proxy in the end. It doesn't add any value to you.
And Splunk does an awful lot for what they charge per gigabyte, but it just feels like it's rent-seeking in some of the worst ways possible. And what I love about this is that you've solved the problem in a way that is open-source; you have already released it as Terraform code. I think one of the first to-dos on this for someone is going to be, okay, now also make it CloudFormation and CDK so you can drop it in however you want.
And anyone can use this. I think the biggest mistake people might make in glancing at this is, well, I'm looking at the hourly charge for the NAT Gateways and that's 32-and-a-half bucks a month, and the instances that you recommend are hundreds of dollars a month for the big network-optimized stuff. Yeah, if you care about the hourly rate of either of those two things, this is not for you. That is not the problem that it solves. If you're an independent learner annoyed about the $30 charge you got for a Managed NAT Gateway, don't do this. This will only add to your billing concerns.
Where it really shines is once you're at, I would say, about ten terabytes a month, give or take, in Managed NAT Gateway data processing; that's where it starts to make sense to consider this. The breakeven is around six or so, but there is value in not having to think about things. Once you get to that level of spend, though, it's worth devoting a little bit of infrastructure time to something like this.
Ben: Yeah, that's effectively correct. The total cost of running the solution, like, all-in: there's eight Elastic IPs, four NAT Gateways—say you're in four zones; could be less if you're in fewer zones—like, n NAT Gateways, n NAT instances, depending on how many zones you're in, and I think that's about it. And I said right in the documentation, if any of those baseline fees are a material number for your use case, then this is probably not the right solution. Because we're talking about saving thousands of dollars.
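The breakeven estimate above can be sanity-checked with a toy model. The only overhead AlterNAT adds versus a pure NAT Gateway design is the instances themselves (the standby gateways' hourly fees are paid in both designs), so breakeven is just that added fixed cost divided by the avoided processing fee. The $67.50/month instance price below is a placeholder, not a recommendation.

```python
PROCESSING_PER_GB = 0.045  # USD per GB, the fee the NAT instance avoids

def breakeven_tb_per_month(nat_instance_monthly_usd, zones):
    """TB/month of NAT traffic at which the added cost of running NAT
    instances is repaid by skipped data-processing fees. The standby
    NAT Gateways cost the same in either design, so they cancel out."""
    added_fixed_cost = zones * nat_instance_monthly_usd
    breakeven_gb = added_fixed_cost / PROCESSING_PER_GB
    return breakeven_gb / 1000

# e.g. four zones with a hypothetical $67.50/month instance in each:
print(round(breakeven_tb_per_month(67.50, 4), 2))  # 6.0
```

That lands right around the "six terabytes or so" figure; past ten terabytes a month the savings comfortably dominate the instance overhead.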
Any of these small numbers for NAT Gateway hourly costs, NAT instance hourly costs, that shouldn't be a factor, basically.
Corey: Yeah, it's like when I used to worry about costing my customers a few tens of dollars in Cost Explorer or CloudWatch or request fees against S3 for their Cost and Usage Reports. It's, yeah, that does actually have a cost, there's no real way around it, but look at the savings they're realizing by going through that. Yeah, they're not going to come back complaining about their five-figure consulting engagement costing an additional $25 in AWS charges and then lowering the bill by a third. So, there's definitely a difference as far as how those things tend to be perceived. But it's easy to miss the big stuff when chasing after the little stuff like that.
This is part of the problem I have with an awful lot of cost tooling out there. They completely ignore cost components like this and focus only on the things that are easy to query via API: oh, we're going to cost-optimize your Kubernetes cluster, when all they think about is compute and RAM. And, okay, that's great, but you're completely ignoring all the data transfer because there's still no great way to get at that programmatically. And it really is missing the forest for the trees.
Ben: I think this is key to any cost reduction project or program that you're undertaking. When you look at a bill, look for the biggest spend items first and work your way down from there, just because of the impact you can have. And that's exactly what I did in this project. I saw that 'EC2 Other' slash NAT Gateway was the big item and I started brainstorming ways that we could go about addressing that. Now that we've reduced this cost to effectively nothing, extremely low compared to what it was, I have my next targets in mind: we have other new line items on our bill that we can start optimizing.
But in any cost project, start with the big things.
Corey: You have come a long way around to answer a question I get asked a lot, which is, "How do I become a cloud economist?" And my answer is, you don't. It's something that happens to you. And it appears to be happening to you, too. My favorite part about the solution that you built, incidentally, is that it is being released under the auspices of your employer, Chime Financial, which is immune to being acquired by Amazon just to kill this thing and shut it up.
Because Amazon already has something shitty called Chime. They don't need to wind up launching something else or acquiring something else and ruining it, because they have a Slack competitor of sorts called Amazon Chime. There's no way they could acquire you [unintelligible 00:27:45] going to get lost in the hallways.
Ben: Well, I have confidence that Chime will be a good steward of the project. Chime's goal and mission as a company is to help everyone achieve financial peace of mind, and we take that really seriously. We even apply it to ourselves, and that was kind of the impetus behind developing this in the first place. You mentioned earlier we have Terraform support already, and you're exactly right. I'd love to have CDK, CloudFormation, and Pulumi support, and other kinds of contributions are more than welcome from the community.
So, if anybody feels like participating, if they see a feature that's missing, let's make this project the best that it can be. I suspect we can save many companies hundreds of thousands or millions of dollars. And this really feels like the right direction to go in.
Corey: This is easily a multi-billion dollar savings opportunity, globally.
Ben: That's huge. I would be flabbergasted if that was the outcome of this.
Corey: The hardest part is reaching these people and getting them on board with the idea of handling this.
And again, I think there's a lot of opportunity for the project to evolve in the sense of different settings depending upon risk tolerance. I can easily see a scenario where, in the event of a disruption to the NAT instance, it fails over to the Managed NAT Gateway, but fail back becomes manual so you don't have a flapping route table back and forth or a [hold 00:29:05] downtime or something like that. Because again, in that scenario, the failure mode is just, well, you're paying four-and-a-half cents per gigabyte for a while until you wind up figuring out what's going on, as opposed to the failure mode of you wind up disrupting connections on an ongoing basis, and for some workloads, that's not tenable. This is absolutely, for the common case, the right path forward.
Ben: Absolutely. I think it's an enterprise-grade solution, and we can keep adding knobs and dials to make it more robust or adaptable to different kinds of use cases. But the best outcome here would actually be that the entire solution becomes irrelevant because AWS fixes the NAT Gateway pricing. If that happens, I will consider the project a great success.
Corey: I will be doing backflips like you wouldn't believe. I would sing their praises day in, day out. I'm not saying reduce it to nothing, even. I'm not saying it adds no value. I would change the way that it's priced, because honestly, the fact is that I can run an EC2 instance and be charged $0 on a per-gigabyte basis. Yeah, I would pay a premium on an hourly charge based upon traffic volumes, but don't meter per gigabyte. That's where it breaks down.
Ben: Absolutely. And why is it additive to data transfer, also? Like, I remember first starting to use VPC when it was launched and reading about the NAT instance requirement and thinking, "Wait a minute. I have to pay this extra management and hourly fee just so my private hosts could reach the internet?
That seems kind of janky."
And Amazon established a norm here, because Azure and GCP both have their own equivalent of this now. This is a business choice. This is not a technical choice. They could just run this under the hood and not charge anybody for it, or build in the cost, and it wouldn't be this thing we have to think about.
Corey: I almost hate to say it, but Oracle Cloud does, for free.
Ben: Do they?
Corey: It can be done. This is a business decision. It is not a technical capability issue where, well, it does incur cost to run these things. I understand that, and I'm not asking for things for free. I very rarely say that this is overpriced when I'm talking about AWS billing issues. I'm talking about it being unpredictable, I'm talking about it being impossible to see in advance, but the fact that it costs too much money is rarely my complaint. In this case, it costs too much money. Make it cost less.
Ben: If I'm not mistaken, GCP's equivalent solution is the exact same price: it's also four-and-a-half cents per gigabyte. So, that shows you that there are business games being played here. Like, Amazon could get ahead and do right by the customer by dropping this to a much more reasonable price.
Corey: I really want to thank you for taking the time to speak with me and for building this glorious, glorious thing. Where can we find it? And where can we find you?
Ben: alternat.cloud is going to be the place to visit. It's on Chime's GitHub, and it will be released by the time this podcast comes out. As for me, if you want to connect, I'm on Twitter. @iamthewhaley is my handle. And of course, I'm on LinkedIn.
Corey: Links to all of that will be in the podcast notes. Ben, thank you so much for your time and your hard work.
Ben: This was fun. Thanks, Corey.
Corey: Ben Whaley, staff software engineer at Chime Financial, and AWS Community Hero. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry rant of a comment that I will charge you not only four-and-a-half cents per word to read, but four-and-a-half cents to reply because I am experimenting myself with being a rent-seeking schmuck.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
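The one-way failover Corey sketches in the conversation above — fail over from the NAT instance to the Managed NAT Gateway automatically, but fail back only on a manual, operator-approved action so the route table never flaps — reduces to a small decision function. Here is a minimal sketch; the target names and the function itself are illustrative and are not the actual alternat implementation:

```python
# Hypothetical route targets; a real setup would use an ENI ID and a NAT
# Gateway ID from the VPC in question.
NAT_INSTANCE = "eni-nat-instance"
NAT_GATEWAY = "nat-gateway"

def next_route_target(current: str, instance_healthy: bool,
                      operator_approved_failback: bool = False) -> str:
    """Decide where the private subnet's default route should point.

    Failover is automatic; failback is gated on a human, so the route
    can never flap between targets on a flaky health check.
    """
    if current == NAT_INSTANCE:
        # The moment the NAT instance looks unhealthy, move to the
        # managed gateway. The cost of a false positive is just paying
        # four-and-a-half cents per gigabyte for a while.
        return NAT_INSTANCE if instance_healthy else NAT_GATEWAY
    # Already on the Managed NAT Gateway: stay there until an operator
    # explicitly approves moving back, and only if the instance is healthy.
    if operator_approved_failback and instance_healthy:
        return NAT_INSTANCE
    return NAT_GATEWAY
```

The asymmetry is the point: the failure mode of staying on the gateway too long is a billing problem, while the failure mode of flapping is disrupted connections, which for some workloads is not tenable.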
About Steve: Steve Rice is Principal Product Manager for AWS AppConfig. He is surprisingly passionate about feature flags and continuous configuration. He lives in the Washington DC area with his wife, 3 kids, and 2 incontinent dogs.

Links Referenced:
AWS AppConfig: https://go.aws/awsappconfig

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire-drill at 4 in the morning. Software problems should drive innovation and collaboration, not stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production, but hide that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.

Corey: Forget everything you know about SSH and try Tailscale. Imagine if you didn't need to manage PKI or rotate SSH keys every time someone leaves. That'd be pretty sweet, wouldn't it? With Tailscale SSH, you can do exactly that. Tailscale gives each server and user device a node key to connect to its VPN, and it uses the same node key to authorize and authenticate. Basically, you're SSHing the same way you manage access to your app.
What's the benefit here? Built-in key rotation, permissions as code, connectivity between any two devices, reduced latency, and there's a lot more, but there's a time limit here. You can also ask users to reauthenticate for that extra bit of security. Sounds expensive? Nope, I wish it were. Tailscale is completely free for personal use on up to 20 devices. To learn more, visit snark.cloud/tailscale. Again, that's snark.cloud/tailscale.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This is a promoted guest episode. What does that mean? Well, it means that some people don't just want me to sit here and throw slings and arrows their way, they would prefer to send me a guest specifically, and they do pay for that privilege, which I appreciate. Paying me is absolutely a behavior I wish to endorse.

Today's victim who has decided to contribute to slash sponsor my ongoing ridiculous nonsense is, of all companies, AWS. And today I'm talking to Steve Rice, who's the principal product manager on AWS AppConfig. Steve, thank you for joining me.

Steve: Hey, Corey, great to see you. Thanks for having me. Looking forward to a conversation.

Corey: As am I. Now, AppConfig does something super interesting, which I'm not aware of any other service or sub-service doing. You are under the umbrella of AWS Systems Manager, but you're not going to market with Systems Manager AppConfig. You're just AWS AppConfig. Why?

Steve: So, AppConfig is part of AWS Systems Manager. Systems Manager has, I think, 17 different features associated with it. Some of them have an individual name that is associated with Systems Manager, some of them don't. We just happen to be one that doesn't. AppConfig is a service that's been around for a while internally before it was launched externally a couple years ago, so I'd say that's probably the origin of the name and the service. I can tell you more about the origin of the service if you're curious.

Corey: Oh, I absolutely am.
But I just want to take a bit of a detour here and point out that I make fun of the sub-service names in Systems Manager an awful lot, like Systems Manager Session Manager and Systems Manager Change Manager. And part of the reason I do that is not just because it's funny, but because almost everything I found so far within the Systems Manager umbrella is pretty awesome. It aligns with how I tend to think about the world in a bunch of different ways. I have yet to see anything lurking within the Systems Manager umbrella that has led to a tee-hee-hee bill surprise level that rivals, you know, the GDP of Guam. So, I'm a big fan of the entire suite of services. But yes, how did AppConfig get its name?Steve: [laugh]. So, AppConfig started about six years ago, now, internally. So, we actually were part of the region services department inside of Amazon, which is in charge of launching new services around the world. We found that a centralized tool for configuration associated with each service launching was really helpful. So, a service might be launching in a new region and have to enable and disable things as it moved along.And so, the tool was sort of built for that, turning on and off things as the region developed and was ready to launch publicly; then the regions launch publicly. It turned out that our internal customers, which are a lot of AWS services and then some Amazon services as well, started to use us beyond launching new regions, and started to use us for feature flagging. Again, turning on and off capabilities, launching things safely. And so, it became massively popular; we were actually a top 30 service internally in terms of usage. And two years ago, we thought we really should launch this externally and let our customers benefit from some of the goodness that we put in there, and some of—those all come from the mistakes we've made internally. And so, it became AppConfig. 
In terms of the name itself, we specialize in application configuration, so that's kind of a mouthful, so we just changed it to AppConfig.

Corey: Earlier this year, there was a vulnerability reported around I believe it was AWS Glue, but please don't quote me on that. And as part of its excellent response that AWS put out, they said that from the time that it was disclosed to them, they had patched the service and rolled it out to every AWS region in which Glue existed in a little under 29 hours, which at scale is absolutely magic fast. That is superhero speed and then some because you generally don't just throw something over the wall, regardless of how small it is when we're talking about something at the scale of AWS. I mean, look at who your customers are; mistakes will show. This also got me thinking that when you have Adam, or previously Andy, on stage giving a keynote announcement and then they mention something on stage, like, “Congratulations. It's now a very complicated service with 14 adjectives in its name because someone's paid by the syllable. Great.”

Suddenly, the marketing pages are up, the APIs are working, it's showing up in the console, and it occurs to me only somewhat recently to think about all of the moving parts that go on behind this. That is far faster than even the improved speed of CloudFront distribution updates. There's very clearly something going on there. So, I've got to ask, is that you?

Steve: Yes, a lot of that is us. I can't take credit for a hundred percent of what you're talking about, but that's how we are used. We're essentially used as a feature-flagging service. And I can talk generically about feature flagging. Feature flagging allows you to push code out to production, but it's hidden behind a configuration switch: a feature toggle or a feature flag. And that code can be sitting out there, nobody can access it until somebody flips that toggle. Now, the smart way to do it is to flip that toggle on for a small set of users.
Maybe it's just internal users, maybe it's 1% of your users. And so, the features available, you can—Corey: It's your best slash worst customers [laugh] in that 1%, in some cases.Steve: Yeah, you want to stress test the system with them and you want to be able to look and see what's going to break before it breaks for everybody. So, you release us to a small cohort, you measure your operations, you measure your application health, you measure your reputational concerns, and then if everything goes well, then you maybe bump it up to 2%, and then 10%, and then 20%. So, feature flags allow you to slowly release features, and you know what you're releasing by the time it's at a hundred percent. It's tempting for teams to want to, like, have everybody access it at the same time; you've been working hard on this feature for a long time. But again, that's kind of an anti-pattern. You want to make sure that on production, it behaves the way you expect it to behave.Corey: I have to ask what is the fundamental difference between feature flags and/or dynamic configuration. Because to my mind, one of them is a means of achieving the other, but I could also see very easily using the terms interchangeably. Given that in some of our conversations, you have corrected me which, first, how dare you? Secondly, okay, there's probably a reason here. What is that point of distinction?Steve: Yeah. Typically for those that are not eat, sleep, and breathing dynamic configuration—which I do—and most people are not obsessed with this kind of thing, feature flags is kind of a shorthand for dynamic configuration. It allows you to turn on and off things without pushing out any new code. 
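The staged rollout Steve describes — 1% of users, then 2%, then 10%, then 20% — is commonly implemented with deterministic bucketing: hash the user ID into one of 100 buckets and compare against the rollout percentage. A minimal sketch, purely illustrative and not AppConfig's actual targeting logic:

```python
import hashlib

def in_rollout(user_id: str, flag_name: str, percent: int) -> bool:
    """Return True if this user falls inside the rollout percentage.

    Hashing flag_name alongside user_id gives each flag its own
    independent 0-99 bucket per user, and the same user always lands
    in the same bucket for a given flag.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent
```

Because the bucketing is deterministic, a user enabled at 1% is still enabled at 10% and 20%: bumping the percentage only adds users, it never churns anyone in and out mid-rollout.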
So, your application code's running, it's pulling its configuration data, say every five seconds, every ten seconds, something like that, and when that configuration data changes, then that app changes its behavior, again, without a code push or without restarting the app.

So, dynamic configuration is maybe a superset of feature flags. Typically, when people think feature flags, they're thinking of, “Oh, I'm going to release a new feature, so it's almost like an on-off switch.” But we see customers using feature flags—and we use this internally—for things like throttling limits. Let's say you want to be able to throttle TPS, transactions per second. Or let's say you want to throttle the number of simultaneous background tasks, and say, you know, I just really don't want this creeping above 50; bad things can start to happen.

But in a period of stress, you might want to actually bring that number down. Well, you can push out these changes with dynamic configuration—which is, again, any type of configuration, not just an on-off switch—you can push this out and adjust the behavior and see what happens. Again, I'd recommend pushing it out to 1% of your users, and then 10%. But it allows you to have these dials and switches to do that. And, again, generically, that's dynamic configuration. It's not as fun a term as feature flags; feature flags is sort of a good mental picture, so I do use them interchangeably, but if you're really into the whole world of this dynamic configuration, then you probably will care about the difference.

Corey: Which makes a fair bit of sense. It's the question of what are you talking about high level versus what are you talking about implementation detail-wise.

Steve: Yep. Yep.

Corey: And on some level, I used to get… well, we'll call it angsty—because I can't think of a better adjective right now—about how AWS was reluctant to disclose implementation details behind what it did.
And in the fullness of time, it's made a lot more sense to me, specifically through a lens of, you want to be able to have the freedom to change how something works under the hood. And if you've made no particular guarantee about the implementation detail, you can do that without potentially worrying about breaking a whole bunch of customer expectations that you've inadvertently set. And that makes an awful lot of sense.The idea of rolling out changes to your infrastructure has evolved over the last decade. Once upon a time you'd have EC2 instances, and great, you want to go ahead and make a change there—or this actually predates EC2 instances. Virtual machines in a data center or heaven forbid, bare metal servers, you're not going to deploy a whole new server because there's a new version of the code out, so you separate out your infrastructure from the code that it runs. And that worked out well. And increasingly, we started to see ways of okay, if we want to change the behavior of the application, we'll just push out new environment variables to that thing and restart the service so it winds up consuming those.And that's great. You've rolled it out throughout your fleet. With containers, which is sort of the next logical step, well, okay, this stuff gets baked in, we'll just restart containers with a new version of code because that takes less than a second each and you're fine. And then Lambda functions, it's okay, we'll just change the deployment option and the next invocation will wind up taking the brand new environment variables passed out to it. How do feature flags feature into those, I guess, three evolving methods of running applications in anger, by which I mean, of course, production?Steve: [laugh]. Good question. And I think you really articulated that well.Corey: Well, thank you. I should hope so. I'm a storyteller. At least I fancy myself one.Steve: [laugh]. Yes, you are. 
Really what you talked about is the evolution of you know, at the beginning, people were—well, first of all, people probably were embedding their variables deep in their code and then they realized, “Oh, I want to change this,” and now you have to find where in my code that is. And so, it became a pattern. Why don't we separate everything that's a configuration data into its own file? But it'll get compiled at build time and sent out all at once.There was kind of this breakthrough that was, why don't we actually separate out the deployment of this? We can separate the deployment from code from the deployment of configuration data, and have the code be reading that configuration data on a regular interval, as I already said. So now, as the environments have changed—like you said, containers and Lambda—that ability to make tweaks at microsecond intervals is more important and more powerful. So, there certainly is still value in having things like environment variables that get read at startup. We call that static configuration as opposed to dynamic configuration.And that's a very important element in the world of containers that you talked about. Containers are a bit ephemeral, and so they kind of come and go, and you can restart things, or you might spin up new containers that are slightly different config and have them operate in a certain way. And again, Lambda takes that to the next level. 
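The static-versus-dynamic distinction Steve draws — configuration read once at startup versus configuration re-read on an interval, with no restart — can be sketched as a small polling client. The `fetch` callback and class name here are illustrative stand-ins, not a real AppConfig client:

```python
import os
import time

# Static configuration: read once at process start. Changing it means a
# restart or a redeploy.
MAX_WORKERS = int(os.environ.get("MAX_WORKERS", "4"))

class DynamicConfig:
    """Re-fetch configuration on a fixed interval, as described above.

    `fetch` stands in for whatever backend serves the config (a file, a
    config service, etc.); the polling pattern is the point here.
    """

    def __init__(self, fetch, interval_seconds=5.0, clock=time.monotonic):
        self._fetch = fetch
        self._interval = interval_seconds
        self._clock = clock
        self._value = fetch()          # initial read at startup
        self._last_poll = clock()

    def get(self):
        now = self._clock()
        if now - self._last_poll >= self._interval:
            # Refresh without a code push and without restarting the app.
            self._value = self._fetch()
            self._last_poll = now
        return self._value
```

Injecting `clock` makes the interval logic testable without sleeping; callers just read `cfg.get()["feature_x"]` on the hot path and pick up changes within one polling interval.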
I'm really excited where people are going to take feature flags to the next level because already today we have people just fine-tuning to very targeted small subsets, different configuration data, different feature flag data, and it allows them to do this at a scale we've never seen before: turning this on, seeing how it reacts, seeing how the application behaves, and then being able to roll that out to all of your audience.

Now, you've got to be careful, you really don't want to have completely different configurations out there and have 10 different, or you know, 100 different configurations out there. That makes it really tough to debug. So, you want to think of this as I want to roll this out gradually over time, but eventually, you want to have this sort of state where everything is somewhat consistent.

Corey: That, on some level, speaks to a level of operational maturity that my current deployment adventures generally don't have. A common reference I make is to my lasttweetinaws.com Twitter threading app. And anyone can visit it, use it however they want.

And it uses a Route 53 latency record to figure out, ah, which is the closest region to you because I've deployed it to 20 different regions. Now, if this were a paid service, or I had people using this in large volume and I had to worry about that sort of thing, I would probably approach something that is very close to what you describe. In practice, I pick a devoted region that I deploy something to, and cool, that's sort of my canary where I get things working the way I would expect. And when that works the way I want it to I then just push it to everything else automatically.
Given that I've put significant effort into getting deployments down to approximately two minutes to deploy to everything, it feels like that's a reasonable amount of time to push something out.Whereas if I were, I don't know, running a bank, for example, I would probably have an incredibly heavy process around things that make changes to things like payment or whatnot. Because despite the lies, we all like to tell both to ourselves and in public, anything that touches payments does go through waterfall, not agile iterative development because that mistake tends to show up on your customer's credit card bills, and then they're also angry. I think that there's a certain point of maturity you need to be at as either an organization or possibly as a software technology stack before something like feature flags even becomes available to you. Would you agree with that, or is this something everyone should use?Steve: I would agree with that. Definitely, a small team that has communication flowing between the two probably won't get as much value out of a gradual release process because everybody kind of knows what's going on inside of the team. Once your team scales, or maybe your audience scales, that's when it matters more. You really don't want to have something blow up with your users. You really don't want to have people getting paged in the middle of the night because of a change that was made. And so, feature flags do help with that.So typically, the journey we see is people start off in a maybe very small startup. They're releasing features at a very fast pace. They grow and they start to build their own feature flagging solution—again, at companies I've been at previously have done that—and you start using feature flags and you see the power of it. Oh, my gosh, this is great. I can release something when I want without doing a big code push. I can just do a small little change, and if something goes wrong, I can roll it back instantly. 
That's really handy.And so, the basics of feature flagging might be a homegrown solution that you all have built. If you really lean into that and start to use it more, then you probably want to look at a third-party solution because there's so many features out there that you might want. A lot of them are around safeguards that makes sure that releasing a new feature is safe. You know, again, pushing out a new feature to everybody could be similar to pushing out untested code to production. You don't want to do that, so you need to have, you know, some checks and balances in your release process of your feature flags, and that's what a lot of third parties do.It really depends—to get back to your question about who needs feature flags—it depends on your audience size. You know, if you have enough audience out there to want to do a small rollout to a small set first and then have everybody hit it, that's great. Also, if you just have, you know, one or two developers, then feature flags are probably something that you're just kind of, you're doing yourself, you're pushing out this thing anyway on your own, but you don't need it coordinated across your team.Corey: I think that there's also a bit of—how to frame this—misunderstanding on someone's part about where AppConfig starts and where it stops. When it was first announced, feature flags were one of the things that it did. And that was talked about on stage, I believe in re:Invent, but please don't quote me on that, when it wound up getting announced. And then in the fullness of time, there was another announcement of AppConfig now supports feature flags, which I'm sitting there and I had to go back to my old notes. Like, did I hallucinate this? Which again, would not be the first time I'd imagine such a thing. 
But no, it was originally how the service was described, but now it's extra feature flags, almost like someone would, I don't know, flip on a feature-flag toggle for the service and now it does a different thing. What changed? What was it that was misunderstood about the service initially versus what it became?Steve: Yeah, I wouldn't say it was a misunderstanding. I think what happened was we launched it, guessing what our customers were going to use it as. We had done plenty of research on that, and as I mentioned before we had—Corey: Please tell me someone used it as a database. Or am I the only nutter that does stuff like that?Steve: We have seen that before. We have seen something like that before.Corey: Excellent. Excellent, excellent. I approve.Steve: And so, we had done our due diligence ahead of time about how we thought people were going to use it. We were right about a lot of it. I mentioned before that we have a lot of usage internally, so you know, that was kind of maybe cheating even for us to be able to sort of see how this is going to evolve. What we did announce, I guess it was last November, was an opinionated version of feature flags. So, we had people using us for feature flags, but they were building their own structure, their own JSON, and there was not a dedicated console experience for feature flags.What we announced last November was an opinionated version that structured the JSON in a way that we think is the right way, and that afforded us the ability to have a smooth console experience. So, if we know what the structure of the JSON is, we can have things like toggles and validations in there that really specifically look at some of the data points. So, that's really what happened. We're just making it easier for our customers to use us for feature flags. 
We still have some customers that are kind of building their own solution, but we're seeing a lot of them move over to our opinionated version.

Corey: This episode is brought to us in part by our friends at Datadog. Datadog is a SaaS monitoring and security platform that enables full-stack observability for developers, IT operations, security, and business teams in the cloud age. Datadog's platform, along with 500-plus vendor integrations, allows you to correlate metrics, traces, logs, and security signals across your applications, infrastructure, and third-party services in a single pane of glass.

Combine these with drag-and-drop dashboards and machine-learning-based alerts to help teams troubleshoot and collaborate more effectively, prevent downtime, and enhance performance and reliability. Try Datadog in your environment today with a free 14-day trial and get a complimentary T-shirt when you install the agent. To learn more, visit datadoghq.com/screaminginthecloud. That's www.datadoghq.com/screaminginthecloud.

Corey: Part of the problem I have when I look at what it is you folks do, and your use cases, and how you structure it is, it's similar in some respects to how folks perceive things like FIS, the fault injection service, or chaos engineering, as it's commonly known, which is, “We can't even get the service to stay up on its own for any [unintelligible 00:18:35] period of time. What do you mean, now let's intentionally degrade it and make it work?” There needs to be a certain level of operational stability or operational maturity. When you're still building a service before it's up and running, feature flags seem awfully premature because there's no one depending on it. You can change configuration however your little heart desires. In most cases. I'm sure at certain points of scale of development teams, you have a communications problem internally, but it's not aimed at me trying to get something working at 2 a.m.
in the middle of the night.Whereas by the time folks are ready for what you're doing, they clearly have that level of operational maturity established. So, I have to guess on some level, that your typical adopter of AppConfig feature flags isn't in fact, someone who is, “Well, we're ready for feature flags; let's go,” but rather someone who's come up with something else as a stopgap as they've been iterating forward. Usually something homebuilt. And it might very well be you have the exact same biggest competitor that I do in my consulting work, which is of course, Microsoft Excel as people try to build their own thing that works in their own way.Steve: Yeah, so definitely a very common customer of ours is somebody that is using a homegrown solution for turning on and off things. And they really feel like I'm using the heck out of these feature flags. I'm using them on a daily or weekly basis. I would like to have some enhancements to how my feature flags work, but I have limited resources and I'm not sure that my resources should be building enhancements to a feature-flagging service, but instead, I'd rather have them focusing on something, you know, directly for our customers, some of the core features of whatever your company does. And so, that's when people sort of look around externally and say, “Oh, let me see if there's some other third-party service or something built into AWS like AWS AppConfig that can meet those needs.”And so absolutely, the workflows get more sophisticated, the ability to move forward faster becomes more important, and do so in a safe way. I used to work at a cybersecurity company and we would kind of joke that the security budget of the company is relatively low until something bad happens, and then it's, you know, whatever you need to spend on it. 
It's not quite the same with feature flags, but you do see when somebody has a problem on production, and they want to be able to turn something off right away or make an adjustment right away, then the ability to do that in a measured way becomes incredibly important. And so, that's when, again, you'll see customers starting to feel like they're outgrowing their homegrown solution and moving to something that's a third-party solution.

Corey: Honestly, I feel like so many tools exist in this space, where, “Oh, yeah, you should definitely use this tool.” And most people will use that tool. The second time. Because the first time, it's one of those, “How hard could that be? I can build something like that in a weekend.” Which is sort of the rallying cry of doomed engineers who are bad at scoping.

And by the time that they figure out why, they have to backtrack significantly. There's a whole bunch of stuff that I have built that people look at and say, “Wow, that's a really great design. What inspired you to do that?” And the absolute honest answer to all of it is simply, “Yeah, in previous roles, the first time I did it the way you would think I would do it, and it didn't go well.” Experience is what you get when you didn't get what you wanted, and this is one of those areas where it tends to manifest in reasonable ways.

Steve: Absolutely, absolutely.

Corey: So, give me an example here, if you don't mind, about how feature flags can improve the day-to-day experience of an engineering team or an engineer themselves. Because we've been down this path enough, in some cases, to know the failure modes, but for folks who haven't been there that's trying to shave a little bit off of their journey of, “I'm going to learn from my own mistakes.” Eh, learn from someone else's. What are the benefits that accrue and are felt immediately?

Steve: Yeah. So, we kind of have a policy that the very first commit of any new feature ought to be the feature flag.
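That "first commit is the feature flag" policy looks roughly like this in practice: the new code path lands on mainline dark, behind a flag that defaults to off. A hedged sketch — the `Flags` class and the flag name are made up for illustration, not an AppConfig API:

```python
# Flags default to off, so newly merged code ships dark until someone
# deliberately flips it on.
DEFAULT_FLAGS = {"new-checkout-flow": False}

class Flags:
    def __init__(self, overrides=None):
        self._values = dict(DEFAULT_FLAGS)
        if overrides:
            self._values.update(overrides)

    def is_enabled(self, name: str) -> bool:
        # Unknown flag names are treated as off, so a typo fails safe.
        return self._values.get(name, False)

def checkout(cart, flags: Flags) -> str:
    if flags.is_enabled("new-checkout-flow"):
        return "new"   # the in-progress path, hidden in production
    return "old"       # current behavior, unchanged
```

With this in place, the feature branch stays short: every subsequent commit merges to mainline behind the off-by-default flag instead of living in a long-lived branch waiting for a big code merge.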
That's that sort of on-off switch that you want to put there so that you can start to deploy your code and not have a long-lived branch in your source code. But you can have your code there, it reads whether that configuration is on or off. You start with it off.And so, it really helps just while developing these things about keeping your branches short. And you can push the mainline, as long as the feature flag is off and the feature is hidden to production, which is great. So, that helps with the mess of doing big code merges. The other part is around the launch of a feature.So, you talked about Andy Jassy being on stage to launch a new feature. Sort of the old way of doing this, Corey, was that you would need to look at your pipelines and see how long it might take for you to push out your code with any sort of code change in it. And let's say that was an hour-and-a-half process and let's say your CEO is on stage at eight o'clock on a Friday. And as much as you like to say it, “Oh, I'm never pushing out code on a Friday,” sometimes you have to. The old way—Corey: Yeah, that week, yes you are, whether you want to or not.Steve: [laugh]. Exactly, exactly. The old way was this idea that I'm going to time my release, and it takes an hour-and-a-half; I'm going to push it out, and I'll do my best, but hopefully, when the CEO raises her arm or his arm up and points to a screen that everything's lit up. Well, let's say you're doing that and something goes wrong and you have to start over again. Well, oh, my goodness, we're 15 minutes behind, can you accelerate things? 
And then you start to pull away some of these blockers to accelerate your pipeline or you start editing it right in the console of your application, which is generally not a good idea right before a really big launch.So, the new way is, I'm going to have that code already out there on a Wednesday [laugh] before this big thing on a Friday, but it's hidden behind this feature flag, I've already turned it on and off for internals, and it's just waiting there. And so, then when the CEO points to the big screen, you can just flip that one small little configuration change—and that can be almost instantaneous—and people can access it. So, that just reduces the amount of stress, reduces the amount of risk in pushing out your code.Another thing is—we've heard this from customers—customers are increasing the number of deploys that they can do per week by a very large percentage because they're deploying with confidence. They know that I can push out this code and it's off by default, then I can turn it on whenever I feel like it, and then I can turn it off if something goes wrong. So, if you're into CI/CD, you can actually just move a lot faster with a number of pushes to production each week, which again, I think really helps engineers on their day-to-day lives. The final thing I'm going to talk about is that let's say you did push out something, and for whatever reason, that following weekend, something's going wrong. The old way was oop, you're going to get a page, I'm going to have to get on my computer and go and debug things and fix things, and then push out a new code change.And this could be late on a Saturday evening when you're out with friends. If there's a feature flag there that can turn it off and if this feature is not critical to the operation of your product, you can actually just go in and flip that feature flag off until the next morning or maybe even Monday morning. 
So, in theory, you kind of get your free time back when you are implementing feature flags. So, I think those are the big benefits for engineers in using feature flags.

Corey: And the best way to figure out whether someone is speaking from a position of experience or is simply a raving zealot when they're in a position where they are incentivized to advocate for a particular way of doing things or a particular product, as—let's be clear—you are in that position, is to ask a form of the following question. Let's turn it around for a second. In what scenarios would you absolutely not want to use feature flags? What problems arise? When do you take a look at a situation and say, “Oh, yeah, feature flags will make things worse, instead of better. Don't do it.”

Steve: I'm not sure I'd necessarily say don't do it—maybe I am that zealot—but you got to do it carefully.

Corey: [laugh].

Steve: You really got to do things carefully because as I said before, flipping on a feature flag for everybody is similar to pushing out untested code to production. So, you want to do that in a measured way. So, you need to make sure that you do a couple of things. One, there should be some way to measure what the system behavior is for a small set of users with that feature flag flipped to on first. And it could be some canaries that you're using for that.

You can also—there's other mechanisms you can do that with: set up cohorts and beta testers and those kinds of things. But I would say the gradual rollout and the targeted rollout of a feature flag is critical. You know, again, it sounds easy, “I'll just turn it on later,” but you ideally don't want to do that. The second thing you want to do is, if you can, is there some sort of validation that the feature flag is what you expect?
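One common way to implement the gradual, targeted rollout Steve recommends is deterministic bucketing: hash the user and flag name to a stable bucket, and enable the feature only for buckets below the rollout percentage. This is a generic sketch of the technique, not AppConfig's internals; the function name and hashing scheme are this example's invention.

```python
import hashlib

def in_rollout(user_id: str, flag_name: str, percent: int) -> bool:
    """Return True if this user falls inside the rollout percentage.

    Hashing user+flag gives each user a stable bucket in [0, 100), so a
    user doesn't flap in and out of the cohort between requests, and
    different flags get independent cohorts.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```

In practice you would start `percent` at 1 or 5 for canaries, watch your metrics, and ratchet it toward 100 rather than flipping everyone at once.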
So, I was talking about on-off feature flags; there are things, as when I was talking about dynamic configuration, that are things like throttling limits, that you actually want to make sure that you put in some other safeguards that say, “I never want my TPS to go above 1200 and never want to set it below 800,” for whatever reason, for example. Well, you want to have some sort of validation of that data before the feature flag gets pushed out. Inside Amazon, we actually have the policy that every single flag needs to have some sort of validation around it so that we don't accidentally fat-finger something out before it goes out there. And we have fat-fingered things.

Corey: Typing the wrong thing into a command structure into a tool? “Who would ever do something like that?” He says, remembering times he's taken production down himself, exactly that way.

Steve: Exactly, exactly, yeah. And we've done it at Amazon and AWS, for sure. And so yeah, if you have some sort of structure or process to validate that—because oftentimes, what you're doing is you're trying to remediate something in production. Stress levels are high, it is especially easy to fat-finger there. So, that check-and-balance of a validation is important.

And then ideally, you have something to automatically roll back whatever change that you made, very quickly. So AppConfig, for example, hooks up to CloudWatch alarms. If an alarm goes off, we're actually going to roll back instantly whatever that feature flag was to its previous state so that you don't even need to really worry about validating against your CloudWatch. It'll just automatically do that against whatever alarms you have.

Corey: One of the interesting parts about working at Amazon and seeing things in Amazonian scale is that one in a million events happen thousands of times every second for you folks. What lessons have you learned by deploying feature flags at that kind of scale?
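The 800–1200 TPS guardrail Steve mentions maps naturally onto a validator that runs before a configuration change is accepted; if validation fails, the previous value stays in force. AppConfig itself expresses validators as JSON Schema documents or Lambda functions; the plain-Python sketch below just illustrates the check-and-balance idea, with invented names.

```python
class ValidationError(ValueError):
    """Raised when a proposed config value fails its guardrail."""

def validate_throttle(tps: int) -> None:
    # Guardrail from the discussion: never below 800, never above 1200.
    if not 800 <= tps <= 1200:
        raise ValidationError(f"throttle {tps} outside [800, 1200]")

def apply_throttle(config: dict, new_tps: int) -> dict:
    """Apply a new throttle only if it validates; otherwise keep the old one."""
    try:
        validate_throttle(new_tps)
    except ValidationError:
        return config  # a fat-fingered value never reaches production
    return {**config, "tps": new_tps}
```

The important property is that the validator runs in the deployment path, not in code review: a stressed operator typing an extra zero gets stopped mechanically.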
Because one of my problems and challenges with deploying feature flags myself is that in some cases, we're talking about three to five users a day for some of these things. That's not really enough usage to get insights into various cohort analyses or A/B tests.

Steve: Yeah. As I mentioned before, we build these things as features into our product. So, I just talked about the CloudWatch alarms. That wasn't there originally. Originally, you know, if something went wrong, you would observe a CloudWatch alarm and then you decide what to do, and one of those things might be that I'm going to roll back my configuration.

So, a lot of the mistakes that we made that caused alarms to go off necessitated us building some automatic mechanisms. And you know, a human being can only react so fast, but an automated system there is going to be able to roll things back very, very quickly. So, that came from some specific mistakes that we had made inside of AWS. The validation that I was talking about as well. We have a couple of ways of validating things.

You might want to do a syntactic validation, which really you're validating—as I was saying—the range between 100 and 1000, but you also might want to have sort of a functional validation, or we call it a semantic validation so that you can make sure that, for example, if you're switching to a new database, that you're going to flip over to your new database, you can have a validation there that says, “This database is ready, I can write to this table, it's truly ready for me to switch.” Instead of just updating some config data, you're actually going to be validating that the new target is ready for you. So, those are a couple of things that we've learned from some of the mistakes we made.
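The syntactic/semantic distinction Steve draws can be made concrete: a syntactic check confirms the config value is well-formed, while a semantic check exercises the new target for real before you flip over to it. Below is an illustrative readiness probe using SQLite as a stand-in for the database being switched to; the probe-table approach is this example's invention, not a quoted AWS mechanism.

```python
import sqlite3

def target_is_ready(conn: sqlite3.Connection) -> bool:
    """Semantic validation: prove we can actually write to the new target,
    rather than merely checking that its connection string parses."""
    try:
        conn.execute("CREATE TABLE IF NOT EXISTS readiness_probe (id INTEGER)")
        conn.execute("INSERT INTO readiness_probe (id) VALUES (1)")
        conn.execute("DELETE FROM readiness_probe")
        conn.commit()
        return True
    except sqlite3.Error:
        return False
```

Only after this probe succeeds would the "switch databases" flag be allowed to flip; a target that is unreachable, read-only, or missing permissions fails the probe instead of failing production traffic.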
And again, not saying we aren't making mistakes still, but we always look at these things inside of AWS and figure out how we can benefit from them and how our customers, more importantly, can benefit from these mistakes.

Corey: I would say that I agree. I think that you have threaded the needle of not talking smack about your own product, while also presenting it as not the global panacea that everyone should roll out, willy-nilly. That's a good balance to strike. And frankly, I'd also say it's probably a good point to park the episode. If people want to learn more about AppConfig, how you view these challenges, or even potentially want to get started using it themselves, what should they do?

Steve: We have an informational page at go.aws/awsappconfig. That will tell you the high-level overview. You can search for our documentation and we have a lot of blog posts to help you get started there.

Corey: And links to that will, of course, go into the [show notes 00:31:21]. Thank you so much for suffering my slings, arrows, and other assorted nonsense on this. I really appreciate your taking the time.

Steve: Corey, thank you for the time. It's always a pleasure to talk to you. Really appreciate your insights.

Corey: You're too kind. Steve Rice, principal product manager for AWS AppConfig. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment. But before you do, just try clearing your cookies and downloading the episode again. You might be in the 3% cohort for an A/B test, and you [want to 00:32:01] listen to the good one instead.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
About Nipun

Nipun Agarwal is a Senior Vice President, MySQL HeatWave Development, Oracle. His interests include distributed data processing, machine learning, cloud technologies, and security. Nipun was part of the Oracle Database team where he introduced a number of new features. He has been awarded over 170 patents.

Links Referenced:
Oracle: https://oracle.com
MySQL HeatWave info: https://www.oracle.com/mysql/
MySQL Service on AWS and OCI login (Oracle account required): https://cloud.mysql.com

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is brought to us in part by our friends at Datadog. Datadog's SaaS monitoring and security platform enables full-stack observability for developers, IT operations, security, and business teams in the cloud age. Datadog's platform, along with 500-plus vendor integrations, allows you to correlate metrics, traces, logs, and security signals across your applications, infrastructure, and third-party services in a single pane of glass.

Combine these with drag-and-drop dashboards and machine-learning-based alerts to help teams troubleshoot and collaborate more effectively, prevent downtime, and enhance performance and reliability. Try Datadog in your environment today with a free 14-day trial and get a complimentary T-shirt when you install the agent. To learn more, visit datadoghq.com/screaminginthecloud. That's www.datadoghq.com/screaminginthecloud.

Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig secures your cloud from source to run. They believe, as do I, that DevOps and security are inextricably linked.
If you wanna learn more about how they view this, check out their blog; it's definitely worth the read. To learn more about how they are absolutely getting it right from where I sit, visit sysdig.com and tell them that I sent you. That's S-Y-S-D-I-G dot com. And my thanks to them for their continued support of this ridiculous nonsense.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted episode is sponsored by our friends at Oracle, and back for a borderline historic third round of going out and telling stories about these things, we have Nipun Agarwal, who, as opposed to his first appearance on the show, has since been promoted to senior vice president of MySQL HeatWave. Nipun, thank you for coming back. Most people are not enamored enough with me to subject themselves to my slings and arrows a second time, let alone a third. So first, thanks. And are you okay, over there?

Nipun: Thank you, Corey. Yeah, very happy to be back.

Corey: [laugh]. So, since the last time we've spoken, there have been some interesting developments that have happened. It was pre-announced by Larry Ellison on a keynote stage or an earnings call, I don't recall the exact format, that HeatWave was going to be coming to AWS. Now, you've conducted a formal announcement, the usual media press blitz, et cetera, talking about it with an eye toward general availability later this year, if I'm not mistaken, and things seem to be—if you'll forgive the term—heating up a bit.

Nipun: That is correct. So, as you know, we have had MySQL HeatWave on OCI for just about two years now.
Very good reception; a lot of people who are using MySQL HeatWave are migrating from other clouds, specifically from AWS, and now we have announced availability of MySQL HeatWave on AWS.

Corey: So, for those who have not done the requisite homework of listening to the entire back catalog of nearly 400 episodes of this show, what exactly is MySQL HeatWave, just so we make sure that we set the stage for what we're going to be talking about? Because I sort of get the sense that without a baseline working knowledge of what that is, none of the rest of this is going to make a whole lot of sense.

Nipun: MySQL HeatWave is a managed MySQL service provided by Oracle. But it is different from other MySQL-based services in the sense that we have significantly enhanced the service such that it can very efficiently process transactions, analytics, and in-database machine learning. So, what customers get with the service, with MySQL HeatWave, is a single MySQL database which can process OLTP, transaction processing, real-time analytics, and machine learning. And they can do this without having to move the data out of MySQL into some other specialized database services for running analytics or machine learning. And all existing tools and applications which work with MySQL work as is because this is something that enhances the server. In addition to that, it provides very good performance and very good price performance compared to other similar services out there.

Corey: The idea that some folks were historically pushing around multi-cloud was that you would have workloads that—oh, they live in one cloud, but the database was going to be all the way across the other side of the internet, living in a different provider. And in practice, what we generally tend to see is that where the data lives is where the compute winds up living.
By and large, it's easier to bring the compute resources to the data than it is to move the data to the compute, just because data egress in most of the cloud providers—notably exempting yours—is astronomically expensive. You are, if I recall correctly, less than 10% of AWS's data egress charge on just retail pricing alone, which is wild to me. So first, thank you for keeping that up and not raising prices, because I would have felt rather annoyed if I'd been saying such good things and it was, haha, it was a bait and switch. It was not. I'm still a big fan. So, thank you for that, first and foremost.

Nipun: Certainly. And what you described is absolutely correct: while we have a lot of customers migrating from AWS to use MySQL HeatWave on OCI, a class of customers are unable to, and the number one reason they're unable to is that AWS charges these customers very high egress fees to move the data out of AWS into OCI for them to benefit from MySQL HeatWave. And this has definitely been one of the key incentives for us, the key motivation for us, to offer MySQL HeatWave on AWS so that customers don't need to pay these exorbitant data egress fees.

Corey: I think it's fair to disclose that I periodically advise a variety of different cloud companies from a perspective of voice-of-the-customer feedback, which essentially distills down to me asking really annoying slash obnoxious questions that I, as a customer, legitimately want to know, but people always frown at me when I ask that in vendor pitches. For some reason, when I'm doing this on an advisory basis, people instead nod thoughtfully and take notes, so that at least feels better from my perspective. Oracle Cloud has been one of those, and I've been kicking the tires on the AWS offering that you folks have built out for a bit of time now. I have to say, it is legitimate.
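To put Corey's "less than 10%" figure in perspective, here is some back-of-the-envelope arithmetic. The per-GB rates and the 50 TB figure are illustrative assumptions for the sketch (roughly the retail list prices discussed publicly around the time of this conversation), not official pricing.

```python
# Back-of-the-envelope egress comparison; rates are illustrative
# assumptions, not quoted pricing.
AWS_PER_GB = 0.09    # approximate AWS internet egress list price
OCI_PER_GB = 0.0085  # approximate OCI rate beyond its free allowance

def egress_cost(gb: float, per_gb: float) -> float:
    return gb * per_gb

gb_moved = 50_000  # hypothetical 50 TB database migration
aws_cost = egress_cost(gb_moved, AWS_PER_GB)  # ~$4,500
oci_cost = egress_cost(gb_moved, OCI_PER_GB)  # ~$425
# OCI comes out under 10% of the AWS charge, matching the figure above.
```

At these rates, the egress bill alone can dwarf a small team's monthly database spend, which is exactly the migration blocker Nipun describes next.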
I was able to run a significant series of tests on this, and what I found going through that process was interesting on a bunch of different levels. I'm wondering if it's okay with you, if we go through a few of them, just things that jumped out to me as we went through a series of conversations around, “So, we're going to run a service on AWS.” And my initial answer was, “Is this Oracle? Are you sure?” And here we are today; we are talking about it in press releases.

Nipun: Yes, certainly fine with me. Please go ahead.

Corey: So, I think one of the first questions I had when you said, “We're going to run a database service on AWS itself,” was, if I'm true to type, going to be fairly sarcastic, which is, “Oh, thank God. Finally, a way to run a MySQL database on AWS. There's never been one of those before.” Unless you count EC2 or Aurora or Redshift, depending upon how you squint at it, or a variety of other increasingly strange things. It feels like that has been a largely saturated market in many respects.

I generally don't tend to advise on things that I find patently ridiculous, and your answer was great, but I don't want to put words in your mouth. What was it that you saw that made you say, “Ah, we're going to put a different database offering on AWS, and no, it's not a terrible decision?”

Nipun: Got it. Okay, so if you look at it, the value proposition which MySQL HeatWave offers is that customers of MySQL or customers of MySQL-compatible databases, whether it's Aurora, or whether it's RDS MySQL, right, or even, like, you know, customers of Redshift, they have been migrating to MySQL HeatWave on OCI. Like, for the reasons I said: it's a single database, customers don't need to have multiple databases for managing different kinds of workloads, it's much faster, it's a lot less expensive, right? So, there are a lot of value propositions.
So, what we found is that if we were to offer MySQL HeatWave on AWS, it would significantly ease the migration of other customers who might be otherwise thinking that it will be difficult for them to migrate, perhaps because of the high egress cost of AWS, or because of the high latency some of the applications in AWS incur when the database is running somewhere else. Or, if they really have an ecosystem of applications already running on AWS and they just want to replace the database, it'll be much easier for them if MySQL HeatWave was offered on AWS. Those are the reasons why we feel it's a compelling proposition: if existing customers of AWS are willing to migrate from AWS to OCI and use MySQL HeatWave, there is clearly a value proposition we are offering. And if we can now offer the same service in AWS, it will hopefully increase the number of customers who can benefit from MySQL HeatWave.

Corey: One of the next questions I had going in was, “Okay, so what actually is this under the hood?” Is this you effectively just stuffing some software into a machine image or an AMI—or however they want to mispronounce that word over in AWS-land—and then just making it available to your account and running that? Where's the magic or mystery behind this? Like, it feels like the next more modern cloud approach is to stuff the whole thing into a Docker container. But that's not what you wound up doing.

Nipun: Correct. So, HeatWave has been designed and architected for scale-out processing, and it's been optimized for the cloud. So, when we decided to offer MySQL HeatWave on AWS, we have actually gone ahead and optimized our service for the AWS architecture. So, the processor we are running on, right, we have optimized our software for the instance types in AWS, right? So, the data plane has been optimized for AWS architecture. The second thing is we have a brand-new control plane layer, right?
So, it's not the case that we're just taking what we had in OCI and running it on AWS. We have optimized the data plane for AWS, we have a native control plane which is running on AWS, which is using the respective services on AWS. And third, we have a brand-new console which we are offering, which is a very interactive console where customers can run queries from the console. They can do data management from the console, they're able to use Autopilot from the console, and we have performance monitoring from the console, right? So, data plane, control plane, console. They're all running natively in AWS. And this provides for a very seamless integration or seamless experience for the AWS customers.

Corey: I think it's also a reality, however much we may want to pretend otherwise, that if there is an opportunity to run something in a different cloud provider that is better than where you're currently running it now, by and large, customers aren't going to do it because it needs to be not just better, but so astronomically better in ways that are foundational to a company's business model in order to justify the tremendous expense of a cloud migration. Not just in real, out-of-pocket cost in dollars and cents that are easy to measure, but also in terms of engineering effort, in terms of opportunity cost—because while you're doing that, you're not doing other things instead—and, on some level, people tend to only do that when there's an overwhelming strategic reason to do it.

When folks already have existing workloads on AWS, as many of them do, it stands to reason that they are not going to want to completely deviate from that strategy just because something else offers a better database experience on any number of axes. So, meeting customers where they are is one of the, I guess, foundational shifts that we've really seen from the entire IT industry over the last 40 years, rather than you will buy it from us and you will tolerate it.
It's, now customers have choice, and meeting them where they are and being much more, I guess, able to impedance-match with them has been critical. And I'm really optimistic about what the launch of this service portends for Oracle.

Nipun: Indeed, but let me give you another data point. We find a very large number of Aurora customers migrating to MySQL HeatWave on OCI, right? And this is the same workload they were running on Aurora, but now they want to run the same workload on MySQL HeatWave on OCI. They are willing to undertake this journey of migration because their applications get much faster, and for a lot less price, but they get much faster.

Then the second aspect is, there's another class of customers who are, for instance, running transactional workloads on Aurora, but then they have to keep moving the data, they keep performing the ETL process into some other service, whether it's Snowflake, or whether it's Redshift, for analytics. Now, with this migration, when they move to MySQL HeatWave, customers don't need to, like, have multiple databases, and they get real-time analytics, meaning that if any data changes inside the server, inside the OLTP database service, right? If they were to run a query, that query is giving them the latest results, right? It's not stale. Whereas with an ETL process, it gets to be stale.
So, given that we already found that there were so many customers migrating to OCI to use MySQL HeatWave, I think there's a clear value proposition of MySQL HeatWave, and there's a lot of demand. But like, as I was mentioning earlier, by having MySQL HeatWave be offered on AWS, it makes the proposition even more compelling because, as you said, yes, there is some engineering work that customers will need to do to migrate between clouds, and if they don't want to, then absolutely, now they have MySQL HeatWave which they can use in AWS itself.

Corey: I think that one of the things I continually find myself careening into, perhaps unexpectedly, is a failure to really internalize just how vast this entire industry really is. Every time I think I've seen it all, all I have to do is talk to one more cloud customer and I learn something completely new and different. Sometimes it's an innovative, exciting use of a thing. Other times, it's people holding something fearfully wrong and trying to use it as a hammer instead. And you know, if it's dumb and it works, is it really dumb? There are questions around that.

And this in turn gave rise to one of my next obnoxious questions as I was looking at what you were building at the time, because a lot of your pricing and discussions and framing of this was targeting very large enterprise-style customers, and the price points reflected that. And then I asked the question that Big-E Enterprise never quite expects, for whatever reason: “That looks awesome if I have a budget with many commas in it. What can I get for $4?” And as of this recording, pricing has not been finalized slash published for the service, but everything that you have shown me so far absolutely makes developing on this for a proof of concept or an evening puttering around completely tenable: it is not bound to a fixed period of licensing; it's use it when you want to use it, turn it off when you're done; and the hourly pricing is not egregious.
I think that is something that, historically, Oracle Database offerings have not really aligned with. OCI very much has, particularly with an eye toward its extraordinarily awesome free tier that's always free. But this feels like it's a weird blending of the OCI model versus historical Oracle Database pricing models in a way that, honestly, I'm pretty excited about.

Nipun: So, we react to what the customer requirements and needs are. So, for this class of customers who are using, say, RDS MySQL or Aurora, we understand that they are very cost-sensitive, right? So, one of the things which we have done, in addition to offering MySQL HeatWave on AWS, based on the customer feedback and such, is we are now offering a small shape of HeatWave instance in addition to the regular large shape. So, if customers want to just, you know, kick the tires, if developers just want to get started, they can get a MySQL node with HeatWave for less than ten cents an hour. So, for less than ten cents an hour, they get the ability to run transaction processing, analytics, and machine learning.

And if you were to compare the corresponding cost of Aurora for the same, like, you know, core count, it's, like, you know, 12-and-a-half cents. And that's just Aurora, without Redshift or without SageMaker. So yes, you're right: based on the feedback, we have found that it would be much more attractive to have this low-end shape for the AWS developers, so we are offering this smaller shape. And yeah, it's very, very affordable. It's about just shy of ten cents an hour.

Corey: This brings up another question that I raised pretty early on in the process, because you folks kept talking about shapes, and it turns out that is the Oracle Cloud term that applies to instance size over in AWS-land. And as we dug into this a bit further, it does make sense for how you think about these things and how you build them for customers.
Specifically, if I want to run this, I log into cloud.oracle.com and sign up for it there, and pay you over on that side of the world; this does not show up on my AWS bill. What drove that decision?

Nipun: Okay, so, a couple of things. One clarification is that the site people log in to is cloud.mysql.com. So, that's where they come to: cloud.mysql.com.

Corey: Oh, my apologies. I keep forgetting that you folks have multiple cloud offerings and domains. They're kind of a thing. How do they work? Given I have a bad domain-buying habit myself, I have no room to judge.

Nipun: So, they come to cloud.mysql.com. From there, they can provision an instance. And we, as, like, you know, Oracle or MySQL, go ahead and create an instance in AWS, in the Oracle tenancy. From there, customers can then, you know, access their data on AWS and such. Now, what we want to provide the customers is a very seamless experience: they just come to cloud.mysql.com, and from there, they can do everything—provisioning an instance, running the queries, payment, and such. So, this is one of the reasons that we want customers just to be able to come to the site, cloud.mysql.com, and take care of the billing and such.

Now, the other thing is that, okay, why not allow customers to pay from AWS, right? Now, one of the things over there is that if you were to do that, a customer would be like, “Hey, I've got to pay something to AWS, something to Oracle,” so it'd be better to have a one-stop shop. And since many of these are already Oracle customers, it's helpful to do it this way.

Corey: Another approach you could have taken—and I want to be very clear here that I am not suggesting that this would have been a good idea—but an approach that you could have taken would have been to go down the weird AWS partner rabbit hole, and, “We're going to provide this to customers on the AWS Marketplace.” Because according to AWS, that's where all of their customers go to discover new softwares.
Yeah, first, that's a lie. They do not. But aside from that, what was it about that Marketplace model that drove you to a decision point where, okay, at launch, we are not going to be offering this on the AWS Marketplace? And to be clear, I'm not suggesting that was the wrong decision.

Nipun: Right. The main reason is we want to offer the MySQL HeatWave service at the least expensive cost to the user, right, or, like, the least cost. If you were to, like, have MySQL HeatWave in the Marketplace, AWS charges a premium, which the customers would need to pay. So, we just didn't want the customers to have to pay this additional premium just because they can now source this thing from the Marketplace. So, it's really to, like, save costs for the customer.

Corey: The value of the Marketplace, from my perspective, has been effectively not having to deal as much with customer procurement departments because, well, AWS is already on the procurement-approved list, so we're just going to go ahead and take the hit to wind up making it accessible from that perspective and calling it good. The downside to this is that increasingly, as customers are making larger and longer-term commitments that are tied to certain levels of spend on AWS, they're increasingly trying to drag every vendor with whom they do business into their AWS bill so they can check those boxes off. And the problem that I keep seeing with that is vendors who historically have been doing just fine, have great working relationships with a customer, are reporting that suddenly customers are coming back with, “Yeah, so for our contract renewal, we want to go through the AWS Marketplace.” In return, effectively, these companies are then just getting a haircut off whatever it is they're able to charge their customers but receiving no actual value for any of this.
It attenuates the relationship by introducing a third party into the process, and it doesn't make anything better from the vendor's point of view because they already had something functional and working; now they just have to pay a commission on it to AWS, who, it seems, is pathologically averse to any transaction happening where they don't get a cut, on some level. But I digress. I just don't like that model very much at all. It feels coercive.

Nipun: That's absolutely right. That's absolutely right. And we thought that, yes, there is some value to going to the Marketplace, but it's not worth the additional premium customers would need to pay. Totally agree.

Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire drill at 4 in the morning. Software problems should drive innovation and collaboration, not stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production, but hide that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.

Corey: It's also worth pointing out that in Oracle's historical customer base, by which I mean the last 40 years that you folks have been in business, you do have significant customers with very sizable estates. A lot of your cloud efforts have focused around, I guess, we'll call it an Oracle-specific currency: Oracle Credits. Which is similar to the AWS style of currency, just for a different company in different ways.
One of the benefits that you articulated to me relatively early on was that by going through cloud.mysql.com, customers with those credits—which can be in sizable amounts based upon various differentiating variables that change from case to case—can apply them to their use of MySQL HeatWave on AWS.

Nipun: Right. So, in fact, just for starters, right, what we give to customers is we offer some free credits, of $300, for customers to try the service on OCI. And that's the same experience we would like customers who are trying HeatWave on AWS to get. Yes, so you're right, this is the kind of consistency we want to have, and yet another reason why cloud.mysql.com makes sense as the entry point for customers to try the service.

Corey: There was a time where I would have struggled not to laugh in your face at the idea that we're talking about something in the context of an Oracle database, and, “Well, there's $300 in credit. What can I get for that? Hung up on?” No. A surprising amount, when it comes to these things.

I feel like that opens up an entirely new universe of experimentation. And, “Let's see how this thing actually works with this workload,” and lets people kick the tires on it for themselves in a way that, “Oh, we have this great database. Well, can I try it? Sure, for $8 million, you absolutely can.” “Well, it can stay great and awesome over there because who wants to take that kind of a bet?” It feels like it's a new world in a bunch of different respects, and I just can't make enough noise about how glad I am to see this transformation happening.

Nipun: Yeah. Absolutely, right? So, just think about it. So, you're getting MySQL and HeatWave together for just shy of ten cents an hour, right? So, what you could get for $300 is 3,000 hours of a MySQL HeatWave instance, which is very good for people to try for free.
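Nipun's numbers check out as simple arithmetic: at the small-shape rate, $300 of free credits buys thousands of hours of tire-kicking. The ten-cent figure below is the "just shy of ten cents" rate from the conversation, rounded up as an assumption for the sketch.

```python
# Rough arithmetic on the free-trial math mentioned above.
HOURLY_RATE = 0.10    # MySQL node + HeatWave node, "just shy of ten cents"
FREE_CREDITS = 300.00

hours = FREE_CREDITS / HOURLY_RATE  # about 3,000 hours of runtime
days = hours / 24                   # about 125 days of an always-on instance
```

Since billing is hourly with no fixed licensing period, that budget stretches even further if the instance is only running while you're actually testing.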
And then, you know, decide if they want to go ahead with it.
Corey: One other, I guess, obnoxious question that I love to ask—it's not really a question so much as a statement, and that's part of what makes it really obnoxious—but it always distills down to the following, which upsets product people left and right: "I don't get it." And one of the things that I didn't fully understand at the outset of how you were structuring things was the idea of separating out HeatWave from its constituent components. I believe it was Autopilot, if I'm not mistaken, and there were effectively different SKUs that you could wind up opting to go for. And okay, if I'm trying to kick the tires on this and contextualize it as someone for whom the world's best database is Route 53, then it really felt like an additional decision point that I wasn't clear on the value of. And I'm still not entirely sure on the differentiation point and the value there, but now you offer it bundled as a default, which I think is so much better from the user experience perspective.
Nipun: Okay, so let me clarify a couple of things.
Corey: Please. Databases are not my forte, so expect me to wind up getting most of the details hilariously wrong.
Nipun: Sure. So, MySQL Autopilot provides machine-learning-based automation for various aspects of the MySQL service; very popular. It is built into MySQL HeatWave at no additional charge, so there never was any SKU for it. What you're referring to is that we have had a SKU for the MySQL node, or the MySQL instance, and a separate SKU for HeatWave.
The reason there is a need to have two different SKUs is because you always have only one node of MySQL. It could be, like, you know, running on one core, or multiple cores, but it's always one node. But with HeatWave, it's a scale-out architecture, so you can have multiple nodes.
So, the users need to be able to express how many nodes of HeatWave they are provisioning, right? So, that's why there is a need to have two SKUs, and we continue to have those two SKUs.
What we are doing now differently is that when users instantiate a MySQL instance, by default, they always get the HeatWave node associated with it. So, they don't need to, like, you know, make the decision of when to add HeatWave; they always get HeatWave along with the MySQL instance, and that's what I was saying: a combination of both of these is, you know, just about ten cents an hour. If, for whatever reason, they decide that they do not want HeatWave, they can turn it off, and then the price drops to half. But in the AWS service we're providing, HeatWave is turned on by default.
Corey: Which makes an awful lot of sense. It's something that lets people opt out if they decide they don't need this as they continue to scale out, but for the newcomer who does not, in many cases—in my particular case—have a nuanced understanding of where this offering starts and stops, it's clearly the right decision. Rather than, "Oh, yeah. The thing you were trying and it didn't work super well? Well, yeah. If you enable this other thing, it would have been awesome." "Well, great. Please enable it for me by default and let me opt out later in time as my level of understanding deepens."
Nipun: That's right. And that's exactly what we are doing. Now, this was feedback we got, because many, if not most, of our customers would want to have HeatWave, and we're just, you know, saving them from going through one more step; it's always enabled by default.
Corey: As far as I'm aware, you folks are running this effectively as any other AWS customer might, where you establish a private link connection to your customers, in some cases, or give them a public or private endpoint where they can wind up communicating with this service.
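The SKU structure Nipun describes can be sketched as a toy cost model. The ten-cents rate and the halving are his conversational figures; the per-extra-node pricing and all names here are invented assumptions for illustration, not Oracle's API or price list:

```python
from dataclasses import dataclass

# Toy model of the two-SKU structure: one MySQL node, a scale-out count of
# HeatWave nodes, and HeatWave on by default. Rates are the rough
# conversational figures, not rate-card prices; the extra-node rate is an
# invented assumption.
COMBINED_RATE_USD = 0.10  # MySQL + 1 HeatWave node, "about ten cents an hour"

@dataclass
class HeatWaveProvision:
    heatwave_enabled: bool = True  # the new default on AWS
    heatwave_nodes: int = 1        # HeatWave scales out; MySQL is always one node

    def hourly_cost(self) -> float:
        if not self.heatwave_enabled:
            return COMBINED_RATE_USD / 2  # "the price drops to half"
        # Assume each extra HeatWave node costs the HeatWave half again.
        return COMBINED_RATE_USD + (self.heatwave_nodes - 1) * COMBINED_RATE_USD / 2

print(HeatWaveProvision().hourly_cost())                        # default configuration
print(HeatWaveProvision(heatwave_enabled=False).hourly_cost())  # HeatWave turned off
```

The default constructor mirrors the behavior described: HeatWave is on unless the user explicitly opts out.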
It doesn't require any favoritism or special permissions from AWS themselves that they wouldn't give to any other random customer out there, correct?
Nipun: Yes, that is correct. So, for now, we are exposing this as a public endpoint. In the future, we have plans to support the private endpoint as well, but for now, it's public.
Corey: Which means that foundationally, what you're building out is something that fits into a model that could work extraordinarily well across a variety of different environments. How purpose-tuned is the HeatWave installation you have running on AWS for the AWS environment, versus something that is relatively agnostic and could be dropped into any random cloud provider, up to and including the terrifyingly obsolete rack I have in the spare room?
Nipun: So, as I mentioned, when we decided to offer MySQL HeatWave on AWS, the idea was that for AWS customers, we now want to have an offering which is completely optimized for AWS and provides the best price-performance on AWS. So, we have determined which instance types underneath will provide the best price-performance, and that's what we have optimized for. I can tell you, for instance, that the cache size of the underlying processor we're using on AWS is different than what we're using for OCI. So, we have gone ahead, made these optimizations in our code, and we believe that our code is really optimized now for the AWS infrastructure.
Corey: I think that makes a fair deal of sense because, again, one of the big problems AWS has had is the proliferation of EC2 instance types, to the point now where the answer to, "Are you using the correct instance type for your workload?" is super easy. Because that answer now is, "Of course not.
Who could possibly say that they were with any degree of confidence?" But when you take the time to look at a very specific workload that's going to be scaled out, it's worth the time investment to figure out exactly how to optimize things for price and performance, given the constraints. Let's be very clear here: I would argue that the better price-performance for HeatWave is almost certainly not going to be on AWS itself, if for no other reason than the joy that is their data transfer pricing, even for internal things moving around from time to time.
Personally, I love getting charged data transfer for taking data from S3, running it through AWS Glue, putting it into a different S3 bucket, accessing it with Athena, then hooking that up to Tableau as we go down and down and down the spiraling rabbit hole that never ends. It's not exactly what I would call well-optimized economically. Their entire system feels almost like it's a rigged game, on some level. But given those constraints, yeah, dialing it in and making it cost-effective is absolutely something that I've watched you folks put significant time and effort into.
Nipun: So, I'll make two points in response. First, I just want to be clear that when a user provisions MySQL HeatWave via cloud.mysql.com and we create an instance in AWS, we don't give customers a multitude of things to, like, you know, choose from.
We have determined which instance type is going to provide the customer the best price-performance, and that's what we provision. So, the customer doesn't even need to know or care: is it going to be, like, you know, AMD? Is it going to be Intel? Is it going to be ARM? It's something which we have predetermined and optimized for. That's first.
The second point is in terms of the price-performance.
So, you're absolutely right that for the class of customers who cannot migrate away from AWS, because of the egress costs or because of latency, sure, MySQL HeatWave on AWS will provide the best price-performance compared to other services in AWS like Redshift, or Aurora, or Snowflake. But if customers have the flexibility to choose a cloud of their choice, it is indeed the case that they are going to find that running MySQL HeatWave on OCI provides, by far, the best price-performance. So, the price-performance of running MySQL HeatWave on OCI is indeed better than MySQL HeatWave on AWS, just because of the fact that when we are running the service in AWS, we are paying the list price on AWS; that's how we get the gear. Whereas with OCI, like, you know, things are a lot less expensive for us.
But even when running on AWS, we are very, very price-competitive with other services. And you know, as you've probably seen from the performance benchmarks and such, what I'm very intrigued about is that we're able to run a standard workload, like, you know, TPC-H, and offer seven times better price-performance while running in AWS compared to Redshift. So, what this goes to show is that we are really passing on the savings to the customers. And clearly, Redshift is not doing a good job on performance or, like, you know, they're charging too much. But the fact that we can offer seven times better price-performance than Redshift in AWS speaks volumes, both about the architecture and how much of the savings we are passing to our customers.
Corey: What I love about this story is that it makes testing the waters of what it's like to run MySQL HeatWave a lot easier for customers because the barrier to entry is so much lower. Everything you just said, I agree with: it is more cost-effective to run on Oracle Cloud. I think there are a number of workloads that are best placed on Oracle Cloud.
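Price-performance claims like the seven-times figure above are typically computed as cost multiplied by elapsed benchmark time, where lower is better. A hedged sketch of that arithmetic follows; the dollar figures and runtimes are invented purely to show the calculation, not the actual TPC-H results discussed:

```python
# Price-performance is typically (hourly cost of the cluster) x (benchmark
# elapsed time), i.e. dollars to complete the benchmark once, so lower is
# better. The numbers below are invented to illustrate the arithmetic;
# they are not the benchmark results discussed in the episode.
def price_performance(hourly_cost_usd: float, runtime_hours: float) -> float:
    """Dollars spent to complete one benchmark run."""
    return hourly_cost_usd * runtime_hours

heatwave = price_performance(hourly_cost_usd=16.0, runtime_hours=0.5)  # $8 per run
redshift = price_performance(hourly_cost_usd=28.0, runtime_hours=2.0)  # $56 per run

# "Seven times better price-performance" means the ratio of the two:
print(redshift / heatwave)
```

Note that a system can win this metric either by being cheaper per hour or by finishing faster, which is why both architecture and pricing feed into the claim.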
But unless you let people kick the tires on those things, where they happen to be already, it's difficult to get them to a point where they're going to be able to experience that themselves. This is a massive step on that path.
Nipun: Yep. Right.
Corey: I really want to thank you for taking time out of your day to walk us through exactly how this came to be and what the future is going to look like around this. If people want to learn more, where should they go?
Nipun: Oh, they can go to oracle.com/mysql, and there they can get a lot more information about the capabilities of MySQL HeatWave, what we are offering in AWS, price-performance. By the way, for all the price-performance numbers I was talking about, all the scripts are available publicly on GitHub. So, we welcome, we encourage customers to download the scripts from GitHub and try for themselves, and all of this detailed information is available from oracle.com/mysql.
Corey: And we will, of course, put links to that in the show notes. Thank you so much for your time. I appreciate it.
Nipun: Sure thing, Corey. Thank you for the opportunity.
Corey: Nipun Agarwal, Senior Vice President of MySQL HeatWave. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry insulting comment. You will then be overcharged for the data transfer to submit that insulting comment, and then AWS will take a percentage of that just because they're obnoxious and can.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point.
Visit duckbillgroup.com to get started.
Announcer: This has been a HumblePod production. Stay humble.
About Thomas
Thomas Hazel is Founder, CTO, and Chief Scientist of ChaosSearch. He is a serial entrepreneur at the forefront of communication, virtualization, and database technology and the inventor of ChaosSearch's patented IP. Thomas has also patented several other technologies in the areas of distributed algorithms, virtualization, and database science. He holds a Bachelor of Science in Computer Science from the University of New Hampshire, is a Hall of Fame Alumni Inductee, and founded both student and professional chapters of the Association for Computing Machinery (ACM).
Links Referenced:
ChaosSearch: https://www.chaossearch.io/
Twitter: https://twitter.com/ChaosSearch
Facebook: https://www.facebook.com/CHAOSSEARCH/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted episode is brought to us by our returning sponsor and friend, ChaosSearch. And once again, the fine folks at ChaosSearch have seen fit to basically subject their CTO and Founder, Thomas Hazel, to my slings and arrows. Thomas, thank you for joining me. It feels like it's been a hot minute since we last caught up.
Thomas: Yeah, Corey. Great to be on the program again. I think it's been almost a year. So, I look forward to these. They're fun, they're interesting, and you know, always a good time.
Corey: It's always fun to just take a look at companies' web pages in the Wayback Machine, archive.org, where you can see snapshots of them at various points in time. Usually, it feels like this is either used for long-gone things people want to remember from the internet of yesteryear, or alternately to deliver sick burns by retorting, "This you?" when someone winds up making an unpopular statement. One of the approaches I like to use it for, which is significantly less nefarious—usually—is looking back in time at companies' websites, just to see how the positioning of the product evolves over time.
And ChaosSearch has had an interesting evolution in that direction. But before we get into that, assuming that there might actually be people listening who do not know the intimate details of exactly what it is you folks do: what is ChaosSearch, and what might you folks do?
Thomas: Yeah, well said, and I look forward to [laugh] doing the Wayback Time because some of our ideas, way back when, seemed crazy, but now they make a lot of sense. So, what ChaosSearch is all about is transforming customers' cloud object stores, like Amazon S3, into an analytical database that supports search and SQL-type use cases. Now, where does that apply?
In log analytics, observability, security, security data lakes, operational data—particularly at scale, where you just stream your data into your data lake, connect our SaaS service to that lake, and automagically we index it and provide well-known APIs like Elasticsearch, integrating with Kibana or Grafana, and SQL APIs for something like, say, Superset or Tableau or Looker into your data. So, you stream it in and you get analytics out. And the key thing is the time, cost, and complexity that we all know operational data, particularly at scale—terabytes a day and up—causes, and we all know how much it costs.
Corey: They certainly do. One of the things that I found interesting is that, as I've mentioned before, when I do consulting work at The Duckbill Group, we have absolutely no partners in the entire space. That includes AWS, incidentally. But it was easy in the beginning because I was well aware of what you folks were up to, and it was great when there was a use case that matched: you're spending an awful lot of money on Elasticsearch; consider perhaps migrating some of that—if it makes sense—to ChaosSearch. Ironically, when you started sponsoring some of my nonsense, that conversation got slightly trickier, where I had to disclose, yeah, our media arm does have sponsorships going on with them, but that has no bearing on what I'm saying.
And if they take their sponsorships away—please don't—then we would still be recommending them because it's the right answer, and it's what we would use if we were in your position. We receive no kickbacks or partner deal or any sort of reseller arrangement because it just clouds the whole conflict-of-interest perception. But you folks have been fantastic for a long time in a bunch of different ways.
Thomas: Well, you know, I would say that what you thought made a lot of sense made a lot of sense to us as well. So, the ChaosSearch idea just makes sense.
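Because the service speaks the Elasticsearch API, existing tooling can keep sending ordinary query DSL. A minimal sketch of such a request body follows; the index name and field names are invented for illustration, and no actual endpoint is contacted here:

```python
import json

# A plain Elasticsearch-style query DSL body of the kind Kibana or any
# Elasticsearch client would send to an Elasticsearch-compatible endpoint.
# The index and field names are invented; an actual endpoint and
# credentials would be configured separately.
def last_hour_errors(index: str) -> dict:
    return {
        "index": index,
        "body": {
            "query": {
                "bool": {
                    "filter": [
                        {"term": {"level": "error"}},
                        {"range": {"@timestamp": {"gte": "now-1h"}}},
                    ]
                }
            },
            "size": 100,
        },
    }

request = last_hour_errors("app-logs")
print(json.dumps(request["body"]["query"], indent=2))
```

The point of API compatibility is exactly that a body like this needs no changes when the back end swaps from an Elasticsearch cluster to indexed object storage.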
Now, you had to crack some code, solve some problems, invent some technology, and create some new architecture, but the idea that Elasticsearch is a useful solution—with all the tooling, the visualization, the wonderful community around it—was a good place to start. But here's the problem: setting it up, scaling it out, keeping it up when things are happening and things go bump in the night. All those are real challenges, and one of them was just the storing of the data. Well, what if you could make S3 the back-end store? One hundred percent; no SSDs or HDDs. Makes a lot of sense.
And then support the APIs that your tooling uses. So, it just made a lot of sense, what we were trying to do; just no one had thought of it. Now, if you think about the North Star you were talking about, you know, five, six years ago, when I said, transforming cloud storage into an analytical database for search and SQL, people thought that was crazy and mad. Well, now everyone's using cloud storage, everyone's using S3 as a data lake. That's not in question anymore.
But it was a question five, six years ago. So, when we met up, you were like, "Well, that makes sense." It always made sense, but people either didn't think it was possible, or worried, you know, "I'll just try to set up an Elastic cluster and deal with it." Because that's what happens, particularly when you deal with large-scale implementations. So, you know, to us, we love the Elastic API and the tooling around it, but what we all know is the cost, the time, the complexity to manage it, to scale it out—you just almost want to pull your hair out. And so, that's where we come in: don't change what you do, just change how you do it.
Corey: Every once in a while, I'll talk to a client who's running an Amazon Elasticsearch cluster, and they have nothing but good things to say about it. Which, awesome.
On the one hand, part of me wishes that I had some of their secrets, but often what's happened is that they have this down to a science: they have a data lifecycle that's clearly defined and implemented, the cluster is relatively static, so resizes aren't really a thing, and it just works for their use cases. And in those scenarios, "Do you care about the bill?" "Not overly. We don't have to think about it."
Great. Then why change? If there's no pain, you're not going to sell someone something, especially when, in those conversations, we tend to be talking relatively smaller scale as well. Okay, great, they're spending $5,000 a month on it. It doesn't necessarily justify the engineering effort to move off.
Now, when you start looking at this and going, "Huh, that's a quarter million bucks a month we're spending on this nonsense, and it goes down all the time," yeah, that's when it starts to be one of those logical areas to start picking apart and diving into. What's also muddied the waters since the last time we really went in-depth on any of this is that it used to be we would be talking about it exactly like we are right now, about how it's Elasticsearch-compatible. Technically, these days, we probably should be saying it is OpenSearch-compatible because of the trademark issues between Elastic and AWS and the schism of the OpenSearch fork of the Elasticsearch project. And now it feels like when you start putting random words in front of the word "search," ChaosSearch fits right in. It feels like your star is rising.
Thomas: Yeah, no, well said. I appreciate that. You know, it's funny: when Elastic changed their license, we all didn't know what was going to happen. We knew something was going to happen, but we didn't know what. And Amazon, I say ironically, or, more importantly, decided they'll take up the open mantle of keeping an open, free solution.
Now, obviously, they recommend running that in their cloud. Fair enough.
But I would say we don't hear as much Elastic replacement as we hear OpenSearch replacement with our solution, because of all the benefits that we talked about. Because the trigger points for when folks have an issue with the OpenSearch or Elastic stack are: it got too expensive, or it was changing so much and it was falling over, or the complexity of the schema changing, or all of the above. The pipelines were complex, particularly at scale.
That's both for Elasticsearch as well as OpenSearch. And so, to us, we want either to win, but we want to be the replacement because, you know, at scale is where we shine. But we have seen a real trend where we see less Elasticsearch and more OpenSearch because the community is worried about the rules that were changed, right? You see it day in, day out, where you have a community that was built around open and fair and free, and because of business models not working, or the big bad so-and-so taking advantage of it, there's a license change. And that's a trust change.
And to us, we're following the OpenSearch path because it's still open. The 600-pound gorilla—or 900-pound gorilla—of Amazon really held the mantle, saying, "We're going to stay open, we assume, for as long as we know, and we'll follow that path." But again, at that scale, the time, the costs—we're here to help solve those problems, whether it's on Amazon or, you know, Google, et cetera.
Corey: I want to go back to what I mentioned at the start of this with the Wayback Machine and looking at how things wound up unfolding in the fullness of time. The first time that it snapshotted your site was way back in the year 2018, which—
Thomas: Nice.
[laugh].
Corey: Some of us may remember, and at that point, like, I wasn't doing any work with you, and later in time I would make fun of you folks for this, but back then your brand name was in all caps, so I would periodically say things like, this episode is sponsored by our friends at [loudly] CHAOSSEARCH.
Thomas: [laugh].
Corey: And once you stopped capitalizing it and that had faded from the common awareness, it just started to look like I had the inability to control the volume of my own voice. Which, fair, but generally not mid-sentence. So, I remember those early days, but the positioning of it was, "The future of log management and analytics," back in 2018. Skipping forward a year later, you changed this because apparently in 2019, the future was already here. And you were talking about, "Log search analytics, purpose-built for Amazon S3. Store everything, ask anything, all on your Amazon S3."
Which is awesome. You were still—unfortunately—going by the all-caps thing, but by 2020, that wound up changing somewhat significantly. You were at that point talking about it as, "The data platform for scalable log analytics." Okay, it's clearly heading in a log direction, and that made a whole bunch of sense. And now today, you are, "The data lake platform for analytics at scale." So, good for you, first off. You found a voice?
Thomas: [laugh]. Well, you know, it's funny: as a product-minded person—I'll take my marketing hat off—we've been building the same solution with the same value points and benefits as we mentioned earlier, but the market resonates with different terminology. When we said something like, "Transforming your cloud object storage, like S3, into an analytical database," people were just, like, blown away. Is that even possible? Right? And so, that got some eyes.
Corey: Oh, anything is a database if you hold it wrong. Absolutely.
Thomas: [laugh]. Yeah, yeah. And then, as you're saying, log analytics really resonated for a few years.
Data platform, you know, is broader because we do broader things. And now we see, over the last few years, observability, right? How do you fit in the observability viewpoint, the stack where log analytics is one aspect of it?
Some of our customers use Grafana on us for that lens, and then for the analysis, alerting, dashboarding. You could say Kibana covers the hunting aspect, the log aspects. So, you know, to us, we're going to put a message out there that resonates with what we're hearing from our customers. For instance, we hear things like, "I need a security data lake. I need that. I need to stream all my data. I need to have all the data, because whatever happens today, I need to know about a week, two weeks, 90 days later."
We constantly hear, "I need at least 90 days of forensics on that data." And it happens time and time again. We hear in the observability stack, "Hey, I love Datadog, but I can't afford it for more than a week or two." Well, that's where we come in. And we either replace Datadog for the use cases that we support, or we're auxiliary to it.
Sometimes there's an existing Grafana implementation, and then they store data in us for the long tail. That could be the scenario. So, to us, the message is around what resonates with our customers, but in the end, it's operational data, whether you want to call it observability, log analytics, security analytics, or the data lake. To us, it's just access to your data, all your data, all the time, and supporting the APIs and the tooling that you're using. And so, to me, it's the same product, but the market changes with messaging and requirements. And this is why we always felt that having a search and SQL platform is so key, because what you'll see in Elastic or OpenSearch is, "Well, I only support the Elastic API. I can't do correlations. I can't do this. I can't do that. I'm going to move it over to, say, maybe Athena, but not so much.
Maybe a Snowflake or something else."
Corey: "Well, Thomas, it's very simple. Once you learn our own purpose-built, domain-specific language, specifically for our product—well, why are you still sitting here? Go learn that thing." People aren't going to do that.
Thomas: And that's what we hear. It was funny—I won't say what the company was, a big banking company that we're talking to—we hear time and time again, "I only want to do it via the Elastic tooling," or, "I only want to do it via the BI tooling." I hear it time and time again. Both of these people are in the same company.
Corey: And that's legitimate as well because there's a bunch of pre-existing processes pointing at things, and we're not going to change 200 different applications and their data model just because you want to replace a back-end system. I also want to correct myself. I was one tab behind. This year's branding is slightly different: "Search and analyze unlimited log data in your cloud object storage." I really like the evolution on this.
Thomas: Yeah, yeah. And I love it. And what was interesting is the moving, the setting up, the doubling of your costs. I mean, we deal with some big customers that have petabytes of data; doubling your petabytes means that if your Elastic environment is costing you tens of millions and then you put it into Snowflake, that's also going to be tens of millions. And with a solution like ours, you have really cost-effective storage, right? Your cloud storage: it's secure, it's reliable, it's elastic, and you attach Chaos to get the well-known APIs that your well-known tooling can use to analyze it.
So, to us, our evolution has really been about that end viewpoint we started with early on—search and SQL is here today, and, you know, in the future we'll be coming out with more ML-type tooling. But we have two sides: we have the operational, security, observability side, and a lot of the business side that wants access to that data as well.
Maybe it's app data that they need to do analysis on for their shopping cart website, for instance.
Corey: The thing that I find curious is that the entire space has been iterating forward on trying to define observability, generally, as whatever people are already trying to sell, in many cases. And that has seemed to be a bit of a stumbling block for a lot of folks. I figured this out somewhat recently because I've built the—free for everyone to use—lasttweetinaws.com Twitter threading client.
That's deployed to 20 different AWS regions because the idea is that it should be snappy for people no matter where they happen to be on the planet, and I use it for conferences when I travel, so great, let's get ahead of it. But that also means I've got 20 different sources of logs. And given that it's an omnibus Lambda function, it's very hard to correlate that to users, or user sessions, or even figure out where it's going. The problem I've had is, "Oh, well, this seems like something I could instrument to spray logs somewhere pretty easily, but I don't want to instrument it for 15 different observability vendors. Why don't I just use otel—OpenTelemetry—and then tell that to throw whatever I care about to various vendors and do a bit of a bake-off?" The problem, of course, is that OpenTelemetry and Lambda seem to be pulling in just the absolute wrong directions. A lot.
Thomas: So, we see the same trend of otel coming out, and you know, this is another API that I'm sure we're going to go all-in on because it's getting more and more talked about. I won't say it's the standard yet, but I think it's trending that way, to all your points about needing to normalize a process. But as you mentioned, we also need to correlate across the data. And this is where, you know, there are times when search and hunting and alerting are awesome and wonderful and solve all your needs, and sometimes you need correlation.
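One low-dependency way to make logs from a multi-region Lambda correlatable, short of full OpenTelemetry instrumentation, is to emit structured JSON where every line carries the same request ID. A minimal stdlib-only sketch (the field names and the region value are illustrative choices, not any vendor's schema):

```python
import json
import uuid
from datetime import datetime, timezone

# Minimal structured-log emitter: every line carries the same request_id,
# so entries scattered across many regions' CloudWatch log groups can be
# joined back into one user session later. Field names are illustrative.
def make_logger(region: str, request_id: str = ""):
    rid = request_id or str(uuid.uuid4())

    def log(event: str, **fields) -> dict:
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "region": region,
            "request_id": rid,
            "event": event,
            **fields,
        }
        print(json.dumps(record))  # a Lambda's stdout lands in CloudWatch Logs
        return record

    return log

log = make_logger("us-east-1")
entry = log("thread_rendered", tweet_count=7)
```

Because every record is one JSON object per line, a downstream search or SQL layer can filter and join on `request_id` without a bespoke parsing pipeline.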
Imagine trying to denormalize all those logs, set up a pipeline, put it into some database, or just do a SELECT *, you know, join this to that to that, and get your answers.
And so, I think both OpenTelemetry and SQL and search all need to be played into one solution, or at least one capability, because if you're not doing that, you're creating some hodgepodge pipeline to move it around and ultimately get your questions answered. And if it takes weeks—maybe even months, depending on the scale—you may sometimes choose not to do it.
Corey: One other aspect that has always annoyed me about more or less every analytics company out there—and you folks are no exception to this—is the idea of charging per gigabyte ingested, because that inherently sets up a weird dichotomy of, well, this is costing a lot, so I should strive to log less. And that is sort of the exact opposite, not just of the direction you folks want customers to go in, but also of where customers themselves should be going. Where you diverge from an awful lot of those other companies, because of the nature of how you work, is that you don't charge them again for retention. And the fact that anything stored in ChaosSearch lives in your own S3 buckets, so you can set your own lifecycle policies and do whatever you want with that, is a phenomenal benefit, just because I've always had a dim view of short-lived retention periods around logs, especially around things like audit logs. And these days, I would consider getting rid of audit logging data and application logging data—especially if there's a correlation story—any sooner than three years to feel like borderline malpractice.
Thomas: [laugh]. We—how many times—I mean, we've heard it time and time again: "I don't have access to that data because it was too costly." No one says they don't want the data. They just can't afford the data.
And one of the key premises is that if you don't have all the data, you're at risk, particularly in security—I mean, even audits. I mean, so many times our customers ask us, you know, “Hey, what was this going on? What was that going on?” And because we can so cost-effectively monitor our own service, we can provide that information for them. And we hear this time and time again.

And retention is not a very sexy aspect, but it's so crucial. Anytime you look at problems with X solution or Y solution, it's the cost of the data. And this is something that we wanted to address head-on. And why could we make it so cost-effective—free after you ingest it? Because we were using cloud storage, and it was just a great place to land the data cost-effectively and securely.

Now, with that said, there are two types of companies I've seen. Everybody needs at least 90 days; I see that time and time again. Sure, maybe a lot of their operations happen within a day or a week, but 90 days is where it lands. But there's also a bunch of companies that need it for years, for compliance, for audit reasons.

And imagine trying to rehydrate, trying to rebuild—we have one customer—again, I won't say who—that has two petabytes of data they rehydrate when they need it. And they say it's a nightmare. And it's growing. What if you just had it always alive, always accessible? Now, as we move from search to SQL, there are use cases in the log world where they just want to pay upfront—fixed fee, this many dollars per terabyte—but as we get into the more ad hoc side of it, more and more folks are asking, “Can I pay per query?”

And so, you'll see coming out soon scenarios where we have a different pricing model. For logs, you typically want a very consistent, you know, predetermined cost structure, but in the case of security data lakes, where you want to go into the past and not really pay for something until you use it, that's going to be an option as well, coming out soon.
So, I would say you need both pricing models, but you need the data either way, right?

Corey: This episode is sponsored in part by our friends at ChaosSearch. You could run Elasticsearch or Elastic Cloud—or OpenSearch as they're calling it now—or a self-hosted ELK stack. But why? ChaosSearch gives you the same API you've come to know and tolerate, along with unlimited data retention and no data movement. Just throw your data into S3 and proceed from there as you would expect. This is great for IT operations folks, for app performance monitoring, cybersecurity. If you're using Elasticsearch, consider not running Elasticsearch. They're also available now in the AWS marketplace if you'd prefer not to go direct and have half of whatever you pay them count towards your EDP commitment. Discover what companies like Equifax, Armor Security, and Blackboard already have. To learn more, visit chaossearch.io and tell them I sent you just so you can see them facepalm, yet again.

Corey: You'd like to hope. I mean, you could always theoretically wind up just pulling what Ubiquiti apparently did—where this came out in an indictment that was unsealed against an insider—but apparently one of their employees wound up attempting to extort them—which again, that's not their fault, to be clear—but what came out was that this person then wound up setting the CloudTrail audit log retention to one day, so there were no logs available. And then, as a customer, I got an email from them saying there was no evidence that any customer data had been accessed. I mean, yeah, if you want, like, the world's most horrifyingly devilish best practice, go ahead and set your log retention to nothing, and then you too can confidently state that you have no evidence of anything untoward happening.

Contrast this with what AWS did when there was a vulnerability reported in AWS Glue.
Their analysis of it stated explicitly, “We have looked at our audit logs going back to the launch of the service and have conclusively proven that the only time this has ever happened was when the security researcher who reported the vulnerability to us did it, in their own account.” Yeah, one of those statements breeds an awful lot of confidence. The other one makes me think that you're basically being run by clowns.

Thomas: You know what? CloudTrail is such a crucial service—particularly on Amazon, right—because of that; we see it time and time again. And the challenge of CloudTrail is that storing it for a long period of time is costly, and the messiness, the JSON complexity—every company struggles with it. And this is where we're unique—in how we represent information, we can model it in all its permutations—but the key thing is we can store it forever, or rather, you can store it forever. And time and time again, CloudTrail is a key aspect to correlate—to your question—correlate this happened to that. Or to do an audit on something that happened two years ago.

And I've got to tell you, to all our listeners out there, please store your CloudTrail data—ideally in ChaosSearch—because you're going to need it. Everyone always needs that. And I know it's hard. CloudTrail data is messy, nested JSON data that can explode; I get it. You know, there are tricks to do it manually, although quite painful. But every one of our customers is indexing CloudTrail with us because of stories like that, as well as for correlation with what their application log data is saying.

Corey: I really have never regretted having extra logs lying around, especially with, to be very direct, the almost ridiculously inexpensive storage classes that S3 offers, especially since you can wind up having some of the offline retrieval stuff as part of a lifecycle policy now with intelligent tiering.
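Thomas's point about CloudTrail's messy nested JSON is concrete: much of the pain is simply flattening the nesting before you can index or query it. A toy sketch of that first step (the record below is a trimmed, hypothetical CloudTrail-ish event, not a real one):

```python
# Flatten a nested JSON object into dotted keys -- the usual first step
# before loading CloudTrail-style records into a flat table or index.
def flatten(obj, prefix=""):
    flat = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))   # recurse into nested objects
        else:
            flat[path] = value
    return flat

# Trimmed, hypothetical CloudTrail-ish event, for illustration only.
event = {
    "eventName": "GetObject",
    "userIdentity": {"type": "IAMUser", "userName": "alice"},
    "requestParameters": {"bucketName": "audit-logs"},
}

print(flatten(event))
# {'eventName': 'GetObject', 'userIdentity.type': 'IAMUser',
#  'userIdentity.userName': 'alice', 'requestParameters.bucketName': 'audit-logs'}
```

Real CloudTrail records add arrays and optional fields on top of this, which is exactly the "can explode" problem Thomas describes.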
I'm a big believer in—again—Glacier Deep Archive, at the cost of roughly $1,000 a month per petabyte, with admittedly up to 12 hours of recall latency. But that's still, for audit logs and stuff like that—why would I ever want to delete things ever again?

Thomas: You're exactly right. And we have a bunch of customers that do exactly that. And we automate the entire process with you. Obviously, it's your S3 account, but we can manage across those tiers. And it gets to a point where, why wouldn't you? It's so cost-effective.

And in the moments where you don't have that information, you're at risk, whether it's internal audits or you're providing a service for somebody—it's critical data. With CloudTrail, it's critical data. And if you're not storing it, and if you're not making it accessible through some tool like an Elastic API or Chaos, it's not worth it. I think, to your point about your story, it's epically not worth it.

Corey: It's really not. It's one of those areas where that is not a place to overly cost-optimize. This is—I mean, we talked earlier about my business and perceptions of conflict of interest. There's a reason that I only ever charge fixed-fee and not percentage-of-savings or whatnot, because at some point, I'll be placed in a position of having to say nonsense like, “Do you really need all of these backups?” That doesn't make sense at that point.

I do point out things like: you have hourly disk snapshots of your entire web fleet, with no irreplaceable data on them, dating back five years. Maybe cleaning some of that up might be the right answer. The happy answer is somewhere in between those two, and it's a business decision around exactly where that line lies. But I'm a believer in never regretting having kept logs almost into perpetuity. Until and unless I start getting more or less pillaged by some particularly rapacious vendor that's, oh yeah, we're going to charge you not just for ingest, but also for retention.
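Corey's $1,000-a-month figure checks out as rough back-of-the-envelope arithmetic against Deep Archive's published per-GB price (about $0.00099/GB-month in us-east-1 at the time; prices vary by region, so treat this as a sanity check, not a quote):

```python
# Back-of-the-envelope: S3 Glacier Deep Archive at ~$0.00099/GB-month
# (us-east-1 list price; varies by region and over time).
price_per_gb_month = 0.00099
petabyte_gb = 1024 ** 2          # 1 PiB = 1,048,576 GB

monthly_cost = price_per_gb_month * petabyte_gb
print(f"${monthly_cost:,.0f}/month per petabyte")   # ~$1,038/month
```

At that rate, a decade of petabyte-scale audit logs costs about as much as one engineer-week, which is the economic core of the "why delete anything" argument.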
And for however long you want to keep it, we're going to treat it like we're carving it into platinum tablets. No. Stop that.

Thomas: [laugh]. Well, you know, it's funny, when we first came out, we were hearing stories that vendors were telling customers why they didn't need their data—to your point, like, “Oh, you don't need that,” or, “Don't worry about that.” And time and time again, it turned out they did need it. You know, “Oh, don't index all your data, because you just know what you know.” And the problem is that life doesn't work out that way; business doesn't work out that way.

And now what I see in the market is everyone's got tiering scenarios, but the accessibility of that data takes some time. And these are all workarounds and bandaids to what fundamentally is: if you design an architecture and a solution in such a way, maybe it's just always hot; maybe it's just always available. Now, we talked about tiering off to something very, very cheap, where it's virtually free. But you know, the alternatives—whether it's UltraWarm or this tiering that takes hours to rehydrate—hours—no one wants to live in that world, right? They just want to say, “Hey, on this date in this year, what was happening? Let me go look, and I want to do it now.”

And it has to be part of the exact same system I was using already. I shouldn't have to call up IT to say, “Hey, can you rehydrate this?” Or, “Can I go back to the archive and look at it?” Although, I guess we were talking about archiving with your website, viewing it from days of old—I think that's kind of funny. I should do that more often myself.

Corey: I really wish that more companies would put themselves in the customers' shoes. And for what it's worth, periodically, I've spoken to a number of very happy ChaosSearch customers. I haven't spoken to any angry ones yet, which tells me you're either terrific at crisis comms, or the product itself functions as intended.
So, either way, excellent job. Now, which team of yours is doing that excellent job, of course, is going to depend on which one of those outcomes it is. But I'm pretty good at ferreting out stories on those things.

Thomas: Well, you know, it's funny: being a company that's driven by customer asks, it's so easy to build what the customer wants. And so, we really take every input on what the customer needs and wants—now, there are cases where we replace Splunk. They're the Cadillac; they have all the bells and whistles, and there are times when we'll say, “Listen, that's not what we're going to do. We're going to solve these problems in this vector.” But they always keep on asking, right? You know, “I want this, I want that.”

But most of the feedback we get is exactly what we should be building. People need their answers and how they get them. It's really helped us grow as a company, grow as a product. And I will say, ever since we went live now many, many years ago, all of our roadmap—other than our North Star of transforming cloud storage into a search/SQL big data analytics database—has been customer-driven, market- and customer-driven: what our customers are asking for, whether it's observability and integrating with Grafana and Kibana or, you know, security data lakes. It's just a huge theme that we're going to make sure we provide a solution that meets those needs.

So, I love when customers ask for stuff, because the product just gets better. I mean, yeah, sometimes you have to have a thick skin, like, “Why don't you have this?” Or, “Why don't you have that?” Or we have customers—and not to complain about customers; I love our customers—but they sometimes do crazy things that we have to help them un-crazy-ify. [laugh]. I'll leave it at that. But customers do silly things and you have to help them out. I hope they remember that, so when they ask for a feature that maybe takes a month to make available, they're patient with us.

Corey: We sure can hope.
I really want to thank you for taking so much time to once again suffer all of my criticisms, slings and arrows, blithe market observations, et cetera, et cetera. If people want to learn more, where's the best place to find you?

Thomas: Well, of course, chaossearch.io. There's tons of material about what we do: use cases, case studies; we just published a big case study with Equifax recently. We're in Gartner and a whole bunch of Hype Cycles that you can pull down to see how we fit in the market.

Reach out to us. You can set up a trial, kick the tires—again, on your own cloud storage, like S3. And ChaosSearch is on Twitter; we have a Facebook; we have all the classic social medias. But our website is really where all the good content is, whether you want to learn about the architecture and how we've done it, or use cases—for people who want to say, “Hey, I have a problem. How do you solve it? How do I learn more?”

Corey: And we will, of course, put links to that in the show notes. For my own purposes, you could also just search for the term ChaosSearch in your email inbox and find one of their sponsored ads in my newsletter and click that link, but that's a little self-serving as we do it. I'm kidding. I'm kidding. There's no need to do that. That is not how we ever evaluate these things. But it is funny to tell that story. Thomas, thank you so much for your time. As always, it's appreciated.

Thomas: Corey Quinn, I truly enjoyed this time. And I look forward to the upcoming re:Invent. I'm assuming it's going to be live like last year, and this is where we have a lot of fun with the community.

Corey: Oh, I have no doubt that we're about to go through that particular path very soon. Thank you. It's been an absolute pleasure.

Thomas: Thank you.

Corey: Thomas Hazel, CTO and Founder of ChaosSearch. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that I will then set to have a retention period of one day, and then go on to claim that I have received no negative feedback.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
About Sheeri

After almost two decades as a database administrator and award-winning thought leader, Sheeri Cabral pivoted to technical product management. Her superpower of “new customer” empathy informs her presentations and explanations. Sheeri has developed unique insights into working together and planning, having survived numerous reorganizations, “best practices”, and efficiency models. Her experience is the result of having worked at everything from scrappy startups such as Guardium—later bought by IBM—to influential tech companies like Mozilla and MongoDB, to large established organizations like Salesforce.

Links Referenced:
Collibra: https://www.collibra.com
WildAid GitHub: https://github.com/wildaid
Twitter: https://twitter.com/sheeri
Personal Blog: https://sheeri.org

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored by our friends at Fortinet. Fortinet's partnership with AWS is a better-together combination that ensures your workloads on AWS are protected by best-in-class security solutions powered by comprehensive threat intelligence and more than 20 years of cybersecurity experience. Integrations with key AWS services simplify security management, ensure full visibility across environments, and provide broad protection across your workloads and applications. Visit them at AWS re:Inforce to see the latest trends in cybersecurity on July 25-26 at the Boston Convention Center. Just go over to the Fortinet booth and tell them Corey Quinn sent you and watch for the flinch. My thanks again to my friends at Fortinet.

Corey: Let's face it, on-call firefighting at 2am is stressful!
So there's good news and there's bad news. The bad news is that you probably can't prevent incidents from happening, but the good news is that incident.io makes incidents less stressful and a lot more valuable. incident.io is a Slack-native incident management platform that allows you to automate incident processes, focus on fixing the issues, and learn from incident insights to improve site reliability and fix your vulnerabilities. Try incident.io, recover faster, and sleep more.

Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. My guest today is Sheeri Cabral, who's a Senior Product Manager of ETL lineage at Collibra. And that is an awful lot of words that I understand approximately none of, except maybe “manager.” But we'll get there. The origin story has very little to do with that.

I was following Sheeri on Twitter for a long time and really enjoyed the conversations that we had back and forth. And over time, I started to realize that there were a lot of things that didn't necessarily line up. And one of the more interesting and burning questions I had is: what is it you do, exactly? Because you're all over the map. First, thank you for taking the time to speak with me today. And what is it you'd say it is you do here? To quote a somewhat bizarre and aged movie now.

Sheeri: Well, since your listeners are technical, I do like to match what I say with the audience. First of all, hi. Thanks for having me. I'm Sheeri Cabral. I am a product manager for technical and ETL tools, and I can break that down for this technical audience. If it's not a technical audience—like if I'm at a party and people ask what I do—I'll say, “I'm a product manager for a technical data tool.” And if they ask what a product manager does, I'll say I help make sure that, you know, we deliver a product the customer wants.
So, you know, ETL tools are tools that extract, transform, and load your data from one place to another.

Corey: Like AWS Glue, but for some of them, reportedly, you don't have to pay AWS by the gigabyte-second.

Sheeri: Correct. Correct. We actually have an AWS Glue technical lineage tool in beta right now. So, technical lineage is how data flows from one place to another. So, when you're extracting, possibly transforming, and loading your data from one place to another, you're moving it around; you want to see where it goes. Why do you want to see where it goes? Glad you asked. You didn't really ask. Do you care? Do you want to know why it's important?

Corey: Oh, I absolutely do. Because—again, people who are, like, “What do you do?” “Oh, it's boring, and you won't care.” It's like when people aren't even excited themselves about what they work on; it's always a strange dynamic. There's a sense that people aren't really invested in what they do.

I'm not saying you have to have this overwhelming passion and do this in your spare time, necessarily, but you should, at least in an ideal world, like what you do enough to light up a bit when you talk about it. You very clearly do. I'm not wanting to stop you. Please continue.

Sheeri: I do. I love data and I love helping people. So, technical lineage does a few things. For example, a DBA—and I used to be a DBA—can use technical lineage to predict the impact of a schema update or migration, right? So, if I'm going to change the name of this column, what uses it downstream? What's going to be affected? What scripts do I need to change? Because if the name changes, then I need to not get errors everywhere.

And from a data governance perspective—and Collibra is a data governance tool—it helps organizations see, if you have private data in a source, does it remain private throughout its journey, right?
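The impact analysis Sheeri describes is, at heart, a reachability query over a lineage graph: start at the column you want to rename and walk everything fed from it. A minimal sketch (the column names and graph here are made up for illustration; a real lineage model like Collibra's is far richer):

```python
from collections import deque

# Hypothetical lineage graph: each column maps to the columns fed from it.
lineage = {
    "crm.users.email":     ["staging.users.email"],
    "staging.users.email": ["warehouse.dim_user.email_address"],
    "warehouse.dim_user.email_address": ["reports.weekly_signups"],
}

def downstream(column):
    """Everything reachable downstream of `column` -- i.e., what a rename breaks."""
    seen, queue = set(), deque([column])
    while queue:
        for child in lineage.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)   # breadth-first walk of the lineage edges
    return seen

print(downstream("crm.users.email"))
```

The same traversal answers the governance question too: if the source column holds PII, every node it reaches inherits that classification.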
So, you can take a column like email address or government ID number and see where it's used down the line, right? GDPR compliance, CCPA compliance. The CCPA is a little newer; people might not know that acronym. It's the California Consumer Privacy Act. I forget what GDPR stands for, but it's another privacy act.

It also can help the business see where data comes from, so if you have technical lineage all the way down to your reports, then you know whether or not you can trust the data, right? So, you have a report and it shows salary ranges for job titles. Where did the data come from? Did it come from a survey? Did it come from job sites? Or did it come from a government source like the IRS, right? So, now you know what you get to trust the most.

Corey: Wait, you can do that without a blockchain? I kid, I kid, I kid. Please don't make me talk about blockchains. No, it's important. The provenance of data—being able to establish an almost chain-of-custody-style approach for a lot of these things—is extraordinarily important.

Sheeri: Yep.

Corey: I was always a little hazy on the whole idea of ETL until I started, you know, working with large-volume AWS bills. And it turns out that, “Well, why do you have to wind up moving and transforming all of these things?” “Oh, because in its raw form, it's complete nonsense. That's why. Thank you for asking.” It becomes a problem—

Sheeri: [laugh]. Oh, I thought you were going to say because AWS has 14 different products for things, so you have to move it from one product to the other to use the features.

Corey: And two of them are good. It's a wild experience.

Sheeri: [laugh].

Corey: But this is also something of a new career for you. You were a DBA for a long time. You're also incredibly engaging, you have a personality, you're extraordinarily creative, and that—if I can slander an entire profession for a second—does not feel like a common DBA trait. It's right up there with an overly creative accountant.
When your accountant does stand-up comedy, you're watching and you're laughing and thinking, “I am going to federal prison.” It's one of those weird things that doesn't quite gel, if we're speaking purely in terms of stereotypes. What has your career been like?

Sheeri: I was a nerd growing up. So, when I say I have a personality, like, my personality is very nerdish. And I get along with other nerdy people and we have a lot of fun. But when I was younger—like, when I was, I don't know, seven or eight—one of the things I really loved to do is I had a penny collection—you know, like you do—and I loved to sort it by date. So, in the States anyway, we have these pennies that have the date they were minted on them. And so, I would organize them—and I probably had, like, five bucks' worth of pennies.

So, you're talking about 500 pennies, and I would sort them and I'd be like, “Oh, this is 1969. This was 1971.” And then when I was done, I wanted to sort things more, so I would start to, like, sort them by how shiny the pennies were. So, I think that from an early age, it was clear that I wanted to be a DBA, from that sorting of my data and ordering it, but I never really had an “Oh, I want to be this when I grow up.” I kind of had a stint when I was in, like, middle school where I was like, maybe I'll be a creative writer, and I wasn't as creative a writer as I wanted to be, so I was like, “Ah, whatever.”

And I ended up actually coming to computer science completely through random circumstance. I wanted to do neuroscience because I thought it was completely fascinating how the brain works and how, like, you and I are, like, 99.999—we're, like, five-nines the same except for, like, a couple of genetic whatevers. But, like, how our brain wiring—right, how the neurons, how the electricity flows through it—

Corey: Yeah, it feels like I want to store a whole bunch of data, that's okay. I'll remember it. I'll keep it in my head.
And you're, like, rolling up the sleeves and grabbing, like, the combination software package off the shelf and a scalpel. Like, “Not yet, but you're about to.” You're right, there is an interesting point of commonality on this. It comes down to almost data organization and the—

Sheeri: Yeah.

Corey: —relationship between data nodes, if that's a fair assessment.

Sheeri: Yeah. Well, so what happened was, so I went to university, and in order to take introductory neuroscience, I had to take, like, chemistry, organic chemistry, biology—I was basically doing a pre-med track. And so, in the beginning of my junior year, I went to go take introductory neuroscience and I got a D-minus. And a D-minus doesn't even count for the major. And I'm like, “Well, I want to graduate in three semesters.”

And I had all my requirements done, except for the pesky little major thing. So, I was already starting to take, like, basic computer science courses, and so I kind of went whole-hog, all-in, did four or five computer science courses a semester, and got my degree in computer science. Because it was like math, so it kind of came a little easy to me. So, taking, you know, logic courses and, you know, linear algebra courses was like, “Yeah, that's great.” And then it was the year 2000 when I got my bachelor's, the turn of the century.

And my university offered a fifth-year master's degree program. And I said, I don't know who's going to look at me and say—conscious bias, unconscious bias—“She's a woman; she can't do computer science,” so, like, let me just get this master's degree. I, like, filled out a one-page form; I didn't have to take a GRE. And it was the year 2000. You were around back then.

You know what it was like. They were handing jobs out like candy. I literally had a friend who was like, “My company”—that he founded.
He's like, just come—you know, it's a Monday in May—“Just start. Bring your resume the first day and we'll put it on file.” And I was like, no, no, I have this great opportunity to get a master's degree in one year at 25% off the cost, because I got a tuition reduction or whatever for being in the program. I was like, “What could possibly go wrong in one year?”

And what happened was, his company didn't exist the next year, and, like, everyone was in a hiring freeze in 2001. So, it was the best decision I ever made without really knowing, because I would have had a job for six months, then been laid off with everyone else at the end of 2000, and… and that's it. So, that's how I became a DBA: I, you know, got a master's degree in computer science, really wanted to use databases. There weren't any database jobs in 2001, but I did get a job as a sysadmin, which we now call SREs.

Corey: Well, for some of the younger folks in the audience, I do want to call out the fact that, regardless of how they think we all rode dinosaurs to school, databases did absolutely exist back in that era. There's a reason that Oracle is as large a company as it is. And it's not because people just love doing business with them; their technology was head and shoulders above everything else for a long time, to the point where people worked with them in spite of their reputation, not because of it.

These days, it seems like in the database universe, you have an explosion of different options and different ways that are great at different things. The best, of course, is Route 53 or other DNS TXT records. Everything else is competing for second place on that. But no matter what it is you're after, there are options available. This was not the case back then. You had a few options, all of them with serious drawbacks, but you had to pick your poison.

Sheeri: Yeah. In fact, I learned on Postgres in university because, you know, that was freely available.
And you know, you'd be like, “Well, why not MySQL? Isn't that kind of easier to learn?” It's like, yeah, but I went to college from '96 to 2001. MySQL 1.0 or whatever was released in '95. By the time I graduated, it was six years old.

Corey: And academia is not usually the early adopter of a lot of emerging technologies like that. That's not a dig on them, either, because otherwise you wind up with a major that doesn't exist by the time the first crop of students graduates.

Sheeri: Right. And they didn't have, you know, transactions. They didn't have—they barely had replication, you know? So, it wasn't a full-fledged database at the time. And then I became a MySQL DBA. But yeah, as a systems administrator, you know, we did websites, right? We did what—are they called web administrators now? What are they called? Web admins? Webmasters?

Corey: Web admins, I think that they became subsumed into sysadmins, by and large, and now we call them DevOps, or SRE, which means the exact same thing except you get paid 60% more and your primary job is arguing about which one of those you're not.

Sheeri: Right. Right. Like, we were still separated from network operations, but database stuff and, you know, website stuff—it's stuff we all did, back when your [laugh] webmail was Horde, based on PHP, and you had a database behind it. And yeah, it was fun times.

Corey: I worked at a whole bunch of companies in that era. And that's basically where I formed my early opinion of a bunch of DBA-leaning sysadmins. Like, the DBA in a lot of these companies—it was, I don't want to say toxic, but there's a reason that if I were to say, “I'm writing a memoir about a career track in tech called The Legend of Surly McBastard,” people are going to say, “Oh, is it about the DBA?” There's a reason behind this.
It always felt like there was a sense of elitism and a sense of, “Well, that's not my job, so you do your job, but if anything goes even slightly wrong, it's certainly not my fault.” And to be fair, all of these fields have evolved significantly since then, but a lot of those biases that started early in our careers are difficult to shake, particularly when they're unconscious.

Sheeri: They are. I never ran into that person. Like, I never ran into a developer who treated me poorly because the last DBA was a jerk or whatever, but I heard a lot of stories, especially around things like granting access. In fact, I remember my first job as an actual DBA—and not as a sysadmin who also did the DBA stuff—was at an online gay dating site, and the CTO rage-quit. Literally yelled, stormed out of the office, slammed the door, and never came back.

And a couple of weeks later, you know, we found out that the customer service guys, who were in-house—and they were all guys, so I say guys, although we also referred to them as ladies because it was an online gay dating site.

Corey: Gals works well too, in those scenarios. “Oh, guys is unisex.” “Cool. So's ‘gals' by that theory. So gals, how we doing?” And people get very offended by that, and suddenly, yeah, maybe ‘folks' is not a terrible direction to go in. I digress. Please continue.

Sheeri: When they hired me, they were like, “Are you sure you're okay with this?” I'm like, “I get it. There are, like, half-naked men posters on the wall. That's fine.” But they would call—they'd be, like, “Ladies, let's go to our meeting.” And I'm like, “Do you want me also?” I had to ask, because that was when “ladies” actually might not have included me, because they meant, you know.

Corey: I did a brief stint myself as the director of TechOps at Grindr. That was a wild experience in a variety of different ways.

Sheeri: Yeah.

Corey: It's over a decade ago, but it was still this… it was a very interesting experience in a bunch of ways.
And still, to this day, it remains the single biggest source of InfoSec nightmares that kept me awake at night. Just because when I'm working at a bank—which I've also done—it's only money, which sounds ridiculous to say, especially if you're in a regulated profession, but here in reality, where I'm talking about it, I was instead dealing with: cool, if this data leaks, people will die. Most of what I do is not life or death, but that was, and that weighed very heavily on me.

Sheeri: Yeah, there's a reason I don't work for a bank or a hospital. You know, I make mistakes. I'm human, right?

Corey: There's a reason I don't work on databases, for that exact same reason. Please, continue.

Sheeri: Yeah. So, the CTO rage-quit. A couple of weeks later, the head of customer service comes to me and is like, “Can we have his spot as an admin for customer service?” And I'm like, “What do you mean?” He's like, “Well, he told us we had, like, ten slots of permission, and he was one of them, so we could have, like, nine people.”

And, like, I went and looked, and they put permissions in the htaccess file. So, this former CTO had just wielded his power to be like, “Nope, can't do that. Sorry, limitations.” When there weren't any. I'm like, “You could have a hundred. You want every customer service person to be an admin? Whatever. Here you go.” So, I did hear stories about that. And yeah, that's not the kind of DBA I was.

Corey: No, it's—the more senior you get, the less you want to have admin rights on things. When I leave a job, like, the number one thing I want you to do is revoke my credentials. Not—

Sheeri: Please.

Corey: —because I'm going to do anything nefarious; because I don't want to get blamed for it. Because we have a long-standing tradition in tech at a lot of places of, “Okay, something just broke. Whose fault is it? Well, who's the most recent person to leave the company?
Let's blame them because they're not here to refute the character assassination, and they're not going to be angling for a raise here; the rest of us are, so let's see who we can throw under the bus that can't defend themselves.” Never a great plan.
Sheeri: Yeah. So yeah, I mean, you know, my theory in life is I like helping. So, I liked helping developers as a DBA. I would often run workshops to be like, here's how to do an EXPLAIN and read your explain plan and see if you have indexes, and why isn't the database doing what you think it's supposed to do? And so, I like helping customers as a product manager, right? So…
Corey: I am very interested in watching how people start drifting in a variety of different directions. You're doing product management now, and it's an ETL lineage product; it is not something that is directly aligned with your previous positioning in the market. And those career transitions are always very interesting to me, because there's often a mistaken belief by people who realize mid-career that they're doing something they don't want to do. They want to go work in a different field, and there's this pervasive belief that, “Oh, time for me to go back to square one and take an entry-level job.” No, you have a career. You have experience. Find the orthogonal move.
Often, if that's challenging because the fields are too far apart, you find the half-step job that blends the thing you do now with something a lot closer, and then a year or two later, you complete the transition into that thing. But starting over from scratch? Why would you do that? I can't quite wrap my head around jumping off the corporate ladder to go climb another one. You very clearly have done a lateral move in that direction, into a career field that is surprisingly distant, at least in my view. How'd that happen?
Sheeri: Yeah, so after being on call for 18 years or so, [laugh] I decided—no, I had a baby, actually. I had a baby. He was great. And then I had another one.
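The workshop exercise Sheeri describes—run an EXPLAIN, read the plan, and check whether an index is actually being used—can be sketched in a few lines. This is a hedged illustration only: it uses SQLite's EXPLAIN QUERY PLAN because SQLite ships with Python and needs no running server; MySQL's EXPLAIN output looks different, and the table here is invented.

```python
import sqlite3

# Toy version of the "run an EXPLAIN, check for indexes" workshop exercise.
# SQLite is used only because it ships with Python; MySQL's EXPLAIN differs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])

def plan(query):
    # EXPLAIN QUERY PLAN rows end with a string describing the access path.
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + query)]

before = plan("SELECT * FROM users WHERE email = 'user5@example.com'")
print(before)  # a full table SCAN (exact wording varies by SQLite version)

conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan("SELECT * FROM users WHERE email = 'user5@example.com'")
print(after)   # the same query now SEARCHes using idx_users_email
```

The point of the exercise is the before/after diff: the same query goes from scanning every row to a single index search once the index exists.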
But after the first baby, I went back to work, and I was on call again. And you know, I had a good maternity leave or whatever, but you know, I had a baby who was six, eight months old, and I was getting paged.
And I was like, you know, this is more exhausting than having a newborn. Like, having a baby who sleeps three hours at a time, like, in three-hour chunks, was less exhausting than being on call. Because when you have a baby, first of all, it's very rare that they wake up crying in the middle of the night and it's an emergency, right? Like, they have to go to the hospital, right? Very rare. Thankfully, I never had to do it.
But basically, like, as much as I had no brain cells, and sometimes I couldn't even go through this list, right: they need to be fed; they need to be comforted; they're tired, and they're crying because they're tired, right, you can't make them go to sleep, but you're like, just go to sleep—what is it—or their diaper needs changing, right? There's, like, four things. When you get that beep of that pager in the middle of the night, it could be anything. It could be logs filling up disk space, and you're like, “Alright, I'll rotate the logs and be done with it.” You know? It could be something you need snoozed.
Corey: “Issue closed. Status: I no longer give a shit what it is.” At some point, it's one of those things where—
Sheeri: Replication lag.
Corey: Right.
Sheeri: Not actionable.
Corey: Don't get me started down that particular path. Yeah. This is the area where DBAs and my sysadmin roots started to overlap a bit. Like, the DBA was great at data analysis, the table structure, and the rest, but the backups of the thing, of course, fell to the sysadmin group. And replication lag, it's, “Okay. It's doing some work in the middle of the night; that's normal, and the network is fine. And why are you waking me up with things that are not actionable?
Stop it.” I'm yelling at the computer at that point, not the person—
Sheeri: Right, right.
Corey: —to be very clear. But at some point, it's: don't wake me up with trivial nonsense. If I'm getting woken up in the middle of the night, it had better be a disaster. My entire business now is built around a problem that's business-hours only for that explicit reason. It's the not wanting to deal with that. And I don't envy that. But product management. That's a strange one.
Sheeri: Yeah, so what happened was, I was unhappy at my job at the time, and I was like, “I need a new job.” So, I went to, like, the MySQL Slack instance, because that was 2018, 2019. Very end of 2018, beginning of 2019. And I said, “I need something new.” Like, maybe a data architect, or maybe, like, a data analyst, or data scientist, which was pretty cool.
And I was looking at data scientist jobs, and I was an expert MySQL DBA, and it took a long time for me to be able to say, “I'm an expert,” without feeling like, oh, you're just ballooning yourself up. And I was like, “No, I'm literally a world-renowned expert DBA.” Like, I just have to say it and get comfortable with it. And so, you know, I wasn't going to be making a junior data scientist's salary. [laugh].
I am the sole breadwinner for my household, so at that point, I had one kid and a husband, and I was like, how do I support this family on a junior data scientist's salary when I live in the city of Boston? So, I needed something that could pay a little bit more. And a former, I won't even say coworker, but colleague in the MySQL world—because it was the MySQL Slack, after all—said, “I think you should come to MongoDB, be a product manager like me.”
Corey: This episode is sponsored in part by Honeycomb. When production is running slow, it's hard to know where problems originate. Is it your application code, users, or the underlying systems? I've got five bucks on DNS, personally.
Why scroll through endless dashboards while dealing with alert floods, going from tool to tool to tool that you employ, guessing at which puzzle pieces matter? Context switching and tool sprawl are slowly killing both your team and your business. You should care more about one of those than the other; which one is up to you. Drop the separate pillars and enter a world of getting one unified understanding of the one thing driving your business: production. With Honeycomb, you guess less and know more. Try it for free at honeycomb.io/screaminginthecloud. Observability: it's more than just hipster monitoring.
Corey: If I've ever said, “Hey, you should come work with me and do anything like me,” people would have the blood drain from their face. And be like, “What did you just say to me? That's terrible.” Yeah, it turns out that I am very hard to explain slash predict, in some ways. It's always fun. It's always wild to go down that particular path, but, you know, here we are.
Sheeri: Yeah. But I had the same question everybody else does, which was, what's a product manager? What does a product manager do? And he gave me a list of things a product manager does, and there was some stuff that I had the skills for, like, you have to talk to customers and listen to them.
Well, I've done consulting. I could get yelled at; that's fine. You can tell me things are terrible and I have to fix it. I've done that. No problem with that. Then there are things like, you have to give presentations about how features work. Okay, I can do that. I've done presentations. You know, I started the Boston MySQL Meetup group and ran it for ten years, until I had a kid and foisted it off on somebody else.
And then the things that I didn't have the skills in, like running a beta program, were like, “Ooh, that sounds fascinating. Tell me more.” So, I was like, “Yeah, let's do it.” And I talked to some folks; they were looking for a technical product manager for MongoDB's sharding product.
And they had been looking for someone, like, insanely technical for a while, and they found me; I'm insanely technical.
And so, that was great. And so, for a year, I did that at MongoDB. One of the nice things about them is that they invest in people, right? So, my manager left, and the team was like, we really can't support someone who doesn't yet have the product management skills that we need, because, you know, I wasn't a master in a year, believe it or not. And so, they were like, “Why don't you find another department?” I was like, “Okay.”
And I ended up finding a place in engineering communications, doing, like, you know, some keynote demos, doing some other projects and stuff. And then after—that was kind of a year-long project, and after that ended, I ended up doing product management for developer relations at MongoDB. Also, this was during the pandemic, right? So, this is 2019 until '21; beginning of 2019 to end of 2020, so it was, you know, three full years. You know, I kind of woke up from the pandemic fog and I was like, “What am I doing? Do I really want to be a content product manager?” And I was like, “I want to get back to databases.”
One of the interesting things I learned, actually, was in looking for a job, because I did it a couple of times at MongoDB, since I changed departments and I was also looking externally when I did that. I had the idea when I became a product manager, I was like, “This is great, because now I'm a product manager for databases, and so I'm kind of leveraging that database skill, and then I'll learn the product manager stuff. And then I can be a product manager for any technical product, right?”
Corey: I like the idea. On some level, it feels like the product managers likeliest to succeed at least have a grounding or baseline in the area that they're in. This gets into the age-old debate of how important industry-specific experience is. Very often you'll see a bunch of job ads just put that in as a matter of course.
And for some roles, yeah, it's extremely important.
For other roles, it isn't—for example, I don't know, hypothetically, you're looking for someone to fix the AWS bill: it doesn't necessarily matter whether you're a services company, a product company, or a VC-backed company whose primary output is losing money, because it's a bounded problem space, and it does not transform much from company to company. Same story with sysadmin types, to be very direct. But the product stuff does seem to get into that industry-specific territory.
Sheeri: Yeah, and especially with tech stuff, you have to understand what your customer is saying when they're saying, “I have a problem doing X and Y,” right? The interesting part of my folly in that was that part of the time that I was looking was during the pandemic, when, you know, everyone was like, “Oh, my God, it's a seller's market. If you're looking for a job, employers are chomping at the bit for you.” And I had trouble finding something, because so many people were also looking for jobs, so that if I went to look for something, for example, as a storage product manager, right—now, databases and storage solutions have a lot in common; databases are storage solutions, in fact, and file systems and databases have much in common—all that they needed was one person with file system experience who had more experience than I did in storage solutions, right? And they were going to choose them over me. So, it was an interesting kind of wake-up call for me that, like, yeah, probably data and databases are going to be my niche. And that's okay, because that is literally why they pay me the literal big bucks. If I'm going to go into a niche where I don't have 20 years of experience, then they shouldn't pay me as big a bucks, right?
Corey: Yeah, depending on what you're doing, sure. I don't necessarily believe in the idea that, well, you're new to this particular type of role, so we're going to basically pay you a lot less.
From my perspective, it's always been: there's a value in having a person in a role. The value to the company is X, and, “Well, I have an excuse now to pay you less for that,” has never resonated with me. If the value you add isn't worth what the stated rate for the position is, you are probably not going to find success in that role, and the role has to change. That has always been my baseline operating philosophy. Not to yell at people on this, but I am very tired of watching companies more or less dunk on people from a position of power.
Sheeri: Yeah. And I mean, you can even take the power out of that and take location-based pay. And yes, I understand the cost of living is different in different places, but why do people get paid differently if the value is the same? Like, if I want to get a promotion, right, my company is going to be like, “Well, show me how you've added value. And we only pay for your value. You don't just automatically get promoted after seven years, right? You have to show the value,” and whatever. Which is, I believe, correct, right?
And yet, there are seniority things; there are this-many-years-of-experience requirements. And you know, there's the old caveat of: do you have ten years of experience, or do you have two years of experience five times?
Corey: That is the big problem: there has to be a sense of movement that pushes people forward. You're not the first person that I've had on the show and talked to about a 20-year career. But often, I do wind up talking to folks as I move through the world where they basically have one year of experience repeated 20 times. And as the industry continues to evolve and move on, and skill sets don't keep current, in some cases it feels like they have lost touch, on some level.
And they're talking about the world that was, and still is in some circles, but it's a market in long-term decline, as opposed to keeping abreast of what is functionally a booming industry.
Sheeri: Their skills have depreciated because they haven't learned more skills.
Corey: Yeah. Tech across the board is a field where I feel like you have to constantly be learning. And there's a bit of an evolve-or-die dinosaur approach. And I do have some fallbacks on this. If I ever decide I am tired of learning and keeping up with AWS, all I have to do is go and work in an environment that uses GovCloud, because that's, like, AWS five years ago.
And that buys me five years to find something else to be doing, until GovCloud catches up with the modern day of when I made that decision. That's a little insulting and also very accurate for those who have found themselves in that environment. But I digress.
Sheeri: No, and I find it, too, with myself. Like, I got to the point with MySQL where I was like, okay, great. I know MySQL back and forth. Do I want to learn all this other stuff? Literally just today, I was looking at my DMs on Twitter, and somebody DMed me in May, saying, “Hi, ma'am. I am a DBA and how can I use below service: Lambda, Step Functions, DynamoDB, AWS Session Manager, and CloudWatch?”
And I was like, “You know, I don't know. I have not ever used any of those technologies. And I haven't evolved my DBA skills because it's been, you know, six years since I was a DBA.” No, six years? Four or five? I can't do math.
Corey: Yeah. Which you'd think would be a limiting factor for a DBA, but apparently not. One last question that [laugh] I want to ask you before we wind up calling this a show. You've done an awful lot across the board. As you look at all of it, what is it you would say that you're the most proud of?
Sheeri: Oh, great question. What I'm most proud of is my work with WildAid.
So, when I was at MongoDB—I referenced a job with engineering communications—they hired me to be a product manager because they wanted to do a collaboration with a not-for-profit and make a reference application. So, make an application using MongoDB technology, and make it something that was going to be used, but that people can also see. So, we made this open-source project called o-fish.
And you know, we can give GitHub links: it's github.com/wildaid, and that's the organization's GitHub, which we created, so it only has the o-fish projects in it. But it is a mobile and web app for governments who patrol waters—patrol, like, marine protected areas, which are like national parks but in the water, right, so they are these, you know, wildlife preserves in the water—and they make sure that people aren't doing things they shouldn't do: they're not throwing trash in the ocean, they're not taking turtles out of the Galapagos Island area, you know, things like that. And they need software to track that and do that, because at the time, they were literally writing, you know, with pencil on paper, and, you know, had stacks and stacks of this paper to do data entry.
And MongoDB had just bought the Realm database and had just integrated it, and so there were, you know, some great features around offline syncing that you didn't have to build; it did all the foundational plumbing for you. And then, the reason that I'm proud of that project is not just because it's pretty freaking cool to do something that actually makes a difference in the world and helps fight climate change and all that kind of stuff; the reason I was proud of it is that I was the sole product manager. It was the first time that I'd really had sole ownership of a product, and so all the mistakes were my own, and the credit was my own, too.
And so, it was really just a great learning experience, and it turned out really well.
Corey: There's a lot to be said for pitching in and helping out with good causes in a way that your skill set winds up benefitting them. I found that I was a lot happier with a lot of the volunteer stuff that I did when, instead of licking envelopes, it started being things that I had a bit of proficiency in. “Hey, can I fix your AWS bill?” turns out to have some value to certain nonprofits. You have to be at a certain scale before it makes sense; otherwise, it's just easier to maybe not do it that way. But there's a lot of value to doing something that puts good back into the world. I wish more people did that.
Sheeri: Yeah. And it's something to do in your off-time that you know is helping. It might feel like work, it might not feel like work, but it gives you a sense of accomplishment at the end of the day. I remember at my first job, one of the interview questions was—no, it wasn't. [laugh]. It wasn't an interview question until after I was hired and they asked me the question, and then they made it an interview question.
And the question was, what video games do you play? And I said, “I don't play video games. I spend all day at work staring at a computer screen. Why would I go home and spend another 12 hours till three in the morning, right—five in the morning—playing video games?” And they were like, we clearly need to change our interview questions. This was, again, back when the dinosaurs roamed the earth. So, people are more culturally sensitive now.
Corey: These days, people ask me, “What's your favorite video game?” My answer is, “Twitter.”
Sheeri: Right. [laugh]. Exactly. It's like whack-a-mole—
Corey: Yeah.
Sheeri: —you know? So, for me, having a tangible hobby, like, I do a lot of art: I knit, I paint, I carve stamps, I spin wool into yarn. I know that's not a metaphor for storytelling. That is, I literally spin wool into yarn.
And having something tangible, you work on something and you're like, “Look. It was nothing and now it's this,” is so satisfying.
Corey: I really want to thank you for taking the time to speak with me today about where you've been, where you are, and where you're going, as well as helping me put a little bit more of a human angle on Twitter, which is intensely dehumanizing at times. It turns out that 280 characters is not the best way to express the entirety of what makes someone a person. You need to use a multi-tweet thread for that. If people want to learn more about you, where can they find you?
Sheeri: Oh, they can find me on Twitter. I'm @sheeri—S-H-E-E-R-I—on Twitter. And I've started to write a little bit more on my blog at sheeri.org. So hopefully, I'll continue that, since I've now told people to go there.
Corey: I really want to thank you again for being so generous with your time. I appreciate it.
Sheeri: Thanks to you, Corey, too. You take the time to interview people, too, so I appreciate it.
Corey: I do my best. Sheeri Cabral, Senior Product Manager of ETL lineage at Collibra. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice or smash the like and subscribe buttons on the YouTubes, whereas if you've hated it, do exactly the same thing—like and subscribe, hit those buttons, five-star review—but also leave a ridiculous comment, where we will then use an ETL pipeline to transform it into something that isn't complete bullshit.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Announcer: This has been a HumblePod production. Stay humble.
Johnny and Bob discuss Glue, the serverless ETL tool from AWS. #dataengineering #etl #aws
Connect with Johnny:
YouTube channel: https://www.youtube.com/c/JohnnyChivers
The QuestionBank, a completely free bank of community questions for AWS certifications, developed on AWS Amplify: https://www.thequestionbank.io/
Personal website for contact, forum, and free AWS learning resources: https://johnnychivers.co.uk/
Connect with Bob:
Twitter: @bobhaffner
LinkedIn: linkedin.com/in/bobhaffner
https://twitter.com/EngSideOfData
About Rafal
Rafal is a Serverless Engineer at Stedi by day, and Dynobase founder by night - a modern DynamoDB UI client. When he is not coding or answering support tickets, he loves climbing and tasting whiskey (not simultaneously).
Links Referenced:
Company Website: https://dynobase.dev
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored by our friends at Revelo. Revelo is the Spanish word of the day, and it's spelled R-E-V-E-L-O. It means “I reveal.” Now, have you tried to hire an engineer lately? I assure you it is significantly harder than it sounds. One of the things that Revelo has recognized is something I've been talking about for a while: specifically, that while talent is evenly distributed, opportunity is absolutely not. They're exposing a new talent pool to, basically, those of us without a presence in Latin America via their platform. It's the largest tech talent marketplace in Latin America, with over a million engineers in their network, which includes—but isn't limited to—talent in Mexico, Costa Rica, Brazil, and Argentina. Now, not only do they wind up screening all of their talent on English ability as well as, you know, their engineering skills, but they go significantly beyond that. Some of the folks on their platform are hands down the most talented engineers that I've ever spoken to. Let's also not forget that Latin America has high time zone overlap with what we have here in the United States, so you can hire full-time remote engineers who share most of the workday with your team.
It's an end-to-end talent service, so you can find and hire engineers in Central and South America without having to worry about, frankly, the colossal pain of cross-border payroll and benefits and compliance, because Revelo handles all of it. If you're hiring engineers, check out revelo.io/screaming to get 20% off your first three months. That's R-E-V-E-L-O dot I-O slash screaming.
Corey: The company 0x4447 builds products to increase standardization and security in AWS organizations. They do this with automated pipelines that use well-structured projects to create secure, easy-to-maintain, and fail-tolerant solutions, one of which is their VPN product built on top of the popular OpenVPN project, which has no license restrictions; you are only limited by the network card in the instance. To learn more visit: snark.cloud/deployandgo
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. It's not too often that I wind up building an episode here out of a desktop application. I've done it once or twice, and I'm sure that the folks at Microsoft Excel are continually hoping for an invite to talk about things. But we're going in a bit of a different direction today. Rafal Wilinski is a serverless engineer at Stedi and, in what is apparently the job requirement at Stedi, he also has a side project that manifests itself as a desktop app. Rafal, thank you for joining me today. I appreciate it.
Rafal: Yeah. Hi, everyone. Thanks for having me, Corey.
Corey: I first heard about you when you launched Dynobase, which is awesome. It sounds evocative of dinosaurs unless you read it; then it's D-Y-N-O, and it's, “Ah, this sounds a lot like DynamoDB. Let me see what it is.” And sure enough, it was.
As much as I love misusing things as databases, DynamoDB is actually a database that is decent and good at what it does.
And please correct me if I get any of this wrong, but Dynobase is effectively an Electron app that you install, at least on a Mac, in my case; I don't generally use other desktops, that's other people's problems. And it provides a user-friendly interface to DynamoDB that is not actively hostile to the customer.
Rafal: Yeah, exactly. That was the goal. That's how I envisioned it, and I hope I executed correctly.
Corey: It was almost prescient in some ways, because they recently redid the DynamoDB console in AWS to actively make it worse for working with individual items and modifying things. It feels like they are validating your market for you: “Oh, we really like Dynobase. How do we drive more traffic to it? We're going to make this thing worse.” But back then, when you first created this, the console was its previous version. What was it that inspired you to say, “You know what I'm going to build? A desktop application for a cloud service.” Because on the surface, it seems relatively close to psychotic, but it's brilliant.
Rafal: [laugh]. Yeah, sure. So, a few years ago, I was freelancing on AWS. I was jumping between clients and my side projects. That also involved jumping between regions, and AWS doesn't have a good out-of-the-box solution for switching your accounts and switching your regions, so when I wanted to work on a client's table in Australia and simultaneously on my side project in Europe, there was no other solution than to have two browser windows open, or even two browsers open.
And it was super frustrating. So, I was like, hey, “DynamoDB has an SDK.
Electron is this thing that allows you to make a desktop application using HTML and JS and some CSS, so maybe I can do something with it.” And I was so naive to think that it was going to be a trivial task because it's going to be—come on, it's, like, a couple of SDK calls, displaying some lists and tables, and that's pretty much it, right?
Corey: Right. I use Retool as my system to build my newsletter every week, and that is the front-end I use to interact with DynamoDB. And it's great. It has a table component that just—I run a query that, believe it or not, is a query, not a scan—I know, imagine that, I did something slightly right this one time—and it populates things for the current issue into it, and then I basically built a CRUD API around it and have components that let me update, delete, remove—the usual stuff. And it's great, it works for my purposes, and it's fine.
And that's what I use most of the time until I, you know, hit an edge case or a corner case—because it turns out, surprise everyone, I'm bad at programming—and I need to go in and tweak the table myself manually. And that's where Dynobase, at least for my use case, really comes into its own.
Rafal: Good to hear. Good to hear. Yeah, that was exactly the same reason why I built it, because, yeah, a few years ago I also started working on a project which was really crazy. It was before AppSync times. We wanted to have a GraphQL serverless API using single-table design and testing principles [unintelligible 00:04:38] there.
So, we've been verifying many things by just looking at the contents of the table, and sometimes fixing them manually. So, that was also the thing that motivated me to make the editing experience a little bit better.
Corey: One thing I appreciate about the application is that it does things right. I mean, there's no real other way to frame that.
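The query-versus-scan distinction Corey is joking about can be shown with a toy model. To be clear, this is not the AWS SDK: it is a plain-Python sketch with invented item names, illustrating why a Query (which goes through the partition-key index) touches only matching items while a Scan reads the whole table and filters afterward.

```python
# Toy model of DynamoDB's Query vs. Scan (not the AWS SDK; names invented).
# Each item has a partition key "pk" and a sort key "sk".
items = [{"pk": f"issue#{i % 50}", "sk": f"link#{i}", "url": f"https://example.com/{i}"}
         for i in range(10_000)]

# Build the partition-key index once, the way the table's hash key works.
index: dict[str, list[dict]] = {}
for item in items:
    index.setdefault(item["pk"], []).append(item)

def query(pk: str) -> list[dict]:
    # Touches only the items in one partition.
    return index.get(pk, [])

def scan(pk: str) -> list[dict]:
    # Touches every item in the table, then filters.
    return [item for item in items if item["pk"] == pk]

print(len(query("issue#7")))  # 200: same answer as scan("issue#7"),
# but the scan examined all 10,000 items to find those 200.
```

The billing consequence is the punchline: DynamoDB charges read capacity for every item examined, so the scan pays for 10,000 reads to return 200 items.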
When I fire up the application myself and go to the account that I've been using it with—because in this case, there's really only one account of mine that contains the data I spend my time working with—I get access to it on my machine via Granted, because it's a federated SSO login. And it says, “Ah, this is an SSO account. Click here to open the browser tab and do the thing.”
I didn't have to configure Dynobase. It is automatically reading my AWS config file in my user directory. It does a lot of things right. There's no duplication of work, from my perspective. It doesn't freak out because it doesn't know how SSO works. It doesn't run into these obnoxious edge-case problems that so many early-generation desktop interfaces for AWS things seem to.
Rafal: Wow, it seems like it works for you even better than for me. [laugh].
Corey: Oh, well, again, how I get into accounts has always been a little weird. I've ranted before about Granted, which is something that Common Fate puts out. It is a binary utility that winds up logging into different federated SSO accounts and opens them in Firefox containers, so you can have, you know, two accounts open side-by-side. It has some nice affordances like that. But it still uses the standard AWS profile syntax, which Dynobase does as well.
There are a bunch of different ways I've logged into things, and I've never experienced friction [unintelligible 00:06:23] using Dynobase for this. To be clear, you haven't paid me a dime. In fact, just the opposite: I wind up paying my monthly Dynobase subscription with a smile on my face. It is worth every penny, just because on those rare moments when I have to work with something odd in DynamoDB, it's great having the tool.
I want to be very clear here. I don't recall what the current cost on this is, but I know for a fact it is more than I spend every month on DynamoDB itself, which is fine.
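The zero-configuration behavior Corey describes is possible because the standard `~/.aws/config` file is plain INI that any tool can parse. Here is a minimal sketch of that discovery step; the profile names, account ID, and SSO URL below are invented for the example, and this is not how Dynobase itself is implemented, just one way the same effect can be achieved.

```python
import configparser
import os
import tempfile

# An invented ~/.aws/config; real files follow the same INI layout, where
# non-default profiles are stored under "[profile <name>]" headings.
sample = """
[default]
region = us-east-1

[profile prod-sso]
sso_start_url = https://example.awsapps.com/start
sso_account_id = 123456789012
region = eu-west-1
"""

with tempfile.NamedTemporaryFile("w", suffix=".config", delete=False) as f:
    f.write(sample)
    path = f.name  # stands in for os.path.expanduser("~/.aws/config")

config = configparser.ConfigParser()
config.read(path)

profiles = {}
for section in config.sections():
    name = section.removeprefix("profile ").strip()
    profiles[name] = {
        "region": config[section].get("region"),
        # An sso_start_url means this profile needs a browser login flow.
        "sso": "sso_start_url" in config[section],
    }

os.unlink(path)
print(profiles)
# {'default': {'region': 'us-east-1', 'sso': False},
#  'prod-sso': {'region': 'eu-west-1', 'sso': True}}
```

Reading the file the AWS CLI already maintains is what makes the experience feel configuration-free: the tool inherits every profile, region, and SSO setting the user has already set up.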
You pay for utility, not for the raw cost of the underlying resources. Some people tend to have issues with that, and I think it's the wrong direction to go in.
Rafal: Yeah, exactly. So, my logic was that it's a productivity improvement. And a lot of programmers are simply obsessed with productivity, right? We tend to write those obnoxious, nasty Bash and Python scripts to automate boring tasks in our day jobs. So, if you can eliminate this chore of logging into different AWS accounts and trying to find them, even if it only takes, like, five or ten seconds: if I can shave those five or ten seconds every time you try to do something, that over time accumulates into a big number, and it's a huge time saving. So, even if you save, like, I don't know, maybe one hour a month or one hour a quarter, I think it's still a fair price.
Corey: Your pricing is very interesting, and the reason I say that is you do not have a free tier as such; you have a free seven-day trial, which is great. That is the way to do it. You can sign up with no credit card, grab the thing, and it's awesome. Dynobase.dev, for folks who are wondering.
And you have a solo yearly plan, which is what I'm on, which is $9 a month, which means that you end up, I think, charging me $108 a year, billed annually. You have a solo lifetime option for 200 bucks—and I'm going to fight with you about that one in a second; we're going to come back to it—then you have a team plan that is, I think, for ten licenses at 79 bucks a month, and for 20 licenses it's 150 bucks a month. Great. And then you have an enterprise option for 250 a month, the end. Billed annually. And I have problems with that, too.
So, I like arguing about pricing; I [unintelligible 00:08:43] about pricing with people just because I find it is one of those underappreciated aspects of things. Let's start with my own decisions on this, if I may.
The reason that I go for the solo yearly plan instead of a lifetime subscription—I buy this and I get to use it forever in perpetuity—is that I like the tool, but like the AWS service that underlies it, it's going to have to evolve in the fullness of time. It is going to have to continue to support new DynamoDB functionality, like the fact that they have infrequent access storage classes now for tables, as an example. I'm sure they're coming up with other things as well, like, I don't know, maybe a sane query syntax someday. That might be nice if they ever built one of those.

Some people don't like the idea of subscription software. I do, just because I like the fact that it is a continual source of revenue. It's not the, "Well, five years ago, you paid me that one-off thing and now you expect feature enhancements for the rest of time." How do you think about that?

Rafal: So, there are a couple of things here. The first thing is that the lifetime support doesn't mean that I will always be implementing, until my death, all the features that are going to appear in DynamoDB. Maybe there is going to be some feature and I'm not going to implement it. For instance, it's not possible to create global tables via Dynobase right now, and it won't be possible, because we think that the majority of people dealing with cloud are using infrastructure as code, and creating tables via Dynobase is not a super useful feature. And we also believe that it's not going to break even without support. [laugh]. I know it sounds bad; it sounds like I'm not going to support it at some point, but don't worry, there are no plans to discontinue support [crosstalk 00:10:28]—

Corey: We all get hit by buses from time to time, let's be clear.

Rafal: [laugh].

Corey: And I want to also point out as well that this is a graphical tool that is a front-end for an underlying AWS service.
It is extremely convenient, there is tremendous value in it, but it is not critical path, as in, if suddenly I cannot use Dynobase, my production app is down. It doesn't work that way, in the sense—

Rafal: Yes.

Corey: —of a SaaS product. It is a desktop application. And I'm a huge fan of that as well. So, please continue.

Rafal: Yeah, exactly—

Corey: I just want to make sure that I'm not misleading people into thinking it's something it's not here. It's, "Oh, that sounds dangerous if that's critical pa—" yeah, it's not designed to be. I imagine, at least. If so, it seems like a very strange use case.

Rafal: Yeah. Also, you have to keep in mind that AWS basically isn't introducing breaking changes, especially in a service as popular as DynamoDB. I cannot imagine them, like, announcing, like, "Hey, in a month, we are going to deprecate this API, so you'd better start, you know, using this new API because this one is going to be removed." I think that's not going to happen because of the millions of clients using DynamoDB actively. So, I think that makes Dynobase safe. It's built on a rock-solid foundation that is going to change only additively. No features are going to just be removed.

Corey: I think that there's a direction in a number of, at least, consumer offerings where people are upset at the idea of software subscriptions, the idea of why should I pay in perpetuity for a thing? And I want to call out my own bias here. For something like this, where you're charging $9 a month, I do not care about the price, truly I don't. I am a price-inflexible customer. It could go probably as high as 50 bucks a month and I would neither notice nor care.

That is probably not the common-case customer, and it's certainly not over in consumer-land. I understand that I am in a significantly privileged position when it comes to being able to acquire the tools that I need.
It turns out, compared to the AWS bill I have to deal with, I don't have to worry about the small stuff, comparatively. Not everyone is in that position, so I am very sympathetic to that. Which is why I want to deviate here a little bit, because somewhat recently, Dynobase showed up on the AWS Marketplace.

And I can go into the Marketplace now and get a yearly subscription for a single seat for $129. It is slightly more than buying it directly through your website, but there are some advantages for many folks in getting it on the Marketplace. AWS is an approved vendor, for example, so there's no procurement dance. It counts toward your committed spend on contracts if someone is trying to hit certain levels of spend on their EDP. It provides a centralized place to manage things, as far as those licenses go, when people are purchasing it. What was it that made you decide to put this on the Marketplace?

Rafal: So, this decision was pretty straightforward. It's just, you know, yet another distribution channel for us. So, imagine you're a software engineer who works for a really, really big company, and it's super hard to approve some kind of expense using a traditional credit card. You basically cannot go to my site and check out with a company credit card because of the processes, or maybe it takes two years. But maybe it's super easy to click subscribe in your AWS account. So yeah, we thought that, hey, maybe it's going to unlock some engineers working at those big corporations, and maybe this is the way that they are going to start using Dynobase.

Corey: Are you seeing significant adoption yet? Or is it something that's still too early to say?
And beyond that, are you finding that people are discovering the product via the AWS Marketplace, or is it strictly just a means of purchasing it?

Rafal: So, when it comes to discovering, I think we don't have any data about it yet, which is supported by the fact that we also have zero subscriptions from the Marketplace yet. But it's also our fault, because we haven't actually actively promoted it, apart from me sending just a tweet on Twitter, which is in [crosstalk 00:14:51]—

Corey: Which did not include a link to it as well, which means that Google was our friend for this because, let's face it, AWS Marketplace search is bad.

Rafal: Well, maybe. I didn't know. [laugh]. I was just, you know, super relieved to see—

Corey: No, I—you don't need to agree with that statement. I'm stating it as a fact. I am not a fan of Marketplace search. It irks me because, for whatever reason, whenever I'm in there looking for something, it does not show me the things I'm looking for; it shows me the biggest partners AWS has first, and it seems like the incentives are misaligned. I'm sure someone is going to come on the show to yell at me. I'm waiting for your call.

Rafal: [laugh].

Corey: Do you find that if someone is going to purchase it, do you have a preference that they go directly, or that they go through the Marketplace? Is there any direction that makes more sense for you than another?

Rafal: So ideally, we would like all the customers to continue to purchase the software the classical way, using the subscriptions from our website, because it's just one flow, one system; it's simpler, it's cleaner. But we wanted to give that option and to have more adoption. We'll see if that's going to work.

Corey: I was going to say there were two issues I had with the pricing. That was one of them.
The other is at the high end: the enterprise pricing being $250 a month for unlimited licenses doesn't feel like the right direction, and the reason I say that is a 50-person company would be able to spend 250 bucks a month to get this for their entire team, and that's great and they're happy. But so could AWS or Coca-Cola, and at that very high level, it becomes something where you are signing up for a significant amount of support work, in theory, or a bunch of other directions.

I've always found that, from where I stand, especially dealing with those very large companies with very specific SLA requirements and the rest, the pricing for enterprise that I always look for as the right answer, to my mind, is 'click here to contact us.' Because procurement departments, for example: we want this, this, this, this, and this around data guarantees and indemnities and all the rest. And well, yeah, that's going to be expensive. And well, yeah. We're a procurement department at a Fortune 50. We don't sign contracts that don't have two commas in them.

So, it feels like dialing it in with some custom optionality—which signals to the quote-unquote 'sophisticated buyer,' as patio11 likes to say on Twitter from time to time—might be the right direction.

Rafal: That's really good feedback. I haven't thought about it this way, but you really opened my eyes on this issue.

Corey: I'm glad it was helpful. The reason I think about it this way is that, more and more, I'm realizing that pricing is one of the key parts of marketing and messaging around something, and that is not really well understood, even by larger companies with significant staff and full marketing teams.
I still see that pricing often feels like an afterthought, but personally, when I'm trying to figure out whether a tool is for me, the first thing I do is—I don't even read the marketing copy of the landing page; I look for the pricing tab and click, because if the only price is 'call for details,' I know, A, it's going to be expensive, and B, it's going to be a pain in the neck to get to use it, because it's two in the morning; I'm trying to get something done. I want to use it right now. If I have to have a conversation with your sales team first, that's not going to be cheap, and it's not going to be something that solves my problem this week. And that is the other end of it. I yell at people on both sides of that one.

Rafal: Okay.

Corey: Again, none of this stuff is intuitive; all of this stuff is complicated, and the way that I tend to see the world is, granted, a little bit different than the way that most folks who are kicking around databases and whatnot tend to view the world. Do you have plans in the future to extend Dynobase beyond strictly DynamoDB, looking to explore other fine database options like Redis, or MongoDB, or my personal favorite, Route 53 TXT records?

Rafal: [laugh]. Yeah. So, we had plans. Oh, we had really big plans. We felt that we were going to create a second JetBrains company. We started analyzing the market when it comes to MongoDB, when it comes to Cassandra, when it comes to Redis. And our first pick was Cassandra, because it seemed, like, to have a really, really similar table structure.

I mean, it's no secret it also has a primary index, global secondary indexes, and things like that. But as always, reality surprises us with the amount of detail that we cannot see from the very top. And it isn't as simple as just installing the AWS SDK and installing the Cassandra Connector—or Cassandra SDK—and just rolling with that. It requires a really big and significant investment.
And we decided to focus on just one thing and nail this one thing and do it properly.

It's like, if you go into the cloud and you try to build a service that is agnostic, it's not using the best features of the cloud. You can move your containers, for instance, across the clouds and say, "Hey, I'm cloud-agnostic," but at the same time, you're missing out on all the best features. And this is the same way we thought about Dynobase. Hey, we can provide an agnostic core, but then the agnostic application isn't going to be as good and as sophisticated as something tailored specifically for the needs of this database and the user using this exact database.

Corey: This episode is sponsored in part by our friend EnterpriseDB. EnterpriseDB has been powering enterprise applications with PostgreSQL for 15 years. And now EnterpriseDB has you covered wherever you deploy PostgreSQL: on premises, private cloud, and they just announced a fully managed service on AWS and Azure called BigAnimal, all one word.

Don't leave managing your database to your cloud vendor, because they're too busy launching another half dozen managed databases to focus on any one of them that they didn't build themselves. Instead, work with the experts over at EnterpriseDB. They can save you time and money, they can even help you migrate legacy applications, including Oracle, to the cloud.

To learn more, try BigAnimal for free. Go to biganimal.com/snark, and tell them Corey sent you.

Corey: Some of the things that you do just make so much sense that I get actively annoyed that there aren't better ways to do it in other places for other things. For example, when I fire up a table in a particular region within Dynobase, first it does a scan, which, okay, that's not terrible. But on some big tables, that can get really expensive. But you cap it automatically at a thousand items. And okay, great.

Then it tells me how long it took.
In this case, because, you know, I am using on-demand and the rest, and it's a little bit of a pokey table, that scan took about a second and a half. Okay. You scanned a thousand items. Well, there are a lot more than a thousand items in this table. Ah, you limited it, so you didn't wind up taking all that time.

It also says that it took 51-and-a-half RCUs—or Read Capacity Units—because, you know, why use normal numbers when you're AWS and doing pricing dimensions on this stuff.

Rafal: [laugh].

Corey: And to be clear, I forget the exact numbers for reads, but it's something like a million RCUs cost me a dollar or something like that. It is trivial; it does not matter. But because it is consumption-based pricing, I always live with a little bit of concern that, okay, if I screw up and just, like, scan the entire 10-megabyte table every time I want to make an operation here, and I make a lot of operations in the course of a week, that's going to start showing up in the bill in some really unfortunate ways. This sort of tells me, on an ongoing basis, what it is that I'm going to wind up encountering.

And these things are all configurable, too. The initial scan limit you have configured is a thousand. I can set that to any number I want if I think that's too many or too few. You have a bunch of pagination options around it. And you also help people build out intelligent queries, [unintelligible 00:22:11] can export that to code. It's not just about the graphical interface—clickety and done—because I do love my ClickOps, but there are limits to it—it helps formulate what kind of queries I want to build and then wind up implementing in code. And that is no small thing.

Rafal: Yeah, exactly. This is how we also envision it. The language syntax in DynamoDB is really… hard.

Corey: Awful. The term is awful.

Rafal: [laugh]. Yeah, especially for people—

Corey: I know, people are going to be mad at me, but they're wrong.
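The capped scan Corey describes—limit the item count, surface the consumed RCUs—boils down to two parameters on DynamoDB's Scan API. A hedged sketch: the table name is made up, and the boto3 call itself is shown only in comments.

```python
def capped_scan_params(table_name: str, limit: int = 1000) -> dict:
    """Parameters for a Scan that stops after `limit` items and asks
    DynamoDB to report the read capacity units it consumed, so the
    cost of browsing a table stays visible."""
    return {
        "TableName": table_name,
        "Limit": limit,                     # cap items returned per request
        "ReturnConsumedCapacity": "TOTAL",  # response includes RCU usage
    }

# With boto3 (not executed here):
#   client = boto3.client("dynamodb")
#   resp = client.scan(**capped_scan_params("my-table"))
#   resp["ConsumedCapacity"]["CapacityUnits"]  # the RCU figure Dynobase shows
#   resp["LastEvaluatedKey"]                   # present if more pages remain
```

Note that `Limit` caps items evaluated per request, not a guaranteed total; paginating further is a deliberate, visible choice rather than an accidental full-table read.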
It is not intuitive; it took a fair bit of wrapping my head around. And more than once, what I found myself doing is basically just writing a thin CRUD API in Lambda in front of it, just so I can query it the way I think about it. I'm not even talking about changing the query modeling; I just want better syntax. That's all it is.

Rafal: Yeah. You also touched on modeling; that's also a very important thing—or maybe even scan versus query. Suppose I'm an engineer with ten years of experience. I come to DynamoDB, I jump straight into the action without reading any of the documentation—at least that's my way of working—and I have no idea what's the difference between a scan and a query. So, in Dynobase, when I enter all those filtering parameters into the UI and hit scan, Dynobase is automatically going to figure out for you what's the best way to query—or to scan, if a query is not possible—and also give you the code that was actually behind that operation, so you can just, like, copy and paste it straight into your code or service or API and have exactly the same result.

So yeah, we want to abstract away some of the weird things about DynamoDB. Like, you know, scan versus query, expression attribute names, expression attribute values, filtering conditions, all sorts of that stuff. Also the DynamoDB JSON; that's also, like, a bizarre thing—this JSON typing you should get out of the box, we also take care of that. So, yeah. Yeah, that's also our mission, to make DynamoDB as approachable as possible. Because it's a great database, but to truly embrace it and to truly use it is hard.

Corey: I want to be clear, just for folks who are not seeing some of the benefits of it the way that I've described it thus far. Yes, on some level, it basically just provides an attractive, usable interface to wind up looking at items in a DynamoDB table. You can also use it to refine queries to look at very specific things.
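The scan-versus-query choice Rafal describes, along with the expression-attribute plumbing he mentions, can be sketched as a tiny helper: when the filter attribute is the partition key, issue a Query; otherwise fall back to a Scan with a filter. This is an illustrative sketch, not Dynobase's actual generated code—the table and attribute names are made up.

```python
def build_request(table: str, attr: str, value: str, is_key: bool) -> tuple:
    """Return (operation, params). A Query is served from the key and reads
    only matching items; a Scan reads the whole table and filters afterwards,
    so it consumes RCUs for every item it touches."""
    expr = {
        "ExpressionAttributeNames": {"#a": attr},           # alias reserved words
        "ExpressionAttributeValues": {":v": {"S": value}},  # typed DynamoDB JSON
    }
    if is_key:
        return "Query", {"TableName": table,
                         "KeyConditionExpression": "#a = :v", **expr}
    return "Scan", {"TableName": table,
                    "FilterExpression": "#a = :v", **expr}

# op, params = build_request("orders", "customerId", "c-123", is_key=True)
# The params dict is what you would pass to boto3's client.query or client.scan.
```

The `#a`/`:v` indirection is the syntax tax Corey is complaining about: attribute names go through one map, typed values through another, and a tool that writes both for you removes most of the friction.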
You can export either a selection or an entire table, either to a local file—or to S3, which is convenient—but it goes beyond that, because once you have the query dialed in and you're seeing the things you want to see, there's a generate-code button that spits it out for Python, for JavaScript, for Golang.

And there are a few things that are coming soon, according to the drop-down itself: the AWS CLI, and Java—ooh, you do like pain. It effectively exports the thing you have done by clicking around as code, which is, for some godforsaken reason, anathema to most AWS services. "Oh, you clicked around the console to do a thing. Good job. Now, throw it all away and figure out how to do it in code." As opposed to, "Here's how to do what you just did programmatically." My God, the console could be the best IDE in the world, except they don't do it for some reason.

Rafal: Yeah, yeah.

Corey: And I love the fact that Dynobase does.

Rafal: Thank you.

Corey: I'm a big fan of this. You can also import data from a variety of formats, and export data as well. And one of the more obnoxious—you talk about weird problems I have with DynamoDB that I wish to fix: I would love to move this table to a table in a different AWS account. Great; to do that, I effectively have to pause the service that is in front of this because I need to stop all writes—great—export the table, take the table to the new account, import the table, repoint the code to talk to that thing, and then get started again. Now, there are ways to do it without that, and they all suck, because you have to either write a shim for it or you have to wind up doing a stream that feeds from one to the other.

And in many cases, well okay, I want to take the table here, I do a knife-edge cutover so that new writes go to the new thing, and then I just want to backfill this old table data into it. How do I do that?
The official answer is not what you would expect it to be—the DynamoDB console's 'import this data.' Instead, it's, "Oh, use AWS Glue to wind up writing an ETL function to do all of this." And it's… what? How is that the way to do these things?

There are import and export buttons in Dynobase that solve this problem beautifully without having to do all of that. It really is such a different approach to thinking about this, and I am stunned that this had to be done by a third party. It feels like you were using the native tooling and the native console the same way the rest of us do, grousing about it the same way the rest of us do, and then set out to fix it like none of us do. What was it that finally made you say, "You know, I think there's a better way, and I'm going to prove it"? What pushed you over the edge?

Rafal: Oh, I think I was spending, just, hours in the console, and I didn't have a really sophisticated suite of tests, which forced me [unintelligible 00:27:43] time to look at the data a lot and import data a lot and edit it a lot. And it was just too much. I don't know, at some point I realized, like, hey, there's got to be a better way. I browsed for solutions on the internet; I realized that there was nothing on the market, so I asked a couple of my friends, saying like, "Hey, do you also have this problem? Is this also a problem for you? Do you see the same challenges?"

And basically, every engineer I talked to said, "Yeah. I mean, this really sucks. You should do something about it." And that was the moment I realized that I was really onto something, and this is a pain I'm not alone in. And so… yeah, that gave me a lot of motivation. So, there was a lot of frustration, but there was also a lot of motivation to push me to create the first product in my life.

Corey: It's your first product, but it does follow an interesting pattern that seems to be emerging. Cloudash—Tomasz and Maciej—wound up doing that as well.
They're also working at Stedi, and they have their side project, which is an Electron-based desktop application that winds up interfacing with AWS services. And it's—what are your job requirements over at Stedi, exactly?

People could be forgiven for seeing these things and not knowing what the hell EDI is—which, guilty—and figure, "Ah, it's just a very fancy term for a DevRel company because they're doing serverless DevRel as a company." It increasingly feels an awful lot like that. So, what's going on over there where that culture just seems to be an emergent property?

Rafal: So, I feel like Stedi just attracts a lot of people who like challenges, and people who have a really strong sense of ownership and like to just create things. And this is also how it feels inside. There are plenty of individuals who basically have tons of energy and motivation to solve so many problems, not only in Stedi but, as you can see, also outside of Stedi—Cloudash is a result, the mapping tool from Zack Charles is also a result, and Michael Barr created a scheduling service. So, yeah, I think the principles that we have at Stedi basically attract top-notch builders.

Corey: It certainly seems so. I'm going to have to do a little more digging and see what some of those projects are, because they're new to me. I really want to thank you for taking so much time to speak with me about what you're building. If people want to learn more or try to kick the tires on Dynobase, which I heartily recommend, where should they go?

Rafal: Go to dynobase.dev, and there's a big download button that you cannot miss. You download the software, you start it. No email, no credit card required. You just run it. It scans your credentials, profiles, SSOs, whatever, and you can play with it. And that's pretty much it.

Corey: Excellent. And we will put a link to that in the [show notes 00:30:48]. Thank you so much for your time. I really appreciate it.

Rafal: Yeah.
Thanks for having me.

Corey: Rafal Wilinski, serverless engineer at Stedi and creator of Dynobase. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice—or a thumbs-up and the like and subscribe buttons on the YouTubes if that's where you're watching it—whereas if you've hated this podcast, same thing—five-star review, hit the buttons and such—but also leave an angry, bitter comment that you're not going to be able to find once you write it, because no one knows how to put it into DynamoDB by hand.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
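As a footnote to the cross-account table migration discussed in this episode: DynamoDB does have a native export-to-S3 API (it requires point-in-time recovery to be enabled on the source table), which covers the "backfill the old table data" half of Corey's knife-edge cutover without writing a Glue ETL job. A hedged sketch of the request it takes—the ARN, bucket, and prefix are placeholders, and the boto3 call is shown only in comments:

```python
def export_request(table_arn: str, bucket: str, prefix: str) -> dict:
    """Parameters for DynamoDB's ExportTableToPointInTime API, which writes
    a consistent snapshot of the table to S3 without consuming read capacity;
    the snapshot can then be loaded into a table in another account."""
    return {
        "TableArn": table_arn,
        "S3Bucket": bucket,
        "S3Prefix": prefix,
        "ExportFormat": "DYNAMODB_JSON",  # "ION" is the other supported format
    }

# With boto3 (not executed here):
#   client = boto3.client("dynamodb")
#   client.export_table_to_point_in_time(**export_request(
#       "arn:aws:dynamodb:us-east-1:111122223333:table/old-table",
#       "migration-bucket", "backfill/"))
```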
About Yoav
Yoav is a security veteran recognized on Microsoft Security Response Center's Most Valuable Research List (BlackHat 2019). Prior to joining Orca Security, he was a Unit 8200 researcher and team leader, a chief architect at Hyperwise Security, and a security architect at Check Point Software Technologies. Yoav enjoys hunting for Linux and Windows vulnerabilities in his spare time.

Links Referenced:
Orca Security: https://orca.security
Twitter: https://twitter.com/yoavalon

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Vultr. Optimized cloud compute plans have landed at Vultr to deliver lightning-fast processing power, courtesy of third-gen AMD EPYC processors, without the IO or hardware limitations of a traditional multi-tenant cloud server. Starting at just 28 bucks a month, users can deploy general-purpose, CPU-, memory-, or storage-optimized cloud instances in more than 20 locations across five continents. Without looking, I know that once again, Antarctica has gotten the short end of the stick. Launch your Vultr optimized compute instance in 60 seconds or less on your choice of included operating systems, or bring your own. It's time to ditch convoluted and unpredictable giant tech company billing practices and say goodbye to noisy neighbors and egregious egress forever. Vultr delivers the power of the cloud with none of the bloat. Screaming in the Cloud listeners can try Vultr for free today with $150 in credit when they visit getvultr.com/screaming. That's G-E-T-V-U-L-T-R dot com, slash screaming.
My thanks to them for sponsoring this ridiculous podcast.

Corey: Finding skilled DevOps engineers is a pain in the neck! And if you need to deploy a secure and compliant application to AWS, forgettaboutit! But that's where DuploCloud can help. Their comprehensive no-code/low-code software platform guarantees a secure and compliant infrastructure in as little as two weeks, while automating the full DevSecOps lifecycle. Get started with DevOps-as-a-Service from DuploCloud so that your cloud configurations are done right the first time. Tell them I sent you and your first two months are free. To learn more, visit snark.cloud/duplocloud. That's snark.cloud/D-U-P-L-O-C-L-O-U-D.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Periodically, I would say that I enjoy dealing with cloud platform security issues, except I really don't. It's sort of forced upon me to deal with, much like a dead dog is cast into their neighbor's yard for someone else to have to worry about. Well, invariably, it seems like it's my yard.

And I'm only on the periphery of these things. Someone who's much more in the trenches of the wide world of cloud security is joining me today. Yoav Alon is the CTO at Orca Security. Yoav, thank you for taking the time to join me today and suffer the slings and arrows I'll no doubt be hurling your way.

Yoav: Thank you, Corey, for having me. I've been a longtime listener, and it's an honor to be here.

Corey: I still am periodically surprised that anyone listens to these things. Because it's unlike a newsletter, where everyone will hit reply and give me a piece of their mind. People generally don't wind up sending me letters about things that they hear on the podcast, so whenever I talk to somebody who listens to it, it's, "Oh. Oh, right, I did turn the microphone on. Awesome." So, it's always just a little on the surreal side.

But we're not here to talk necessarily about podcasting, or the modern version of an AM radio show. Let's start at the very beginning.
What is Orca Security, and why would folks potentially care about what it is you do?

Yoav: So, Orca Security is a cloud security company, and our vision is very simple: given a customer's cloud environment, we want to detect all the risks in it and implement mechanisms to prevent them from occurring. And while it sounds trivial, before Orca, it wasn't really possible. You would have to install multiple tools and aggregate them and do a lot of manual work, and it was messy. And we wanted to change that, so we had, like, three guiding principles.

We call it seamless: I want to detect all the risks in your environment without friction, which is our speak for fighting with your peers. We also want to detect everything, so you don't have to install, like, a tool for each issue: a tool for vulnerabilities, a tool for misconfigurations, and for sensitive data, IAM roles, and such. And we put a very high priority on context, which means telling you what's important and what's not. So, for example, an S3 bucket open to the internet is important if it has sensitive data, not if it's, I don't know, a static website.

Corey: Exactly. I have a few that I'd like to get screamed at about in my AWS account, like, "This is an open S3 bucket and it's terrible." I look at it: the name is assets.lastweekinaws.com. Gee, I wonder if that's something that's designed to be a static hosted website.

Increasingly, I've been slapping CloudFront in front of those things just to make the broken warning light go away. I feel like it's an underhanded way of driving CloudFront adoption some days, but that may not be the most charitable interpretation thereof. Orca has been top-of-mind for a lot of folks in the security community lately because, let's be clear here, dealing with security problems in cloud providers from a vendor perspective is an increasingly crowded—and clouded—space.
Just because there's so much investment pouring into it, everyone has a slightly different take on the problem, and it becomes somewhat challenging to stand out from the pack. You didn't really stand out from the pack so much as leap to the front of it and more or less become the de facto name in a very short period of time, specifically—at least from my world—when you wound up having some very interesting announcements about vulnerabilities within AWS itself. You will almost certainly do a better job of relating the story, so please, what did you folks find?

Yoav: So, back in September of 2021, two of my researchers, Yanir Tsarimi and Tzah Pahima—each one of them within a relatively short span of time from the other—found a vulnerability in AWS. Tzah found a vulnerability in CloudFormation, which we named BreakingFormation, and Yanir found a vulnerability in AWS Glue, which we named SuperGlue. We're not the best copywriters, but anyway—

Corey: No, naming things is hard. Ask any Amazonian.

Yoav: Yes. [laugh]. So, I'll start with BreakingFormation, which caught the eyes of many. It was an XXE SSRF, which is jargon to say that we were able to read files, execute HTTP requests, and read potentially sensitive data from CloudFormation servers. This one was mitigated within 26 hours by AWS, so—

Corey: That was mitigated globally.

Yoav: Yes, globally, which—I've never seen such a quick turnaround anywhere. It was an amazing security feat to see.

Corey: Particularly in light of the fact that AWS does a lot of things very right when it comes to, you know, designing cloud infrastructure. Imagine that; they've had 15 years of experience and basically built the idea of cloud, in some respects, at the scale that hyperscalers operate at. And one of their core tenets has always been that there's a hard separation between regions. There are remarkably few global services, and those are treated with the utmost of care and delicacy.
To the point where when something breaks in a way that spans more than one region, it is headline-making news in many cases.

So, they almost never wind up deploying things to all regions at the same time. That can be irksome when we're talking about things like: I want a feature that solves a problem that I have, and I have to wait months for it to hit a region that I have resources living within. But for security stuff like this, I am surprised that going from "this is the problem" to "it has been mitigated" took place within 26 hours. I know it sounds like a long time to folks who are not deep in the space, but that is superhero speed.

Yoav: A small correction: it's 26 hours for, like, the main regions. And it took three to four days to propagate to all regions. But still, it's the speed of light for the security space.

Corey: When this came out, I was speaking to a number of journalists on background about trying to wrap their heads around this, and they said that, "Oh yeah, security is always, like, the top priority for AWS, second only to uptime and reliability." And… and I understand the perception, but I disagree with it, in the sense of the nightmare scenario—the one where every time I mention it to a security person I get to watch the blood drain from their face—the idea that, take IAM, which, as Werner said in his keynote, processes—was it 500 million or was it 500 billion requests a second, some ludicrous number—imagine it fails open, where everything suddenly becomes permitted. I have to imagine in that scenario, they would physically rip the power cables out of the data centers in order to stop things from going out. And that is the right move. Fortunately, I am extremely optimistic that will remain a hypothetical, because that is nightmare fuel right there.

But Amazon says that security is job zero.
And my cynical interpretation is that, well, it wasn't—they forgot security, decided to bolt it on at the end like everyone else does, and they just didn't want to renumber all their slides, so instead of making it point one, they put another slide in front of it and called it job zero. I'm sure that isn't how it worked, but for those of us who procrastinate when building slide decks for talks, it has a certain resonance to it. That was one issue. The other seemed a little bit more pernicious, focusing on Glue, which is their ETL-as-a-Service… service. One of them, I suppose. Tell me more about it.

Yoav: So, when we reported the BreakingFormation vulnerability, it led us to do a quick Google search, which led us back to the Glue service. It had references to Glue, and we started looking around it. And what we were able to do with the vulnerability is, given a specific feature in Glue, which we won't disclose at the moment, we were able to effectively take control over the account which hosts the Glue service in us-east-1. And having this control allowed us to essentially impersonate the Glue service. So, for every role in AWS that has a trust to the Glue service, we were able to effectively assume-role into it, in any account in AWS. So, this was a more critical vulnerability in its effect.

Corey: I think, on some level, the game of security has changed, because a lot of us basically don't have much in the way of sensitive data living in AWS—and let's be clear, I take confidentiality extremely seriously. Our clients on the consulting side view their AWS bills themselves as extremely confidential information that Amazon stuffs into a PDF and emails every month. But still. If there's going to be a leak, we absolutely do not want it to come from us, and that is something that we take extraordinarily seriously.
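The Glue finding hinges on roles that trust the Glue service principal. As an illustrative sketch (the account ID is made up, and the checker is a hypothetical helper, not an Orca or AWS tool), here is what such a trust policy looks like, plus a check for the aws:SourceAccount condition key that AWS documents as a confused-deputy guard for service principals:

```python
import json

# A role trust policy that lets the Glue service assume the role. Without a
# Condition pinning the calling account, any caller able to act as the Glue
# service principal could assume this role — the cross-account exposure the
# SuperGlue finding demonstrated. The account ID below is a placeholder.
TRUST_POLICY = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "glue.amazonaws.com"},
    "Action": "sts:AssumeRole",
    "Condition": {"StringEquals": {"aws:SourceAccount": "123456789012"}}
  }]
}""")

def has_source_account_guard(policy: dict) -> bool:
    # True only if every service-principal statement pins aws:SourceAccount.
    for stmt in policy.get("Statement", []):
        if "Service" in stmt.get("Principal", {}):
            cond = stmt.get("Condition", {}).get("StringEquals", {})
            if "aws:SourceAccount" not in cond:
                return False
    return True
```

A checker like this could run over all role trust policies in an account to flag service trusts with no condition at all.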
But compared to other jobs I've had in the past, no one will die if that information gets out. It is not the sort of thing that is going to ruin people's lives, which is very often something that can happen in some data breaches. But in my world, one of the bad cases of a breach—of someone getting access to my account—is that they could spin up a bunch of containers on the 17 different services AWS offers that can run containers and mine cryptocurrency with them. And the damage to me then becomes a surprise bill. Okay, great. I can live with that.

Something that's a lot scarier to a lot of companies with, you know, serious problems is: yep, fine, cost us money, whatever, but losing access to our data is the one thing that absolutely cannot happen. So, from that perspective alone, something like Glue being able to do that is a lot more terrifying than subverting CloudFormation and being able to spin up additional resources or potentially take resources down. Is that how you folks see it too, or—I'm sure there's nuance I'm missing.

Yoav: So yeah, the access to data is top-of-mind for everyone. It's a bit scary to think about it. I have to mention, again, the quick turnaround time for AWS, which almost immediately issued a patch. It was a very fast one, and they mitigated, again, the issue completely within days. On your comment about data: data is king these days. There is nothing like data, and it has all the properties of everything that we care about. It's expensive to store, it's expensive to move, and it's very expensive if it leaks. So, I think a lot of people were more alarmed about the Glue vulnerability than the CloudFormation vulnerability. And they're right in doing so.

Corey: I do want to call out that AWS did a lot of things right in this area. Their security posture is very clearly built around defense-in-depth.
The fact that they were able to disclose—after some prodding—that they checked the CloudTrail logs for the service itself, dating back to the time the service launched, and verified that there had never been an exploit of this: that is phenomenal, as opposed to the usual milquetoast statements that companies make. "We have no evidence of it" can mean that they did the same thing and looked through all the logs, which is great, but it can also mean, "Oh, yeah, we probably should have logs, shouldn't we? Let's take a backlog item for that." And that's just terrifying on some level.

It becomes a clear example—a shining beacon for some of us, in some cases—of doing things right from that perspective. There are other sides to it, though. As a customer, it was frustrating in the extreme—and I mean no offense by this—to learn about this from you rather than from the provider themselves. They wound up putting up a security notification many hours after your blog post went up, which I would also just like to point out—and we spoke about it at the time, and it was a pure coincidence—but there was something just chef's-kiss perfect about you announcing this on Andy Jassy's birthday. That was very well done.

Yoav: So, we didn't know about Andy's birthday. And it was—

Corey: Well, I see only one of us has a company calendar with notable executive birthdays splattered all over it.

Yoav: Yes. And it was also published around the time that the AWS CISO was announced, which was also a coincidence, because the date was chosen a long time in advance. So, we genuinely didn't know.

Corey: Communicating around these things is always challenging, because on the one hand, I can absolutely understand the cloud provider's position on this: "We had a vulnerability disclosed to us. We did our diligence and our research, because we do an awful lot of things correctly, and everyone is going to have vulnerabilities, let's be serious here."
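The CloudTrail check Corey praises above boils down to sifting every recorded API event for a given service principal. As a toy sketch (the records below are fabricated samples; eventSource, eventTime, and eventName are real CloudTrail record fields, and a real audit at AWS scale would query the logs rather than load them into memory):

```python
import json

# Hypothetical sample shaped like a CloudTrail log file: a "Records" array
# of events, each tagged with the service that emitted it (eventSource).
SAMPLE_LOG = json.dumps({"Records": [
    {"eventTime": "2021-09-30T12:00:00Z",
     "eventSource": "glue.amazonaws.com", "eventName": "GetDatabases"},
    {"eventTime": "2021-10-01T09:30:00Z",
     "eventSource": "s3.amazonaws.com", "eventName": "GetObject"},
]})

def events_for_service(raw: str, source: str) -> list:
    # Keep only the events emitted by the service under investigation,
    # the first filtering step before checking who called what, and when.
    records = json.loads(raw).get("Records", [])
    return [r for r in records if r.get("eventSource") == source]
```

From the filtered set, an investigator would then inspect caller identities and timestamps for anything anomalous over the service's whole lifetime.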
I'm not sitting here shaking my fist, angry at AWS's security model. It works, and I am very much a fan of what they do. And I can definitely understand that, having gone through all of that, there was no customer impact—they've proven it—so what value is there to them telling anyone about it? I get that. Conversely, you're a security company attempting to stand out in a very crowded market, and it is very clear that announcing things like this demonstrates a familiarity with cloud that goes beyond the common. I radically changed my position on how I thought about Orca based upon these discoveries. It went from, "Orca who?"—other than the fact that you folks have sponsored various publications in the past; thanks for that—but okay, a security company—to, "Oh, that's Orca. We should absolutely talk to them about a thing that we're seeing." It has been transformative for what I perceive to be your public reputation in the cloud security space.

So, those two things are at odds: the cloud provider doesn't want to talk about anything, and the security company absolutely wants to demonstrate a conversational fluency with what is going on in the world of cloud. And that feels like it's got to be a very delicate balancing act to wind up coming up with answers that satisfy all parties.

Yoav: So, I just want to underline something. We don't do what we do in order to make a marketing stand. It's a byproduct of our work, but it's not the goal. For the Orca Security Research Pod, which is the team at Orca that does this kind of research, our mission statement is to make cloud security better for everyone.
Not just Orca customers; for everyone. And you get to hear about the shinier things, like big headline vulnerabilities, but we also have very sensible blog posts explaining how to do things, how to configure things, and giving you a more in-depth understanding of security features that the cloud providers themselves provide, which are great and advance the state of cloud security. I would say that having a cloud vulnerability like this is one of those things which makes me happy to be a cloud customer. On the one side, we had a very big vulnerability with very big impact, and the ability to access a lot of customers' data is conceptually terrifying. The flip side is that everything was mitigated by the cloud providers at warp speed compared to everything else we've seen in all other areas of security. And you get to sleep better knowing that it happened—no platform is infallible—but the cloud providers do work for you, and you get a lot of added value from that.

Corey: You've made a few points when this first came out, and I want to address them. The first is, when I reached out to you with a, "Wow, great work," you effectively instantly came back with, "Oh, it wasn't me. It was members of my team." So, let's start there. Who was it that found these things? I'm a huge believer in giving people credit for the things that they do.

The joy of being in a leadership position is that if the company screws up, yeah, you take responsibility for that, whereas if the company does something great, you want to pass the praise on to the people who actually—please don't take this the wrong way—did the work. Not that leadership is not work; it absolutely is, but it's a different kind of work.

Yoav: So, I am a security researcher, and I am very mindful of the effort and skill it requires to find vulnerabilities and actually do a full circle on them.
And the first people I'll mention are Tzah Pahima, who found the BreakingFormation vulnerability in CloudFormation, and Yanir Tsarimi, who found the AutoWarp vulnerability—the Azure vulnerability that we haven't mentioned yet—and the Glue vulnerability, dubbed SuperGlue. Both of them are phenomenal researchers, world-class, and I'm very honored to work with them every day. It's one of my joys.

Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured, and fully managed, with built-in access via key-value, SQL, and full-text search. Flexible JSON documents align to your applications and workloads. Build faster with blazing-fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing.

Corey: It's very clear that you have built an extraordinary team of people who are able to focus on vulnerability research. Which, on some level, is very interesting, because you are not branded, as it were, as a vulnerability research company. This is not your core competency; it's not a thing that you sell directly, that I'm aware of. You are selling a security platform offering. So, on the one hand, it makes perfect sense that you would have a division internally that works on this, but it's also very noteworthy, I think, that it is not the core description of what you do.

It is a means by which you get to the outcome you deliver for customers, not the thing that you are selling directly to them. I just find that an interesting nuance.

Yoav: Yes, it is. And I would elaborate and say that research informs the product, and the product informs research. And we get to have this fun dance where we learn new things by doing research.
We [unintelligible 00:18:08] the product, and our customers teach us things that we didn't know. So, it's one of those happy synergies.

Corey: I want to also highlight a second thing that you have mentioned and been very, I guess, on-message about since news of this first broke. Because it's easy to look at this and sensationalize aspects of it—"See? The cloud providers' security model is terrible. You shouldn't use them. Back to data centers we go," is basically the line taken by an awful lot of folks trying to sell data center things.

That is not particularly helpful for the way the world is going. And you've said, "Yeah, you should absolutely continue to be in cloud. Do not disrupt your cloud plan as a result." And let's be clear: none of the rest of us are going to find and mitigate these things with anything near the rigor or rapidity that the cloud providers can and do demonstrate.

Yoav: I totally agree. And I would say that the AWS security folks are doing a phenomenal job. I could name a few, but they're all great. And I think that the cloud is by far a much safer alternative than on-prem. I've never seen issues in my on-prem environment which were critical and fixed at such a high velocity and such a massive scale.

And you always get the incremental improvements of someone really thinking about all the ins and outs of how to do security, how to do security in the cloud, how to make it faster and more reliable, without business interruptions. It's just phenomenal to see, and phenomenal to witness how far we've come in such a relatively short time as an industry.

Corey: AWS, in particular, has a reputation for being very good at security. I would argue that, from my perspective, Google is almost certainly slightly better in their security approach than AWS is, but to be clear, both of them are significantly further along the path than I am going to be. So great, fantastic.
You also found something interesting over in the world of Azure, and that honestly feels like a different class of vulnerability. To my understanding, the Azure vulnerability that you recently found was that you could get credential material for other customers simply by asking for it on a random high port. Which is one of those—I'm almost positive I'm misunderstanding something here. I hope. Please?

Yoav: I'm not sure you're misunderstanding. So, I would just emphasize that the vulnerability, again, was found by Yanir Tsarimi. What he found was, he used a service called Azure Automation, which essentially enables you to run a Python script on various events and schedules. And he opened the Python script and tried different ports. And one of the high ports he found essentially gave him his credentials. And he said, "Oh, wait. That's a really odd port for an HTTP server. Let's try, I don't know, a few ports either way." And he started getting credentials from other customers. Which was very surprising to us.

Corey: That is understating it by a couple orders of magnitude. Yes, like, "Huh. That seems sub-optimal," is sort of the corporate-messaging-approved phrasing. At the time you discovered that, I'm certain it was a three-minute-long blistering string of profanity in no fewer than four languages.

Yoav: I said to him that this is, like, a dishonorable bug, because he worked very little to find it. From start to finish, the entire research took less than two hours, which, in my mind, is not enough for this kind of vulnerability. You have to work a lot harder to get it. So.

Corey: Yeah, exactly. My perception is that when there are security issues that I have stumbled over—for example, I gave a talk at re:Invent about it in the before times—one of them was an overly broad permission in a managed IAM policy for SageMaker. Okay, great. That was something that obviously was not good, but it also was more of a privilege escalation style of approach.
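The probing pattern Yoav describes—try the port you were given, then a few ports either way—can be sketched with the standard library. This is a generic localhost port probe for illustration only, not Azure Automation's actual endpoint or the AutoWarp exploit; host, range, and path are whatever the tester chooses.

```python
import http.client

def probe_ports(host: str, start: int, end: int,
                path: str = "/", timeout: float = 0.5) -> list:
    # Attempt an HTTP GET against each port in [start, end] and collect the
    # ports that answer with any HTTP response at all.
    responding = []
    for port in range(start, end + 1):
        try:
            conn = http.client.HTTPConnection(host, port, timeout=timeout)
            conn.request("GET", path)
            conn.getresponse()
            responding.append(port)
            conn.close()
        except (OSError, http.client.HTTPException):
            continue  # nothing listening (or not HTTP) on this port
    return responding
```

The unsettling part of the finding was not the probe itself but that a local metadata-style endpoint answered with other tenants' credentials; the probe is the trivial two-hour half of the story.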
It wasn't, "Oh, by the way, here are the keys to everything."

That is the type of vulnerability I have come to expect, by and large, from cloud providers. "We're just going to give you access credentials for other customers" is one of those areas that… it bugs me on a visceral level, not because I'm necessarily exposed personally, but because it more or less shores up so many of the arguments that I have spent the last eight years having with folks, like, "Oh, you can't go to cloud. Your data should live on your own stuff. It's more secure that way." And it feels like we were finally starting to turn a cultural corner on these things.

And then something like that happens, and those naysayers almost become vindicated by it. And it almost feels, on some level—and I don't mean to be overly unkind about this—like you are absolutely going to be in a better security position with the cloud providers. Except for Azure. And perhaps that is unfair, but it seems like Azure's level of security rigor is nowhere near that of the other two. Is that generally how you're seeing things?

Yoav: I would say that they have seen more security issues than most other cloud providers. They also have a very strong culture of "report things to us, and we're very streamlined in patching those and giving credit where credit's due." And they give out bounties, which is an incentive for more research to happen on those platforms. So, I wouldn't say this categorically, but I would say that the optics are not very good. Generally, the cloud providers are much safer than on-prem, because you only very seldom hear of security issues in the cloud.

You hear literally every other day about issues happening in on-prem environments all over the place. And people just say they expect it to be this way. Most of the time, it's not even a headline.
Like, "Company X affected with cryptocurrency or whatever." It happens every single day, multiple times a day—breaches which are massively bigger. And people who don't want to be in the cloud will find every reason not to be in the cloud. Let us have fun.

Corey: One of the interesting parts about this is that so many breaches that are on-prem are just never discovered, because no one knows what the heck's running in the environment. And the breaches that we hear about are just the ones where someone had at least enough wherewithal to figure out, "Huh. That shouldn't be the way that it is. Let's dig deeper." And that's a bad day for everyone. I mean, no one enjoys those conversations and those moments.

And let's be clear, I am surprisingly optimistic about the future of Azure security. It's like, "All right, you have a magic wand. What would you do to fix it?" Well, "I'd probably, you know, hire Charlie Bell and get out of his way," is not a bad answer as far as these things go. But it takes time to reform a culture, to wind up building in security as a foundational principle. It's not something you can slap on after the fact.

And perhaps this is unfair. But Microsoft has 30 years of history now of getting the world accustomed to, "Oh yeah, just periodically, terrible vulnerabilities are going to be discovered in your desktop software. And once a month, on Tuesdays, we're going to roll out a whole bunch of patches, and here you go. Make sure you turn on security updates, yadda, yadda, yadda." That doesn't fly in the cloud. "Oh yeah, here's this month's list of security problems on your cloud provider," is one of those record-scratch, freeze-frame moments of, wait, what are we doing here, exactly?

Yoav: So, I would say that they also have a very long history of making those turnarounds.
Bill Gates famously gave his speech that security comes first, and they have been on a very, very long journey to turn the company around toward doing things a lot quicker and a lot safer. It doesn't mean they're perfect; everyone will have bugs, and Azure will have more people finding bugs in it in the near future, but security is a journey, and they've not started from zero. They're doing a lot of work. I would say it's going to take time.

Corey: The last topic I want to explore a little bit is—and again, please don't take this as in any way insulting or disparaging to your company—I am actively annoyed that you exist. By which I mean: if I go into my AWS account and I want to configure it to be secure, great, it's not a matter of turning on the security service; it's turning on the dozen or so security services that then roll up to something like GuardDuty, which then, in turn, rolls up to something like Security Hub. And you look at not only the sheer number of these services and the level of complexity inherent to them, but then the bill comes in, and you do some quick math and realize that getting breached would have been less expensive than what you're spending on all of these things.

And somehow—the fact that it's complex, I understand; computers are like that. The fact that there is—[audio break 00:27:03] a great messaging story that's cohesive around this, I come to accept because it's AWS; talking is not their strong suit. Basically declining to comment is. But the thing that galls me is that they are selling these services, and not inexpensively either, so it almost feels, on some level, like shouldn't some of this be built into the offerings themselves?

And don't get me wrong, I'm glad that you exist, because bringing order to a lot of that chaos is incredibly important. But I can't shake the feeling that this should be a foundational part of any cloud offering.
I'm guessing you might have a slightly different opinion than mine. I don't think you show up at the office every morning thinking, "I hate that we exist."

Yoav: No. And I'll add a bit of context and nuance. For every company other than the cloud providers, we expect them to be very good at most things, but not exceptional at everything. I'll give the Redshift example. Redshift is a pretty good offering, but Snowflake is a much better offering for a much wider range of—

Corey: And there's a reason we're about to become Snowflake customers ourselves.

Yoav: So, yeah. And there are a few other examples of that. A security company—a company that is focused solely on your security—will be much better suited to help you, in a lot of cases, than the platform. And we work actively with AWS, Azure, and GCP, requesting new features, helping us find places where we can shed more light and be more proactive. And we help to advance the conversation, make it a lot more actionable, and improve from year to year. It's one of those collaborations. I think the cloud providers can do anything, but they can't do everything. And they do a very good job at security; it doesn't mean they're perfect.

Corey: As you folks are doing an excellent job of demonstrating. Again, I'm glad you folks exist; I'm very glad that you are publishing the research that you are. It's doing a lot to correct, I guess, a lot of the undue credit that I was giving AWS for years of, "No, no, it's not that they don't have vulnerabilities like everyone else does. It's just that they don't ever talk about them." And their operationalizing of security response is phenomenal to watch.

It's one of those things where I think you've succeeded in what you said earlier you were looking to achieve, which is elevating the state of cloud security for everyone, not just Orca customers.

Yoav: Thank you.

Corey: Thank you. I really appreciate your taking the time out of your day to speak with me.
If people want to learn more, where's the best place they can go to do that?

Yoav: So, we have our website at orca.security. And you can reach me on Twitter. My handle is @yoavalon, which is @-Y-O-A-V-A-L-O-N.

Corey: And we will, of course, put links to that in the [show notes 00:29:44]. Thanks so much for your time. I appreciate it.

Yoav: Thank you, Corey.

Corey: Yoav Alon, Chief Technology Officer at Orca Security. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, or, of course, on YouTube, smash the like and subscribe buttons, because that's what they do on that platform. Whereas if you've hated this podcast, please do the exact same thing—five-star review, smash the like and subscribe buttons on YouTube—but also leave an angry comment that includes a link that is both suspicious and frightening, and when we click on it, suddenly our phones will all begin mining cryptocurrency.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
In this episode, Dave continues his chat with Luca Bianchi, Chief Technology Officer at Neosperience and WizKey. Luca covers his career in machine learning, an overview of machine learning on AWS, Amazon Rekognition, Amazon Comprehend, Amazon SageMaker, and AWS Glue. Luca also discusses current 2022 ML trends, including Transformers and edge computing.

Luca on Twitter: twitter.com/bianchiluca
Dave on Twitter: twitter.com/thedavedev
Luca on LinkedIn: www.linkedin.com/in/lucabianchipavia/
Luca on Medium: medium.com/@aletheia
Amazon Rekognition: aws.amazon.com/rekognition/
Amazon Comprehend: aws.amazon.com/comprehend/
Amazon SageMaker: aws.amazon.com/sagemaker/
AWS Glue: aws.amazon.com/glue
Towards Data Science: towardsdatascience.com/
Clean or Dirty HVAC? Using Amazon SageMaker and Amazon Rekognition Custom Labels to Automate Detection: bit.ly/3rvpNDJ
Building a Neural Network on Amazon SageMaker with PyTorch Lightning: bit.ly/3xAmxLc
----------------------------------------------
Subscribe:
Amazon Music: https://music.amazon.com/podcasts/f8bf7630-2521-4b40-be90-c46a9222c159/aws-developers-podcast
Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-developers-podcast/id1574162669
Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zb3VuZGNsb3VkLmNvbS91c2Vycy9zb3VuZGNsb3VkOnVzZXJzOjk5NDM2MzU0OS9zb3VuZHMucnNz
Spotify: https://open.spotify.com/show/7rQjgnBvuyr18K03tnEHBI
TuneIn: https://tunein.com/podcasts/Technology-Podcasts/AWS-Developers-Podcast-p1461814/
RSS Feed: https://feeds.soundcloud.com/users/soundcloud:users:994363549/sounds.rss
In this episode, Dave chats with Luca Bianchi, Chief Technology Officer at Neosperience and WizKey. Luca covers his career in machine learning, an overview of machine learning on AWS, Amazon Rekognition, Amazon Comprehend, Amazon SageMaker, and AWS Glue. Luca also discusses edge ML and what can be achieved using commodity hardware.

Luca on Twitter: https://twitter.com/bianchiluca
Dave on Twitter: https://twitter.com/thedavedev
Luca on LinkedIn: https://www.linkedin.com/in/lucabianchipavia/
Luca on Medium: https://medium.com/@aletheia
Amazon Rekognition: https://aws.amazon.com/rekognition/
Amazon Comprehend: https://aws.amazon.com/comprehend/
Amazon SageMaker: https://aws.amazon.com/sagemaker/
AWS Glue: https://aws.amazon.com/glue
Towards Data Science: https://towardsdatascience.com/
Clean or Dirty HVAC? Using Amazon SageMaker and Amazon Rekognition Custom Labels to Automate Detection: https://bit.ly/3rvpNDJ
Building a Neural Network on Amazon SageMaker with PyTorch Lightning: https://bit.ly/3xAmxLc
Subscribe:
Amazon Music: https://music.amazon.com/podcasts/f8bf7630-2521-4b40-be90-c46a9222c159/aws-developers-podcast
Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-developers-podcast/id1574162669
Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zb3VuZGNsb3VkLmNvbS91c2Vycy9zb3VuZGNsb3VkOnVzZXJzOjk5NDM2MzU0OS9zb3VuZHMucnNz
Spotify: https://open.spotify.com/show/7rQjgnBvuyr18K03tnEHBI
TuneIn: https://tunein.com/podcasts/Technology-Podcasts/AWS-Developers-Podcast-p1461814/
RSS Feed: https://feeds.soundcloud.com/users/soundcloud:users:994363549/sounds.rss
In this month's episode Arjen, JM, and Guy discuss the news from January 2022. Well, everything announced after re:Invent really, but that's mostly from January. There are good announcements all over, from a new Console Home to unpronounceable instance types, but there is also some news around the podcast that's either good or bad depending on how you interpret it. Find us at melb.awsug.org.au or as @AWSMelb on Twitter.

News
Finally in Sydney
Amazon EC2 R6i instances are now available in 8 additional regions
Amazon EC2 C6i instances are now available in 10 additional regions
AWS Panorama is now available in Asia Pacific (Sydney), and Asia Pacific (Singapore)
AWS Resilience Hub expands to 13 additional AWS Regions
AWS Direct Connect announces new location in Australia

Serverless
AWS Lambda now supports Internet Protocol Version 6 (IPv6) endpoints for inbound connections
Amazon Virtual Private Cloud (VPC) now supports Bring Your Own IPv6 Addresses (BYOIPv6) - Old announcement mentioned in show
Announcing AWS Serverless Application Model (SAM) CLI support for local testing of AWS Cloud Development Kit (CDK)
AWS Lambda now supports ES Modules and Top-Level Await for Node.js 14
AWS Lambda now supports Max Batching Window for Amazon MSK, Apache Kafka, Amazon MQ for Apache Active MQ and RabbitMQ as event sources

Containers
Amazon EKS now supports Internet Protocol version 6 (IPv6)
Amazon Elastic Kubernetes Service Adds IPv6 Networking | AWS News Blog
EBS CSI driver now available in EKS add-ons in preview
Amazon ECS launches new simplified console experience for creating ECS clusters and task definitions
ACM Private CA Kubernetes cert-manager plugin is production ready
Amazon EMR on EKS adds support for customized container images for AWS Graviton-based EC2 instances
Amazon ECR adds the ability to monitor repository pull statistics
Amazon ECS now supports Amazon ECS Exec and Amazon Linux 2 for on-premises container workloads

EC2 & VPC
Introducing Amazon EC2 Hpc6a instances
New – Amazon EC2 Hpc6a Instance Optimized for High Performance Computing | AWS News Blog
New – Amazon EC2 X2iezn Instances Powered by the Fastest Intel Xeon Scalable CPU for Memory-Intensive Workloads
Instance Tags now available on the Amazon EC2 Instance Metadata Service
Amazon EC2 On-Demand Capacity Reservations now support Cluster Placement Groups
AWS Compute Optimizer makes it easier to optimize by leveraging multiple EC2 instance architectures
AWS Announces New Launch Speed Optimizations for Microsoft Windows Server Instances on Amazon EC2
Amazon EC2 customers can now use ED25519 keys for authentication with EC2 Instance Connect
Metrics now available for AWS PrivateLink

Dev & Ops
Amazon Corretto January Quarterly Updates
Amazon CloudWatch Logs announces AWS Organizations support for cross account Subscriptions
AWS Toolkit for JetBrains IDEs adds support for ECS-Exec for troubleshooting ECS containers
AWS Systems Manager Automation now enables you to take action in third-party applications through webhooks

Security
AWS Secrets Manager now automatically enables SSL connections when rotating database secrets
AWS announces phone number enrichments for Amazon Fraud Detector Models
Announcing AWS CloudTrail Lake, a managed audit and security lake
AWS Firewall Manager now supports AWS Shield Advanced automatic application layer DDoS mitigation
Amazon SNS now supports Attribute-based access controls (ABAC)
Amazon GuardDuty now detects EC2 instance credentials used from another AWS account
Amazon GuardDuty Enhances Detection of EC2 Instance Credential Exfiltration | AWS News Blog
Amazon GuardDuty now protects Amazon Elastic Kubernetes Service clusters
AWS Security Hub integrates with AWS Health
AWS Trusted Advisor now integrates with AWS Security Hub
AWS Client VPN now supports banner text and maximum session duration

Data Storage & Processing
Databases
AWS Migration Hub Strategy Recommendations adds support for Babelfish for Aurora PostgreSQL
Now DynamoDB can return the throughput capacity consumed by PartiQL API calls to help you optimize your queries and throughput costs
Amazon DocumentDB (with MongoDB compatibility) adds support for $mergeObjects and $reduce
Amazon DocumentDB (with MongoDB compatibility) adds additional Geospatial query capabilities
Amazon DocumentDB (with MongoDB compatibility) now offers a free trial
Amazon RDS Performance Insights now supports query execution plan capture for RDS for Oracle

Glue
Introducing Autoscaling in AWS Glue jobs (Preview)
Introducing AWS Glue Interactive Sessions and Job Notebooks (Preview)
Announcing Personal Identifiable Information (PII) detection and remediation in AWS Glue (Preview)

EMR
Introducing real-time collaborative notebooks with EMR Studio
Introducing SQL Explorer in EMR Studio
Amazon EMR now supports Apache Iceberg, a highly performant, concurrent, ACID-compliant table format for data lakes
Amazon EMR on EKS adds error message details in DescribeJobRun API response to simplify debugging
Amazon EMR on EKS adds support for customized container images for interactive jobs run using managed endpoints
Amazon EMR now supports Apache Spark SQL to insert data into and update Glue Data Catalog tables when Lake Formation integration is enabled

OpenSearch
Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now supports OpenSearch version 1.1
Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now supports anomaly detection for historical data
Fine grained access control now supported on existing Amazon OpenSearch Service domains

Redshift
Announcing AWS Data Exchange for Amazon Redshift
Amazon Redshift Spectrum now offers custom data validation rules

Other
New – Replication for Amazon Elastic File System (EFS)
Amazon ElastiCache adds support for streaming and storing Redis engine logs
AWS Storage Gateway management console simplifies gateway creation and management
Amazon S3 File Gateway adds schedule-based network bandwidth throttling
Amazon FSx for NetApp ONTAP now provides performance and capacity metrics in Amazon CloudWatch

AI & ML
SageMaker
Amazon SageMaker Pipelines now offers native EMR integration for large scale data processing
Amazon SageMaker Pipelines now supports concurrency control
Amazon SageMaker JumpStart adds LightGBM and CatBoost Models for Tabular Data
Amazon SageMaker Feature Store connector for Apache Spark for easy batch data ingestion
Announcing SageMaker Training support for ml.g5 instances

Other
Amazon Kendra launches support for query language
Amazon Forecast now supports AWS CloudFormation for managing dataset and dataset group resources
Amazon Rekognition improves accuracy of Content Moderation for Video
AWS Panorama Appliances now available for purchase on Amazon.com and Amazon Business
Amazon Textract adds synchronous support for single page PDF documents and support for PDF documents containing JPEG 2000 encoded images

Other Cool Stuff
Now Open – AWS Asia Pacific (Jakarta) Region | AWS News Blog
Announcing the new Console Home in AWS Management Console
A New AWS Console Home Experience | AWS News Blog
Amazon Nimble Studio launches the ability to validate launch profile configurations via the Nimble Studio console
AWS Elastic Disaster Recovery now supports failback automation
Amazon Interactive Video Service adds thumbnail configuration
Announcing matrix routing for Amazon Location Service
Amazon Location Service enables request-based pricing for all customer use cases

IoT
AWS IoT Device Management launches Automated Retry capability for Jobs to improve success rates of large scale deployments
AWS IoT Core for LoRaWAN Launches Two New Features to Manage and Monitor Communications Between Device and Cloud
AWS IoT SiteWise Edge supports new data storage and upload prioritization strategies for intermittent cloud connectivity

Sponsors
CMD Solutions
Silver Sponsors
Cevo
Versent
Antonio Calo Chas is the Head of Architecture at Dominion Commercial, and he joins us to tell how he arrived at the company in the middle of the ransomware attack they suffered in April 2021, when migration plans were already underway. He tells us how they built an inventory, how they then moved their servers using lift & shift, and how they began modernizing their applications to get the full benefits of AWS. Antonio focuses heavily on people's career development and stresses the importance of using modern technologies in order to attract and retain talent.
Antonio Calo Chas
Antonio is the Head of Architecture at Dominion Commercial, a holding company belonging to the Dominion Global business group that includes, among others, the Phonehouse, Alterna, and Luzdostres brands. He is passionate about the IT world, with extensive experience in software development, cloud architecture, and R&D.
Rodrigo Asensio - @rasensio
Based in Barcelona, Spain, Rodrigo leads a Solutions Architecture team in the Enterprise segment that helps large customers with their cloud strategy and migration, with more than 20 years of experience in IT.
Sections
00:00 Intro
00:38 Global Dominion and Dominion Commercial
01:30 Antonio's career
04:52 The ransomware attack and the migration to AWS
09:16 What they learned from the ransomware attack
09:59 Changing the operating model
14:06 What changed in moving from on-prem to the cloud
18:19 Which services Dominion Commercial is using
20:50 Legacy and technical debt
24:04 Talent and career progression
25:29 What Dominion Commercial is doing to retain talent
30:15 Individual responsibility
32:28 AWS services
35:50 Data lakes and data
37:24 Cost optimization and how AWS helps
39:10 What's coming in the future
42:04 Closing
Links
SQS as a queueing service: https://aws.amazon.com/sqs/
Infrastructure as code: https://aws.amazon.com/cloudformation/
API Gateway to create and manage APIs: https://aws.amazon.com/api-gateway/
Cognito for user management: https://aws.amazon.com/cognito/
Redshift as a data warehouse: https://aws.amazon.com/redshift/
AWS Glue for managing data ETL: https://aws.amazon.com/glue
Athena to query data using serverless SQL: https://aws.amazon.com/athena
Want to watch or listen to other episodes? https://go.aws/3GvrSF8
Because... it's episode 0x111!
Preamble
Shameless plug
COVID-19
April 4-6, 2022 - Québec Numérique - SéQCure 2022
April 4-8, 2022 - Québec Numérique - Semaine numériqc
Notes
Superglue: Orca Security Research Team Discovers AWS Glue Vulnerability
BreakingFormation: Orca Security Research Team Discovers AWS CloudFormation Vulnerability
Contributors
Nicolas-Loïc Fortin
Cédric Thibault
Credits
Audio editing by Intrasecure inc
Virtual studio via Zoom
This isn't a story about NPM even though it's inspired by NPM. Twice. The maintainer of the "colors" NPM library intentionally changed the library's behavior from its expected functionality to printing garbage messages. The library was exhibiting the type of malicious activity that typically comes from a compromised package. Only this time users of the library, which easily number in the thousands, discovered this was sabotage by the package maintainer himself. This opens up a broader discussion on supply chain security than just provenance. How do we ensure open source tools receive the investments they need -- security or otherwise? For that matter, how do we ensure internal tools receive the investments they need? Log4j was just one recent example of seeing old code appear in surprising places. Scams and security flaws in (so-called) web3 and when decentralization looks centralized, SSRF from a URL parsing problem, vuln in AWS Glue, 10 vulns used for CI/CD compromises! Show Notes: https://securityweekly.com/asw180 Segment resources: - https://www.bleepingcomputer.com/news/security/dev-corrupts-npm-libs-colors-and-faker-breaking-thousands-of-apps/ - https://www.zdnet.com/article/when-open-source-developers-go-bad/ - https://www.zdnet.com/article/log4j-after-white-house-meeting-google-calls-for-list-of-critical-open-source-projects/ - https://www.theregister.com/2022/01/17/open_source_closed_wallets_big/ - https://blog.google/technology/safety-security/making-open-source-software-safer-and-more-secure/ - https://docs.linuxfoundation.org/lfx/security/onboarding-your-project - https://arstechnica.com/information-technology/2016/03/rage-quit-coder-unpublished-17-lines-of-javascript-and-broke-the-internet/ Visit https://www.securityweekly.com/asw for all the latest episodes! Follow us on Twitter: https://www.twitter.com/securityweekly Like us on Facebook: https://www.facebook.com/secweekly
Scams and security flaws in (so-called) web3 and when decentralization looks centralized, SSRF from a URL parsing problem, vuln in AWS Glue, 10 vulns used for CI/CD compromises Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw180
About Dan
Dan is CISO and VP of Cybersecurity for Shipt, a Target subsidiary. He worked previously as a Distinguished Engineer on Target's cloud infrastructure. He served as CTO for Joe Biden's 2020 presidential campaign. Prior to that, Dan worked with the Hillary for America tech team through the Groundwork, and contributed as a founding developer on Spinnaker while at Netflix. Dan is an O'Reilly-published author and avid public speaker.
Links:
Shipt: https://www.shipt.com/
Twitter: https://twitter.com/danveloper
LinkedIn: https://www.linkedin.com/in/danveloper
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: It seems like there is a new security breach every day. Are you confident that an old SSH key, or a shared admin account, isn't going to come back and bite you? If not, check out Teleport. Teleport is the easiest, most secure way to access all of your infrastructure. The open source Teleport Access Plane consolidates everything you need for secure access to your Linux and Windows servers—and I assure you there is no third option there—Kubernetes clusters, databases, and internal applications like the AWS Management Console, Jenkins, GitLab, Grafana, Jupyter Notebooks, and more. Teleport's unique approach is not only more secure, it also improves developer productivity. To learn more, visit goteleport.com. And no, that is not me telling you to go away; it is goteleport.com.
Corey: Writing ad copy to fit into a 30-second slot is hard, but if anyone can do it, the folks at Quali can.
Just like their Torque infrastructure automation platform can deliver complex application environments anytime, anywhere, in just seconds instead of hours, days, or weeks. Visit Qtorque.io today and learn how you can spin up application environments in about the same amount of time it took you to listen to this ad.
Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. Sometimes I talk to people who are involved in working on the nonprofit slash political side of the world. Other times I talk to folks who are deep in the throes of commercial businesses, and I obviously personally spend more of my time on one of those sides of the world than I do the other. But today's guest is a little bit different. Dan Woods is the CISO and VP of Cybersecurity at Shipt, a division of Target where he's worked for a fair number of years, but took some time off for his side project, the side hustle as the kids call it, as the CTO for the Biden campaign. Dan, thank you for joining me.
Dan: Yeah. Thank you, Corey. Happy to be here.
Corey: So, you have an interesting track record as far as your career goes; you've been at Target for a long time. You were a distinguished engineer—not to be confused with 'extinguished engineer,' which is just someone in whom the fire has finally gone out. And from there you went from being a distinguished engineer to a VP slash CISO, which generally looks a lot less engineer-like, and a lot more, at least in my experience, like sitting in a whole lot of executive-level meetings, managing teams, et cetera. Was that, in fact, an individual contributor—or IC—move into a management track, or am I just misunderstanding this because these are commonly overloaded terms in our industry?
Dan: Yeah, yeah, no, that's exactly right. So, IC to leadership: two distinct tracks, distinct career paths. It was something that I'd spent a number of years thinking about and more or less working toward, and making sure that it was the right path for me to go.
The interesting thing about the break that I took in the middle of Target, when I was CTO for the campaign, is that that was a leadership role, right. I led the team. I managed the team. I did performance reviews and all of that kind of managerial stuff, but I also sat down and did a lot of tech. So, it was kind of like a mix of being a senior executive, but also still continuing to be a distinguished engineer. So, then the natural path out of that for me was to make a decision about whether I continue to be an individual contributor or go into a leadership track. And I felt, for a number of reasons, that my interests aligned more with being on the leadership side of the world, and so that's how I've ended up where I am.
Corey: And correct me if I'm wrong, because generally speaking, political campaigns are not usually my target customers, given the fact that they're turning the entire AWS environment off in a few months—win or lose—and yeah, that, in fact, remains the best way to save money on your AWS bill; it's hard for me to beat that. But at that point most of the people you're working with are in large part volunteers, I would imagine. So, managing in a traditional sense of, "Well, we're going to have your next quarterly review." Well, your candidate might not be in the race then. And what, we're going to put you on a PIP? And what exactly, you're going to stop letting me volunteer here? You're going to dock them pay—you're not paying me for this. It becomes an interesting management challenge, I would imagine, just because the people you're working with are passionate and volunteering, and a lot of traditional management and career advice doesn't necessarily map one-to-one, I would have to assume.
Dan: That is the best way that I've heard it described yet. I try to explain this to folks sometimes, and it's kind of difficult to get that message across: that there is sort of a base level organization that exists, right.
There were full-time employees who were a part of the tech team, a really great group of folks, especially from very early on, willing to join the campaign and be a part of what it was that we were doing. And then there was this whole ecosystem of folks who just wanted to volunteer, folks who wanted to be a part of it but didn't want to leave their 9:00 to 5:00, who wanted to come in. One of the most difficult things about it—we rely on volunteers very heavily in the political space, and we're very grateful for all the folks who step up and volunteer with organizations that they feel passionate about. In fact, one of the best little tidbits of wisdom the President imparted to me at one point, we were having dinner at his house very early on in the campaign, and he said, "The greatest gift that you can give somebody is your time." And I think that's so incredibly true. So, the folks who volunteer, it's really important, and we're really grateful that they're all there.
In particular, where it becomes difficult is that you need somebody to manage the volunteers who are there. You need somebody to come up with work and check that work is getting done, because while it's great that folks want to volunteer five, ten hours a week, or whatever it is that they can put in, we also have very real things that need to get done, and they need to get done in a timely manner.
So, we had a lot of difficulty, especially early on in the campaign, utilizing the volunteers to the extent that we could, because we were such a small and scrappy team and because everybody who was working on the campaign at the time had a lot of responsibilities that they needed to see through on their own.
And so getting into this, it's quite literally a full-time job having to sit down and follow up with volunteers, make sure that they have the appropriate amount of work, and make sure that we've set up our environment appropriately so that volunteers can come and go, and all of that kind of stuff, so yeah.
Corey: It's always an interesting joy looking at the swath of architectural decisions and how they came to be. I talked on a previous episode with Jackie Singh, who, I believe, after your tenure as CISO, was involved on the InfoSec side of things, and she was curious as to your thought process or rationale with a lot of the initial architectural decisions that she talked about on her episode, which I'm sure she didn't intend it this way, but I am going to blatantly miscategorize as, "Justify yourself. What were you thinking?" Usually it takes years for that kind of, "I don't understand what's going on here so I'm playing data center archeologist or cloud spelunker." This was a very short window. How did decisions get made architecturally as far as what you're going to run things on? It's been disclosed that you were on AWS, for example. Was that a hard decision?
Dan: No, not at all. Not at all. We started out the campaign—I in particular was one of the first employees hired onto the campaign, and the idea all along was that we're not going to be clever, right? We're basically just going to develop what needs to be developed. And the idea with that was that a lot of the code that we were going to sit down and write, or a lot of the infrastructure that we were going to build, was going to be glue. Not AWS Glue, right, ideally, but just glue that would bind data streams together, right? So, data movement: vendor A produces a CSV file for you and it needs to end up in a bucket somewhere. So, somebody needs to write the code to make that happen, or you need to find a sufficient vendor who can make that happen.
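That CSV-to-bucket kind of glue is simple enough to sketch. The snippet below is a hypothetical illustration, not the campaign's actual code: the bucket layout, the vendor name, and the normalization rules are all made up, and the upload uses boto3's standard `put_object` call, left stubbed here so the transform logic stands on its own.

```python
import csv
import io
from datetime import date, datetime, timezone


def normalize_rows(raw_csv: str) -> list[dict]:
    """Parse a vendor CSV, lower-casing and stripping the header names and values."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [
        {key.strip().lower(): value.strip() for key, value in row.items()}
        for row in reader
    ]


def destination_key(vendor: str, received: date) -> str:
    """Partition incoming files by vendor and date, a common S3 key layout."""
    return f"incoming/{vendor}/{received:%Y/%m/%d}/records.csv"


def land_file(raw_csv: str, vendor: str, bucket: str, s3_client=None) -> str:
    """Normalize a vendor file and drop it in the data bucket; returns the key."""
    rows = normalize_rows(raw_csv)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    key = destination_key(vendor, datetime.now(timezone.utc).date())
    if s3_client is not None:  # in a real run: s3_client = boto3.client("s3")
        s3_client.put_object(Bucket=bucket, Key=key, Body=out.getvalue())
    return key
```

In practice a job like this runs on a schedule or fires off the vendor's file drop; the point is just that the "glue" is a small transform plus a put, not a framework.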
There's a lot more vendors today, believe it or not, than there were two years ago that are doing much better in that kind of space, but two years ago we had the constraints of time and money.
Our idea was that the code that we were going to write was going to be for those purposes. What it actually turned into is that in other areas of the business—and I will call it a business because we had formalized roadmaps and different departments working on different things—in areas where we didn't have enough money to purchase a solution, we had the ability to go and write software.
The interesting thing about this group of technologists who came together, especially early on in the campaign, to build out the tech team is that most of them came from an enterprise software development background, right? So, we had the know-how of how to build things at scale, how to do continuous delivery and continuous deployment, how to operate a cloud-native environment, and how to build applications for that world.
So, we ended up doing things like writing an API for managing our donor vetting pipeline, right? And that turned into a complex system of Lambda functions and continuous delivery for a variety of different services that facilitated that pipeline. We also built an architecture for our mobile app; there were plenty of companies that wanted to sell us a mobile app and we just couldn't afford it, so we ended up writing the mobile app ourselves.
So, after some point in time, what we said was: we actually have a fairly robust and complex software infrastructure. We have a number of microservices that are doing various things to facilitate the operation of the business, and something that we need to do is spend a little bit of time and make sure that we're building this in a cohesive way, right?
And part of what that means was that, for example, we had to take a step back and say, "Okay, we need to have a unified identity service." We can't have a different identity—or we can't have every single individual service creating its own identity. We need to have—
Corey: I really wish you could pass that lesson out on some of the AWS service teams.
Dan: [laugh]. Yes, I know. I know. Yeah. So, we went through—
Corey: So, there were some questionable choices you made in there, like you started that with the beginning of, "Well, we had no time, which is fine, and no budget. So, we chose AWS." It's like, "Oh, that looks like the exact opposite direction of a great decision, given, you know, my view on it." Stepping past that entirely, you are also dealing with challenges that I don't think map very well to things that exist in the corporate world. For example, you said you had to build a donor vetting pipeline. In the corporate world, I didn't have that; it's one of those, "Why in the world would I get in the way of people trying to give me money?" And the obvious answer in your case is federal law, and it turns out that the best outcome generally does not involve serving prison time. So, you have to address these things in ways that don't necessarily have a one-to-one analog in other spaces.
Dan: That's true. That's true. Yes, correct to the federal law thing. Our more pressing reason to do this kind of thing was that we made a commitment very early on in the campaign that we wouldn't take money from executives of the gas and oil industry, for example. There were a bunch of other commitments that were made, but it was inconceivable for us to have enough people that could possibly go manually through those filings.
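At its core, an automated pass over those filings is a screening function. Everything in this sketch is illustrative rather than the campaign's real pipeline: the field names, the blocked-industry keywords, and the title list are hypothetical, and a production version would route the held records to a human review queue rather than rejecting them outright.

```python
# Hypothetical donor-vetting screen: hold for review any filing whose
# employer/occupation suggest a category the campaign pledged not to accept.
BLOCKED_INDUSTRY_KEYWORDS = {"oil", "gas", "petroleum"}  # illustrative list
EXECUTIVE_TITLES = {"ceo", "cfo", "coo", "president", "executive", "vp"}


def needs_review(donation: dict) -> bool:
    """Return True when a filing should be held for human review."""
    employer = donation.get("employer", "").lower()
    occupation = donation.get("occupation", "").lower()
    in_blocked_industry = any(k in employer for k in BLOCKED_INDUSTRY_KEYWORDS)
    is_executive = any(t in occupation for t in EXECUTIVE_TITLES)
    return in_blocked_industry and is_executive


def triage(donations: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split filings into auto-cleared and held-for-review piles."""
    held = [d for d in donations if needs_review(d)]
    cleared = [d for d in donations if not needs_review(d)]
    return cleared, held
```

A check like this chews through thousands of filings in seconds, which is where the "thousands of human hours" saving comes from; the hard part in practice is keyword quality and the review queue, not the code.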
So, for us to be able to build an automated system for doing that meant that we were literally saving thousands of human hours and still getting a beneficial result out of it.
Corey: And everything you do is subject to intense scrutiny by folks who are willing to make hay out of anything. If it had leaked at the time, I would have absolutely done some ridiculous nonsense thing about, "Ah, clearly, looking at this AWS bill, Joe Biden supports managed NAT gateway data processing pricing." And it's absolutely not that, but that doesn't stop people from making hay about this, because headlines are going to be headlines. And do you also have to deal with the interesting aspect—industrial espionage is always kind of a thing, but by and large most companies don't have to worry that effectively half of the population is diametrically opposed to the thing they're trying to do, to the point where they might very well try to get insiders there to start leaking things out. Everything you do has to be built with optics in mind, working under tight constraints, and it seems like an almost insurmountable challenge, except for the fact that you actually pulled it off.
Dan: Yeah. Yeah. Yeah. We kept saying that the tech was not the story, right, and we wanted to do everything within our power to keep the conversation on the candidate and not on emails or AWS bills or any of that kind of stuff. And so we were very intentional about a lot of the decisions that we ended up making, with the idea that if the optics are bad, we pull away from the primary mission of what it is that we're trying to do.
Corey: So, what was it that qualified you to be the CTO of a—at the time very fledgling and uncertain—campaign, given that you were coming from a role where you were a distinguished engineer? Which is not nothing, let's be clear, but CTO is an executive-level role rather than a hands-on level of role.
And then if we go back in time, you were one of the founding developers of Spinnaker over at Netflix. And I have a lot of thoughts about Netflix technology and a lot of thoughts about Spinnaker as well, and none of those thoughts are, "This seems like a reasonable architecture I should roll out for a presidential campaign." So, please, don't take this as the insult that it probably sounds like, but why were you the CTO that got tapped?
Dan: Great question. And I think in some ways, right place, right time. But in other ways it probably speaks a little bit to the journey of how I've gotten anywhere in my career. So, going back to Netflix: yeah, I worked at Netflix. I had the opportunity to work with a lot of incredibly bright and talented folks there. One of the people in particular who I met there and became friends with was Corey Bertram, who worked on the core SRE team. Corey left Netflix to go off, and at the time he was just like, "I'm going to go do a political startup." The interesting thing about Netflix at the time—this was 2013—is that this was just after the Obama for America '12 campaign. And a bunch of folks from the OFA world came and worked at Netflix and a variety of other organizations in the Bay Area. Corey was not one of those people, but we were very well-connected with folks in that world, and Corey said he was going off to do a political startup. And so, after my non-mutual departure from Netflix, I was talking to Corey and he said, "Hey, why don't you come over and help us figure out how to do continuous delivery over on the political startup." That political startup turned into the Groundwork, which turned into essentially the tech platform for the Hillary for America campaign.
So, I had the opportunity, working for the Groundwork, to work very closely with the folks in the technology organization at HFA. And that got me more exposure to what that world is and more connections into that space.
And the Groundwork was run by Corey, but the CEO or head—I don't even know what he called himself—was Michael Slaby, who was President Obama's CTO in 2008 and had a bigger technical role in the 2012 campaign. And so his involvement in HFA '16 meant that he was a person who was very well connected for the 2020 campaign. And when we were out at a political conference in late 2018, he said, "Hey, I think that Vice President Biden is going to run. Do you have any interest in talking with his team?" And I said, "Yes, absolutely. Please introduce me."
And I had a couple of conversations with Greg Schultz, who was the campaign manager, and we just hit it off. And it was a really great fit. Greg was an excellent leader. He was a real visionary, exactly the person that President Biden needed. And he brought me in to set up the tech operation and get everything to where we ultimately won the primary and won the election after that.
Corey: And then, as all things do, it ended and the question then becomes, "Great, what's next?" And the answer for you was apparently, "Okay, I'm going to go back to Target-ish." Although now you're the CISO of a Target subsidiary, Shipt and Target's relationship is—again, I imagine I have that correct as far as you are in fact a subsidiary of Target, so it wasn't exactly a new company, but rather a transition back into the previous organization in a different role.
Dan: Yeah, correct. Yeah, it's a different department inside of Target, but my paycheck still comes from Target. [laugh].
Corey: So, what was it that inspired you to go into the CISO role? Because obviously security is everyone's job, which is what everyone says, which is why we get away with treating it like it's nobody's job, because shared responsibilities tend to work out that way.
Dan: Yeah.
Corey: And you've done an awful lot of stuff that was not historically deeply security-centric, although there's always an element passing through it.
Now, going into a CISO role as someone without a deep InfoSec background that I'm aware of, what drove that? How did that work?
Dan: You know, I think the most correct answer is that security has always been in my blood. I think like most people who started out—
Corey: There are medications for that now.
Dan: Yeah, [laugh] good. I might need them. [laugh]. I think like most folks who are kind of my era, who started seriously getting into software development and computer system administration in the late '90s and early 2000s, cybersecurity: it wasn't called cybersecurity at the time. It wasn't even called InfoSec, right; it was just called, I don't know, dabbling or something. But that was a gateway for getting into Linux system administration, network engineering, so forth and so on.
And for a short period of time I became—when I was getting my RHCE certification way back in the day, I became pretty entrenched in network security, and that was a really big focus area that I spent a lot of time on, and I got whatever the supplemental network security certification from Red Hat was at the time. And then I realized pretty quickly that the world isn't going to need box operators for very long, and this was just before the DevOps revolution had really come around and more and more things were automated.
So, we were still doing hand deployments. I was still dropping WAR files onto a file system and restarting Apache. That was our deployment process.
And I saw the writing on the wall and I said, "If I don't dedicate myself to becoming first and foremost a software engineer, then I'm not going to have a very good time in technology here." So, I jumped out of that and got into software development, and that's where my software engineering career evolved from.
So, when I was CTO for the campaign, I like to tell people that I was a hundred percent a CTO, a hundred percent a CIO, and a hundred percent a CISO for the first 514 days of the campaign, or whatever it was. So, I was 300 percent doing all of the top-level technology jobs for the campaign, but cybersecurity was without a doubt the one that we would drop everything for every single time. And that was by necessity; we were constantly under attack on the campaign. And a lot of my headspace during that period of time was dedicated to how do we make sure that we're doing things in the most secure way?
So, when I left—when I came back into Target and came back in as a distinguished engineer—there were some areas where they were hoping that I could contribute positively and help move a couple of things along. The idea the whole time was always going to be for me to jump into a leadership position. And I got a call one day from Rich Agostino, who's the CISO for Target, and he said, "Hey, Shipt needs a cybersecurity operation built out and you're looking for a leadership role. Would you be interested in doing this?" And believe it or not, I had missed the world of cybersecurity so much that when the opportunity came up I said, "Yes, absolutely. I'll dive in head first." And so that was the path for getting there.
Corey: This episode is sponsored by our friends at Oracle. HeatWave is a new high-performance accelerator for the Oracle MySQL Database Service.
Although I insist on calling it "my squirrel." While MySQL has long been the world's most popular open source database, shifting from transacting to analytics required way too much overhead and, ya know, work. With HeatWave you can run your OLTP and OLAP (don't ask me to ever say those acronyms again) workloads directly from your MySQL database and eliminate the time-consuming data movement and integration work, while also performing 1100X faster than Amazon Aurora and 2.5X faster than Amazon Redshift, at a third of the cost. My thanks again to Oracle Cloud for sponsoring this ridiculous nonsense.
Corey: My take on the cybersecurity space is, I think, a little different than most people's journeys through it. The reason I started a Thursday edition of the Last Week in AWS newsletter is to cover the security happenings in the AWS ecosystem for folks who don't have the word security in their job titles, because I used to dabble in that space a fair bit. The problem I found is that as you move up the ladder to executives that are directors, VPs, and CISOs, the language changes significantly. And it almost becomes a dialect of corporate-speak that I find borderline impenetrable, versus the real-world terminology we're talking about when, "Okay, let's make sure that we rotate credentials on a reasonably expected basis where it makes sense," et cetera, et cetera. It almost becomes much more of a box-checking compliance exercise, slash layering on as much as you possibly can for plausible deniability for the inevitable breach that one day hits, instead of actually driving towards better outcomes. And I understand that's a cynical, strange perspective, but I started talking to people about this, and I'm very far from alone in that, which is why people are subscribing to that newsletter, and that's the corner of the market I wanted to start speaking to.
So, given that you've been an engineer and practitioner trying to build things, and now a security executive as well: is my assessment right that the further up you go, the more the entire messaging and purpose change? Or is that just someone who's been in the trenches for too long and hasn't been on that side of the world, and I have a certain lack of perspective that would make this all very clear? Which I freely accept, if that's the case.

Dan: No, I think that you're right for a lot of organizations. I think that that's a hundred percent true, and it is exactly as you described: a box-checking exercise for a lot of organizations. Something that's important to remember about Target is—Target was the subject of a data breach in 2012, and that was before there were data breaches every single day, right? Now, we look at a data breach and we say that's just going to happen, right, that's the cost of doing business. But back in 2012 it was really a very big story and it was a very big deal, and there was quite a bit of activity in the Target technology world after that breach. So, it reshaped the culture quite literally: new executives were brought in, but there's this whole world of folks inside of Target who have never forgotten that, right, and work day-in and day-out to make sure that we don't have another breach.

So, security at Target is a main, centrally thought-about kind of thing. So, it's very much something that is a part of the way that people operate inside of Target. So, coming over to Shipt—obviously, Shipt is—it is a subsidiary. It is a part of Target, but it doesn't have that long history and hasn't had that same kind of experience. The biggest thing that we really needed at Shipt is first and foremost to get the program established, right? So, I'm three or four months into the job now and we've tripled the team size.
I've been—

Corey: And you've stayed out of the headlines, which is basically the biggest and most accurate breach indicator I've found so far.

Dan: So far so good. Well, the thing that we want to do, though, is to be able to bring that same kind of focus on the importance of cybersecurity that Target has into the world of engineering at Shipt. And it's not just a compliance game, and it's not just a thing where we're just trying to say that we have it. We're actually trying to make sure that as we go forward we've got all these best practices from an organization that's been through the bad stuff that we can adopt into our day-to-day and kind of get it done.

When we talk about it at an executive level, obviously we're not talking about the penetration tests done by the red team earlier that day, right? We're not calling any of that stuff out in particular. But we do try to summarize it in a way that makes it clear that the thing that we're trying to do is build a security-minded culture, and not just check some boxes and make sure that we have the appropriate titles in the appropriate places so that our insurance rates go down, right? We're actually trying to keep people safe.

Corey: There's a lot to be said for that. With the Target breach back in—I want to say 2012, was it?

Dan: 2012. Yep.

Corey: Again, it was a wake-up call, and the argument that I've always seen is that everyone is vulnerable—it just depends on how much work it's going to take to get there. And, credit where due, there was a complete rotation at the executive levels. Whether that's fair or not, people have different opinions on it; my belief has always been you own the responsibility, regardless of who's doing the work.

And there's no one as fanatical as a convert, on some level, and you've clearly been doing a lot of things in the right direction. The thing that always surprises me is when I wind up seeing these surveys in the industry that—what is it?
65% of companies say that they would be vulnerable to a breach, and everybody said, “Oh, we should definitely look at those companies.” My argument is, “Hang on a sec. I want to talk to the 35% who say, ‘oh, we're impenetrable,'” because, spoiler, you are not. No one is. It's just the question of how heavy the lift is and how much work it's going to take to get there. I do know that mouthing off in public about how perfect the security of anything is, is the best way to more or less climb to the top of a mountain during a thunderstorm, hold up a giant metal rod, and curse the name of God. It doesn't lead to positive outcomes, basically ever. In turn, this also leads to companies not talking about security openly.

I find that in many cases it is easier for me to get people to talk about their AWS bills than their InfoSec posture. And I do believe, incidentally, those two things are not entirely unrelated. But how do you view it? It was surprisingly easy to get Shipt's CISO to have a conversation with me here on this podcast. It is significantly more challenging at most other companies.

Dan: Well, in fairness, you've been asking me for about two-and-a-half years pretty regularly [laugh] to come on.

Corey: And I always say I will stop bothering you if you want. You said, “No, no. Ask me again in a few months. Ask me again after the election. Ask me again after—I don't know, like, the one-day delivery thing gets sorted out.” Whatever it happens to be. And that's fine. I follow up religiously, and eventually I can wear people down by being polite yet persistent.

Dan: So, your persistence is actually to credit here. No, I think, to your question though, I think that there's a good balance. There's a good balance in being open about what it is that you're trying to do versus over-sharing areas that maybe you're less proficient in, right? So, it wouldn't make a lot of sense for me to come on here and tell you the areas where we still need to develop in security.
But on the other side of things, I am very happy to come in and talk to you about how our incident response plan is evolving, right, and what our plan looks like for doing all of that kind of stuff.

Some of the best security practitioners who I've worked with in the world will tell you that you're not going to prevent a breach from a motivated attacker, and your job as CISO is to make sure that your response is appropriate, right, more so than anything. So, incident response is an area where today we're dedicating quite a bit of effort to building up our proficiency, and that's a very important aspect of the cybersecurity program that we're trying to build here.

Corey: And unlike the early days of a campaign, you still have to be ultra-conscious about security, but now you have the luxury of actually being able to hire security staff, because it turns out that, “Please come volunteer here,” is not, presumably, Shipt's hiring pitch.

Dan: That's correct. Yeah, exactly. We have a lot of buy-in from the rest of leadership to build out this program. Shipt's history with cybersecurity is one where there were a couple of folks who did a remarkably good job, for there just being two or three of them, for a really long period of time. They ran the cybersecurity operation; it very much was not a part of the engineering culture at Shipt, but there still was coverage.

Those folks left earlier in the year, all of them, simultaneously, unfortunately. And that's sort of how the position became open to me in the first place. But it also meant that I was quite literally starting with next to nothing, right?
And from that standpoint it made it feel a lot like the early days of the campaign, because I was having to build a team from scratch and having to get people motivated to come and work on this thing that had kind of an unknown future roadmap associated with it, and all of that kind of stuff.

But we've been very privileged to—because we have that leadership support, we're able to pay market rates and actually hire qualified and capable and competent engineers and engineering leaders to help build out the aspects of this program that we need. And like I said, we've managed to—we weren't exactly at zero when I walked in the door. So, when I say we were able to quadruple the team, it doesn't mean that we just added four zeros there, [laugh] but we've got a little bit over a dozen people focusing on all areas of security for the business that we can think of. And that's just going to continue to grow. So, it's exciting; it's a challenge. But having the support of the entire organization behind something like this really, really helps a lot.

Corey: I know we're running out of time for a lot of the interview, but one more question I want to ask you about: when you're the CISO for a nationally known politician who is running for the highest office, the risk inherent to getting it wrong is massive. This is one of those mistakes that will show indelibly for the rest of, well, one would argue, US history; you could arguably say that there will be consequences that go that far out.

On the other side of it, once you're done on the campaign, you're now the CISO at Shipt. And I am not in any way insinuating that the security of your customers, and your partners, and your data across the board is not important.
But it does not seem to me, from the outside, that it has the same, “If we get this wrong, there are repercussions that will extend into my grandchildren's time.” How do you find that your ability to care as deeply about this has changed, if it has?

Dan: My stress levels are a lot lower, I'll say that, but—

Corey: You can always spot the veterans on an SRE team—when I say veterans, I mean veterans from the armed forces—because, “No one's shooting at me. We can't serve ads right now. I'm really not going to run around and scream like, ‘My hair's on fire,' because this is nothing compared to what stress can look like.” And yeah, there's always a worse stressor, but, on some level, it feels like it would be an asset. And again, this is not to suggest you don't take security seriously. I want to be very clear on that point.

Dan: Yeah, yeah, no. The important challenge of the role is building this out in a way that we have coverage over all the areas that we really need, right, and that is actually the kind of stuff that I enjoy quite a bit. I enjoy starting a program. I enjoy seeing a program come to fruition. I enjoy helping other people build their careers out, and so I have a number of folks who are at earlier points in their career who I'm very happy that we have on our team, because I can see them grow and I can see them understand and set up what the next thing for them to do is.

And so when I look at the day-to-day here: I was motivated on the campaign by that reality of, like, there is some quite literal life-or-death stuff that is going to happen here. And that's a really strong pressure to make sure that you're doing all the right stuff at the right time.
In this case, my motivation is different, because I actually enjoy building this kind of stuff out and making sure that we're doing all the right stuff. Not having the stress of, like, this could be the end of the world if we get this wrong means that I can spend time focusing on making sure that the program is coming together as it should, and getting joy from seeing the program come together is where a lot of that motivation is coming from today. So, it's just different, right? It's a different thing, but at the end of the day it's very rewarding and I'm enjoying it, and I can see this continuing on for quite some time.

Corey: And I look forward to, ideally, getting you back in another two-and-a-half years, after I begin badgering you in two hours to come back on the show. If—

Dan: [laugh].

Corey: —people want to hear more about what you're up to, how you think about these things, potentially consider working with you, where can they find you?

Dan: The best place, although I've not been as active because it has been very busy the last couple of months: find me on Twitter, @danveloper, and find me on LinkedIn. You know, I posted a couple of blog posts about the technology choices that we made on the campaign that I think folks find interesting, and periodically I'll share out my thoughts on Twitter about whatever the most current thing is, Kubernetes or AWS about to go down or something along those lines. So, yeah, that's the best way. And I tweet out all the jobs and post all the jobs that we're hiring for on LinkedIn and all of that kind of stuff. So, the usual social channels. Just not Facebook.

Corey: Amen to that. And I will, of course, include links to those things in the [show notes 00:37:29]. Thank you so much for taking the time to speak with me. I appreciate it.

Dan: Thank you, Corey.

Corey: Dan Woods, CISO and VP of Cybersecurity at Shipt, also formerly of the Biden campaign, because wherever he goes he clearly paints a target on his back.
I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an incoherent rant that is no doubt tied to either politics or the alternate form of politics: Spinnaker.

Dan: [laugh].

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
About Thomas

Thomas Hazel is Founder, CTO, and Chief Scientist of ChaosSearch. He is a serial entrepreneur at the forefront of communication, virtualization, and database technology and the inventor of ChaosSearch's patented IP. Thomas has also patented several other technologies in the areas of distributed algorithms, virtualization, and database science. He holds a Bachelor of Science in Computer Science from the University of New Hampshire, is a Hall of Fame Alumni Inductee, and founded both student and professional chapters of the Association for Computing Machinery (ACM).

Links:

ChaosSearch: https://www.chaossearch.io

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by my friends at ThinkstCanary. Most companies find out way too late that they've been breached. ThinkstCanary changes this, and I love how they do it. Deploy canaries and canary tokens in minutes, and then forget about them. What's great is the attackers tip their hand by touching them, giving you one alert, when it matters. I use it myself, and I only remember this when I get the weekly update with a “we're still here, so you're aware” from them. It's glorious! There is zero admin overhead to this; there are effectively no false positives unless I do something foolish. Canaries are deployed and loved on all seven continents. You can check out what people are saying at canary.love. And their kubeconfig canary token is new and completely free as well. You can do an awful lot without paying them a dime, which is one of the things I love about them. It is useful stuff and not an, “ohh, I wish I had money.” It is spectacular!
Take a look; that's canary.love, because it's genuinely rare to find a security product that people talk about in terms of love. It really is a unique thing to see. Canary.love. Thank you to ThinkstCanary for their support of my ridiculous, ridiculous nonsense.

Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R, because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high-performance cloud compute at a price that—while sure, they claim it's better than AWS pricing—and when they say that, they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r dot com slash screaming.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn.
This promoted episode is brought to us by our friends at ChaosSearch. We've been working with them for a long time; they've sponsored a bunch of our nonsense, and it turns out that we've been talking about them to our clients since long before they were a sponsor, because it actually does what it says on the tin. Here to talk to us about that in a few minutes is Thomas Hazel, ChaosSearch's CTO and founder. First, Thomas, nice to talk to you again, and as always, thanks for humoring me.

Thomas: [laugh]. Hi, Corey. Always great to talk to you. And I enjoy these conversations that sometimes go up and down, left and right, but I look forward to all the fun we're going to have.

Corey: So, my understanding of ChaosSearch is probably a few years old, because it turns out I don't spend a whole lot of time meticulously studying your company's roadmap in the same way that you presumably do. When last we checked in with what the service did-slash-does, you were effectively solving the problem of data movement and querying that data. The idea behind data warehouses is generally something that's shoved onto us by cloud providers where, “Hey, this data is going to be valuable to you someday.” Data science teams are big proponents of this, because when you're storing that much data, their salaries look relatively reasonable by comparison. And the ChaosSearch vision was, instead of copying all this data out of an object store and storing it on expensive disks, and replicating it, et cetera, what if we queried it in place in a somewhat intelligent manner?

So, you take the data and you store it, in this case, in S3 or equivalent, and then just query it there, rather than having to move it around all over the place, which of course then incurs data transfer fees, you're storing it multiple times, and it's never in quite the format that you want it. That was the breakthrough revelation: you were Elasticsearch—now OpenSearch—API compatible, which was great.
And that was, sort of, the state of the art a year or two ago. Is that generally correct?

Thomas: No, you nailed our mission statement. You're exactly right. You know, the value of cloud object stores, S3, the elasticity, the durability, all these wonderful things—the problem was you couldn't get any value out of it, and you had to move it out to these siloed solutions, as you indicated. So, you know, our mission was exactly that: transform customers' cloud storage into an analytical database, a multi-model analytical database, where our first use case was search and log analytics, replacing the ELK stack and also replacing the data pipeline, the schema management, et cetera. We automate the entire step, raw data to insights.

Corey: It's funny we're having this conversation today. Earlier today, I was trying to get rid of a relatively paltry 200 gigs or so of small files on an EFS volume—you know, Amazon's version of NFS; it's like an NFS volume except you're paying Amazon for the privilege—great. And it turns out that it's a whole bunch of operations across a network on a whole bunch of tiny files, so I had to spin up other instances that weren't getting hit by spot terminations, and just fire up a whole bunch of threads. So, now the load average on that box is approaching 300, but it's plowing through, getting rid of that data finally.

And I'm looking at this saying, this is a quarter of a terabyte. Data warehouses are in the petabyte range. Oh, I begin to see aspects of the problem. Even searching that kind of data using traditional tooling starts to break down, which is sort of the revelation that Google had 20-some-odd years ago, and other folks have since solved for, but this is the first time I've had significant data that wasn't just easily searched with a grep. For those of you in the Unix world who understand what that means, condolences. We're having a support group meeting at the bar.

Thomas: Yeah.
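The cleanup Corey describes, many tiny files where each operation pays a network round trip, is usually attacked exactly as he says: by parallelizing the deletes. A minimal sketch of that approach, assuming the volume is mounted as a local path (the directory layout and worker count here are illustrative, not what he actually ran):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def delete_tree_parallel(root: str, workers: int = 64) -> int:
    """Unlink every file under `root` with a thread pool, then remove the
    emptied directories bottom-up. Returns the number of files deleted.
    Threads help because each unlink on a network filesystem like EFS is
    dominated by round-trip latency, not CPU."""
    paths = [os.path.join(d, f) for d, _, files in os.walk(root) for f in files]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(os.unlink, paths))  # drain the iterator to surface errors
    # Directories can only be removed once their contents are gone, so walk
    # bottom-up (topdown=False) and rmdir children before parents.
    for d, subdirs, _ in os.walk(root, topdown=False):
        for sub in subdirs:
            os.rmdir(os.path.join(d, sub))
    os.rmdir(root)
    return len(paths)
```

With dozens of threads in flight, the load average climbs just as described, but wall-clock time drops because the round trips overlap instead of serializing.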
And you know, I always thought, what if you could make cloud object storage like S3 high-performance and really transform it into a database? And so that warehouse capability, that's great. We like that. However, to manage it, to scale it, to configure it, to get the data into it—that was the problem.

That was the promise of a data lake, right? This simple in, and then this arbitrary schema-on-read, generic out. The problem next came: it became swampy, it was really hard, and that promise was not delivered. And so what we're trying to do is get all the benefits of the data lake: simple in, so many services naturally stream to cloud storage. Shoot, I would say every one of our customers is putting their data in cloud storage, because their data pipeline to their warehousing solution or Elasticsearch may go down and they're worried they'll lose the data.

So, what we say is, what if you just activated that data lake and got that ELK use case, got that BI use case, without that data movement, as you indicated, without that ETL-ing, without that data pipeline that you're worried is going to fall over? So, that vision has been Chaos. Now, we haven't talked in, you know, a few years, but this idea is that we're growing beyond just going after logs: we're going into new use cases, new opportunities, and I'm looking forward to discussing them with you.

Corey: It's a great answer that—though I have to call out that I am right there with you as far as inappropriately using things as databases. I know that someone is going to come back and say, “Oh, S3 is a database. You're dancing around it. Isn't that what Athena is?” Which is named, of course, after the Greek Goddess of spending money on AWS. And that is a fair question, but to my understanding, there's a schema story behind that which does not apply to what you're doing.

Thomas: Yeah, and that is so crucial: we like the relational access.
The time, cost, and complexity to get it into that scaled access you mentioned—I mean, it could take weeks, months to test it, to configure it, to provision it, and imagine if you got it wrong; you've got to redo it again. And so our unique service removes all that data pipeline schema management. And because of our innovation, because of our service, you do all schema definition on the fly, virtually, what we call views, on your indexed data, and you can publish an Elastic index pattern for that consumption, or a relational table for that consumption. And that's kind of leading the witness into things that we're coming out with this quarter, into 2022.

Corey: I have to deal with a little bit of, I guess, shame here, because yeah, I'm doing exactly what you just described. I'm using Athena to wind up querying our customers' Cost and Usage Reports, and we spend a couple hundred bucks a month on AWS Glue to wind up massaging those into the way that they expect it to be. And it's great. Ish. We hook it up to Tableau and can make those queries from it, and all right, it's great.

It just, burrr goes the money printer, and we somehow get access and insight to a lot of valuable data. But even that is knowing exactly what the format is going to look like. Ish. I mean, Cost and Usage Reports from Amazon are sort of aspirational when it comes to schema sometimes, but here we are. And that's been all well and good.

But now take the idea of log files. Even looking at the base case of sending logs from an application: great. Nginx, or Apache, or [unintelligible 00:07:24], or any of the various web servers out there all tend to use different logging formats just to describe the same exact things. Start spreading that across custom in-house applications, and getting signal from that is almost impossible.
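To make the divergence Corey describes concrete, here is the same HTTP request rendered in an Nginx/Apache "combined"-style line versus a JSON-structured line, each with its own parser. This is an illustrative sketch, not any product's ingestion code; the sample lines and the JSON field names are invented for the example:

```python
import json
import re

# The same request, logged two different ways by two different servers.
COMBINED = '203.0.113.7 - - [28/Nov/2021:10:02:11 +0000] "GET /cart HTTP/1.1" 200 512'
STRUCTURED = '{"client": "203.0.113.7", "method": "GET", "path": "/cart", "status": 200, "bytes": 512}'

# The combined format needs a regex tuned to its exact layout; every other
# text format needs its own.
COMBINED_RE = re.compile(
    r'(?P<client>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d+) (?P<bytes>\d+)'
)

def parse_combined(line: str) -> dict:
    m = COMBINED_RE.match(line)
    if m is None:
        raise ValueError(f"unrecognized log layout: {line!r}")
    d = m.groupdict()
    d["status"], d["bytes"] = int(d["status"]), int(d["bytes"])
    return d

# A structured line is one json.loads away, with no per-format regex to maintain.
def parse_structured(line: str) -> dict:
    return json.loads(line)

a, b = parse_combined(COMBINED), parse_structured(STRUCTURED)
assert a["status"] == b["status"] == 200  # same request, two parsers
```

Multiply the regex branch by every web server, every in-house application, and every schema change, and the "almost impossible" claim starts to look literal.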
“Oh,” people say, “So, we'll use a structured data format.” Now, you're putting logging and structuring requirements on application developers who don't care in the first place, and now you have a mess on your hands.

Thomas: And it really is a mess. And that challenge is, it's so problematic. And schemas keep changing. You know, we have customers, and one of the reasons why they go with us is their log data is changing; they didn't expect it. Well, in your data pipeline and your Athena database, that breaks. That brings the system down.

And so our system uniquely detects that and manages that for you, and then you can pick and choose how you want to export it in these views dynamically. So, you know, it's really not rocket science, but the problem is, a lot of the technology that we're using is designed for static, fixed thinking. And then to scale it is problematic and time-consuming. So, you know, Glue is a great idea, but it has a lot of sharp [pebbles 00:08:26]. Athena is a great idea, but it also has a lot of problems.

And so that data pipeline, you know, it's not for digitally native, active, new use cases, new workloads coming up hourly, daily. Think about this long-term: a lot of that data prep and pipelining is something we address so uniquely, but where the customer really cares is the value of that data, right? And so if you're spending toil trying to get the data into a database, you're not answering the questions, whether it's for security, for performance, for your business needs. That's the problem. And you know, that agility, that time-to-value is where we're very uniquely coming in, because we start where your data is, raw, and we automate the process all the way through.

Corey: So, when I look at the things that I have stuffed into S3, they generally fall into a couple of categories.
There are a bunch of logs for things I never asked for nor particularly wanted, but AWS is aggressive about that, first routing through CloudTrail so you can get charged 50 cents per gigabyte ingested. Awesome. And of course, large static assets: images I have done something to that are colloquially now known as shitposts, which is great. Other than logs, what could you possibly be storing in S3 that lends itself to, effectively, the type of analysis that you built around this?

Thomas: Well, our first use case was the classic log use cases: app logs, web service logs. I mean, CloudTrail, it's famous; we had customers that gave up on Elastic, and definitely gave up on relational, where you can do a couple of changes and your permutation of attributes for CloudTrail is going to bring you to your knees. And people just say, “I give up.” Same thing with Kubernetes logs. And so it's the classic—whether it's CSV, whether it's JSON, whether it's log types—we auto-discover all that.

We also allow you, if you want, to override that and change the parsing capabilities through a UI wizard, and we do discover what's in your buckets. You know that term data swamp, and not knowing what's in your bucket? We have a facility that will index that data and actually create a report for you, so you know what's in there. Now, if you have text data, if you have log data, if you have BI data, we can bring it all together, but the real pain is at the scale. So classically, app logs, system logs, many devices sending IoT-type streams is where we really come in—Kubernetes—where they're dealing with terabytes of data per day, and managing an ELK cluster at that scale. Particularly on a Black Friday.

Shoot, some of our customers like—Klarna is one of them; credit card payments—they're ramping up for Black Friday, and one of the reasons why they chose us is our ability to scale, when maybe you're doing a terabyte or two a day and then it goes up to twenty, twenty-five. How do you test that scale? How do you manage that scale?
And so for us, the data streams are, traditionally with our customers, the well-known log types, at least in the log use cases. And the challenge is scaling it, is getting access to it, and that's where we come in.

Corey: I will say, the last time you were on the show a couple of years ago, you were talking about the initial logging use case and you were speaking, in many cases aspirationally, about where things were going. What a difference a couple of years has made. Instead of talking about what hypothetical customers might want, or what they might be able to do, you're just able to name-drop them off the top of your head. You have scaled to approximately ten times the number of employees you had back then. You've—

Thomas: Yep. Yep.

Corey: —raised, I think, a total of—what, 50 million?—since then.

Thomas: Uh, 60 now. Yeah.

Corey: Oh, 60? Fantastic.

Thomas: Yeah, yeah.

Corey: Congrats. And of course, how do you do it? By sponsoring Last Week in AWS, as everyone should. I'm taking clear credit for that every time someone announces a round; that's the game. But no, there is validity to it, because telling fun stories and sponsoring exciting things like this only carry you so far. At some point, customers have to say, yeah, this is solving a pain that I have; I'm willing to pay you money to solve it.

And you've clearly gotten to a point where you are addressing the needs of those customers at a pretty fascinating clip. It's bittersweet from my perspective, because it seems like the majority of your customers have not come from my nonsense anymore. They're finding you through word of mouth, they're finding you through more traditional—read as boring—ad campaigns, et cetera, et cetera. But you've built a brand that extends beyond just me. I'm no longer viewed as the de facto ombudsperson for any issue someone might have with ChaosSearch on Twitter. It's kind of, “Aww, the company grew up. What happened there?”

Thomas: No, [laugh] listen, this—you were great.
We reached out to you to tell our story, and I've got to be honest, a lot of people came by and said, “I heard something on Corey Quinn's podcast,” et cetera. And it's come a long way now. Now, we have, you know, companies like Equifax, multi-cloud—Amazon and Google.

They love the data lake philosophy, the centralized approach, where use cases are now available within days, not weeks and months, whether it's logs and BI. Correlating across all those data streams, it's huge. We mentioned Klarna, [APM Performance 00:13:19], and, you know, we have Armor for SIEM, and Blackboard for [Observers 00:13:24].

So, it's funny—yeah, it's funny. When I first was talking to you, I was like, “What if? What if we had this customer, that customer?” And we were building the capabilities, but now that we have it, now that we have customers—yeah, I guess maybe we've grown up a little bit. But hey, listen, you're always near and dear to our heart, because we remember, you know, when you stopped by our booth at re:Invent several times. And we're coming to re:Invent this year, and I believe you are as well.

Corey: Oh, yeah. But for people listening to this: if they're listening the day it's released, this will be during re:Invent. So, by all means, come by the ChaosSearch booth and see what they have to say. For once, they have people who aren't me who are going to be telling stories about these things. And it's fun. Like I joke, it's nothing but positive here.

It's interesting from where I sit seeing the parallels here. For example, we have both had—how we say—adult supervision come in. You have a CEO, Ed, who came over from IBM Storage. I have Mike Julian, whose first love language is, of course, spreadsheets. And it's great, on some level, realizing that, wow, this company has eclipsed my ability to manage these things myself and put my hands on everything. And eventually, you have to start letting go. It's a weird growth stage, and it's a heck of a transition. But—

Thomas: No, I love it.
You know, I mean, I think when we were talking, we were maybe 15 employees. Now, we're pushing 100. We brought on Ed Walsh, who's an amazing CEO. It's funny, I told him about this idea—I invented this technology roughly eight years ago—and he's like, "I love it. Let's do it." And I wasn't ready to do it. So, you know, five, six years ago, I started the company, always knowing that, you know, I'd give him a call once we got the plane up in the air. And it's been great to have him here for the next level up, right, of execution and growth and business development and sales and marketing. So, you're exactly right. I mean, we were a young pup several years ago when we were talking to you and, you know, we're a little bit older, a little bit wiser. But no, it's great to have Ed here. And just the leadership in general; we've grown immensely.

Corey: Now, we are recording this in advance of re:Invent, so there's always the question of, "Wow, are we going to look really silly based upon what is being announced when this airs?" Because it's very hard to predict some things that AWS does. And let's be clear, I always stay away from predictions, just because first, I have a bit of a knack for being right. But also, when I'm right, people will think, "Oh, Corey must have known about that and is leaking," whereas if I get it wrong, I just look like a fool. There's no win for me if I start doing the predictive dance on stuff like that. But I have to level with you, I have been somewhat surprised that, at least as of this recording, AWS has not moved more in your direction, because storing data in S3 is kind of their whole thing, and querying that data through something that isn't Athena has been a bit of a reach for them that they're slowly starting to wrap their heads around.
But their UltraWarm nonsense—which is just, okay, great naming there—what is the point of continually having a model where, oh yeah, we're going to just age out the stuff that isn't actively being used into S3, rather than coming up with a way to query it there? Because you've done exactly that, and please don't take this as anything other than a statement of fact: they have better access to what S3 is doing than you do. You're forced to deal with this thing entirely from a public API standpoint, which is fine. They could theoretically change the behavior of aspects of S3 to unlock these use cases if they chose to do so. And they haven't. Why is it that you're the only folks that are doing this?

Thomas: No, it's a great question, and I'll give them props for continuing to push the data lake [unintelligible 00:17:09] to the cloud providers' S3, because it was really where I saw the world. Lakes, I believe in them. I love them. They love them. However, they promote moving the data out to get access, and it seems so counterintuitive: why wouldn't you leave it in and put these services on top, make them more intelligent? So, it's funny, I've trademarked 'Smart Object Storage.' I actually trademarked—I think you [laugh] were a part of this—'UltraHot,' right? Because why would you want UltraWarm when you can have UltraHot? And the reason, I feel, is that if you're using Parquet for the Athena [unintelligible 00:17:40] store, or Lucene for Elasticsearch, these two index technologies were not designed for cloud storage, for real-time streaming off of cloud storage. So, the trick is, you have to build UltraWarm to get it off of what they consider cold S3 into warmer memory or SSD-type access. What we did, the invention I created, was that first read is hot. That first read is fast. Snowflake is a good example. They give you a ten-terabyte demo example, and if you have a big instance and you do that first query, maybe several orders or groups, it could take an hour to warm up.
The second query is fast. Well, what if the first query is in seconds as well? And that's where we really spent the last five, six years building out the tech and the vision behind this. Because I like to say, you go to a doctor and say, "Hey, Doc, every single time I move my arm, it hurts." And the doctor says, "Well, don't move your arm." It's things like that. To your point, it's like, why wouldn't they? I would argue, one, you have to believe it's possible—we're proving that it is—and two, you have to have the technology to do it. Not just the index, but the architecture. So, I believe they will go this direction. You know, little birdies always say that all these companies understand this need. Shoot, Snowflake is trying to be lake-y; Databricks is trying to really bring this warehouse-lake concept. But you still do all the pipelining; you still have to do all the data management the way that you don't want to do it. It's not a lake. And so my argument is that it's innovation on why. Now, they have money; they have time. But, you know, we have a big head start.

Corey: I remembered last year at re:Invent they released a, shall we say, significant change to S3 in that it enabled read-after-write consistency, which is awesome for, again, those of us in the business of misusing things as databases. But for some folks—the majority of folks, I would say—it was a, "I don't know what that means and therefore I don't care." And that's fine. I have no issue with that. There are other folks, some of my customers for example, who are suddenly, "Wait a minute. This means I can sunset this entire janky sidecar metadata system that is designed to make sure that we are consistent in our use of S3, because it now does it automatically under the hood?" And that's awesome. Does that change mean anything for ChaosSearch?

Thomas: It doesn't, because of our architecture. We're an append-only, write-once scenario, so update-in-place isn't part of our viewpoint.
My viewpoint is that if you're seeing S3 as the database and you need that type of consistency, it makes sense why you'd want it, but because of our distributed fabric, our stateless architecture, our append-only nature, it really doesn't affect us. Now, I talked to the S3 team, and I said, "Please, if you're coming up with this feature, it better not be slower." I want S3 to be fast, right? And they said, "No, no. It won't affect performance." I'm like, "Okay. Let's keep that up." And so to us, any type of S3 capability, we'll take advantage of it if it benefits us, whether it's consistency, as you indicated, performance, or functionality. But we really keep the constructs of S3 access to really limited features: list, put, get. [roll-on 00:20:49] policies to give us read-only access to your data, and a location to write our indices into your account, and then our distributed fabric, our service, acts on those indices and queries them or searches them to resolve whatever analytics you need. So, we made it pretty simple, and that has allowed us to make it high performance.

Corey: I'll take it a step further, because you want to talk about changes since the last time we spoke: it used to be that this was on top of S3—you could store your data anywhere you wanted, as long as it was S3 in the customer's account. Now, you're also supporting one-click integration with Google Cloud's object storage, which, great. That does mean, though, that you're not dependent upon provider-specific implementations of things like a consistency model for how you've built things. It really does use the lowest common denominator—to my understanding—of object stores. Is that something that you're seeing broad adoption of, or is this one of those areas where, well, you have one customer on a different provider, but almost everything lives on the primary? I'm curious what you're seeing for adoption models across multiple providers.

Thomas: It's a great question.
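(Editor's note: the constrained API surface Thomas describes—list, put, get, plus a read-only policy on customer data—can be sketched in a few lines. This is an illustrative, hypothetical interface backed by a dict, not ChaosSearch's actual code; a real client would wrap S3 or GCS calls.)

```python
# Hypothetical sketch of a "lowest common denominator" object-store
# interface limited to the three calls mentioned: list, put, get.
# An in-memory dict stands in for the cloud bucket.
class ObjectStore:
    def __init__(self):
        self._objects = {}  # key -> bytes

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

    def list(self, prefix: str = "") -> list:
        return sorted(k for k in self._objects if k.startswith(prefix))

# Index segments written into a dedicated location, as described above.
store = ObjectStore()
store.put("indices/segment-0001", b"...index bytes...")
store.put("indices/segment-0002", b"...index bytes...")
print(store.list("indices/"))  # both segment keys, sorted
```

Keeping the interface this narrow is what makes the service portable across providers: every object store offers some version of these three calls.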
We built an architecture purposely to be cloud-agnostic. I mean, we use compute in a containerized way, we use object storage in a very simple construct—put, get, list—and we went over to Google because that made sense, right? We have customers on both sides. I would say Amazon is the gorilla, but Google's trying to get there and growing. We had a big customer, Equifax, that's on both Amazon and Google, but we offer the same service. To be frank, it looks like the exact same product. And it should, right? Whether it's Amazon Cloud or Google Cloud, multi-select: I want to choose either one and get the same thing. I would say that different business types are using each one, and the bulk of our business is on Amazon, but we just this summer released our SaaS offering on Google, so it's growing. And you know, it's funny, you never know where it comes from. So, we have one customer—actually DigitalRiver—as one of our customers on Amazon for logs, but we're growing in working together to do BI on GCP, or on Google. And so it's kind of funny; they have two departments on two different clouds with two different use cases. And so do they want unification? I'm not sure, but they definitely have their BI on Google and their operations in Amazon. It's interesting.

Corey: You know, it's important to me that people learn how to use the cloud effectively. That's why I'm so glad that Cloud Academy is sponsoring my ridiculous nonsense. They're a great way to build in-demand tech skills the way that, well, personally, I learn best: by doing, not by reading. They have live cloud labs that you can run in real environments that aren't going to blow up your own bill—I can't stress how important that is. Visit cloudacademy.com/corey. That's C-O-R-E-Y; don't drop the "E." Use Corey as a promo code as well. You're going to get a bunch of discounts on it with a lifetime deal—the price will not go up.
It is limited time; they assured me this is not one of those things that is going to wind up being a rug-pull scenario, oh no no. Talk to them, tell me what you think. Visit cloudacademy.com/corey, C-O-R-E-Y, and tell them that I sent you!

Corey: I know that I'm going to get letters for this, so let me just call it out right now. Because I've been a big advocate of pick a provider—I care not which one—and go all-in on it. And I'm sitting here congratulating you on extending to another provider, and people are going to say, "Ah, you're being inconsistent." No. I'm suggesting that you as a provider have to meet your customers where they are, because if someone is sitting in GCP and your entire approach is, "Step one, migrate those four petabytes of data right on over here to AWS," they're going to call you the jackhole that you would be by making that suggestion and go immediately for option B, which is literally anything that is not ChaosSearch, just based upon that core misunderstanding of their business constraints. That is the way to think about these things. For the vendor position that you are in as an ISV—Independent Software Vendor, for those not up on the lingo of this ridiculous industry—you have to meet customers where they are. And it's the right move.

Thomas: Well, you just said it. Imagine moving terabytes and petabytes of data.

Corey: It sounds terrific if I'm a salesperson for one of these companies working on commission, but for the rest of us, it sounds awful.

Thomas: We really are a data fabric across clouds, within clouds. We're going to go where the data is, and we're going to provide access where that data lives. Our whole philosophy is the no-movement movement, right? Don't move your data. Leave it where it is and provide access at scale. And so you may have services in Google that naturally stream to GCS; let's do it there. Imagine moving that amount of data over to Amazon to analyze it, and vice versa. In 2022, we're going to be in Azure.
They're a totally different type of business, users, and personas, but you're getting asked, "Can you support Azure?" And the answer is, "Yes," and, "We will in 2022." So, to us, if you have cloud storage, if you have compute, and it's a big enough business opportunity in the market, we're there. We're going there. When we first started, we were talking to MinIO—remember that open-source object storage platform?—we ran on our laptops, we ran—it's this [unintelligible 00:25:04] Dr. Seuss thing—"We run over here; we run over there; we run everywhere." But the honest truth is, you're going to go with the big cloud providers where the business opportunity is, and offer the same solution, because the same solution is valued everywhere: simple in; value out; cost-effective; long retention; flexibility. That sounds so basic, but you mention this all the time with the Rube Goldberg Amazon diagrams we see time and time again. It's like, if you looked at that and you were from an alien planet, you'd be like, "These people don't know what they're doing. Why is it so complicated?" And the simple answer is, I don't know why people think it's complicated. To your point about Amazon, why won't they do it? I don't know, but if they did, things would be different. And being honest, I think people are catching on. We do talk to Amazon and others. They see the need, but they also have to build it; they have to invent technology to address it. And using Parquet and Lucene is not the answer.

Corey: Yeah, it's too much of a demand on the producers of that data rather than the consumer. And yeah, I would love to be able to go upstream to application developers and demand they do things in certain ways. It turns out as a consultant, you have zero authority to do that. As a DevOps team member, you have limited ability to influence it, but it turns out that being the 'department of no' quickly turns into being the 'department of unemployment insurance' because no one wants to work with you.
And collaboration—contrary to what people wish to believe—is a key part of working in a modern workplace.

Thomas: Absolutely. And it's funny, the demands on IT are getting harder; actually getting the employees to build out the solutions is getting harder. And so a lot of that time is in the pipeline, is the prep, is the schema, the sharding, et cetera, et cetera, et cetera. My viewpoint is that should be automated away. More and more databases are being autotuned, right? This whole knobs and this and that. To me, Glue is a means to an end. I mean, let's get rid of it. Why can't Athena know what to do? Why can't object storage be Athena, and vice versa? I mean, to me, all this moving through all these services—the classic Amazon viewpoint, even their diagrams, of having this centralized repository of S3, move it all out to your services, get results, put it back in, then take it back out again, move it around—it just doesn't make much sense. And so, to us, I love S3, love the service. I think it's brilliant—Amazon's first service, right?—but from there, get a little smarter. That's where ChaosSearch comes in.

Corey: I would argue that S3 is, in fact, a modern miracle. And one of those companies saying, "Oh, we have an object store; it's S3-compatible." It's like, "Yeah. We have S3 at home." Look at S3 at home, and it's just basically a series of failing Raspberry Pis. But you have this whole ecosystem of things that have built up and sprung up around S3. It is wildly understated just how scalable and massive it is. There was an academic paper recently that won an award on how they use automated reasoning to validate what is going on in the S3 environment, and they talked about hundreds of petabytes in some cases. And folks are saying, ah, S3 is hundreds of petabytes. Yeah, I have clients storing hundreds of petabytes. There are larger companies out there.
Steve Schmidt, Amazon's CISO, was recently at a Splunk keynote where he mentioned that in security info alone, AWS itself generates 500 petabytes a day that then gets reduced down to a bunch of stuff, and some of it gets loaded into Splunk. I think. I couldn't really hear the second half of that sentence over the sound of all of the Splunk salespeople in that room becoming excited so quickly you could hear it.

Thomas: [laugh]. I love it. If I could be so bold, that S3 team, they're gods. They are amazing. They created such an amazing service, and when I started playing with S3—now, I guess, 2006 or '7—I mean, we were using it as a repository, URL access to get images. I was doing a virtualization [unintelligible 00:29:05] at the time—

Corey: Oh, the first time I played with it, "This seems ridiculous and kind of dumb. Why would anyone use this?" Yeah, yeah. It turns out I'm really bad at predicting the future. Another reason I don't do the prediction thing.

Thomas: Yeah. And when I started this company officially, five, six years ago, I was thinking about S3, and I was thinking about HDFS not being a good answer. And I said, "I think S3 will actually achieve the goals and performance we need." It's a distributed file system. You can run parallel puts and parallel gets. And the performance that I was seeing when the data was a certain way, a certain size—"Wait, you can get high performance." And you know, when I first turned on the engine, now four or five years ago, I was like, "Wow. This is going to work. We're off to the races." And now, obviously, we're more than just an idea, as when we first talked to you. We're a service. We deliver benefits to our customers in logs. And shoot, this quarter alone we're coming out with new features, not just in logs—which I'll talk about in a second—but in direct SQL access.
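(Editor's note: the "parallel puts and parallel gets" point is the key performance property Thomas is leaning on. A minimal sketch of the fan-out pattern, with an in-memory dict standing in for S3 and a hypothetical `get` function where a real client would issue HTTP GETs:)

```python
# Illustrative only: fan out gets across a thread pool, the way a
# query engine might fetch many index segments from object storage
# concurrently instead of one at a time.
from concurrent.futures import ThreadPoolExecutor

# Fake "bucket" of 64 index segments.
segments = {f"seg-{i:04d}": bytes([i % 256]) * 4 for i in range(64)}

def get(key: str) -> bytes:
    return segments[key]  # a real client would issue an HTTP GET here

with ThreadPoolExecutor(max_workers=8) as pool:
    blobs = list(pool.map(get, sorted(segments)))

print(len(blobs))  # all 64 segment payloads fetched
```

Because object-store requests are independent, throughput scales roughly with the number of concurrent requests, which is what makes "first read is fast" plausible at all.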
But you know, one thing that you hear time and time again—we talked about it—is JSON, CloudTrail, and Kubernetes; this is a real nightmare. And so one thing that we've come out with this quarter is the ability to virtually flatten. Now, you've heard time and time again, "Okay. I'm going to pick and choose my data because my database can't handle it, whether it's Elastic or, say, relational." And all of a sudden, "Shoot, I don't have that. I've got to reindex that." And so what we've done is we've created an index technology—that we were always planning to come out with—that indexes the raw JSON blob, and in the data refinery, post-index, you can select how to unflatten it. Why is that important? Because all that tooling, whether it's Elastic or SQL, is now available. You don't have to change anything. Whereas Snowflake and BigQuery have these proprietary JSON APIs that none of these tools know how to use to get access to the data. Or you pick and choose. And so when you have a CloudTrail, and you need to know what's going on, if you picked wrong, you're in trouble. So, this new feature we're calling 'Virtual Flattening'—or I don't know what we're—we have to work with the marketing team on it. And we're also bringing—this is where I get kind of excited—to the Elastic world, the ELK world, we're bringing correlations into Elasticsearch. And like, how do you do that? They don't have the APIs. Well, our data refinery, again, has the ability to correlate index patterns into one view. A view is an index pattern, so all those same constructs that you had in Kibana, or Grafana, or the Elastic API still work. And so, no more denormalizing, no more trying to hodgepodge query over here, query over there. You're actually going to have correlations in Elastic, natively. And we're excited about that. And one more push on the future, Q4 into 2022: we'll be giving early access to S3 SQL access.
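(Editor's note: the "virtual flattening" idea—index the raw nested JSON once, then pick a flattened view afterward—can be approximated with a generic flattener. This is a sketch of the concept, not ChaosSearch's implementation; the event and field names are illustrative.)

```python
# A minimal sketch of flattening nested JSON (e.g. a CloudTrail or
# Kubernetes event) into dotted column names, the kind of tabular
# view a post-index "flattening" step might expose.
def flatten(obj, prefix=""):
    flat = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))  # recurse into nested objects
        else:
            flat[path] = value
    return flat

event = {"eventName": "PutObject",
         "userIdentity": {"type": "IAMUser", "userName": "ops"}}
print(flatten(event))
# {'eventName': 'PutObject', 'userIdentity.type': 'IAMUser',
#  'userIdentity.userName': 'ops'}
```

The point of doing this as a view rather than at ingest time is that "if you picked wrong" you re-derive the view, instead of reindexing the data.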
And, you know, as I mentioned, correlations in Elastic. But we're going full in on publishing our [TPCH 00:31:56] report; we're excited about publishing those numbers, as well as not just giving early access, but going GA at the first of the year, next year.

Corey: I look forward to it. This is also—I guess it's impossible to have a conversation with you, even now, where you're not still forward-looking about what comes next. Which is natural; that is how we get excited about the things that we're building. But so much less of what you're doing now in our conversations has focused around what's coming, as opposed to the neat stuff you're already doing. I had to double-check when we were talking just now about, oh yeah, is that Google Cloud object store support still something that is roadmapped, or is that out in the real world? No, it's very much here in the real world, available today. You can use it. Go click the button, have fun. It's neat to see at least some evidence that not all roadmaps are wishes and pixie dust. The things that you were talking to me about years ago are established parts of ChaosSearch now. It hasn't been just, sort of, frozen in amber for years, or months, or these giant periods of time. Because, again—yeah, don't sell me vaporware; I know how this works. The things you have promised have come to fruition. It's nice to see that.

Thomas: No, I appreciate it. We talked a little while ago, now a few years ago, and it was a bit aspirational, right? We had a lot to do, we had more to do. But now we have big customers using our product, solving their problems, whether it's security, performance, operations—again, at scale, right? The real pain is—sure, you have a small ELK cluster or small Athena use case—but when you're dealing with terabytes to petabytes, trillions of rows, billions are now small.
Millions don't even exist, right? And if you're graduating from computer science in college and you say the word "trillion," they're like, "Nah. No one does that." And like you were saying, people do petabytes and exabytes. That's the world we're living in, and that's something that we really went hard at, because these are challenging data problems, and this is where we feel we uniquely sit. And again, we don't have to break the bank while doing it.

Corey: Oh, yeah. Or at least as of this recording, there's a meme going around, again, from an old internal Google video, of, "I just want to serve five terabytes of traffic," and it's an internal Google discussion of, "I don't know how to count that low." And, yeah.

Thomas: [laugh].

Corey: But there's also value in being able to address things at much larger volume. I would love to see better responsiveness options around things like Deep Archive, because the idea of being able to query that—even if you can wait a day or two—becomes really interesting just from the perspective of: at that point, the current cost for one petabyte of data in Glacier Deep Archive is 1,000 bucks a month. That is 'why would I ever delete data again?' pricing.

Thomas: Yeah. You said it. And what's interesting about our technology is, unlike, let's say, Lucene—where when you index, it could be 3, 4, or 5x the raw size—our representation is smaller than gzip. So, it is a full representation, so why don't you store it efficiently long-term in S3? Oh, by the way, with Glacier—we support Glacier too. And so, I mean, it's amazing; the cost of data with cloud storage is dramatic, and if you can make it hot and activated, that's the real promise of a data lake. And, you know, it's funny, we use our own service to run our SaaS—we log our own data, we monitor, we alert, have dashboards—and I can't tell you how cheap our service is to ourselves, right?
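(Editor's note: Corey's "1,000 bucks a month per petabyte" figure is easy to sanity-check from the Glacier Deep Archive list price at around the time of this recording, roughly $0.00099 per GB-month in us-east-1; pricing varies by region and changes over time.)

```python
# Back-of-the-envelope check on the Deep Archive economics quoted above.
# Assumes the approximate us-east-1 list price: $0.00099 per GB-month.
price_per_gb_month = 0.00099
gb_per_pb = 1_000_000  # decimal units, as storage pricing typically uses

monthly_cost = price_per_gb_month * gb_per_pb
print(round(monthly_cost))  # roughly $990/month for one petabyte
```

Which rounds to the "1,000 bucks a month" in the conversation, and is indeed 'why would I ever delete data again?' territory.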
Because it's so cost-effective for the long tail. Not just, oh, a few weeks; we store a whole year's worth of our operational data, so we can go back in time to debug something or figure something out. And a lot of that's savings—actually, huge savings: cloud storage with a distributed elastic compute fabric that is serverless. These are things that seem so obvious now, but if you have SSDs, and you're moving things around, and, you know, a team of IT professionals trying to manage it, it's not cheap.

Corey: Oh yeah, that's the story. It's like, "Step one, start paying for using things in cloud." "Okay, great. When do I stop paying?" "That's the neat part. You don't." And it continues to grow and build. And again, this is the thing I learned running a business that focuses on this: the people working on this, in almost every case, are more expensive than the infrastructure they're working on. And that's fine. I'd rather pay people than technologies. And it does help reaffirm, on some level, that—people don't like this reminder—you have to generate more value than you cost. So, when you're sitting there spending all your time trying to save money with, "Oh, I've listened to ChaosSearch talk about what they do a few times. I can probably build my own and roll it at home." It's, I've seen the kind of work that you folks have put into this—again, you have something like 100 employees now; it is not just you building this—my belief has always been that if you can buy something that gets you 90, 95% of the way there, great. Buy it, and then yell at whoever's selling it to you for the rest of it, and that'll get you a lot further than, "We're going to do this ourselves from first principles." Which is great for a weekend project, for something that you have a passion for, but in production, mistakes show. I've always been a big proponent of buying wherever you can. It's cheaper, which sounds weird, but it's true.

Thomas: And we do the same thing.
We have single-sign-on support; we didn't build that ourselves, we use a service. Auth0 is one of our providers now that owns that [crosstalk 00:37:12]—

Corey: Oh, you didn't roll your own authentication layer? Why ever not? Next, you're going to tell me that you didn't roll your own payment gateway when you wound up charging people on your website to sign up?

Thomas: You got it. And so, I mean, do what you do well. Focus on what you do well. If you're repeating what everyone seems to do over and over again—time, costs, complexity, and… service—it makes sense. You know, I'm not trying to build storage; I'm using storage. I'm using a great, wonderful service: cloud object storage. Use what works, what works well, and do what you do well. And what we do well is make cloud object storage analytical and fast. So, call us up, and we'll take away that 2 a.m. call you get when your cluster falls down, or when you have a new workload and you were going to go to the—I don't know, the beach house—and now the weekend's shot, right? Spin it up, stream it in. We'll take over.

Corey: Yeah. So, if you're listening to this and you happen to be at re:Invent—which is sort of an open question: why would you be at re:Invent while listening to a podcast? And then I remember how long the shuttle lines are likely to be, and yeah. So, if you're at re:Invent, make it on down to the show floor, visit the ChaosSearch booth, tell them I sent you, watch for the wince; that's always worth doing. Thomas, if people have better decision-making capability than the two of us do, where can they find you if they're not in Las Vegas this week?

Thomas: So, you can find us online at chaossearch.io. We have so much material: videos, use cases, testimonials. You can reach out to us, get a free trial. We have a self-service experience where you connect to your S3 bucket and you're up and running within five minutes. So, definitely chaossearch.io. Reach out if you want a hand-held, white-glove experience POV.
If you have those types of needs, we can do that with you as well. And we'll have a booth at re:Invent; I don't know the booth number, but I'm sure either we've been assigned one or we'll find it out.

Corey: Don't worry. This year, it is a low enough attendance rate that I'm projecting you will not be as hard to find as in recent years. For example, there's only one expo hall this year. What a concept. If only it hadn't taken a deadly pandemic to get us here.

Thomas: Yeah. But you know, we'll have the ability to demonstrate Chaos at the booth, and really, within a few minutes, you'll say, "Wow. How come I never heard of doing it this way?" Because it just makes so much sense on why you do it this way versus the merry-go-round of data movement, and transformation, and schema management, let alone all the sharding that I know is a nightmare, more often than not.

Corey: And we'll, of course, put links to that in the [show notes 00:39:40]. Thomas, thank you so much for taking the time to speak with me today. As always, it's appreciated.

Thomas: Corey, thank you. Let's do this again.

Corey: We absolutely will. Thomas Hazel, CTO and Founder of ChaosSearch. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast episode, please leave a five-star review on your podcast platform of choice, whereas if you've hated this episode, please leave a five-star review on your podcast platform of choice along with an angry comment, because I have dared to besmirch the honor of your homebrewed object store running on top of some trusty and reliable Raspberries Pi.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production.
Stay humble.
Setting a new record for delay in editing, you can finally listen to Arjen, JM, and Guy discuss the news from April 2021. This was recorded nearly two months before it was released. News Finally in Sydney Amazon Transcribe Custom Language Models now support Australian English, British English, Hindi and US Spanish Multi-Attach for Provisioned IOPS io2 Now Available in Thirteen Additional AWS Regions AWS Transit Gateway Connect is now available in additional AWS Regions AWS CloudShell is now available in the Asia Pacific (Mumbai), Asia Pacific (Sydney), and Europe (Frankfurt) regions Serverless API Gateway Amazon API Gateway custom domain names now support multi-level base path mappings Lambda AWS Lambda@Edge changes duration billing granularity from 50ms down to 1ms Amazon CloudWatch Lambda Insights Now Supports AWS Lambda Container Images (General Availability) Amazon RDS for PostgreSQL Integrates with AWS Lambda AWS Lambda@Edge now supports Node 14.x Step Functions AWS Step Functions adds new data flow simulator for modelling input and output processing EventBridge Amazon EventBridge introduces support for cross-Region event bus targets AWS Chatbot now expands coverage of AWS Services monitored through Amazon EventBridge Amplify Data management is now generally available in the AWS Amplify Admin UI Amplify iOS now available via Swift Package Manager (SPM) AWS Amplify now orchestrates multiple Amazon DynamoDB GSI updates in a single deployment Containers eksctl now supports creating node groups using resource specifications and dry run mode AWS Secrets Manager Delivers Provider for Kubernetes Secrets Store CSI Driver EC2 & VPC Amazon EC2 Auto Scaling introduces Warm Pools to accelerate scale out while saving money Amazon VPC Flow Logs announces out-of-the-box integration with Amazon Athena MacSec Encryption for some Direct Connect (apologies, linking to this prevents the podcast from getting published :shrug:) New AWS Storage Gateway management console simplifies 
gateway creation and management AWS Batch now supports EFS volumes at the job level AWS Backup now supports cost allocation tags for Amazon EFS Backups Internet Group Management Protocol (IGMP) Multicast on AWS Transit Gateway is now available in major AWS regions worldwide Amazon EC2 enables replacing root volumes for quick restoration and troubleshooting Announcing availability of Red Hat Enterprise Linux with High availability for Amazon EC2 AWS Nitro Enclaves now supports Windows operating system Dev & Ops Dev Amazon CodeGuru Reviewer Updates: New Predictable Pricing Model Up To 90% Lower and Python Support Moves to GA | AWS News Blog Now available credential profile support for AWS SSO and Assume Role with MFA in the AWS Toolkit for Visual Studio AWS CodeDeploy improves support for EC2 deployments with Auto Scaling Groups AWS SAM CLI now supports AWS CDK applications - public preview Better together: AWS SAM and AWS CDK | AWS Compute Blog Proton AWS Proton allows adding and removing instances from an existing service AWS Proton introduces customer-managed environments AWS Proton adds an API to cancel deployments CloudFormation You can now deploy CloudFormation Stacks concurrently across multiple AWS regions using AWS CloudFormation StackSets AWS CloudFormation Command Line Interface (CFN-CLI) now supports TypeScript AWS CloudFormation Modules now Provides YAML and Delimiter Support Now reference latest AWS Systems Manager parameter values in AWS CloudFormation templates without specifying parameter versions You can now use macros and transforms in CloudFormation templates to create AWS CloudFormation StackSets Control Tower AWS Control Tower introduces changes to preventive S3 guardrails and updates to S3 bucket encryption protocols AWS Control Tower now provides configurable naming during Landing Zone setup Systems Manager AWS Systems Manager Run Command now displays more logs and enables log download from the console AWS Systems Manager Parameter Store now 
supports easier public parameter discoverability Customers can now use ServiceNow to track operational items related to AWS resources AWS Systems Manager Parameter Store now supports removal of parameter labels AWS Systems Manager now supports Amazon Elastic Container Service clusters AWS Systems Manager OpsCenter and Explorer now integrate with AWS Security Hub for diagnosis and remediation of security findings Security Firewalls How to Get Started with Amazon Route 53 Resolver DNS Firewall for Amazon VPC | AWS News Blog Reduce Unwanted Traffic on Your Website with New AWS WAF Bot Control | AWS News Blog AWS Firewall Manager now supports centralized management of Amazon Route 53 Resolver DNS Firewall AWS Firewall Manager now supports centralized deployment of the new AWS WAF Bot Control across your organization AWS WAF now supports Labels to improve rule customization and reporting Identity Review last accessed information to identify unused EC2, IAM, and Lambda permissions and tighten access for your IAM roles AWS Identity and Access Management now makes it easier to relate a user's IAM role activity to their corporate identity Other AWS Config launches the ability to track and visualize compliance change history of conformance packs AWS Security Hub Automated Response & Remediation Solution adds support for AWS Foundational Security Best Practices standard You now can use AWS CloudTrail to log Amazon DynamoDB Streams data-plane API activity Data Storage & Processing Glue Detect outliers and use dedicated transforms to handle outliers in AWS Glue DataBrew AWS Glue DataBrew now supports time-based, pattern-based and customizable parameters to create dynamic datasets AWS announces preview of AWS Glue custom blueprints AWS Glue now supports cross-account reads from Amazon Kinesis Data Streams AWS Glue now supports missing value imputation based on machine learning AWS announces data sink capability for the Glue connectors AWS Glue DataBrew announces native console 
integration with Amazon AppFlow to connect to data from SaaS (Software as a Service) applications and AWS services (in Preview) Redshift AQUA (Advanced Query Accelerator) – A Speed Boost for Your Amazon Redshift Queries | AWS News Blog Announcing cross-VPC support for Amazon Redshift powered by AWS PrivateLink Announcing general availability of Amazon Redshift native console integration with partners Announcing general availability of Amazon Redshift native JSON and semi-structured data support EMR Amazon EMR Release 5.33 now supports 10 new instance types Amazon EMR Studio is now generally available Athena Announcing general availability of Amazon Athena ML powered by Amazon SageMaker User Defined Functions (UDF) are now generally available for Amazon Athena RDS Amazon RDS for SQL Server now supports Extended Events Amazon RDS on VMware networking now simplified and more secure Other Amazon FSx and AWS Backup announce support for copying file system backups across AWS Regions and AWS accounts AWS Batch increases job scheduling and EC2 instance scaling performance Amazon Elasticsearch Service now supports integration with Microsoft Power BI AWS Ground Station now supports data delivery to Amazon S3 Amazon ElastiCache now supports publishing Redis logs to Amazon CloudWatch Logs and Kinesis Data Firehose AI & ML SageMaker Decrease Your Machine Learning Costs with Instance Price Reductions and Savings Plans for Amazon SageMaker | AWS News Blog New options to trigger Amazon SageMaker Pipeline executions (EventBridge) Other Detect abnormal equipment behavior with Amazon Lookout for Equipment — now generally available Amazon Fraud Detector now supports Batch Fraud Predictions Get estimated run time for forecast creation jobs while using Amazon Forecast Amazon Kendra launches dynamic relevance tuning Other Cool Stuff WorkSpaces Amazon WorkSpaces webcam support now Generally Available Amazon WorkSpaces now supports smart cards with the WorkSpaces macOS client application
IVS Amazon Interactive Video Service adds new CloudWatch Metrics Amazon Interactive Video Service adds support for recording live streams to Amazon S3 Connect Amazon Connect launches audio device settings for the custom Contact Control Panel (CCP) Amazon Connect allows contact center managers to configure agent settings in a custom Contact Control Panel (CCP) Other AWS RoboMaker now supports the ability to configure tools for simulation jobs Amazon AppStream 2.0 adds support for fully managed image updates Amazon Managed Service for Grafana now supports Grafana Enterprise upgrade, Grafana version 7.5, Open Distro for Elasticsearch integration, and AWS Billing reports AWS Cloud9 now supports Amazon Linux 2 environments CloudWatch Metric Streams – Send AWS Metrics to Partners and to Your Apps in Real Time | AWS News Blog Announcing open source robotics projects for AWS DeepRacer Announcing Moving Graphs for CloudWatch Dashboards Amazon Nimble Studio – Build a Creative Studio in the Cloud | AWS News Blog AWS Snow Family now enables you to order, track, and manage long-term pricing Snow jobs The Nanos AWS Console Mobile Application adds support for Asia Pacific (Osaka) region (Arjen) Amazon Connect reduces telephony rates in Cyprus, Belgium, and Portugal (Guy) AWS Cloud9 now supports Amazon Linux 2 environments (Jean-Manuel) Sponsors Gold Sponsor Innablr Silver Sponsors AC3 CMD Solutions DoIT International
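Among the Lambda items above, the @Edge change from 50 ms to 1 ms billing granularity is easy to quantify: short invocations are now billed for roughly what they actually run. A minimal sketch of the rounding, with made-up durations:

```python
import math

def billed_ms(duration_ms: float, granularity_ms: int) -> int:
    """Round a function's run time up to the nearest billing increment."""
    return math.ceil(duration_ms / granularity_ms) * granularity_ms

# A 3 ms @Edge invocation used to be billed as a full 50 ms slot;
# with 1 ms granularity you pay for just 3 ms.
old = billed_ms(3, 50)  # 50
new = billed_ms(3, 1)   # 3
print(old, new, f"{(old - new) / old:.0%} less billed time")
```

The shorter and spikier your edge functions, the bigger the saving; a function that always runs close to a 50 ms boundary sees little change.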
Catch up on the latest AWS news "while doing something else"! Radio-style broadcast 「毎日AWS」 (Daily AWS). Good morning, this is Fukushima, your Wednesday host. Today we introduce the updates released on 6/8. Share your impressions on Twitter with the hashtag #サバワ! ■ Talk script: https://blog.serverworks.co.jp/aws-update-2021-06-08 ■ UPDATE PICKUP: A code editor is now available in AWS Glue Studio; additional PostgreSQL versions are now available for RDS in AWS Outposts environments ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
Catch up on the latest AWS news "while doing something else"! Radio-style broadcast 「毎日AWS」 (Daily AWS). Good morning, this is Kobayashi, your Thursday host. Today we pick up the updates released on 3/31. Share your impressions on Twitter with the hashtag #サバワ! ■ UPDATE PICKUP: EC2 Serial Console is now generally available; Amazon API Gateway custom domain names now support multi-level base path mappings; AWS Glue custom blueprints announced in preview; AWS Glue DataBrew can now detect and transform outliers; AWS Glue DataBrew supports multiple parameters for creating dynamic datasets; AWS Config supports displaying compliance change history for conformance packs; AWS Data Exchange can now copy metadata from an existing product to a new product; Amazon Pinpoint supports new journey controls; Amazon Fraud Detector supports batch fraud predictions; Amazon EMR supports Amazon EC2 Instance Metadata Service v2; Amazon GameLift supports queue notifications; AWS Transit Gateway Connect increases the default limit for advertised dynamic routes; AWS Site-to-Site VPN increases the default limit for dynamic routes advertised to and from Transit Gateway ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
Catch up on the latest AWS news "while doing something else"! Radio-style broadcast 「毎日AWS」 (Daily AWS). Good morning, this is Sugaya, your Friday host. Today we pick up the updates released on 2/18. Share your impressions on Twitter with the hashtag #サバワ! ■ UPDATE PICKUP: AWS Glue Studio supports reading data not yet added to the catalog directly from Amazon S3 and inferring its schema; AWS Glue Studio jobs can now update AWS Glue Data Catalog tables; Amazon Pinpoint supports 10DLC (10-digit long code) and toll-free numbers; Amazon EC2 ml.Inf1 instances are now available for Amazon SageMaker in multiple regions including Tokyo; Amazon Redshift Query Editor supports several additional features; Amazon Elasticsearch Service adds Trace Analytics, a new distributed tracing feature ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
※ Our distribution platform was down, so this episode went out late...! Catch up on the latest AWS news "while doing something else"! Radio-style broadcast 「毎日AWS!」 (Daily AWS). Good morning, this is Kato from Serverworks. Today we introduce the 15 updates released on 11/24. Share your impressions on Twitter with the hashtag #サバワ! ■ UPDATE lineup: Amazon Managed Workflows for Apache Airflow launched; Amazon CloudWatch Application Insights supports automatic application discovery; Amazon Braket supports manual qubit allocation; Amazon CloudWatch Synthetics supports Python and Selenium; Amazon Elasticsearch Service supports Elasticsearch version 7.9; Amazon Elasticsearch Service supports Remote Reindex; AWS Storage Gateway Tape Gateway supports IBM Spectrum Protect 8.1.10; AWS Lambda supports Advanced Vector Extensions 2; AWS Lambda supports a batch window for Amazon SQS; Amazon FSx for Lustre supports expanding file system storage; AWS Glue supports workload partitioning; AWS Secrets Manager raises its limit to 5,000 requests per second; announcing Amazon Comprehend Events; Amazon ECS Cluster Auto Scaling provides more responsive scaling; Amazon RDS for SQL Server supports the business intelligence suite ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
Catch up on the latest AWS news "while doing something else"! Radio-style broadcast 「毎日AWS!」 (Daily AWS). Good morning, this is Kato from Serverworks. Today we introduce the 9 updates released on 11/11. Share your impressions on Twitter with the hashtag #サバワ! ■ UPDATE lineup: AWS Glue DataBrew, a new visual data preparation tool, released; Amazon ElastiCache supports memcached 1.6.6; the new Amazon S3 console is now generally available; Amazon Redshift supports the TIME and TIMETZ data types; FreeRTOS now includes IoT and AWS libraries; AWS Systems Manager Explorer can now display a multi-account, multi-Region summary of AWS Config compliance; AWS CodePipeline source actions support git clone for AWS CodeCommit; new digital course added on moving off legacy databases; new digital courses on security and IoT added to edX and Coursera ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
Catch up on the latest AWS news "while doing something else"! Radio-style broadcast 「毎日AWS!」 (Daily AWS). Good morning, this is Kato from Serverworks. Today we introduce the 13 updates released on 10/15. Share your impressions on Twitter with the hashtag #サバワ! ■ UPDATE lineup: Amazon RDS M6g and R6g instances powered by the AWS Graviton2 processor are now generally available; Amazon RDS for PostgreSQL read replicas now upgrade automatically in step with the primary instance; Amazon Aurora can now dynamically shrink storage; Amazon Redshift announces cross-database queries in preview; announcing AWS Budgets Actions; Amazon CloudWatch Synthetics announces a recorder that generates user-flow scripts; AWS IAM Access Analyzer supports archive rules for findings; Amazon Connect supports drill-down on real-time metrics; AWS Glue streaming ETL jobs support the Apache Avro format; AWS Glue crawlers support Amazon DocumentDB and MongoDB; announcing a tool that lets users run their own migrations under the End-of-Support Migration Program (EMP) for Windows Server; AWS Purchase Order Management is now generally available; a new classroom course for learning about Amazon EKS ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
Catch up on the latest AWS news "while doing something else"! Radio-style broadcast 「毎日AWS!」 (Daily AWS). Good morning, this is Kato from Serverworks. Today we introduce the 17 updates released on 10/8. Share your impressions on Twitter with the hashtag #サバワ! ■ UPDATE lineup: AWS Lambda Extensions announced in preview; Amazon CloudWatch Lambda Insights announced in preview; AWS Elastic Beanstalk can now deploy multi-container applications on the Amazon Linux 2 based Docker platform; AWS CodeArtifact supports AWS CloudFormation; AWS Security Hub announces a new UI for security standards; AWS OpsWorks supports a new version of Chef Automate; AWS Outposts supports Amazon ElastiCache; Amazon ElastiCache supports M6g and R6g instances; Amazon EMR now places master nodes in separate racks to reduce the risk of failures; AWS Glue streaming ETL jobs support automatic schema detection; AWS Compute Optimizer enhances EC2 instance recommendations using Amazon EBS metrics; whisper flows are now available in Amazon Connect chat; AWS Cost Categories supports hierarchies; Amazon Kinesis Data Analytics supports force stop and autoscaling status; Amazon Inspector supports several new operating systems; AWS Lake Formation supports cross-account database sharing; Amazon EventBridge supports dead-letter queues ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
Catch up on the latest AWS news "while doing something else"! Radio-style broadcast 「毎日AWS!」 (Daily AWS). Good morning, this is Kato from Serverworks. Today we introduce the 5 updates released on 9/23. Share your impressions on Twitter with the hashtag #サバワ! ■ UPDATE lineup: AWS Glue Studio launched, with visual job development and advanced monitoring; AWS Backup supports backing up Microsoft applications on EC2; Amazon RDS M6g/R6g (in preview) now support more database versions; AWS Security Hub releases 14 new automated security controls; new course on Coursera and edX, Building Modern Applications with AWS ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
In this Episode of AWS TechChat, Shane and Gabe perform a tech round up from July through to August of 2020. We started with containers as we spoke about ACK, the AWS Controllers for Kubernetes, which means you can leverage AWS services directly in your Kubernetes applications. Amazon EKS now supports UDP load balancing with the NLB, and sticking with Amazon EKS, it is now included in the Compute Savings Plan, a huge win for customers. Still with containers, Amazon ECS has launched the new ECS Optimized Inferentia AMI, making it easier for customers to run Inferentia-based containers on ECS. Compute wise, Inferentia-based EC2 instances (Inf1) are now available in additional regions, and EC2Launch is now at v2 with a range of new features; I particularly like that you can rename the administrator account. Graviton2-based instances make their way into a heap more regions, which is super awesome, and they can now be consumed by Amazon EKS; sticking with EKS, Fargate can now mount Amazon EFS based file systems. Amazon Braket is generally available, a development environment for you to explore and build quantum algorithms, test them on quantum circuit simulators, and run them on different quantum hardware technologies. We introduced a new EBS volume type, io2, which fits in between io1 and gp2 based volumes; it has five 9s of durability and up to 64,000 IOPS per volume. On the development front, AWS Step Functions adds support for string manipulation, new comparison operators, and improved output processing, and Amazon API Gateway adds integration with five AWS services, meaning you no longer need to proxy through code, as well as supporting enhanced observability via access logs.
Amazon Lightsail now has a CDN, Lightsail CDN, which is backed by Amazon CloudFront; it offers three fixed-price data plans, including an introductory plan that's free for 12 months. CloudFront adds additional geo-location headers for more fine-grained geo-targeting, as well as cache key and origin request policies, providing more options to control and configure the headers, query strings, and cookies that are used to compute the cache key or forwarded to your origin. Lastly, we introduced AWS Glue version 2, which has some sizeable changes around functionality, cost and speed.
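The cache key and origin request policies mentioned above are plain configuration objects attached to a CloudFront behavior. As a sketch, here is roughly what a cache policy config could look like when built for boto3's `create_cache_policy` call; the policy name, header, and query-string choices are invented, and the field names reflect the CloudFront CreateCachePolicy API as we understand it:

```python
# Key the cache on one viewer header and one query string; forward no cookies.
cache_policy_config = {
    "Name": "example-policy",  # hypothetical name
    "MinTTL": 1,
    "DefaultTTL": 86400,
    "MaxTTL": 31536000,
    "ParametersInCacheKeyAndForwardedToOrigin": {
        "EnableAcceptEncodingGzip": True,
        "HeadersConfig": {
            "HeaderBehavior": "whitelist",
            "Headers": {"Quantity": 1, "Items": ["CloudFront-Viewer-Country"]},
        },
        "CookiesConfig": {"CookieBehavior": "none"},
        "QueryStringsConfig": {
            "QueryStringBehavior": "whitelist",
            "QueryStrings": {"Quantity": 1, "Items": ["lang"]},
        },
    },
}

# With boto3 this would be submitted as (not run here):
# boto3.client("cloudfront").create_cache_policy(CachePolicyConfig=cache_policy_config)
print(sorted(cache_policy_config))
```

Everything listed under `ParametersInCacheKeyAndForwardedToOrigin` is both part of the cache key and forwarded to the origin, which is exactly the control the announcement describes.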
Catch up on the latest AWS news "while doing something else"! Radio-style broadcast 「毎日AWS」 (Daily AWS). Good morning, this is Kato from Serverworks. Today we introduce the 6 updates released on 8/10. Share your impressions on Twitter with the hashtag #サバワ! ■ UPDATE lineup: AWS Lambda supports IAM condition keys for VPC settings; Amazon ECS on the EC2 launch type can now collect more container network metrics; Amazon AppStream 2.0 supports local printer redirection; AWS Glue version 2.0 released, with jobs starting up to 10x faster; Amazon VPC Flow Logs expands CloudFormation support; Compute Savings Plans now cover AWS Fargate for Amazon EKS ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
In this episode Nick and Rob share the top launches from AWS, with interviews covering Graviton2-based M6g EC2 instances, AWS Glue Streaming ETL, Amazon A2I, and Amazon Kendra.
In this Episode of AWS TechChat, Shane and Pete embark on a different style of the show and share with you a lot of updates, over 30, and we tackle it like speed dating. We start the show with some updates: there are now an additional 2 AWS regions, Milan in Italy and Cape Town in South Africa. This brings the count to 24 Regions and 76 Availability Zones. Amazon GuardDuty has a price reduction for customers consuming it on the upper end of the scale: VPC flow log scanning is now 40% cheaper when your logs exceed 10,000 GB. Lots of database engine updates: • Database engine version updates across almost all engines. Microsoft SSAS (SQL Server Analysis Services) is now available on Amazon Relational Database Service (Amazon RDS) for SQL Server. • If you are currently running SSAS on Amazon Elastic Compute Cloud (Amazon EC2), you can now save costs by running SSAS directly on the same Amazon RDS DB instance as your SQL Server database. SSAS is currently available on Amazon RDS for SQL Server 2016 and SQL Server 2017 in the single-AZ configuration on both the Standard and Enterprise editions. • NoSQL Workbench for Amazon DynamoDB is now generally available. NoSQL Workbench is a client-side application, available for Windows and macOS, that helps developers build scalable, high-performance data models, and simplifies query development and testing. • Apache Kafka is now a target option for AWS Database Migration Service, and Amazon Managed Apache Cassandra Service is now available in public preview. Microsoft SQL Server on RDS now supports Read Replicas. Storage updates: • More Nitro-based Amazon EC2 systems receive IO performance updates. • Amazon FSx for Windows File Server now has a magnetic HDD option, which brings storage down to 1.3 cents per GB. • Amazon Elastic File System (Amazon EFS) announces a 400% increase in read operations for General Purpose mode file systems.
On the development front: • AWS Lambda@Edge now supports Node 12.x and Python 3.8. • The Amplify CLI adds support for additional AWS Lambda runtimes (Java, Go, .NET and Python) and Lambda cron jobs. • AWS Lambda now supports .NET Core 3.1. • Receive notifications for AWS CodeBuild, AWS CodeCommit, AWS CodeDeploy, and AWS CodePipeline in Slack, with no need to route through Amazon Simple Notification Service (SNS). • Amazon MSK adds support for Apache Kafka version 2.4.1. • Updates to AWS Deep Learning Containers for PyTorch 1.4.0 and MXNet 1.6.0. Container updates: • AWS Fargate launches platform version 1.4, which brings a raft of improvements. • Amazon Elastic Kubernetes Service (Amazon EKS) updates its service level agreement to 99.95%. • Amazon EKS now supports service-linked roles. • Amazon EKS adds envelope encryption for secrets with AWS Key Management Service (KMS). • Amazon EKS now supports Kubernetes version 1.15. • Amazon ECS supports, in preview, updating placement strategy and constraints for existing Amazon ECS services without recreating the service. Connect your managed call centre in the cloud: • Introducing Voicemail for Amazon Connect. • Amazon Connect adds custom terminating keypress for DTMF. Other updates: • New versions of Elasticsearch are available for Amazon Elasticsearch Service. • AWS DeepComposer is now shipping from Amazon.com. Speakers: Shane Baldacchino - Solutions Architect, ANZ, AWS Peter Stanski - Head of Solution Architecture, AWS AWS Events: AWS Summit Online https://aws.amazon.com/events/summits/online/ AWSome Day Online Conference https://aws.amazon.com/events/awsome-day/awsome-day-online/ AWS Innovate AIML Edition on-demand https://aws.amazon.com/events/aws-innovate/machine-learning/ AWS Events and Webinars https://aws.amazon.com/events/
Listen now as April Shen, FSA, CFA interviews Tom Peplow (Principal, Director of Product Development - LTS) on the interaction between actuarial modeling and big data. In this episode, Tom discusses the basic concepts of data warehousing and the tools and resources actuaries can use to learn more. Show Notes and Resources: 1) Anything by Ralph Kimball – he’s the forefather of data warehouse data models. 2) Data storage tools: Databricks, Snowflake, Google Bigtable, Amazon Redshift, Microsoft Synapse, Parquet 3) Streaming analytics / event processing: Kafka, Amazon Kinesis, Azure Stream Analytics 4) Data preparation: Power BI (using Power Query), Azure Data Factory Data Flows, Alteryx, AWS Glue 5) Data analytics: Tableau, Power BI 6) Other tools/languages: Jupyter, Python, R, C#, .NET 7) Chapter 5 of Roy Fielding’s dissertation, “Representational State Transfer (REST).” 8) Andrew White (Gartner) “Top Data and Analytics Predictions for 2019” https://blogs.gartner.com/andrew_white/2019/01/03/our-top-data-and-analytics-predicts-for-2019/ 9) Data warehouse is not a use case, Jordan Tigani (Google) https://www.youtube.com/watch?v=0I7eOQpDBHQ
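The Kimball-style dimensional modeling Tom points to centers on a fact table of measurements joined to descriptive dimension tables (a "star schema"). A toy sketch using SQLite, with invented table and column names loosely themed on premium transactions:

```python
import sqlite3

# Minimal star schema: one fact table surrounded by dimension tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date   (date_key INTEGER PRIMARY KEY, iso_date TEXT);
CREATE TABLE dim_policy (policy_key INTEGER PRIMARY KEY, product TEXT);
CREATE TABLE fact_premium (
    date_key   INTEGER REFERENCES dim_date(date_key),
    policy_key INTEGER REFERENCES dim_policy(policy_key),
    premium    REAL
);
""")
con.execute("INSERT INTO dim_date VALUES (20190101, '2019-01-01')")
con.execute("INSERT INTO dim_policy VALUES (1, 'term-life'), (2, 'annuity')")
con.executemany("INSERT INTO fact_premium VALUES (?, ?, ?)",
                [(20190101, 1, 120.0), (20190101, 2, 300.0), (20190101, 1, 80.0)])

# The typical analytical query: aggregate the fact table, sliced by a
# dimension attribute.
rows = con.execute("""
    SELECT p.product, SUM(f.premium)
    FROM fact_premium f JOIN dim_policy p USING (policy_key)
    GROUP BY p.product ORDER BY p.product
""").fetchall()
print(rows)  # [('annuity', 300.0), ('term-life', 200.0)]
```

The same shape scales up directly to the warehouse engines in the show notes (Redshift, Snowflake, Synapse); only the dialect and storage change.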
Flexibility is key when building and scaling a data lake, and by choosing the right storage architecture you will have agility to quickly experiment and migrate with the latest analytics solutions. In this session, we explore the best practices for building a data lake on Amazon S3 that allows you to leverage an entire array of AWS, open-source, and third-party analytics tools, helping you remain at the cutting edge. We explore use cases for analytics tools, including Amazon EMR and AWS Glue, and query-in-place tools like Amazon Athena, Amazon Redshift Spectrum, Amazon S3 Select, and Amazon Glacier Select.
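One concrete storage-architecture choice behind these best practices: query-in-place tools such as Athena and Redshift Spectrum can prune partitions when objects follow the Hive-style `key=value` prefix convention. A small sketch of that layout (dataset and file names are illustrative):

```python
from datetime import date

# Hive-style partitioned key layout, the convention Athena/Glue/EMR use
# for partition pruning on Amazon S3.
def object_key(dataset: str, d: date, filename: str) -> str:
    return f"{dataset}/year={d.year}/month={d.month:02d}/day={d.day:02d}/{filename}"

key = object_key("clickstream", date(2019, 12, 2), "part-0000.parquet")
print(key)  # clickstream/year=2019/month=12/day=02/part-0000.parquet
```

A query filtering on `year`, `month`, and `day` then scans only the matching prefixes instead of the whole dataset, which is what keeps per-query scan cost down.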
In this demo, learn how to upgrade to use AWS Lake Formation permissions. We start with a simple data lake that has fine-grained access control on the AWS Glue Data Catalog using AWS Identity and Access Management policies. We then upgrade to using Lake Formation permissions.
Want to build and secure a data lake without all the hassle? Learn how AWS Lake Formation and AWS Glue consolidate and simplify the steps of ingesting, encrypting, cataloging, transforming, and provisioning data. Understand how to make data readily accessible to different analytics services and users, while enforcing granular access control policies and audit logging. Hear about how AWS customers have implemented Lake Formation and AWS Glue for their businesses. Hear how NU SKIN has recently completed an all-in migration to AWS and leveraged AWS Lake Formation to manage their data lake.
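The granular access control the session describes is declarative: a Lake Formation grant names a principal, a Data Catalog resource, and a permission set. As a hypothetical illustration, the payload below mirrors the GrantPermissions API as we understand it; the account ID, role, database, and table names are placeholders:

```python
# A grant giving one IAM role SELECT on a single catalog table.
grant = {
    "Principal": {
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/analyst"
    },
    "Resource": {
        "Table": {"DatabaseName": "sales", "Name": "orders"},
    },
    "Permissions": ["SELECT"],
    "PermissionsWithGrantOption": [],  # the grantee cannot re-grant
}

# With boto3 this would be submitted as (not run here):
# boto3.client("lakeformation").grant_permissions(**grant)
print(sorted(grant))
```

Centralizing grants like this, instead of scattering S3 bucket policies and IAM statements, is the consolidation the session is about.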
This presentation was recorded prior to re:Invent. Learn how to upgrade to use Lake Formation permissions. We start with a simple data lake that has fine-grained access control (FGAC) on the AWS Glue Data Catalog using IAM policies. We then upgrade to using Lake Formation permissions.
Sony Interactive Entertainment built their EVE Fraud System as a big data platform to aggregate transactional activity of users, resulting in substantially increased approvals with no increase in declines or chargebacks from banks. In this session, learn how Sony used Amazon Kinesis Data Analytics, Amazon Kinesis Data Firehose, AWS Lambda, AWS Glue, Amazon Simple Storage Service (Amazon S3), and Amazon DynamoDB to generate aggregations and retrieve them in a single purchase transaction in less than 180 milliseconds.
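The sub-180 ms retrieval is only possible because the aggregation happens ahead of time on the streaming side, leaving the synchronous purchase path a single key lookup. A sketch of that pre-aggregation pattern in plain Python, with an in-memory dict standing in for the DynamoDB table and invented field names:

```python
from collections import defaultdict

# Running per-user aggregates, maintained as purchase events stream in.
aggregates = defaultdict(lambda: {"count_24h": 0, "amount_24h": 0.0})

def record_purchase(user_id: str, amount: float) -> None:
    """Streaming side: fold each event into the user's running totals."""
    agg = aggregates[user_id]
    agg["count_24h"] += 1
    agg["amount_24h"] += amount

def risk_signal(user_id: str) -> dict:
    """Checkout side: O(1) read, no scan over raw transaction history."""
    return aggregates[user_id]

for amt in (25.0, 300.0, 12.5):
    record_purchase("u-42", amt)

print(risk_signal("u-42"))  # {'count_24h': 3, 'amount_24h': 337.5}
```

In the real pipeline the fold runs in Kinesis Data Analytics/Lambda and the totals live in DynamoDB, but the shape of the trade-off is the same: write-time work buys read-time latency.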
Woot.com designed and developed a data lake as a replacement for their legacy data warehouse to deliver powerful analytics capabilities across multiple business areas. In this session, learn how it used Amazon EMR, Amazon Athena, AWS Glue, and Amazon QuickSight to build an automated process to ingest and centralize data from external and internal sources for immediate analysis.
Modern data warehousing blends and analyzes all your data, in your data warehouse and in your data lake, without needing to move the data. In this session, a representative from Nielsen explains how they migrated from a leading on-premises data warehouse to Amazon Redshift in record time. See how the company uses AWS Database Migration Service (AWS DMS), the AWS Schema Conversion Tool, AWS Glue, and Amazon Redshift to provide timely analytics across the organization.
Machine learning involves more than just training models; you need to source and prepare data, engineer features, select algorithms, train and tune models, and then deploy those models and monitor their performance in production. Learn how Amazon Consumer Payments uses Amazon SageMaker, AWS Glue, AWS Step Functions, Amazon API Gateway, AWS Lambda, AWS Batch, Amazon Elastic Container Registry (Amazon ECR), and AWS CloudFormation to automate a CI/CD framework for business-critical machine learning workloads at scale.
AWS Schema Conversion Tool data extractors can help with migrating from most widely used data warehouses like Teradata, Netezza, and Oracle to Amazon Redshift. You can also use the AWS SCT to optimize your Amazon Redshift database and use the right sort and distribution keys along with modernizing ETL from sources like Oracle and Teradata to AWS Glue. We walk you through AWS SCT's capabilities and show a demo on migrating a sample data warehouse from Netezza to Amazon Redshift and show you how the AWS Snowball Edge integration with AWS SCT data extractors works.
Although this Global Partner Summit breakout is open to anyone, it is geared toward current and potential AWS Partner Network Partners. Many customers are migrating their on-premises SAP applications to AWS, opening up interesting opportunities to build data and analytics transformations using SAP data with AWS data lake solutions. One of the biggest questions customers have is how to extract data from SAP applications. In this demo-driven session, we show the best practices and reference architectures for extracting data from SAP applications at scale. You will take home prescriptive guidance on how to design high-performance data extractors using services like AWS Glue and AWS Lambda with code samples for various use cases that you can implement in your customers' projects.
Imagine being tasked with collecting, analyzing, and securing data from hundreds of sources around the world, in multiple cloud and on-premises environments. Toyota Motor North America, along with Booz Allen Hamilton, has created a secure, cloud-native solution to analyze billions of messages per day using AWS Key Management Service (AWS KMS). We discuss how AWS KMS with AWS native services provides granular access and secures corporate assets with data segregation using AWS KMS encryption. Toyota uses AWS Glue, Amazon Athena, and Amazon SageMaker to generate actionable intelligence in its corporate IT and vehicle telematics environments to solve its business and analytics challenges.
In this talk, we walk through the process of building an end-to-end big data pipeline using AWS managed services. Follow along and learn how to detect anomalous trajectories across different transportation industries ingesting IoT geo-referenced data and receiving real-time insights with Amazon Kinesis Data Analytics and AWS Glue. At the end of this session, you'll understand how to customize and deploy this solution at scale.
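A toy version of the anomaly check this session builds: flag samples whose speed deviates strongly from the rest of the trajectory. The threshold and the data are invented for illustration; the real pipeline computes this continuously over the IoT stream in Kinesis Data Analytics:

```python
from statistics import mean, stdev

def anomalous_speeds(speeds, z_threshold=1.5):
    """Return the speed samples more than z_threshold standard
    deviations from the mean (a simple z-score test)."""
    mu, sigma = mean(speeds), stdev(speeds)
    return [s for s in speeds if abs(s - mu) > z_threshold * sigma]

# km/h samples from one vehicle; the last point is clearly off-trajectory.
speeds = [31.0, 29.5, 30.2, 30.8, 29.9, 95.0]
print(anomalous_speeds(speeds))  # [95.0]
```

A z-score test is the simplest possible detector; because the outlier itself inflates the standard deviation, production systems usually prefer robust statistics (median/MAD) or learned models, but the streaming structure is identical.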
In this session, Accenture and Healthfirst, a New York City health plan provider, share key insights and lessons learned about how they used data and analytics modernization to drive digital enablement. The overall program was initiated to improve the Healthfirst member experience, including driving better care, service, and overall engagement strategies. This session outlines the key innovations within the Healthfirst data program, including how it increased member retention, quality scores, and member satisfaction by developing a strong data foundation for the enterprise. Additionally, learn how the team used AWS solutions to support the program, including AWS Glue, Amazon Textract, and Amazon Comprehend Medical. This presentation is brought to you by Accenture, an APN Partner.
Simon and Nicki share a broad range of interesting updates! 00:48 Storage 01:43 Compute 05:27 Networking 16:07 Databases 12:03 Developer Tools 13:18 Analytics 19:06 IoT 20:42 Customer Engagement 21:03 End User Computing 22:31 Machine Learning 25:27 Application Integration 27:35 Management and Governance 29:17 Media 30:53 Security 32:56 Blockchain 33:14 Quick Starts 33:51 Training 36:11 Public Datasets 37:12 Robotics Shownotes: Topic || Storage AWS Snowball Edge now supports offline software updates for Snowball Edge devices in air-gapped environments | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-snowball-edge-now-supports-offline-software-updates-for-snowball-edge-devices-in-air-gapped-environments/ Topic || Compute Now Available: Amazon EC2 High Memory Instances with up to 24 TB of memory, Purpose-built to Run Large In-memory Databases, like SAP HANA | https://aws.amazon.com/about-aws/whats-new/2019/10/now-available-amazon-ec2-high-memory-instances-purpose-built-run-large-in-memory-databases/ Introducing Availability of Amazon EC2 A1 Bare Metal Instances | https://aws.amazon.com/about-aws/whats-new/2019/10/introducing-availability-of-amazon-ec2-a1-bare-metal-instances/ Windows Nodes Supported by Amazon EKS | https://aws.amazon.com/about-aws/whats-new/2019/10/windows-nodes-supported-by-amazon-eks/ Amazon ECS now Supports ECS Image SHA Tracking | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-ecs-now-supports-ecs-image-sha-tracking/ AWS Serverless Application Model feature support updates for Amazon API Gateway and more | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-serverless-application-model-feature-support-updates-for-amazon-api-gateway-and-more/ Queuing Purchases of EC2 RIs | https://aws.amazon.com/about-aws/whats-new/2019/10/queuing-purchases-of-ec2-ris/ Topic || Network AWS Direct Connect Announces the Support for Granular Cost Allocation and Removal of Payer ID Restriction for Direct Connect Gateway Association. 
| https://aws.amazon.com/about-aws/whats-new/2019/10/aws-direct-connect-aws-direct-connect-announces-the-support-for-granular-cost-allocation-and-removal-of-payer-id-restriction-for-direct-connect-gateway-association/ AWS Direct Connect Announces Resiliency Toolkit to Help Customers Order Resilient Connectivity to AWS | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-direct-connect-announces-resiliency-toolkit-to-help-customers-order-resilient-connectivity-to-aws/ Amazon VPC Traffic Mirroring Now Supports AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-vpc-traffic-mirroring-now-supports-aws-cloudformation/ Application Load Balancer and Network Load Balancer Add New Security Policies for Forward Secrecy with More Stringent Protocols and Ciphers | https://aws.amazon.com/about-aws/whats-new/2019/10/application-load-balancer-and-network-load-balancer-add-new-security-policies-for-forward-secrecy-with-more-strigent-protocols-and-ciphers/ Topic || Databases Amazon RDS on VMware is now generally available | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-rds-on-vmware-is-now-generally-available/ Amazon RDS Enables Detailed Backup Storage Billing | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-rds-enables-detailed-backup-storage-billing/ Amazon RDS for PostgreSQL Supports Minor Version 11.5, 10.10, 9.6.15, 9.5.19, 9.4.24, adds Transportable Database Feature | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-rds-for-postgresql-supports-minor-version-115-1010-9615-9515-9424-adds-transportable-database-feature/ Amazon ElastiCache launches self-service updates for Memcached and Redis Cache Clusters | https://aws.amazon.com/about-aws/whats-new/2019/10/elasticache-memcached-self-service-updates/ Amazon DocumentDB (with MongoDB compatibility) adds additional Aggregation Pipeline Capabilities including $lookup | 
https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-documentdb-add-additional-aggregation-pipeline-capabilities/
Amazon Neptune now supports Streams to capture graph data changes | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-neptune-now-supports-streams-to-capture-graph-data-changes/
Amazon Neptune now supports SPARQL 1.1 federated query | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-neptune-now-supports-SPARQL-11-federated-query/

Topic || Developer Tools
AWS CodePipeline Enables Setting Environment Variables on AWS CodeBuild Build Jobs | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-codepipeline-enables-setting-environment-variables-on-aws-codebuild-build-jobs/
AWS CodePipeline Adds Execution Visualization to Pipeline Execution History | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-codepipeline-adds-execution-visualization-to-pipeline-execution-history/

Topic || Analytics
Amazon Redshift introduces AZ64, a new compression encoding for optimized storage and high query performance | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-redshift-introduces-az64-a-new-compression-encoding-for-optimized-storage-and-high-query-performance/
Amazon Redshift Improves Performance of Inter-Region Snapshot Transfers | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-redshift-improves-performance-of-inter-region-snapshot-transfers/
Amazon Elasticsearch Service provides option to mandate HTTPS | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-elasticsearch-service-provides-option-to-mandate-https/
Amazon Athena now provides an interface VPC endpoint | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-athena-now-provides-an-interface-VPC-endpoint/
Amazon Kinesis Data Firehose adds cross-account delivery to Amazon Elasticsearch Service | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-kinesis-data-firehose-adds-cross-account-delivery-to-amazon-elasticsearch-service/
Amazon Kinesis Data Firehose adds support for data stream delivery to Amazon Elasticsearch Service 7.x clusters | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-kinesis-data-firehose-adds-support-data-stream-delivery-amazon-elasticsearch-service/
Amazon QuickSight announces Data Source Sharing, Table Transpose, New Filtering and Analytical Capabilities | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-quicksight-announces-data-source-sharing-table-transpose-new-filtering-analytics-capabilities/
AWS Glue now provides ability to use custom certificates for JDBC Connections | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-glue-now-provides-ability-to-use-custom-certificates-for-jdbc-connections/
You can now expand your Amazon MSK clusters and deploy new clusters across 2-AZs | https://aws.amazon.com/about-aws/whats-new/2019/10/now-expand-your-amazon-msk-clusters-and-deploy-new-clusters-across-2-azs/
Amazon EMR Adds Support for Spark 2.4.4, Flink 1.8.1, and the Ability to Reconfigure Multiple Master Nodes | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-emr-adds-support-for-spark-2-4-4-flink-1-8-1-and-ability-to-reconfigure-multiple-master-nodes/

Topic || IoT
Two New Solution Accelerators for AWS IoT Greengrass Machine Learning Inference and Extract, Transform, Load Functions | https://aws.amazon.com/about-aws/whats-new/2019/10/two-new-solution-accelerators-for-aws-iot-greengrass-machine-lea/
AWS IoT Core Adds the Ability to Retrieve Data from DynamoDB using Rule SQL | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-iot-core-adds-ability-to-retrieve-data-from-dynamodb-using-rule-sql/
PSoC 62 Prototyping Kit is now qualified for Amazon FreeRTOS | https://aws.amazon.com/about-aws/whats-new/2019/10/psoc-62-prototyping-kit-qualified-for-amazon-freertos/

Topic || Customer Engagement
Amazon Pinpoint Adds Support for Message Templates | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-pinpoint-adds-support-for-message-templates/

Topic || End User Computing
Amazon AppStream 2.0 adds support for 4K Ultra HD resolution on 2 monitors and 2K resolution on 4 monitors | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-appstream-2-adds-support-for-4k-ultra-hd-resolution-on-2-monitors-and-2k-resolution-on-4-monitors/
Amazon AppStream 2.0 Now Supports FIPS 140-2 Compliant Endpoints | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-appstream-2-now-supports-fips-140-2-compliant-endpoints/
Amazon Chime now supports screen sharing from Mozilla Firefox and Google Chrome without a plug-in or extension | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-chime-now-supports-screen-sharing-from-mozilla-firefox-and-google-chrome-without-a-plug-in-or-extension/

Topic || Machine Learning
Amazon Translate now adds support for seven new languages - Greek, Romanian, Hungarian, Ukrainian, Vietnamese, Thai, and Urdu | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-translate-adds-support-seven-new-languages/
Introducing Amazon SageMaker ml.p3dn.24xlarge instances, optimized for distributed machine learning with up to 4x the network bandwidth of ml.p3.16xlarge instances | https://aws.amazon.com/about-aws/whats-new/2019/10/introducing-amazon-sagemaker-mlp3dn24xlarge-instances/
SageMaker Notebooks now support diffing | https://aws.amazon.com/about-aws/whats-new/2019/10/sagemaker-notebooks-now-support-diffing/
Amazon Lex Adds Support for Checkpoints in Session APIs | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-lex-adds-support-for-checkpoints-in-session-apis/
Amazon SageMaker Ground Truth Adds Built-in Workflows for the Verification and Adjustment of Data Labels | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-sagemaker-ground-truth-adds-built-in-workflows-for-verification-and-adjustment-of-data-labels/
AWS Chatbot Now Supports Notifications from AWS Config | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-chatbot-now-supports-notifications-from-aws-config/
AWS Deep Learning Containers now support PyTorch | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-deep-learning-containers-now-support-pytorch/

Topic || Application Integration
AWS Step Functions expands Amazon SageMaker service integration | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-step-functions-expands-amazon-sagemaker-service-integration/
Amazon EventBridge now supports AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-eventbridge-supports-aws-cloudformation/
Amazon API Gateway now supports access logging to Amazon Kinesis Data Firehose | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-api-gateway-now-supports-access-logging-to-amazon-kinesis-data-firehose/

Topic || Management and Governance
AWS Backup Enhances SNS Notifications to filter on job status | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-backup-enhances-sns-notifications-to-filter-on-job-status/
AWS Managed Services Console now supports search and usage-based filtering to improve change type discovery | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-managed-services-console-now-supports-search-and-usage-based-filtering-to-improve-change-type-discovery/
AWS Console Mobile Application Launches Federated Login for iOS | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-console-mobile-application-launches-federated-login-for-ios/

Topic || Media
Announcing New AWS Elemental MediaConvert Features for Accelerated Transcoding, DASH, and AVC Video Quality | https://aws.amazon.com/about-aws/whats-new/2019/10/announcing-new-aws-elemental-mediaconvert-features-for-accelerated-transcoding-dash-and-avc-video-quality/

Topic || Security
Amazon Cognito Increases CloudFormation Support | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-cognito-increases-cloudformation-support/
Amazon Inspector adds CIS Benchmark support for Windows 2016 | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-inspector-adds-cis-benchmark-support-for-windows-2016/
AWS Firewall Manager now supports management of Amazon VPC security groups | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-firewall-manager-now-supports-management-of-amazon-vpc-security-groups/
Amazon GuardDuty Adds Three New Threat Detections | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-guardduty-adds-three-new-threat-detections/

Topic || Block Chain
New Quick Start deploys Amazon Managed Blockchain | https://aws.amazon.com/about-aws/whats-new/2019/10/new-quick-start-deploys-amazon-managed-blockchain/

Topic || AWS Quick Starts
New Quick Start deploys TIBCO JasperReports Server on AWS | https://aws.amazon.com/about-aws/whats-new/2019/10/new-quick-start-deploys-tibco-jasperreports-server-on-aws/

Topic || Training
New Training Courses Teach New APN Partners to Better Help Their Customers | https://aws.amazon.com/about-aws/whats-new/2019/10/new-training-courses-teach-new-apn-partners-to-better-help-their-customers/
New Courses Available to Help You Grow and Accelerate Your AWS Cloud Skills | https://aws.amazon.com/about-aws/whats-new/2019/10/new-courses-available-to-help-you-grow-and-accelerate-your-aws-cloud-skills/
New Digital Course on Coursera - AWS Fundamentals: Migrating to the Cloud | https://aws.amazon.com/about-aws/whats-new/2019/10/new-digital-course-on-coursera-aws-fundamentals-migrating-to-the-cloud/

Topic || Public Data Sets
New AWS Public Datasets Available from Audi, MIT, Allen Institute for Cell Science, Finnish Meteorological Institute, and others | https://aws.amazon.com/about-aws/whats-new/2019/10/new-aws-public-datasets-available/

Topic || Robotics
AWS RoboMaker introduces support for Robot Operating System 2 (ROS2) in beta release | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-robomaker-introduces-support-robot-operating-system-2-beta-release/
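The CodePipeline item above (setting environment variables on CodeBuild build jobs) works by embedding a JSON-encoded EnvironmentVariables field in the CodeBuild action's configuration inside the pipeline definition. A minimal sketch, assuming boto3-style pipeline structures; the project name and variable names here are purely illustrative, not from the announcement:

```python
import json

# Hypothetical per-execution environment variable overrides for a CodeBuild action.
env_vars = [
    {"name": "STAGE", "value": "prod", "type": "PLAINTEXT"},
]

# In a pipeline definition, the build action's configuration carries these
# overrides as a JSON string rather than a native list.
build_action_configuration = {
    "ProjectName": "my-build-project",             # hypothetical project name
    "EnvironmentVariables": json.dumps(env_vars),  # serialized overrides
}

# A tool like boto3's codepipeline.update_pipeline would receive this dict
# embedded in the build stage's action declaration.
print(build_action_configuration["EnvironmentVariables"])
```

The JSON-string encoding is the part worth noting: passing a native list where a serialized string is expected is a common source of pipeline validation errors.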
It is a MASSIVE episode of updates that Simon and Nicki do their best to cover! There is also an EXTRA SPECIAL bonus just for AWS Podcast listeners!

Special Discount for Intersect Tickets: https://int.aws/podcast use discount code 'podcast' - note that tickets are limited!

Chapters: 02:19 Infrastructure 03:07 Storage 05:34 Compute 13:47 Network 14:54 Databases 17:45 Migration 18:36 Developer Tools 21:39 Analytics 29:25 IoT 33:24 End User Computing 34:08 Machine Learning 40:21 AR and VR 41:11 Application Integration 43:57 Management and Governance 48:04 Customer Engagement 49:13 Media 50:17 Mobile 50:36 Security 51:26 Gaming 51:39 Robotics 52:13 Training

Shownotes:

Topic || Infrastructure
Announcing the new AWS Middle East (Bahrain) Region | https://aws.amazon.com/about-aws/whats-new/2019/07/announcing-the-new-aws-middle-east--bahrain--region-/

Topic || Storage
EBS default volume type updated to GP2 | https://aws.amazon.com/about-aws/whats-new/2019/07/ebs-default-volume-type-updated-to-gp2/
AWS Backup will Automatically Copy Tags from Resource to Recovery Point | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-backup-will-automatically-copy-tags-from-resource-to-recovery-point/
Configuration update for Amazon EFS encryption of data in transit | https://aws.amazon.com/about-aws/whats-new/2019/07/configuration-update-for-amazon-efs-encryption-data-in-transit/
AWS Snowball and Snowball Edge available in Seoul – Amazon Web Services | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-snowball-and-aws-snowball-edge-available-in-asia-pacific-seoul-region/
Amazon S3 adds support for percentiles on Amazon CloudWatch Metrics | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-s3-adds-support-for-percentiles-on-amazon-cloudwatch-metrics/
Amazon FSx Now Supports Windows Shadow Copies for Restoring Files to Previous Versions | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-fsx-now-supports-windows-shadow-copies-for-restoring-files-to-previous-versions/
Amazon CloudFront Announces Support for Resource-Level and Tag-Based Permissions | https://aws.amazon.com/about-aws/whats-new/2019/08/cloudfront-resource-level-tag-based-permission/

Topic || Compute
Amazon EC2 AMD Instances are Now Available in additional regions | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-amd-instances-available-in-additional-regions/
Amazon EC2 P3 Instances Featuring NVIDIA Volta V100 GPUs now Support NVIDIA Quadro Virtual Workstation | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-p3-nstances-featuring-nvidia-volta-v100-gpus-now-support-nvidia-quadro-virtual-workstation/
Introducing Amazon EC2 I3en and C5n Bare Metal Instances | https://aws.amazon.com/about-aws/whats-new/2019/08/introducing-amazon-ec2-i3en-and-c5n-bare-metal-instances/
Amazon EC2 C5 New Instance Sizes are Now Available in Additional Regions | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-ec2-c5-new-instance-sizes-are-now-available-in-additional-regions/
Amazon EC2 Spot Now Available for Red Hat Enterprise Linux (RHEL) | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-spot-now-available-red-hat-enterprise-linux-rhel/
Amazon EC2 Now Supports Tagging Launch Templates on Creation | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-now-supports-tagging-launch-templates-on-creation/
Amazon EC2 On-Demand Capacity Reservations Can Now Be Shared Across Multiple AWS Accounts | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-on-demand-capacity-reservations-shared-across-multiple-aws-accounts/
Amazon EC2 Fleet Now Lets You Modify On-Demand Target Capacity | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-ec2-fleet-modify-on-demand-target-capacity/
Amazon EC2 Fleet Now Lets You Set A Maximum Price For A Fleet Of Instances | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-ec2-fleet-now-lets-you-submit-maximum-price-for-fleet-of-instances/
Amazon EC2 Hibernation Now Available on Ubuntu 18.04 LTS | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-hibernation-now-available-ubuntu-1804-lts/
Amazon ECS services now support multiple load balancer target groups | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecs-services-now-support-multiple-load-balancer-target-groups/
Amazon ECS Console now enables simplified AWS App Mesh integration | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecs-console-enables-simplified-aws-app-mesh-integration/
Amazon ECR now supports increased repository and image limits | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecr-now-supports-increased-repository-and-image-limits/
Amazon ECR Now Supports Immutable Image Tags | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecr-now-supports-immutable-image-tags/
Amazon Linux 2 Extras now provides AWS-optimized versions of new Linux Kernels | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-linux-2-extras-provides-aws-optimized-versions-of-new-linux-kernels/
Lambda@Edge Adds Support for Python 3.7 | https://aws.amazon.com/about-aws/whats-new/2019/08/lambdaedge-adds-support-for-python-37/
AWS Batch Now Supports the Elastic Fabric Adapter | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-batch-now-supports-elastic-fabric-adapter/

Topic || Network
Elastic Fabric Adapter is officially integrated into Libfabric Library | https://aws.amazon.com/about-aws/whats-new/2019/07/elastic-fabric-adapter-officially-integrated-into-libfabric-library/
Now Launch AWS Glue, Amazon EMR, and AWS Aurora Serverless Clusters in Shared VPCs | https://aws.amazon.com/about-aws/whats-new/2019/08/now-launch-aws-glue-amazon-emr-and-aws-aurora-serverless-clusters-in-shared-vpcs/
AWS DataSync now supports Amazon VPC endpoints | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-datasync-now-supports-amazon-vpc-endpoints/
AWS Direct Connect Now Supports Resource Based Authorization, Tag Based Authorization, and Tag on Resource Creation | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-direct-connect-now-supports-resource-based-authorization-tag-based-authorization-tag-on-resource-creation/

Topic || Databases
Amazon Aurora Multi-Master is Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-aurora-multimaster-now-generally-available/
Amazon DocumentDB (with MongoDB compatibility) Adds Aggregation Pipeline and Diagnostics Capabilities | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-documentdb-with-mongodb-compatibility-adds-aggregation-pipeline-and-diagnostics-capabilities/
Amazon DynamoDB now helps you monitor as you approach your account limits | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-dynamodb-now-helps-you-monitor-as-you-approach-your-account-limits/
Amazon RDS for Oracle now supports new instance sizes | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-rds-for-oracle-now-supports-new-instance-sizes/
Amazon RDS for Oracle Supports Oracle Management Agent (OMA) version 13.3 for Oracle Enterprise Manager Cloud Control 13c | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-rds-for-oracle-supports-oracle-management-agent-oma-version133-for-oracle-enterprise-manager-cloud-control13c/
Amazon RDS for Oracle now supports July 2019 Oracle Patch Set Updates (PSU) and Release Updates (RU) | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-rds-for-oracle-supports-july-2019-oracle-patch-set-and-release-updates/
Amazon RDS SQL Server now supports changing the server-level collation | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-rds-sql-server-supports-changing-server-level-collation/
PostgreSQL 12 Beta 2 Now Available in Amazon RDS Database Preview Environment | https://aws.amazon.com/about-aws/whats-new/2019/08/postgresql-beta-2-now-available-in-amazon-rds-database-preview-environment/
Amazon Aurora with PostgreSQL Compatibility Supports Publishing PostgreSQL Log Files to Amazon CloudWatch Logs | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-aurora-with-postgresql-compatibility-support-logs-to-cloudwatch/
Amazon Redshift Launches Concurrency Scaling in Five additional AWS Regions, and Enhances Console Performance Graphs in all supported AWS Regions | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-redshift-launches-concurrency-scaling-five-additional-regions-enhances-console-performance-graphs/
Amazon Redshift now supports column level access control with AWS Lake Formation | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-redshift-spectrum-now-supports-column-level-access-control-with-aws-lake-formation/

Topic || Migration
AWS Migration Hub Now Supports Import of On-Premises Server and Application Data From RISC Networks to Plan and Track Migration Progress | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-migration-hub-supports-import-of-on-premises-server-application-data-from-risc-networks-to-track-migration-progress/

Topic || Developer Tools
AWS CodePipeline Achieves HIPAA Eligibility | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-codepipeline-achieves-hipaa-eligibility/
AWS CodePipeline Adds Pipeline Status to Pipeline Listing | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-codepipeline-adds-pipeline-status-to-pipeline-listing/
AWS Amplify Console adds support for automatically deploying branches that match a specific pattern | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-amplify-console-support-git-based-branch-pattern-detection/
Amplify Framework Adds Predictions Category | https://aws.amazon.com/about-aws/whats-new/2019/07/amplify-framework-adds-predictions-category/
Amplify Framework adds local mocking and testing for GraphQL APIs, Storage, Functions, and Hosting | https://aws.amazon.com/about-aws/whats-new/2019/08/amplify-framework-adds-local-mocking-and-testing-for-graphql-apis-storage-functions-hostings/

Topic || Analytics
AWS Lake Formation is now generally available | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-lake-formation-is-now-generally-available/
Announcing PartiQL: One query language for all your data | https://aws.amazon.com/blogs/opensource/announcing-partiql-one-query-language-for-all-your-data/
AWS Glue now supports the ability to run ETL jobs on Apache Spark 2.4.3 (with Python 3) | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-glue-now-supports-ability-to-run-etl-jobs-apache-spark-243-with-python-3/
AWS Glue now supports additional configuration options for memory-intensive jobs submitted through development endpoints | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-glue-now-supports-additional-configuration-options-for-memory-intensive-jobs-submitted-through-deployment-endpoints/
AWS Glue now provides the ability to bookmark Parquet and ORC files using Glue ETL jobs | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-glue-now-provides-ability-to-bookmark-parquet-and-orc-files-using-glue-etl-jobs/
AWS Glue now provides FindMatches ML transform to deduplicate and find matching records in your dataset | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-glue-provides-findmatches-ml-transform-to-deduplicate/
Amazon QuickSight adds support for custom colors, embedding for all user types and new regions! | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-quicksight-adds-support-for-custom-colors-embedding-for-all-user-types-and-new-regions/
Achieve 3x better Spark performance with EMR 5.25.0 | https://aws.amazon.com/about-aws/whats-new/2019/08/achieve-3x-better-spark-performance-with-emr-5250/
Amazon EMR now supports native EBS encryption | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon_emr_now_supports_native_ebs_encryption/
Amazon Athena adds Support for AWS Lake Formation Enabling Fine-Grained Access Control on Databases, Tables, and Columns | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-athena-adds-support-for-aws-lake-formation-enabling-fine-grained-access-control-on-databases-tables-columns/
Amazon EMR Integration With AWS Lake Formation Is Now In Beta, Supporting Database, Table, and Column-level access controls for Apache Spark | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-emr-integration-with-aws-lake-formation-now-in-beta-supporting-database-table-column-level-access-controls/

Topic || IoT
AWS IoT Device Defender Expands Globally | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-iot-device-defender-expands-globally/
AWS IoT Device Defender Supports Mitigation Actions for Audit Results | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-iot-device-defender-supports-mitigation-actions-for-audit-results/
AWS IoT Device Tester v1.3.0 is Now Available for Amazon FreeRTOS 201906.00 Major | https://aws.amazon.com/about-aws/whats-new/2019/07/aws_iot_device_tester_v130_for_amazon_freertos_201906_00_major/
AWS IoT Events actions now support AWS Lambda, SQS, Kinesis Firehose, and IoT Events as targets | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-iot-events-supports-invoking-actions-to-lambda-sqs-kinesis-firehose-iot-events/
AWS IoT Events now supports AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-iot-events-now-supports-aws-cloudformation/

Topic || End User Computing
AWS Client VPN now adds support for Split-tunnel | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-client-vpn-now-adds-support-for-split-tunnel/
Introducing AWS Chatbot (beta): ChatOps for AWS in Amazon Chime and Slack Chat Rooms | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-aws-chatbot-chatops-for-aws/
Amazon AppStream 2.0 Adds CLI Operations for Programmatic Image Creation | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-appstream-2-adds-cli-operations-for-programmatic-image-creation/
NICE DCV Releases Version 2019.0 with Multi-Monitor Support on Web Client | https://aws.amazon.com/about-aws/whats-new/2019/08/nice-dcv-releases-version-2019-0-with-multi-monitor-support-on-web-client/
New End User Computing Competency Solutions | https://aws.amazon.com/about-aws/whats-new/2019/08/end-user-computing-competency-solutions/
Amazon WorkDocs Migration Service | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon_workdocs_migration_service/

Topic || Machine Learning
SageMaker Batch Transform now enables associating prediction results with input attributes | https://aws.amazon.com/about-aws/whats-new/2019/07/sagemaker-batch-transform-enable-associating-prediction-results-with-input-attributes/
Amazon SageMaker Ground Truth Adds Data Labeling Workflow for Named Entity Recognition | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-sagemaker-ground-truth-adds-data-labeling-workflow-for-named-entity-recognition/
Amazon SageMaker notebooks now available with pre-installed R kernel | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-sagemaker-notebooks-available-with-pre-installed-r-kernel/
New Model Tracking Capabilities for Amazon SageMaker Are Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/08/new-model-tracking-capabilities-for-amazon-sagemaker-now-generally-available/
Amazon Comprehend Custom Entities now supports multiple entity types | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-comprehend-custom-entities-supports-multiple-entity-types/
Introducing Predictive Maintenance Using Machine Learning | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-predictive-maintenance-using-machine-learning/
Amazon Transcribe Streaming Now Supports WebSocket | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-transcribe-streaming-now-supports-websocket/
Amazon Polly Launches Neural Text-to-Speech and Newscaster Voices | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-polly-launches-neural-text-to-speech-and-newscaster-voices/
Manage a Lex session using APIs on the client | https://aws.amazon.com/about-aws/whats-new/2019/08/manage-a-lex-session-using-apis-on-the-client/
Amazon Rekognition now detects violence, weapons, and self-injury in images and videos; improves accuracy for nudity detection | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-rekognition-now-detects-violence-weapons-and-self-injury-in-images-and-videos-improves-accuracy-for-nudity-detection/

Topic || AR and VR
Amazon Sumerian Now Supports Physically-Based Rendering (PBR) | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-sumerian-now-supports-physically-based-rendering-pbr/

Topic || Application Integration
Amazon SNS Message Filtering Adds Support for Attribute Key Matching | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-sns-message-filtering-adds-support-for-attribute-key-matching/
Amazon SNS Adds Support for AWS X-Ray | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-sns-adds-support-for-aws-x-ray/
Temporary Queue Client Now Available for Amazon SQS | https://aws.amazon.com/about-aws/whats-new/2019/07/temporary-queue-client-now-available-for-amazon-sqs/
Amazon MQ Adds Support for AWS Key Management Service (AWS KMS), Improving Encryption Capabilities | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-mq-adds-support-for-aws-key-management-service-improving-encryption-capabilities/
Amazon MSK adds support for Apache Kafka version 2.2.1 and expands availability to EU (Stockholm), Asia Pacific (Mumbai), and Asia Pacific (Seoul) | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-msk-adds-support-apache-kafka-version-221-expands-availability-stockholm-mumbai-seoul/
Amazon API Gateway supports secured connectivity between REST APIs & Amazon Virtual Private Clouds in additional regions | https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-api-gateway-supports-secured-connectivity-between-reset-apis-and-amazon-virtual-private-clouds-in-additional-regions/

Topic || Management and Governance
AWS Cost Explorer now Supports Usage-Based Forecasts | https://aws.amazon.com/about-aws/whats-new/2019/07/usage-based-forecasting-in-aws-cost-explorer/
Introducing Amazon EC2 Resource Optimization Recommendations | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-amazon-ec2-resource-optimization-recommendations/
AWS Budgets Announces AWS Chatbot Integration | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-budgets-announces-aws-chatbot-integration/
Discovering Documents Made Easy in AWS Systems Manager Automation | https://aws.amazon.com/about-aws/whats-new/2019/07/discovering-documents-made-easy-in-aws-systems-manager-automation/
AWS Systems Manager Distributor makes it easier to create distributable software packages | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-systems-manager-distributor-makes-it-easier-to-create-distributable-software-packages/
Now use AWS Systems Manager Maintenance Windows to select resource groups as targets | https://aws.amazon.com/about-aws/whats-new/2019/07/now-use-aws-systems-manager-maintenance-windows-to-select-resource-groups-as-targets/
Use AWS Systems Manager to resolve operational issues with your .NET and Microsoft SQL Server Applications | https://aws.amazon.com/about-aws/whats-new/2019/08/use-aws-systems-manager-to-resolve-operational-issues-with-your-net-and-microsoft-sql-server-applications/
CloudWatch Logs Insights adds cross log group querying | https://aws.amazon.com/about-aws/whats-new/2019/07/cloudwatch-logs-insights-adds-cross-log-group-querying/
AWS CloudFormation now supports higher StackSets limits | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-cloudformation-now-supports-higher-stacksets-limits/

Topic || Customer Engagement
Introducing AI-Driven Social Media Dashboard | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-ai-driven-social-media-dashboard/
New Amazon Connect integration for ChoiceView from Radish Systems on AWS | https://aws.amazon.com/about-aws/whats-new/2019/07/new-amazon-connect-integration-for-choiceview-from-radish-systems-on-aws/
Amazon Pinpoint Adds Campaign and Application Metrics APIs | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-pinpoint-adds-campaign-and-application-metrics-apis/

Topic || Media
AWS Elemental Appliances and Software Now Available in the AWS Management Console | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-elemental-appliances-and-software-now-available-in-aws-management-console/
AWS Elemental MediaConvert Expands Audio Support and Improves Performance | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-elemental-mediaconvert-expands-audio-support-and-improves-performance/
AWS Elemental MediaConvert Adds Ability to Prioritize Transcoding Jobs | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-elemental-mediaconvert-adds-ability-to-prioritize-transcoding-jobs/
AWS Elemental MediaConvert Simplifies Editing and Sharing of Settings | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-elemental-mediaconvert-simplifies-editing-and-sharing-of-settings/
AWS Elemental MediaStore Now Supports Resource Tagging | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-elemental-mediastore-now-supports-resource-tagging/
AWS Elemental MediaLive Enhances Support for File-Based Inputs for Live Channels | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-elemental-medialive-enhances-support-for-file-based-inputs-for-live-channels/

Topic || Mobile
AWS Device Farm improves device start up time to enable instant access to devices | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-device-farm-improves-device-start-up-time-to-enable-instant-access-to-devices/

Topic || Security
Introducing the Amazon Corretto Crypto Provider (ACCP) for Improved Cryptography Performance | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-the-amazon-corretto-crypto-provider/
AWS Secrets Manager now supports VPC endpoint policies | https://aws.amazon.com/about-aws/whats-new/2019/07/AWS-Secrets-Manager-now-supports-VPC-endpoint-policies/

Topic || Gaming
Lumberyard Beta 1.20 Now Available | https://aws.amazon.com/about-aws/whats-new/2019/07/lumberyard-beta-120-now-available/

Topic || Robotics
AWS RoboMaker now supports offline logs and metrics for the AWS RoboMaker CloudWatch cloud extension | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-robomaker-now-supports-offline-logs-metrics-aws-robomaker-cloudwatch-cloud-extension/

Topic || Training
New AWS Certification Exam Vouchers Make Certifying Groups Easier | https://aws.amazon.com/about-aws/whats-new/2019/07/new-aws-certification-exam-vouchers-make-certifying-groups-easier/
Announcing New Resources and Website to Accelerate Your Cloud Adoption | https://aws.amazon.com/about-aws/whats-new/2019/07/announcing-new-resources-and-website-to-accelerate-your-cloud-adoption/
AWS Developer Series Relaunched on edX | https://aws.amazon.com/about-aws/whats-new/2019/08/aws-developer-series-relaunched-on-edx/
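The "CloudWatch Logs Insights adds cross log group querying" item in the Management and Governance list above maps to the Logs Insights StartQuery API, which accepts a list of log group names so one query can scan several groups at once. A minimal sketch of the request parameters, assuming boto3; the group names and query string are illustrative:

```python
import time

# Cross-log-group query: StartQuery accepts multiple groups via logGroupNames.
end = int(time.time())
start = end - 3600  # query the last hour

query_params = {
    "logGroupNames": ["/aws/lambda/fn-a", "/aws/lambda/fn-b"],  # hypothetical groups
    "startTime": start,
    "endTime": end,
    "queryString": "fields @timestamp, @message | sort @timestamp desc | limit 20",
}

# With credentials configured, this dict would be passed to
# boto3.client("logs").start_query(**query_params), and results would then be
# polled with get_query_results(queryId=...).
print(len(query_params["logGroupNames"]))
```

Before this feature, each log group required its own query; passing a list lets a single Insights query aggregate across, say, several Lambda functions' logs.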
Simon and Nicki share a bumper-crop of interesting, useful and cool new services and features for AWS customers!

Chapter Timings: 00:01:17 Storage 00:03:15 Compute 00:07:13 Network 00:10:27 Databases 00:16:04 Migration 00:17:43 Developer Tools 00:22:47 Analytics 00:27:07 IoT 00:28:14 End User Computing 00:29:25 Machine Learning 00:30:49 Application Integration 00:34:18 Management and Governance 00:41:42 Customer Engagement 00:42:47 Media 00:44:03 Security 00:46:26 Gaming 00:47:54 AWS Marketplace 00:49:07 Robotics

Shownotes

Topic || Storage
Optimize Cost with Amazon EFS Infrequent Access Lifecycle Management | https://aws.amazon.com/about-aws/whats-new/2019/07/optimize-cost-amazon-efs-infrequent-access-lifecycle-management/
Amazon FSx for Windows File Server Now Enables You to Use File Systems Directly With Your Organization’s Self-Managed Active Directory | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-fsx-for-windows-file-server-now-enables-you-to-use-file-systems-directly-with-your-organizations-self-managed-active-directory/
Amazon FSx for Windows File Server now enables you to use a single AWS Managed AD with file systems across VPCs or accounts | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-fsx-for-windows-file-server-now-enables-you-to-use-a-single-aws-managed-ad-with-file-systems-across-vpcs-or-accounts/
AWS Storage Gateway now supports Amazon VPC endpoints with AWS PrivateLink | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-storage-gateway-now-supports-amazon-vpc-endpoints-aws-privatelink/
File Gateway adds encryption & signing options for SMB clients – Amazon Web Services | https://aws.amazon.com/about-aws/whats-new/2019/06/file-gateway-adds-options-to-enforce-encryption-and-signing-for-smb-shares/
New AWS Public Datasets Available from Facebook, Yale, Allen Institute for Brain Science, NOAA, and others | https://aws.amazon.com/about-aws/whats-new/2019/07/new-aws-public-datasets-available-from-facebook-yale-allen/

Topic || Compute
Introducing Amazon EC2 Instance Connect | https://aws.amazon.com/about-aws/whats-new/2019/06/introducing-amazon-ec2-instance-connect/
Introducing New Instances Sizes for Amazon EC2 M5 and R5 Instances | https://aws.amazon.com/about-aws/whats-new/2019/06/introducing-new-instances-sizes-for-amazon-ec2-m5-and-r5-instances/
Introducing New Instance Sizes for Amazon EC2 C5 Instances | https://aws.amazon.com/about-aws/whats-new/2019/06/introducing-new-instance-sizes-for-amazon-ec2-c5-instances/
Amazon ECS now supports additional resource-level permissions and tag-based access controls | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-ecs-now-supports-resource-level-permissions-and-tag-based-access-controls/
Amazon ECS now offers improved capabilities for local testing | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecs-now-offers-improved-capabilities-for-local-testing/
AWS Container Services launches AWS For Fluent Bit | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-container-services-launches-aws-for-fluent-bit/
Amazon EKS now supports Kubernetes version 1.13, ECR PrivateLink, and Kubernetes Pod Security Policies | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-eks-now-supports-kubernetes113-ecr-privatelink-kubernetes-pod-security/
AWS VPC CNI Version 1.5.0 Now Default for Amazon EKS Clusters | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-vpc-cni-version-150-now-default-for-amazon-eks-clusters/
Announcing Enhanced Lambda@Edge Monitoring within the Amazon CloudFront Console | https://aws.amazon.com/about-aws/whats-new/2019/06/announcing-enhanced-lambda-edge-monitoring-amazon-cloudfront-console/
AWS Lambda Console shows recent invocations using CloudWatch Logs Insights | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-lambda-console-recent-invocations-using-cloudwatch-logs-insights/
AWS Thinkbox Deadline with Resource Tracker | https://aws.amazon.com/about-aws/whats-new/2019/06/thinkbox-deadline-resource-tracker/

Topic || Network
Network Load Balancer Now Supports UDP Protocol | https://aws.amazon.com/about-aws/whats-new/2019/06/network-load-balancer-now-supports-udp-protocol/
Announcing Amazon VPC Traffic Mirroring for Amazon EC2 Instances | https://aws.amazon.com/about-aws/whats-new/2019/06/announcing-amazon-vpc-traffic-mirroring-for-amazon-ec2-instances/
AWS ParallelCluster now supports Elastic Fabric Adapter (EFA) | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-parallelcluster-supports-elastic-fabric-adapter/
AWS Direct Connect launches first location in Italy | https://aws.amazon.com/about-aws/whats-new/2019/06/aws_direct_connect_locations_in_italy/
Amazon CloudFront announces seven new Edge locations in North America, Europe, and Australia | https://aws.amazon.com/about-aws/whats-new/2019/06/cloudfront-seven-edge-locations-june2019/
Now Add Endpoint Policies to Interface Endpoints for AWS Services | https://aws.amazon.com/about-aws/whats-new/2019/06/now-add-endpoint-policies-to-interface-endpoints-for-aws-services/

Topic || Databases
Amazon Aurora with PostgreSQL Compatibility Supports Serverless | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-aurora-with-postgresql-compatibility-supports-serverless/
Amazon RDS now supports Storage Auto Scaling | https://aws.amazon.com/about-aws/whats-new/2019/06/rds-storage-auto-scaling/
Amazon RDS Introduces Compatibility Checks for Upgrades from MySQL 5.7 to MySQL 8.0 | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon_rds_introduces_compatibility_checks/
Amazon RDS for PostgreSQL Supports New Minor Versions 11.4, 10.9, 9.6.14, 9.5.18, and 9.4.23 | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-rds-postgresql-supports-minor-version-114/
Amazon Aurora with PostgreSQL Compatibility Supports Cluster Cache Management | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-aurora-with-postgresql-compatibility-supports-cluster-cache-management/
Amazon Aurora with PostgreSQL Compatibility Supports Data Import from Amazon S3 | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-aurora-with-postgresql-compatibility-supports-data-import-from-amazon-s3/
Amazon Aurora Supports Cloning Across AWS Accounts | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon_aurora_supportscloningacrossawsaccounts-/
Amazon RDS for Oracle now supports z1d instance types | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-rds-for-oracle-now-supports-z1d-instance-types/
Amazon RDS for Oracle Supports Oracle Application Express (APEX) Version 19.1 | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-rds-oracle-supports-oracle-application-express-version-191/
Amazon ElastiCache launches reader endpoints for Redis | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-elasticache-launches-reader-endpoint-for-redis/
Amazon DocumentDB (with MongoDB compatibility) Now Supports Stopping and Starting Clusters | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-documentdb-supports-stopping-starting-cluters/
Amazon DocumentDB (with MongoDB compatibility) Now Provides Cluster Deletion Protection | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-documentdb-provides-cluster-deletion-protection/
You can now publish Amazon Neptune Audit Logs to Cloudwatch | https://aws.amazon.com/about-aws/whats-new/2019/06/you-can-now-publish-amazon-neptune-audit-logs-to-cloudwatch/
Amazon DynamoDB now supports deleting a global secondary index before it finishes building | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-dynamodb-now-supports-deleting-a-global-secondary-index-before-it-finishes-building/
Amazon DynamoDB now supports up to 25 unique items and 4 MB of data per transactional request |
https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-dynamodb-now-supports-up-to-25-unique-items-and-4-mb-of-data-per-transactional-request/ Topic || Migration CloudEndure Migration is now available at no charge | https://aws.amazon.com/about-aws/whats-new/2019/06/cloudendure-migration-available-at-no-charge/ New AWS ISV Workload Migration Program | https://aws.amazon.com/about-aws/whats-new/2019/06/isv-workload-migration/ AWS Migration Hub Adds Support for Service-Linked Roles | https://aws.amazon.com/about-aws/whats-new/2019/06/aws_migration_hub_adds_support_for_service_linked_roles/ Topic || Developer Tools The AWS Toolkit for Visual Studio Code is Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/07/announcing-aws-toolkit-for-visual-studio-code/ The AWS Cloud Development Kit (AWS CDK) is Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/07/the-aws-cloud-development-kit-aws-cdk-is-now-generally-available1/ AWS CodeCommit Supports Two Additional Merge Strategies and Merge Conflict Resolution | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-codecommit-supports-2-additional-merge-strategies-and-merge-conflict-resolution/ AWS CodeCommit Now Supports Resource Tagging | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-codecommit-now-supports-resource-tagging/ AWS CodeBuild adds Support for Polyglot Builds | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-codebuild-adds-support-for-polyglot-builds/ AWS Amplify Console Updates Build image with SAM CLI and Custom Container Support | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-amplify-console-updates-build-image-sam-cli-and-custom-container-support/ AWS Amplify Console announces Manual Deploys for Static Web Hosting | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-amplify-console-announces-manual-deploys-for-static-web-hosting/ Amplify Framework now Supports Adding AWS Lambda Triggers for events in Auth and Storage categories | 
https://aws.amazon.com/about-aws/whats-new/2019/07/amplify-framework-now-supports-adding-aws-lambda-triggers-for-events-auth-storage-categories/ AWS Amplify Console now supports AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-amplify-console-supports-aws-cloudformation/ AWS CloudFormation updates for Amazon EC2, Amazon ECS, Amazon EFS, Amazon S3 and more | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-cloudformation-updates-amazon-ec2-ecs-efs-s3-and-more/ Topic || Analytics Amazon QuickSight launches multi-sheet dashboards, new visual types and more | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-quickSight-launches-multi-sheet-dashboards-new-visual-types-and-more/ Amazon QuickSight now supports fine-grained access control over Amazon S3 and Amazon Athena! | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-quickSight-now-supports-fine-grained-access-control-over-amazon-S3-and-amazon-athena/ Announcing EMR Release 5.24.0: With performance improvements in Spark, new versions of Flink, Presto, and Hue, and enhanced CloudFormation support for EMR Instance Fleets | https://aws.amazon.com/about-aws/whats-new/2019/06/announcing-emr-release-5240-with-performance-improvements-in-spark-new-versions-of-flink-presto-Hue-and-cloudformation-support-for-launching-clusters-in-multiple-subnets-through-emr-instance-fleets/ AWS Glue now provides workflows to orchestrate your ETL workloads | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-glue-now-provides-workflows-to-orchestrate-etl-workloads/ Amazon Elasticsearch Service increases data protection with automated hourly snapshots at no extra charge | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-elasticsearch-service-increases-data-protection-with-automated-hourly-snapshots-at-no-extra-charge/ Amazon MSK is Now Integrated with AWS CloudFormation and Terraform | 
https://aws.amazon.com/about-aws/whats-new/2019/06/amazon_msk_is_now_integrated_with_aws_cloudformation_and_terraform/ Kinesis Video Streams adds support for Dynamic Adaptive Streaming over HTTP (DASH) and H.265 video | https://aws.amazon.com/about-aws/whats-new/2019/07/kinesis-video-streams-adds-support-for-dynamic-adaptive-streaming-over-http-dash-and-h-2-6-5-video/ Announcing the availability of Amazon Kinesis Video Producer SDK in C | https://aws.amazon.com/about-aws/whats-new/2019/07/announcing-availability-of-amazon-kinesis-video-producer-sdk-in-c/ Topic || IoT AWS IoT Expands Globally | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-iot-expands-globally/ Bluetooth Low Energy Support and New MQTT Library Now Generally Available in Amazon FreeRTOS 201906.00 Major | https://aws.amazon.com/about-aws/whats-new/2019/06/bluetooth-low-energy-support-amazon-freertos-now-available/ AWS IoT Greengrass 1.9.2 With Support for OpenWrt and AWS IoT Device Tester is Now Available | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-iot-greengrass-support-openwrt-aws-iot-device-tester-available/ Topic || End User Computing Amazon Chime Achieves HIPAA Eligibility | https://aws.amazon.com/about-aws/whats-new/2019/06/chime_hipaa_eligibility/ Amazon WorkSpaces now supports copying Images across AWS Regions | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon_workspaces_now_supports_copying_images_across_aws_regions/ Amazon AppStream 2.0 adds support for Windows Server 2016 and Windows Server 2019 | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-appstream-20-adds-support-for-windows-server-2016-and-windows-server-2019/ AWS Client VPN now includes support for AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-client-vpn-includes-support-for-aws-cloudformation/ Topic || Machine Learning Amazon Comprehend Medical is now Available in Sydney, London, and Canada | 
https://aws.amazon.com/about-aws/whats-new/2019/06/comprehend-medical-available-in-asia-pacific-eu-canada/ Amazon Personalize Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-personalize-now-generally-available/ New in AWS Deep Learning Containers: Support for Amazon SageMaker and MXNet 1.4.1 with CUDA 10.0 | https://aws.amazon.com/about-aws/whats-new/2019/06/new-in-aws-deep-learning-containers-support-for-amazon-sagemaker-libraries-and-mxnet-1-4-1-with-cuda-10-0/ Topic || Application Integration Introducing Amazon EventBridge | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-amazon-eventbridge/ AWS App Mesh Service Discovery with AWS Cloud Map generally available. | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-app-mesh-service-discovery-with-aws-cloud-map-generally-available/ Amazon API Gateway Now Supports Tag-Based Access Control and Tags on WebSocket APIs | https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-api-gateway-supports-tag-based-access-control-tags-on-websocket/ Amazon API Gateway Adds Configurable Transport Layer Security Version for Custom Domains | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-api-gateway-adds-configurable-transport-layer-security-version-custom-domains/ Topic || Management and Governance Introducing AWS Systems Manager OpsCenter to enable faster issue resolution | https://aws.amazon.com/about-aws/whats-new/2019/06/introducing-aws-systems-manager-opscenter-to-enable-faster-issue-resolution/ Introducing Service Quotas: View and manage your quotas for AWS services from one central location | https://aws.amazon.com/about-aws/whats-new/2019/06/introducing-service-quotas-view-and-manage-quotas-for-aws-services-from-one-location/ Introducing AWS Budgets Reports | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-aws-budgets-reports/ Introducing Amazon CloudWatch Anomaly Detection – Now in Preview | 
https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-amazon-cloudwatch-anomaly-detection-now-in-preview/ Amazon CloudWatch Launches Dynamic Labels on Dashboards | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-cloudwatch-launches-dynamic-labels-on-dashboards/ Amazon CloudWatch Adds Visibility for your .NET and SQL Server Application Health | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-cloudwatch-adds-visibility-for-your-net-sql-server-application-health/ Amazon CloudWatch Events Now Supports Amazon CloudWatch Logs as a Target and Tagging of CloudWatch Events Rules | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-cloudwatch-events-now-supports-amazon-cloudwatch-logs-target-tagging-cloudwatch-events-rules/ Introducing Amazon CloudWatch Container Insights for Amazon ECS and AWS Fargate - Now in Preview | https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-container-insights-for-ecs-and-aws-fargate-in-preview/ AWS Config now enables you to provision AWS Config rules across all AWS accounts in your organization | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-config-now-enables-you-to-provision-config-rules-across-all-aws-accounts-in-your-organization/ Session Manager launches Run As to start interactive sessions with your own operating system user account | https://aws.amazon.com/about-aws/whats-new/2019/07/session-manager-launches-run-as-to-start-interactive-sessions-with-your-own-operating-system-user-account/ Session Manager launches tunneling support for SSH and SCP | https://aws.amazon.com/about-aws/whats-new/2019/07/session-manager-launches-tunneling-support-for-ssh-and-scp/ Use IAM access advisor with AWS Organizations to set permission guardrails confidently | https://aws.amazon.com/about-aws/whats-new/2019/06/now-use-iam-access-advisor-with-aws-organizations-to-set-permission-guardrails-confidently/ AWS Resource Groups is Now SOC Compliant | 
https://aws.amazon.com/about-aws/whats-new/2019/07/aws-resource-groups-is-now-soc-compliant/ Topic || Customer Engagement Introducing AI Powered Speech Analytics for Amazon Connect | https://aws.amazon.com/about-aws/whats-new/2019/06/introducing-ai-powered-speech-analytics-for-amazon-connect/ Amazon Connect Launches Contact Flow Versioning | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-connect-launches-contact-flow-versioning/ Topic || Media AWS Elemental MediaConnect Now Supports SPEKE for Conditional Access | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-elemental-mediaconnect-now-supports-speke-for-conditional-access/ AWS Elemental MediaLive Now Supports AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-elemental-medialive-now-supports-aws-cloudformation/ AWS Elemental MediaConvert Now Ingests Files from HTTPS Sources | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-elemental-mediaconvert-now-ingests-files-from-https-sources/ Topic || Security AWS Certificate Manager Private Certificate Authority now supports root CA hierarchies | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-certificate-manager-private-certificate-authority-now-supports-root-CA-heirarchies/ AWS Control Tower is now generally available | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-control-tower-is-now-generally-available/ AWS Security Hub is now generally available | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-security-hub-now-generally-available/ AWS Single Sign-On now makes it easy to access more business applications including Asana and Jamf | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-single-sign-on-access-business-applications-including-asana-and-jamf/ Topic || Gaming Large Match Support for Amazon GameLift Now Available | https://aws.amazon.com/about-aws/whats-new/2019/07/large-match-support-for-amazon-gameLift-now-available/ New Dynamic Vegetation System in Lumberyard Beta 1.19 – Available 
Now | https://aws.amazon.com/about-aws/whats-new/2019/06/lumberyard-beta-119-available-now/ Topic || AWS Marketplace AWS Marketplace now integrates with your procurement systems | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-marketplace-now-integrates-with-your-procurement-systems/ Topic || Robotics AWS RoboMaker announces support for Robot Operating System (ROS) Melodic | https://aws.amazon.com/about-aws/whats-new/2019/07/aws-robomaker-support-robot-operating-system-melodic/
Simon shares a huge selection of updates and new things!

Chapter Marks:
00:00:19 Satellites
00:01:12 Storage
00:03:01 Compute
00:05:35 Databases
00:09:17 Developer Tools
00:10:32 Analytics
00:12:28 IoT
00:14:06 End User Computing
00:15:03 Machine Learning
00:16:22 Robotics
00:16:46 Application Integration
00:17:42 Management and Governance
00:20:49 Customer Engagement
00:21:24 Security
00:22:10 Training and Certification
00:22:36 Quick Starts
00:23:04 AWS Marketplace

Shownotes:

Topic || Satellite
Announcing General Availability of AWS Ground Station | https://aws.amazon.com/about-aws/whats-new/2019/05/announcing-general-availability-of-aws-ground-station-/

Topic || Storage
You can now encrypt new EBS volumes in your account in a region with a single setting | https://aws.amazon.com/about-aws/whats-new/2019/05/with-a-single-setting-you-can-encrypt-all-new-amazon-ebs-volumes/
AWS Backup Now Supports AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-backup-now-supports-aws-cloudformation/
AWS DataSync Now Supports EFS-to-EFS Transfer | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-datasync-now-supports-efs-to-efs-transfer/
AWS DataSync adds filtering for data transfers | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-datasync-adds-filtering-for-data-transfers/
AWS DataSync is now SOC compliant | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-datasync-is-now-soc-compliant/

Topic || Compute
Amazon EC2 announces Host Recovery | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-ec2-announces-host-recovery/
Enable EC2 Hibernation Without Specifying Encryption Intent at Every Instance Launch | https://aws.amazon.com/about-aws/whats-new/2019/05/enable-ec2-hibernation-without-specifying-encryption-intent/
AWS Step Functions Adds Support for Callback Patterns in Workflows | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-step-functions-support-callback-patterns/
Amazon ECS Support for Windows Server 2019 Containers is Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-ecs-support-windows-server-2019-containers-generally-available/
Amazon ECS Improves ENI Density Limits for awsvpc Networking Mode | https://aws.amazon.com/about-aws/whats-new/2019/06/Amazon-ECS-Improves-ENI-Density-Limits-for-awsvpc-Networking-Mode/
Serverless Image Handler Now Leverages Sharp and Provides Smart Cropping with Amazon Rekognition | https://aws.amazon.com/about-aws/whats-new/2019/06/serverless-image-handler-now-leverages-sharp-and-provides-smart-cropping-with-amazon-rekognition/

Topic || Databases
Amazon Aurora Serverless MySQL 5.6 Now Supports Data API | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon_aurora_serverless_mysql_5_6_now_supportsdataapi/
Amazon RDS Recommendations Provide Best Practice Guidance for Amazon Aurora | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-rds-recommendations-provide-best-practice-guidance-for-amazon-aurora/
Amazon Aurora with PostgreSQL Compatibility Supports PostgreSQL 10.7 | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-aurora-with-postgresql-compatibility-supports-postgresql-107/
Amazon Aurora with PostgreSQL Compatibility Supports Database Activity Streams For Real-time Monitoring | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-aurora-with-postgresql-compatibility-supports-database-activity-streams/
Amazon RDS for SQL Server Increases the Database Limit Per Database Instance up to 100 | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon_rds_for_sql_server_increases/
Amazon RDS for SQL Server Now Supports Always On Availability Groups for SQL Server 2017 | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-rds-for-sql-server-now-supports-always-on-availability-groups-for-sql-server-2017/
Amazon RDS for SQL Server now Supports Multi-File Native Restores | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-rds-for-sql-server-now-supports-multi-file-native-restores/
Amazon DocumentDB (with MongoDB compatibility) is now SOC 1, 2, and 3 compliant | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-documentdb-now-soc-1-2-3-compliant/
Amazon DynamoDB adaptive capacity is now instant | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-dynamodb-adaptive-capacity-is-now-instant/
Amazon ElastiCache for Redis improves cluster availability during planned maintenance | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon_elasticache_for_redis_improves_cluster_availability/
Amazon ElastiCache for Redis launches self-service updates | https://aws.amazon.com/about-aws/whats-new/2019/06/elasticache-self-service-updates/

Topic || Developer Tools
Amplify Framework Adds Support for AWS Lambda Functions and Amazon DynamoDB Custom Indexes in GraphQL Schemas | https://aws.amazon.com/about-aws/whats-new/2019/05/amplify-framework-adds-support-for-aws-lambda-as-a-data-source-and-custom-indexes-for-amazon-dynamodb-in-graphql-schema/
AWS CodeCommit Now Supports Including Application Code When Creating a Repository with AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-codecommit-now-supports-the-ability-to-make-an-initial-commit/

Topic || Analytics
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon_managed_streaming_for_apache_kafka_amazon_msk_is_now_generally_available/
Amazon Elasticsearch Service Is Now SOC Compliant | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-elasticsearch-service-now-soc-compliant/
Amazon Elasticsearch Service announces support for Elasticsearch 6.7 | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-elasticsearch-service-announces-support-for-elasticsearch-67/
AWS Glue now provides a VPC interface endpoint | https://aws.amazon.com/about-aws/whats-new/2019/06/aws_glue_now_provides_vpc_interface_endpoint/
AWS Glue supports scripts that are compatible with Python 3.6 in Python shell jobs | https://aws.amazon.com/about-aws/whats-new/2019/06/aws_glue_supportscripts/

Topic || IoT
AWS IoT Things Graph Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-iot-things-graph-now-generally-available/
AWS IoT Events is now generally available | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-iot-events-now-generally-available/
AWS IoT Device Tester v1.2 is Now Available for Amazon FreeRTOS v1.4.8 | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-iot-device-tester-v120-now-available-amazon-freertos-v148/
AWS IoT Analytics Now Supports Channel and Data Stores in Your Own Amazon S3 Buckets | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-iot-analytics-now-supports-channel-and-data-stores-in-your-o/

Topic || End User Computing
Announcing Amazon WorkLink support for Additional Website Authorization Providers | https://aws.amazon.com/about-aws/whats-new/2019/05/announcing-amazon-workLink-support-for-additional-website-authorization-providers/
Amazon AppStream 2.0 launches three self-guided workshops to build online trials and SaaS solutions | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-appstream2-launches-three-self-guided-workshops-to-build-online-trials-and-saas-solutions/
Amazon Chime Voice Connector now supports United States Toll-Free Numbers | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-chime-voice-connector-now-supports-us-toll-free-numbers/

Topic || Machine Learning
Introducing Fraud Detection Using Machine Learning | https://aws.amazon.com/about-aws/whats-new/2019/05/introducing-fraud-detection-using-machine-learning/
Amazon Textract - Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-textract-now-generally-available/
Amazon Transcribe now supports speech-to-text in Modern Standard Arabic | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-transcribe-now-supports-speech-to-text-in-modern-standard-arabic/

Topic || Robotics
AWS RoboMaker now supports over-the-air deployment job cancellation | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-robomaker-supports-over-the-air-deployment-job-cancellation/

Topic || Application Integration
Amazon API Gateway Now Supports Tag-Based Access Control and Tags on Additional Resources | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-api-gateway-now-supports-tag-based-access-control-tags-additional-resources/
Amazon API Gateway Now Supports VPC Endpoint Policies | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-api-gateway-supports-vpc-endpoint-policies/

Topic || Management and Governance
Introducing AWS Systems Manager OpsCenter to enable faster issue resolution | https://aws.amazon.com/about-aws/whats-new/2019/06/introducing-aws-systems-manager-opscenter-to-enable-faster-issue-resolution/
AWS Budgets now Supports Variable Budget Targets for Monthly and Quarterly Cost and Usage Budgets | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-budgets-support-for-variable-budget-targets-for-cost-and-usage-budgets/
CloudWatch Logs adds support for percentiles in metric filters | https://aws.amazon.com/about-aws/whats-new/2019/05/cloudwatch-logs-adds-support-for-percentiles-in-metric-filters/
Announcing Tag-Based Access Control for AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/05/announcing-tag-based-access-control-for-aws-cloudformation/
AWS Organizations Now Supports Tagging and Untagging of AWS Accounts | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-organizations-now-supports-tagging-and-untagging-of-aws-acco/
AWS Well-Architected Tool Now Supports 8x More Text in the Notes Field | https://aws.amazon.com/about-aws/whats-new/2019/06/aws-well-architected-tool-now-supports-8x-more-text-in-notes-field/

Topic || Customer Engagement
Amazon Pinpoint now includes support for AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-pinpoint-now-includes-support-for-aws-cloudformation/
Amazon Connect Adds Additional Telephony Metadata | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-connect-adds-additional-telephony-metadata/
Amazon Connect Decreases US Telephony Pricing by 26% in the US East (N. Virginia) and US West (Oregon) regions | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-connect-decreases-US-telephony-pricing-by-26-percent-in-the-US-east-N-Virginia-and-US-West-Oregon-regions/

Topic || Security
Amazon GuardDuty is Now SOC Compliant | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-guardduty-now-soc-compliant/
AWS Encryption SDK for C is now available | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-encryption-sdk-for-c-now-available/
Amazon Inspector adds CIS Benchmark support for Amazon Linux 2 | https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-inspector-adds-cis-benchmark-support-for-amazon-linux-2/

Topic || Training and Certification
Announcing New and Updated Exam Readiness Courses for AWS Certifications | https://aws.amazon.com/about-aws/whats-new/2019/05/announcing-new-and-updated-exam-readiness-courses-for-aws-certifications/

Topic || Quick Starts
New Quick Start deploys a modular architecture for Amazon Aurora PostgreSQL | https://aws.amazon.com/about-aws/whats-new/2019/05/new-quick-start-deploys-modular-aurora-postgresql-architecture-on-aws/

Topic || AWS Marketplace
AWS Marketplace enables long term contracts for AMI products | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-marketplace-enables-long-term-contracts-for-ami-products/
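The Storage item above about encrypting new EBS volumes "with a single setting" is an account-level, per-region configuration. A hedged sketch of the opt-in as AWS CLI configuration commands (the region is a hypothetical example; the commands correspond to the EC2 EnableEbsEncryptionByDefault and GetEbsEncryptionByDefault API actions):

```shell
# Opt in once per region; new EBS volumes created in that region are then
# encrypted by default (with the account's default KMS key unless overridden).
aws ec2 enable-ebs-encryption-by-default --region us-east-1

# Confirm the account-level setting for the region.
aws ec2 get-ebs-encryption-by-default --region us-east-1
```

Note this is a configuration fragment run against a live account, so it is shown without a test; it affects only newly created volumes, not existing ones.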
Simon hosts an update show with lots of great new features and capabilities!

Chapters:
Developer Tools 0:26
Storage 3:02
Compute 5:10
Database 10:31
Networking 13:41
Analytics 16:38
IoT 18:23
End User Computing 20:19
Machine Learning 21:12
Application Integration 24:02
Management and Governance 24:23
Migration 26:05
Security 26:56
Training and Certification 29:57
Blockchain 30:27
Quickstarts 31:06

Shownotes:

Topic || Developer Tools
Announcing AWS X-Ray Analytics – An Interactive approach to Trace Analysis | https://aws.amazon.com/about-aws/whats-new/2019/04/aws_x_ray_interactive_approach_analyze_traces/
Quickly Search for Resources across Services in the AWS Developer Tools Console | https://aws.amazon.com/about-aws/whats-new/2019/05/search-resources-across-services-developer-tools-console/
AWS Amplify Console adds support for Incoming Webhooks | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-amplify-console-adds-support-for-incoming-webhooks/
AWS Amplify launches an online community for fullstack serverless app developers | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-amplify-launches-an-online-community-for-fullstack-serverless-app-developers/
AWS AppSync Now Enables More Visibility into Performance and Health of GraphQL Operations | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-appsync-now-enables-more-visibility-into-performance-and-hea/
AWS AppSync Now Supports Configuring Multiple Authorization Types for GraphQL APIs | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-appsync-now-supports-configuring-multiple-authorization-type/

Topic || Storage
Amazon S3 Introduces S3 Batch Operations for Object Management | https://aws.amazon.com/about-aws/whats-new/2019/04/Amazon-S3-Introduces-S3-Batch-Operations-for-Object-Management/
AWS Snowball Edge adds block storage | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-snowball-edge-adds-block-storage-for-edge-computing-workload/
Amazon FSx for Windows File Server Adds Support for File System Monitoring with Amazon CloudWatch | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-fsx-for-windows-file-server-adds-support-for-cloudwatch/
AWS Storage Gateway enhances access control for SMB shares to store and access objects in Amazon S3 buckets | https://aws.amazon.com/about-aws/whats-new/2019/05/AWS-Storage-Gateway-enhances-access-control-for-SMB-shares-to-access-objects-in-Amazon-s3/

Topic || Compute
AWS Lambda adds support for Node.js v10 | https://aws.amazon.com/about-aws/whats-new/2019/05/aws_lambda_adds_support_for_node_js_v10/
AWS Serverless Application Model (SAM) supports IAM permissions and custom responses for Amazon API Gateway | https://aws.amazon.com/about-aws/whats-new/2019/aws_serverless_application_Model_support_IAM/
AWS Step Functions Adds Support for Workflow Execution Events | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-step-functions-adds-support-for-workflow-execution-events/
Amazon EC2 I3en instances, offering up to 60 TB of NVMe SSD instance storage, are now generally available | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-ec2-i3en-instances-are-now-generally-available/
Now Create Amazon EC2 On-Demand Capacity Reservations Through AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/04/now-create-amazon-ec2-on-demand-capacity-reservations-through-aws-cloudformation/
Share encrypted AMIs across accounts to launch instances in a single step | https://aws.amazon.com/about-aws/whats-new/2019/05/share-encrypted-amis-across-accounts-to-launch-instances-in-a-single-step/
Launch encrypted EBS backed EC2 instances from unencrypted AMIs in a single step | https://aws.amazon.com/about-aws/whats-new/2019/05/launch-encrypted-ebs-backed-ec2-instances-from-unencrypted-amis-in-a-single-step/
Amazon EKS Releases Deep Learning Benchmarking Utility | https://aws.amazon.com/about-aws/whats-new/2019/05/-amazon-eks-releases-deep-learning-benchmarking-utility-/
Amazon EKS Adds Support for Public IP Addresses Within Cluster VPCs | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-eks-adds-support-for-public-ip-addresses-within-cluster-v/
Amazon EKS Simplifies Kubernetes Cluster Authentication | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-eks-simplifies-kubernetes-cluster-authentication/
Amazon ECS Console support for ECS-optimized Amazon Linux 2 AMI and Amazon EC2 A1 instance family now available | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-ecs-console-support-for-ecs-optimized-amazon-linux-2-ami-/
AWS Fargate PV1.3 now supports the Splunk log driver | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-fargate-pv1-3-now-supports-the-splunk-log-driver/

Topic || Databases
Amazon Aurora Serverless Supports Capacity of 1 Unit and a New Scaling Option | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon_aurora_serverless_now_supports_a_minimum_capacity_of_1_unit_and_a_new_scaling_option/
Aurora Global Database Expands Availability to 14 AWS Regions | https://aws.amazon.com/about-aws/whats-new/2019/05/Aurora_Global_Database_Expands_Availability_to_14_AWS_Regions/
Amazon DocumentDB (with MongoDB compatibility) now supports per-second billing | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-documentdb-now-supports-per-second-billing/
Performance Insights is Generally Available on Amazon Aurora MySQL 5.7 | https://aws.amazon.com/about-aws/whats-new/2019/05/Performance-Insights-GA-Aurora-MySQL-57/
Performance Insights Supports Counter Metrics on Amazon RDS for Oracle | https://aws.amazon.com/about-aws/whats-new/2019/05/performance-insights-countermetrics-on-oracle/
Performance Insights Supports Amazon Aurora Global Database | https://aws.amazon.com/about-aws/whats-new/2019/05/performance-insights-global-datatabase/
Amazon ElastiCache for Redis adds support for Redis 5.0.4 | https://aws.amazon.com/about-aws/whats-new/2019/05/elasticache-redis-5-0-4/
Amazon RDS for MySQL Supports Password Validation | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-rds-for-mysql-supports-password-validation/
Amazon RDS for PostgreSQL Supports New Minor Versions 11.2, 10.7, 9.6.12, 9.5.16, and 9.4.21 | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-rds-postgresql-supports-minor-version-112/
Amazon RDS for Oracle now supports April Oracle Patch Set Updates (PSU) and Release Updates (RU) | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-rds-for-oracle-now-supports-april-oracle-patch-set-updates-psu-and-release-updates-ru/

Topic || Networking
Elastic Fabric Adapter Is Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/04/elastic-fabric-adapter-is-now-generally-available/
Migrate Your AWS Site-to-Site VPN Connections from a Virtual Private Gateway to an AWS Transit Gateway | https://aws.amazon.com/about-aws/whats-new/2019/04/migrate-your-aws-site-to-site-vpn-connections-from-a-virtual-private-gateway-to-an-aws-transit-gateway/
Announcing AWS Direct Connect Support for AWS Transit Gateway | https://aws.amazon.com/about-aws/whats-new/2019/04/announcing-aws-direct-connect-support-for-aws-transit-gateway/
Amazon CloudFront announces 11 new Edge locations in India, Japan, and the United States | https://aws.amazon.com/about-aws/whats-new/2019/05/cloudfront-11locations-7may2019/
Amazon VPC Endpoints Now Support Tagging for Gateway Endpoints, Interface Endpoints, and Endpoint Services | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-vpc-endpoints-now-support-tagging-for-gateway-endpoints-interface-endpoints-and-endpoint-services/

Topic || Analytics
Amazon EMR announces Support for Multiple Master nodes to enable High Availability for EMR applications | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-emr-announces-support-for-multiple-master-nodes-to-enable-high-availability-for-EMR-applications/
Amazon EMR now supports Multiple Master nodes to enable High Availability for HBase clusters | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-emr-now-supports-multiple-master-nodes-to-enable-high-availability-for-hbase-clusters/
Amazon EMR announces Support for Reconfiguring Applications on Running EMR Clusters | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-emr-announces-support-for-reconfiguring-applications-on-running-emr-clusters/
Amazon Kinesis Data Analytics now allows you to assign AWS resource tags to your real-time applications | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon_kinesis_data_analytics_now_allows_you_to_assign_aws_resource_tags_to_your_real_time_applications/
AWS Glue crawlers now support existing Data Catalog tables as sources | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-glue-crawlers-now-support-existing-data-catalog-tables-as-sources/

Topic || IoT
AWS IoT Analytics Now Supports Faster SQL Data Set Refresh Intervals | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-iot-analytics-now-supports-faster-sql-data-set-refresh-intervals/
AWS IoT Greengrass Adds Support for Python 3.7, Node v8.10.0, and Expands Support for Elliptic-Curve Cryptography | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-iot-greengrass-adds-support-python-3-7-node-v-8-10-0-and-expands-support-elliptic-curve-cryptography/
AWS Releases Additional Preconfigured Examples for FreeRTOS on Armv8-M | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-releases-additional-freertos-preconfigured-examples-armv8m/
AWS IoT Device Defender supports monitoring behavior of unregistered devices | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-iot-device-defender-supports-monitoring-behavior-of-unregistered-devices/
AWS IoT Analytics Now Supports Data Set Content Delivery to Amazon S3 | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-iot-analytics-now-supports-data-set-content-delivery-to-amaz/

Topic || End User Computing
Amazon AppStream 2.0 adds configurable timeouts for idle sessions |
https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-appstream-2-0-adds-configurable-timeouts-for-idle-session/ Monitor Emails in Your Workmail Organization Using Cloudwatch Metrics and Logs | https://aws.amazon.com/about-aws/whats-new/2019/05/monitor-emails-in-your-workmail-organization-using-cloudwatch-me/ You can now use custom chat bots with Amazon Chime | https://aws.amazon.com/about-aws/whats-new/2019/05/you-can-now-use-custom-chat-bots-with-amazon-chime/ Topic || Machine Learning Developers, start your engines! The AWS DeepRacer Virtual League kicks off today. | https://aws.amazon.com/about-aws/whats-new/2019/04/AWSDeepRacerVirtualLeague/ Amazon SageMaker announces new features to the built-in Object2Vec algorithm | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-sagemaker-announces-new-features-to-the-built-in-object2v/ Amazon SageMaker Ground Truth Now Supports Automated Email Notifications for Manual Data Labeling | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-sagemaker-ground-truth-now-supports-automated-email-notif/ Amazon Translate Adds Support for Hindi, Farsi, Malay, and Norwegian | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon_translate_support_hindi_farsi_malay_norwegian/ Amazon Transcribe now supports Hindi and Indian-accented English | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-transcribe-supports-hindi-indian-accented-english/ Amazon Comprehend batch jobs now supports Amazon Virtual Private Cloud | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-comprehend-batch-jobs-now-supports-amazon-virtual-private-cloud/ New in AWS Deep Learning AMIs: PyTorch 1.1, Chainer 5.4, and CUDA 10 support for MXNet | https://aws.amazon.com/about-aws/whats-new/2019/05/new-in-aws-deep-learning-amis-pytorch-1-1-chainer-5-4-cuda10-for-mxnet/ Topic || Application Integration Amazon MQ Now Supports Resource-Level and Tag-Based Permissions | 
https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-mq-now-supports-resource-level-and-tag-based-permissions/ Amazon SNS Adds Support for Cost Allocation Tags | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-sns-adds-support-for-cost-allocation-tags/ Topic || Management and Governance Reservation Expiration Alerts Now Available in AWS Cost Explorer | https://aws.amazon.com/about-aws/whats-new/2019/05/reservation-expiration-alerts-now-available-in-aws-cost-explorer/ AWS Systems Manager Patch Manager Supports Microsoft Application Patching | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-systems-manager-patch-manager-supports-microsoft-application-patching/ AWS OpsWorks for Chef Automate now supports Chef Automate 2 | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-opsworks-for-chef-automate-now-supports-chef-automate-2/ AWS Service Catalog Connector for ServiceNow supports CloudFormation StackSets | https://aws.amazon.com/about-aws/whats-new/2019/05/service-catalog-servicenow-connector-now-supports-stacksets/ Topic || Migration AWS Migration Hub EC2 Recommendations | https://aws.amazon.com/about-aws/whats-new/2019/05/aws-migration-hub-ec2-recommendations/ Topic || Security Amazon GuardDuty Adds Two New Threat Detections | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-guardduty-adds-two-new-threat-detections/ AWS Security Token Service (STS) now supports enabling the global STS endpoint to issue session tokens compatible with all AWS Regions | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-security-token-service-sts-now-supports-enabling-the-global-sts-endpoint-to-issue-session-tokens-compatible-with-all-aws-regions/ AWS WAF Security Automations Now Supports Log Analysis | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-waf-security-automations-now-supports-log-analysis/ AWS Certificate Manager Private Certificate Authority Increases Certificate Limit To One Million | 
https://aws.amazon.com/about-aws/whats-new/2019/04/aws-certificate-manager-private-certificate-authority-increases-certificate-limit-to-one-million/ Amazon Cognito launches enhanced user password reset API for administrators | https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-cognito-launches-enhanced-user-password-reset-api-for-administrators/ AWS Secrets Manager supports more client-side caching libraries to improve secrets availability and reduce cost | https://aws.amazon.com/about-aws/whats-new/2019/05/Secrets-Manager-Client-Side-Caching-Libraries-in-Python-NET-Go/ Create fine-grained session permissions using AWS Identity and Access Management (IAM) managed policies | https://aws.amazon.com/about-aws/whats-new/2019/05/session-permissions/ Topic || Training and Certification New VMware Cloud on AWS Navigate Track | https://aws.amazon.com/about-aws/whats-new/2019/04/vmware-navigate-track/ Topic || Blockchain Amazon Managed Blockchain What's New | https://aws.amazon.com/about-aws/whats-new/2019/04/introducing-amazon-managed-blockchain/ Topic || Quick Starts New Quick Start deploys SAP S/4HANA on AWS | https://aws.amazon.com/about-aws/whats-new/2019/05/new-quick-start-deploys-sap-s4-hana-on-aws/
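One item above notes that EC2 On-Demand Capacity Reservations can now be created through AWS CloudFormation. A minimal template fragment might look like the following sketch; the logical resource name and property values are illustrative placeholders, and the `AWS::EC2::CapacityReservation` properties should be checked against the current CloudFormation resource reference:

```yaml
# Illustrative sketch: reserve capacity for three Linux m5.large instances
# in one Availability Zone. All values here are placeholders.
Resources:
  WebTierReservation:
    Type: AWS::EC2::CapacityReservation
    Properties:
      AvailabilityZone: us-east-1a
      InstanceType: m5.large
      InstancePlatform: Linux/UNIX
      InstanceCount: 3
```

Because the reservation is now a first-class stack resource, it can be created, updated, and deleted alongside the instances that consume it.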
Simon & Nicki are joined by a live audience to record a great set of cool updates for customers!
Chapters: 1:20 Infrastructure 1:33 Developer Tools 3:50 Storage 4:28 Compute 6:13 Database 10:22 Analytics 13:01 IoT 13:23 End User Computing 14:08 Machine Learning 17:03 Networking 18:22 Customer Engagement 18:37 Application Integration 19:12 Game Tech 19:47 Media Services 20:44 Management and Governance 23:20 Robotics 24:26 Migration 25:03 Security 25:38 Training & Certification 26:05 Audience Q&A
Shownotes:
Topic || Infrastructure
Announcing the AWS Asia Pacific (Hong Kong) Region | https://aws.amazon.com/about-aws/whats-new/2019/04/announcing-the-aws-asia-pacific-hong-kong-region/
Topic || Developer Tools
AWS Amplify Console Now Supports Deploying Fullstack Serverless Applications with a Single Click | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-amplify-console-now-supports-deploying-fullstack-serverless-/
Amplify Framework Simplifies Configuring OAuth 2.0 Flows, Hosted UI, and AR/VR Scenes for Mobile and Web Apps | https://aws.amazon.com/about-aws/whats-new/2019/04/amplify-framework-simplifies-configuring-oauth-2-0-flows--hosted/
Amplify Framework Announces New Amazon Aurora Serverless, GraphQL, and OAuth Capabilities | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-amplify-announces-new-amazon-aurora-serverless--graphql--and/
AWS Amplify Console adds support for Custom Headers | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-amplify-console-adds-support-for-custom-headers/
AWS Amplify Console Now Available in Five Additional Regions | https://aws.amazon.com/about-aws/whats-new/2019/04/amplify-console-now-available-in-five-additional-regions/
AWS Device Farm Remote Access for Manual Testing on real Android and iOS devices now supports Android OS 8+ and iOS 11+ devices | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-device-farm-remote-access-for-manual-testing-on-real-android/
Topic || Storage
New AWS Public Datasets Available from National Renewable Energy Laboratory, Nanyang Technological University, Stanford, Software Heritage and others | https://aws.amazon.com/about-aws/whats-new/2019/04/new-aws-public-datasets-available-from-national-renewable-energy/
Topic || Compute
Amazon EC2 T3a Instances Are Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-ec2-t3a-instances-are-now-generally-available/
Amazon EKS Now Delivers Kubernetes Control Plane Logs to Amazon CloudWatch | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-eks-now-delivers-kubernetes-control-plane-logs-to-amazon-/
Amazon EKS Supports EC2 A1 Instances as a Public Preview | https://aws.amazon.com/about-aws/whats-new/2019/04/-amazon-eks-supports-ec2-a1-instances-as-a-public-preview-/
AWS Elastic Beanstalk extends Tag-Based Permissions | https://aws.amazon.com/about-aws/whats-new/2019/04/aws_elastic_beanstalk_extends_tag-based_permissions/
AWS ParallelCluster 2.3.1 with enhanced support for Slurm Workload Manager is available now | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-parallelcluster-slurm-enhancements/
Topic || Databases
Amazon RDS now supports per-second billing | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-rds-per-second-billing/
Amazon RDS for Oracle Now Supports Database Storage Size up to 64TiB | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-rds-for-oracle-now-supports-64tib/
Amazon RDS Enhanced Monitoring Adds New Storage and Host Metrics | https://aws.amazon.com/about-aws/whats-new/2019/04/enhanced-monitoring-supports-additional-metrics/
Amazon RDS for PostgreSQL Now Supports Multi Major Version Upgrades to PostgreSQL 11 | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-rds-postgresql-supports-multi-major-version-upgrades/
Amazon RDS for PostgreSQL Now Supports Data Import from Amazon S3 | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-rds-postgresql-supports-data-import-from-amazon-s3/
Amazon Aurora and Amazon RDS Enable Faster Migration from MySQL 5.7 Databases | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon_aurora_and_amazon_rds_enable_faster_migration_from_mysql_57_databases/
Amazon Aurora Serverless Supports Sharing and Cross-Region Copying of Snapshots | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon_aurora_serverless_now_supports_sharing_and_cross-region_copying_of_snapshots/
AWS simplifies replatforming of Microsoft SQL Server databases from Windows to Linux | https://aws.amazon.com/about-aws/whats-new/2019/04/windows-to-linux-replatforming-assistant-sql-server-databases/
Amazon Redshift now provides more control over snapshots | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-redshift-now-provides-more-control-over-snapshots/
AWS specifies the IP address ranges for Amazon DynamoDB endpoints | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-specifies-the-ip-address-ranges-for-amazon-dynamodb-endpoints/
Now you can tag Amazon DynamoDB tables when you create them | https://aws.amazon.com/about-aws/whats-new/2019/04/now-you-can-tag-amazon-dynamodb-tables-when-you-create-them/
DynamoDBMapper now supports Amazon DynamoDB transactional API calls | https://aws.amazon.com/about-aws/whats-new/2019/04/dynamodbmapper-now-supports-amazon-dynamodb-transactional-api-calls/
Topic || Analytics
Amazon Elasticsearch Service announces support for Elasticsearch 6.5 | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-elasticsearch-service-announces-support-for-elasticsearch-6-5/
Amazon Elasticsearch Service adds event monitoring and alerting support | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-elasticsearch-service-adds-event-monitoring-and-alerting-support/
Amazon Elasticsearch Service now offers improved performance at lower costs with C5, M5, and R5 instances | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-elasticsearch-service-now-offers-improved-performance-at-lower-costs-with-C5-M5-R5-instances/
AWS Glue now supports additional configuration options for memory-intensive jobs | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-glue-now-supports-additional-configuration-options-for-memory-intensive-jobs/
Announcing EMR release 5.22.0: Support for new versions of HBase, Oozie, Flink, and optimized EBS configuration for improved IO performance for applications such as Spark | https://aws.amazon.com/about-aws/whats-new/2019/04/announcing-emr-release-5220-support-for-new-versions-of-hbase-oozie-flink-and-optimized-ebs-configuration-for-improved-io-performance-for-applications-such-as-spark/
Amazon Kinesis Data Streams changes license for its consumer library to Apache License 2.0 | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon_kinesis_data_streams_changes_license_for_its_consumer_library_to_apache_license_2_0/
Amazon MSK expands its open preview into AP (Singapore) and AP (Sydney) AWS Regions | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon_msk_expands_its_open_preview_into_ap_singapore_and_ap_sydney_aws_regions/
Amazon QuickSight now supports localization, percentile calculations and more | https://aws.amazon.com/about-aws/whats-new/2019/04/Amazon_QuickSight_now_supports_localization_percentile_calculations_and_more/
Topic || IoT
Amazon FreeRTOS Now Supports Resource Tagging | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-freertos-now-supports-resource-tagging/
AWS IoT Analytics Now Supports Single Step Setup of IoT Analytics Resources from AWS IoT Core | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-iot-analytics-now-supports-single-step-setup-of-iot-analytic/
Topic || End User Computing
AWS Client VPN is Now Available in Four Additional AWS Regions | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-client-vpn-is-now-available-in-four-additional-aws-regions/
Amazon WorkDocs Migration Service | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon_workdocs_migration_service/
Amazon WorkDocs Document Approvals | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-workdocs-document-approval/
Topic || Machine Learning
Amazon SageMaker Now Offers Reduced Prices in the Asia Pacific (Tokyo) and Asia Pacific (Seoul) AWS Regions | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-sagemaker-now-offers-reduced-prices-in-the-asia-pacific--/
Amazon SageMaker Now Supports Greater Control of Root Access to Notebook Instances | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-sagemaker-now-supports-greater-control-of-root-access-to-/
Amazon SageMaker Ground Truth announces new features to simplify workflows, new data labeling vendors, and expansion in the Asia Pacific region | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-sagemaker-ground-truth-announces-new-features-to-simplify/
Amazon Transcribe now supports real-time speech-to-text in British English, French, and Canadian French | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-transcribe-now-supports-real-time-speech-to-text-in-british-english-french-and-canadian-french/
Amazon Polly Adds Arabic Language Support | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-polly-adds-arabic-language-support/
Amazon Comprehend Now Supports Confusion Matrices for Custom Classification | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-comprehend-now-supports-confusion-matrices-for-custom-classification/
AWS DeepLens Introduces New Bird Classification Project Template | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-deeplens-bird-classification/
Topic || Networking
Amazon CloudFront enhances the security for adding alternate domain names to a distribution | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-cloudfront-enhances-the-security-for-adding-alternate-domain-names-to-a-distribution/
Amazon CloudFront is now Available in Mainland China | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-cloudfront-is-now-available-in-mainland-china/
Expanding AWS PrivateLink support for Amazon Kinesis Data Firehose | https://aws.amazon.com/about-aws/whats-new/2019/04/expanding_aws_privatelink_support_for_amazon_kinesis_data_firehose/
AWS Global Accelerator is Now Available in Six Additional Regions | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-global-accelerator-is-now-available-in-six-additional-regions/
Topic || Customer Engagement
Amazon Pinpoint Now Offers an Analytics Dashboard for Transactional SMS Messages | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-pinpoint-now-offers-an-analytics-dashboard-for-transactional-sms-messages/
Topic || Application Integration
AWS AppSync Now Supports Tagging GraphQL APIs | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-appsync-now-supports-tagging-graphql-apis/
Amazon MQ now supports ActiveMQ Minor Version 5.15.9 | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-mq-now-supports-activemq-minor-version-5-15-9/
Topic || Game Tech
Amazon GameLift Realtime Servers Now Available | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-gameLift-realtime-servers-now-available/
Topic || Media Services
AWS Elemental MediaPackage and MediaTailor improve support for DASH Endpoints and Monetization | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-elemental-mediapackage-and-mediatailor-improve-support-for-dash-endpoints-and-monetization/
AWS Elemental MediaLive Offers Lower Cost Live Channels with Single-Pipeline Option | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-elemental-medialive-offers-lower-cost-live-channels-with-single-pipeline-option/
Speed Up Video Processing With New Accelerated Transcoding in AWS Elemental MediaConvert | https://aws.amazon.com/about-aws/whats-new/2019/04/speed-up-video-processing-with-new-accelerated-transcoding-in-aws-elemental-mediaconvert/
AWS Elemental MediaStore Now Supports Chunked Object Transfer to Enable Ultra-Low Latency Video Workflows | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-elemental-mediastore-now-supports-chunked-object-transfer-to-enabling-ultra-low-latency-video-workflows/
Topic || Management and Governance
AWS CloudFormation Coverage Updates for Amazon EC2, Amazon ECS and Amazon Elastic Load Balancer | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-cloudformation-coverage-updates-for-amazon-ec2--amazon-ecs-a/
AWS Systems Manager Session Manager Enables Session Encryption Using Customer Keys | https://aws.amazon.com/about-aws/whats-new/2019/04/AWS-Systems-Manager-Session-Manager-Enables-Session-Encryption-Using-Customer-Keys/
AWS Systems Manager Now Supports Use of Parameter Store at Higher API Throughput | https://aws.amazon.com/about-aws/whats-new/2019/04/aws_systems_manager_now_supports_use_of_parameter_store_at_higher_api_throughput/
AWS Systems Manager Parameter Store Introduces Advanced Parameters | https://aws.amazon.com/about-aws/whats-new/2019/04/aws_systems_manager_parameter_store_introduces_advanced_parameters/
Query AWS Regions Endpoints and More | https://aws.amazon.com/blogs/aws/new-query-for-aws-regions-endpoints-and-more-using-aws-systems-manager-parameter-store/
AWS Service Catalog Announces Tag Updating | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-service-catalog-announces-tag-updating/
Topic || Robotics
Announcing AWS RoboMaker Cloud Extensions for Robot Operating System (ROS) Melodic | https://aws.amazon.com/about-aws/whats-new/2019/04/announcing-aws-robomaker-cloud-extensions-for-robot-operating-sy/
NICE DCV Now Supports MacOS Native Clients | https://aws.amazon.com/about-aws/whats-new/2019/04/nice-dcv-now-supports-macos-native-clients/
Topic || Migration
Announcing Azure to AWS migration support in AWS Server Migration Service | https://aws.amazon.com/about-aws/whats-new/2019/04/announcing_azure_awsmigration_servermigrationservice/
Topic || Security
AWS Certificate Manager Private Certificate Authority is now available in five additional regions | https://aws.amazon.com/about-aws/whats-new/2019/04/AWS-Certificate-Manager-Private-Certificate-Authority-is-now-available-in-five-additional-regions/
AWS Single Sign-On now offers certificate customization to support your corporate policies | https://aws.amazon.com/about-aws/whats-new/2019/04/you-can-now-customize-the-aws-single-sign-on-certificate-to-meet-your-corporate-security-requirements/
Topic || Training and Certification
AWS Certification Triples its Testing Locations, Making it Even More Convenient to Get Certified | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-certification-triples-testing-locations/
Announcing the New AWS Certified Alexa Skill Builder - Specialty Exam | https://aws.amazon.com/about-aws/whats-new/2019/04/new-awsexam-certified-alexa-skill-builder-specialty/
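One of the database items above notes that AWS now publishes the IP address ranges used by Amazon DynamoDB endpoints. Those ranges appear in the machine-readable document at https://ip-ranges.amazonaws.com/ip-ranges.json, which can be filtered with nothing but the standard library. The prefixes below are placeholder sample data in the documented shape, not real AWS ranges; in practice you would fetch the live file instead:

```python
import json

# Tiny inline sample in the shape of AWS's published ip-ranges.json.
# The CIDR blocks are illustrative placeholders, not real DynamoDB ranges.
sample = json.loads("""
{
  "syncToken": "1556600000",
  "prefixes": [
    {"ip_prefix": "192.0.2.0/24",    "region": "us-east-1", "service": "DYNAMODB"},
    {"ip_prefix": "198.51.100.0/24", "region": "us-east-1", "service": "S3"},
    {"ip_prefix": "203.0.113.0/24",  "region": "eu-west-1", "service": "DYNAMODB"}
  ]
}
""")

def service_prefixes(doc, service, region=None):
    """Return the CIDR blocks published for a service, optionally one region."""
    return [
        p["ip_prefix"]
        for p in doc["prefixes"]
        if p["service"] == service and (region is None or p["region"] == region)
    ]

print(service_prefixes(sample, "DYNAMODB", region="us-east-1"))  # ['192.0.2.0/24']
```

A filter like this is handy for generating firewall or VPC endpoint allow-lists that track the published ranges.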
Simon and Nicki cover almost 100 updates! Check out the chapter timings to see where things of interest to you might be.
Infrastructure 00:42 Storage 1:17 Databases 4:14 Analytics 8:28 Compute 9:52 IoT 15:17 End User Computing 17:40 Machine Learning 19:10 Networking 21:57 Developer Tools 23:21 Application Integration 25:42 Game Tech 26:29 Media 27:37 Management and Governance 28:11 Robotics 30:35 Security 31:30 Solutions 32:40
Topic || Infrastructure
In the Works – AWS Region in Indonesia | https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-indonesia/
Topic || Storage
New Amazon S3 Storage Class – Glacier Deep Archive | https://aws.amazon.com/blogs/aws/new-amazon-s3-storage-class-glacier-deep-archive/
File Gateway Supports Amazon S3 Object Lock | https://aws.amazon.com/about-aws/whats-new/2019/03/file-gateway-supports-amazon-s3-object-lock/
AWS Storage Gateway Integrates Tape Gateway with the Amazon S3 Glacier Deep Archive Storage Class | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-storage-gateway-service-integrates-tape-gateway-with-amazon-s3-glacier-deeparchive-storage-class/
AWS Transfer for SFTP now supports AWS PrivateLink | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-transfer-for-sftp-now-supports-aws-privatelink/
Amazon FSx for Lustre Now Supports Access from Amazon Linux | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-fsx-for-lustre-now-supports-access-from-amazon-linux/
AWS introduces CSI Drivers for Amazon EFS and Amazon FSx for Lustre | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-introduces-csi-drivers-for-amazon-efs-and-amazon-fsx-for-lus/
Topic || Databases
Amazon DynamoDB drops the price of global tables by eliminating associated charges for DynamoDB Streams | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-dynamodb-drops-the-price-of-global-tables-by-eliminating-associated-charges-for-dynamodb-streams/
Amazon ElastiCache for Redis 5.0.3 enhances I/O handling to boost performance | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-elasticache-for-redis-503-enhances-io-handling-to-boost-performance/
Amazon Redshift announces Concurrency Scaling: Consistently fast performance during bursts of user activity | https://aws.amazon.com/about-aws/whats-new/2019/03/AmazonRedshift-ConcurrencyScaling/
Performance Insights is Generally Available on Amazon RDS for MariaDB | https://aws.amazon.com/about-aws/whats-new/2019/03/performance-insights-is-generally-available-for-mariadb/
Amazon RDS adds support for MySQL Versions 5.7.25, 5.7.24, and MariaDB Version 10.2.21 | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-rds-mysql-minor-5725-5725-and-mariadb-10221/
Amazon Aurora with MySQL 5.7 Compatibility Supports GTID-Based Replication | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-aurora-with-mysql-5-7-compatibility-supports-gtid-based-replication/
PostgreSQL 11 now Supported in Amazon RDS | https://aws.amazon.com/about-aws/whats-new/2019/03/postgresql11-now-supported-in-amazon-rds/
Amazon Aurora with PostgreSQL Compatibility Supports Logical Replication | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-aurora-with-postgresql-compatibility-supports-logical-replication/
Restore an Encrypted Amazon Aurora PostgreSQL Database from an Unencrypted Snapshot | https://aws.amazon.com/about-aws/whats-new/2019/03/restore-an-encrypted-aurora-postgresql-database-from-an-unencrypted-snapshot/
Amazon RDS for Oracle Now Supports In-region Read Replicas with Active Data Guard for Read Scalability and Availability | https://aws.amazon.com/about-aws/whats-new/2019/03/Amazon-RDS-for-Oracle-Now-Supports-In-region-Read-Replicas-with-Active-Data-Guard-for-Read-Scalability-and-Availability/
AWS Schema Conversion Tool Adds Support for Migrating Oracle ETL Jobs to AWS Glue | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-schema-conversion-tool-adds-support-for-migrating-oracle-etl/
AWS Schema Conversion Tool Adds New Conversion Features | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-sct-adds-support-for-new-endpoints/
Amazon Neptune Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-neptune-announces-service-level-agreement/
Topic || Analytics
Amazon QuickSight Announces General Availability of ML Insights | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon_quicksight_announced_general_availability_of_mL_insights/
AWS Glue enables running Apache Spark SQL queries | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-glue-enables-running-apache-spark-sql-queries/
AWS Glue now supports resource tagging | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-glue-now-supports-resource-tagging/
Amazon Kinesis Data Analytics Supports AWS CloudTrail Logging | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-kinesis-data-analytics-supports-aws-cloudtrail-logging/
Tag-on Create and Tag-Based IAM Application for Amazon Kinesis Data Firehose | https://aws.amazon.com/about-aws/whats-new/2019/03/tag-on-create-and-tag-based-iam-application-for-amazon-kinesis-data-firehose/
Topic || Compute
Amazon EKS Introduces Kubernetes API Server Endpoint Access Control | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-eks-introduces-kubernetes-api-server-endpoint-access-cont/
Amazon EKS Opens Public Preview of Windows Container Support | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-eks-opens-public-preview-of-windows-container-support/
Amazon EKS now supports Kubernetes version 1.12 and Cluster Version Updates Via CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-eks-now-supports-kubernetes-version-1-12-and-cluster-vers/
New Local Testing Tools Now Available for Amazon ECS | https://aws.amazon.com/about-aws/whats-new/2019/03/new-local-testing-tools-now-available-for-amazon-ecs/
AWS Fargate and Amazon ECS Support External Deployment Controllers for ECS Services | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-fargate-and-amazon-ecs-support-external-deployment-controlle/
AWS Fargate PV1.3 adds secrets and enhanced container dependency management | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-fargate-pv1-3-adds-secrets-and-enhanced-container-dependency/
AWS Event Fork Pipelines – Nested Applications for Event-Driven Serverless Architectures | https://aws.amazon.com/about-aws/whats-new/2019/03/introducing-aws-event-fork-pipelines-nested-applications-for-event-driven-serverless-architectures/
New Amazon EC2 M5ad and R5ad Featuring AMD EPYC Processors are Now Available | https://aws.amazon.com/about-aws/whats-new/2019/03/new-amazon-ec2-m5ad-and-r5ad-featuring-amd-epyc-processors-are-now-available/
Announcing the Ability to Pick the Time for Amazon EC2 Scheduled Events | https://aws.amazon.com/about-aws/whats-new/2019/03/announcing-the-ability-to-pick-the-time-for-amazon-ec2-scheduled-events/
Topic || IoT
AWS IoT Analytics now supports Single Step Setup of IoT Analytics Resources | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-iot-analytics-now-supports-single-step-setup-of-iot-analytic/
AWS IoT Greengrass Adds New Connector for AWS IoT Analytics, Support for AWS CloudFormation Templates, and Integration with Fleet Indexing | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-iot-greengrass-adds-new-connector-aws-iot-analytics-support-aws-cloudformation-templates-integration-fleet-indexing/
AWS IoT Device Tester v1.1 is Now Available for AWS IoT Greengrass v1.8.0 | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-iot-device-tester-now-available-aws-iot-greengrass-v180/
AWS IoT Core Now Supports HTTP REST APIs with X.509 Client Certificate-Based Authentication On Port 443 | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-iot-core-now-supports-http-rest-apis-with-x509-client-certificate-based-authentication-on-port-443/
Generate Fleet Metrics with New Capabilities of AWS IoT Device Management | https://aws.amazon.com/about-aws/whats-new/2019/03/generate-fleet-metrics-with-new-capabilities-of-aws-iot-device-management/
Topic || End User Computing
Amazon AppStream 2.0 Now Supports iPad and Android Tablets and Touch Gestures | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-appstream-2-0-now-supports-ipad-and-android-tablets-and-t/
Amazon WorkDocs Drive now supports offline content and offline search | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-workdocs-drive-now-supports-offline-content-and-offline-s/
Introducing Amazon Chime Business Calling | https://aws.amazon.com/about-aws/whats-new/2019/03/introducing-amazon-chime-business-calling/
Introducing Amazon Chime Voice Connector | https://aws.amazon.com/about-aws/whats-new/2019/03/introducing-amazon-chime-voice-connector/
Alexa for Business now lets you create Alexa skills for your organization using Skill Blueprints | https://aws.amazon.com/about-aws/whats-new/2019/03/alexa-for-business-now-lets-you-create-alexa-skills-for-your-org/
Topic || Machine Learning
New AWS Deep Learning AMIs: Amazon Linux 2, TensorFlow 1.13.1, MXNet 1.4.0, and Chainer 5.3.0 | https://aws.amazon.com/about-aws/whats-new/2019/03/new-aws-deep-learning-amis-amazon-linux2-tensorflow-13-1-mxnet1-4-0-chainer5-3-0/
Introducing AWS Deep Learning Containers | https://aws.amazon.com/about-aws/whats-new/2019/03/introducing-aws-deep-learning-containers/
Amazon Transcribe now supports speech-to-text in German and Korean | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-transcribe-now-supports-speech-to-text-in-german-and-korean/
Amazon Transcribe enhances custom vocabulary with custom pronunciations and display forms | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-transcribe-enhances-custom-vocabulary-with-custom-pronunciations-and-display-forms/
Amazon Comprehend now supports AWS KMS Encryption | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-comprehend-now-supports-aws-kms-encryption/
New Setup Tool To Get Started Quickly with Amazon Elastic Inference | https://aws.amazon.com/about-aws/whats-new/2019/04/new-python-script-to-get-started-quickly-with-amazon-elastic-inference/
Topic || Networking
Application Load Balancers now Support Advanced Request Routing | https://aws.amazon.com/about-aws/whats-new/2019/03/application-load-balancers-now-support-advanced-request-routing/
Announcing Multi-Account Support for Direct Connect Gateway | https://aws.amazon.com/about-aws/whats-new/2019/03/announcing-multi-account-support-for-direct-connect-gateway/
Topic || Developer Tools
AWS App Mesh is now generally available | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-app-mesh-is-now-generally-available/
The AWS Toolkit for IntelliJ is Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/03/the-aws-toolkit-for-intellij-is-now-generally-available/
The AWS Toolkit for Visual Studio Code (Developer Preview) is Now Available for Download in the Visual Studio Marketplace | https://aws.amazon.com/about-aws/whats-new/2019/03/the-aws-toolkit-for-visual-studio-code--developer-preview--is-now-available-for-download-from-vs-marketplace/
AWS Cloud9 announces support for Ubuntu development environments | https://aws.amazon.com/about-aws/whats-new/2019/04/aws-cloud9-announces-support-for-ubuntu-development-environments/
Amplify Framework Adds Enhancements to Authentication for iOS, Android, and React Native Developers | https://aws.amazon.com/about-aws/whats-new/2019/03/amplify-framework-adds-enhancements-to-authentication-for-ios-android-and-react-native-developers/
AWS CodePipeline Adds Action-Level Details to Pipeline Execution History | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-codepipeline-adds-action-level-details-to-pipeline-execution-history/
Topic || Application Integration
Amazon API Gateway Improves API Publishing and Adds Features to Enhance User Experience | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-api-gateway-improves-api-publishing-and-adds-features/
Topic || Game Tech
Over 190 Updates Come to Lumberyard Beta 1.18, Available Now | https://aws.amazon.com/about-aws/whats-new/2019/03/over-190-updates-come-to-lumberyard-beta-118-available-now/
Amazon GameLift Realtime Servers Now in Preview | https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-gamelift-realtime-servers-now-in-preview/
Topic || Media Services
Detailed Job Progress Status and Server-Side S3 Encryption Now Available with AWS Elemental MediaConvert | https://aws.amazon.com/about-aws/whats-new/2019/03/detailed-job-progress-status-and-server-side-s3-encryption-now-available-with-aws-elemental-mediaconvert/
Introducing Live Streaming with Automated Multi-Language Subtitling | https://aws.amazon.com/about-aws/whats-new/2019/03/introducing-live-streaming-with-automated-multi-language-subtitling/
Video on Demand Now Leverages AWS Elemental MediaConvert QVBR Mode | https://aws.amazon.com/about-aws/whats-new/2019/04/video-on-demand-now-leverages-aws-elemental-mediaconvert-qvbr-mode/
Topic || Management and Governance
Use AWS Config Rules to Remediate Noncompliant Resources | https://aws.amazon.com/about-aws/whats-new/2019/03/use-aws-config-to-remediate-noncompliant-resources/
AWS Config Now Supports Tagging of AWS Config Resources | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-config-now-supports-tagging-of-aws-config-resources/
Now You Can Query Based on Resource Configuration Properties in AWS Config | https://aws.amazon.com/about-aws/whats-new/2019/03/now-you-can-query-based-on-resource-configuration-properties-in-aws-config/
AWS Config Adds Support for Amazon API Gateway | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-config-adds-support-for-amazon-api-gateway/
Amazon Inspector adds support for Amazon EC2 A1 instances |
https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-inspector-adds-support-for-amazon-ec2-a1-instances/ Service control policies in AWS Organizations enable fine-grained permission controls | https://aws.amazon.com/about-aws/whats-new/2019/03/service-control-policies-enable-fine-grained-permission-controls/ You can now use resource level policies for Amazon CloudWatch Alarms | https://aws.amazon.com/about-aws/whats-new/2019/04/you-can-now-use-resource-level-permissions-for-amazon-cloudwatch/ Amazon CloudWatch Launches Search Expressions | https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-cloudwatch-launches-search-expressions/ AWS Systems Manager Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-systems-manager-announces-service-level-agreement/ Topic || Robotics AWS RoboMaker Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-robomaker-announces-service-level-agreement/ AWS RoboMaker announces new build and bundle feature that makes it up to 10x faster to update a simulation job or a robot | https://aws.amazon.com/about-aws/whats-new/2019/03/robomaker-new-build-and-bundle/ Topic || Security Announcing the renewal command for AWS Certificate Manager | https://aws.amazon.com/about-aws/whats-new/2019/03/Announcing-the-renewal-command-for-AWS-Certificate-Manager/ AWS Key Management Service Increases API Requests Per Second Limits | https://aws.amazon.com/about-aws/whats-new/2019/03/aws-key-management-service-increases-api-requests-per-second-limits/ Announcing AWS Firewall Manager Support For AWS Shield Advanced | https://aws.amazon.com/about-aws/whats-new/2019/03/announcing-aws-firewall-manager-support-for-aws-shield-advanced/ Topic || Solutions New AWS SAP Navigate Track | https://aws.amazon.com/about-aws/whats-new/2019/03/sap-navigate-track/ Deploy Micro Focus PlateSpin Migrate on AWS with New Quick Start | 
https://aws.amazon.com/about-aws/whats-new/2019/03/deploy-micro-focus-platespin-migrate-on-aws-with-new-quick-start/
Simon guides you through lots of new features, services and capabilities that you can take advantage of, including the new AWS Backup service, more powerful GPU capabilities, new SLAs and much, much more!

Chapters:
Service Level Agreements 0:17
Storage 0:57
Media Services 5:08
Developer Tools 6:17
Analytics 9:54
AI/ML 12:07
Database 14:47
Networking & Content Delivery 17:32
Compute 19:02
Solutions 21:57
Business Applications 23:38
AWS Cost Management 25:07
Migration & Transfer 25:39
Application Integration 26:07
Management & Governance 26:32
End User Computing 29:22

Links:
Topic || Service Level Agreements 0:17
Amazon Kinesis Data Firehose Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-kinesis-data-firehose-announces-99-9-service-level-agreement/
Amazon Kinesis Data Streams Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-kinesis-data-streams-announces-99-9-service-level-agreement/
Amazon Kinesis Video Streams Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-kinesis-video-streams-announces-99-9-service-level-agreement/
Amazon EKS Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/01/-amazon-eks-announces-99-9--service-level-agreement-/
Amazon ECR Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-ecr-announces-99-9--service-level-agreement/
Amazon Cognito Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-cognito-announces-99-9-service-level-agreement/
AWS Step Functions Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-step-functions-announces-service-level-agreement/
AWS Secrets Manager Announces Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/01/AWS-Secrets-Manager-announces-service-level-agreement/
Amazon MQ Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-mq-announces-service-level-agreement/
Topic || Storage 0:57
Introducing AWS Backup | https://aws.amazon.com/about-aws/whats-new/2019/01/introducing-aws-backup/
Introducing Amazon Elastic File System Integration with AWS Backup | https://aws.amazon.com/about-aws/whats-new/2019/01/introducing-amazon-elastic-file-system-integration-with-aws-backup/
AWS Storage Gateway Integrates with AWS Backup | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-storage-gateway-integrates-with-aws-backup-to-protect-volume/
AWS Backup Integrates with Amazon DynamoDB for Centralized and Automated Backup Management | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-backup-integrates-with-amazon-DynamoDB-for-centralized-and-automated-backup-management/
Amazon EBS Integrates with AWS Backup to Protect Your Volumes | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-ebs-integrates-with-aws-backup-to-protect-your-volumes/
AWS Storage Gateway Volume Detach & Attach | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-storage-gateway-introduces-volume-detach-and-attach-feature-/
AWS Storage Gateway - Tape Gateway Performance | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-storage-gateway-announces-increased-throughput-performance-for-tape-gateway/
Amazon FSx for Lustre Offers New Options and Faster Speeds for Working with S3 Data | https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-fsx-for-lustre-offers-new-options-and-faster-speeds/
Topic || Media Services 5:08
AWS Elemental MediaConvert Adds IMF Input and Enhances Caption Burn-In Support | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-elemental-mediaconvert-adds-imf-input-enhances-caption-burn-in-support/
AWS Elemental MediaLive Adds Support for AWS CloudTrail | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-elemental-medialive-adds-support-for-aws-cloudtrail/
AWS Elemental MediaLive Now Supports Resource Tagging | https://aws.amazon.com/about-aws/whats-new/2019/02/aws-elemental-medialive-now-supports-resource-tagging/
AWS Elemental MediaLive Adds I-Frame-Only HLS Manifests and JPEG Outputs | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-elemental-medialive-add-i-frame-only-hls-manifest-and-jpeg-outputs/
Topic || Developer Tools 6:17
Amazon Corretto is Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-corretto-is-now-generally-available/
AWS CodePipeline Now Supports Deploying to Amazon S3 | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-codepipeline-now-supports-deploying-to-amazon-s3/
AWS Cloud9 Supports AWS CloudTrail Logging | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-cloud9-supports-aws-cloudtrail-logging/
AWS CodeBuild Now Supports Accessing Images from Private Docker Registry | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-codebuild-now-supports-accessing-images-from-private-docker-registry/
Develop and Test AWS Step Functions Workflows Locally | https://aws.amazon.com/about-aws/whats-new/2019/02/develop-and-test-aws-step-functions-workflows-locally/
AWS X-Ray SDK for .NET Core is Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2019/02/aws-x-ray-net-core-sdk-generally-available/
Topic || Analytics 9:54
Amazon Elasticsearch Service doubles maximum cluster capacity with 200 node cluster support | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-elasticsearch-service-doubles-maximum-cluster-capacity-with-200-node-cluster-support/
Amazon Elasticsearch Service announces support for Elasticsearch 6.4 | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-elasticsearch-service-announces-support-for-elasticsearch-6-4/
Amazon Elasticsearch Service now supports three Availability Zone deployments | https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-elasticsearch-service-now-supports-three-availability-zone-deployments/
Now bring your own KDC and enable Kerberos authentication in Amazon EMR | https://aws.amazon.com/about-aws/whats-new/2019/01/now_bring_your_own_kdc_and_enable_kerberos_authentication_in_amazon_emr/
Source code for the AWS Glue Data Catalog client for Apache Hive Metastore is now available for download | https://aws.amazon.com/about-aws/whats-new/2019/02/source-code-for-the-aws-glue-data-catalog-client-for-apache-hive-metatore-is-now-available-for-download/
Topic || AI/ML 12:07
Amazon Comprehend is now Integrated with AWS CloudTrail | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-comprehend-is-now-integrated-with-aws-cloudtrail/
Object Bounding Boxes and More Accurate Object and Scene Detection are now Available for Amazon Rekognition Video | https://aws.amazon.com/about-aws/whats-new/2019/01/object-bounding-boxes-and-more-accurate-object-and-scene-detection-are-now-available-for-amazon-rekognition-video/
Amazon Elastic Inference Now Supports TensorFlow 1.12 with a New Python API | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-elastic-inference-supports-tensorflow-1-12-with-a-python-api/
New in AWS Deep Learning AMIs: Updated Elastic Inference for TensorFlow, TensorBoard 1.12.1, and MMS 1.0.1 | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-deep-learning-amis-now-support-elastic-inference-for-tensorflow-tensorboard1-12-1-mms101/
Amazon SageMaker Batch Transform Now Supports TFRecord Format | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-sagemaker-batch-transform-now-supports-tfrecord-format/
Amazon Transcribe Now Supports US Spanish Speech-to-Text in Real Time | https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-transcribe-now-supports-us-spanish-speech-to-text-in-real-time/
Topic || Database 14:47
Amazon Redshift now runs ANALYZE automatically | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-redshift-auto-analyze/
Introducing Python Shell Jobs in AWS Glue | https://aws.amazon.com/about-aws/whats-new/2019/01/introducing-python-shell-jobs-in-aws-glue/
Amazon RDS for PostgreSQL Now Supports T3 Instance Types | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-rds-postgresql-now-supports-t3-instance-types/
Amazon RDS for Oracle Now Supports T3 Instance Types | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-rds-for-oracle-now-supports-t3-instance-types/
Amazon RDS for Oracle Now Supports SQLT Diagnostics Tool Version 12.2.180725 | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-rds-oracle-now-supports-sqlt-diagnostics-tool-122180725/
Amazon RDS for Oracle Now Supports January 2019 Oracle Patch Set Updates (PSU) and Release Updates (RU) | https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-rds-oracle-supports-jan-2019-oracle-psu/
Amazon DynamoDB Local Adds Support for Transactional APIs, On-Demand Capacity Mode, and 20 GSIs | https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-dynamodb-local-adds-support-for-transactional-apis-on-demand-capacity-mode-and-20-gsis/
Topic || Networking & Content Delivery 17:32
Network Load Balancer Now Supports TLS Termination | https://aws.amazon.com/about-aws/whats-new/2019/01/network-load-balancer-now-supports-tls-termination/
Amazon CloudFront announces six new Edge locations across United States and France | https://aws.amazon.com/about-aws/whats-new/2019/02/cloudfront-feb2019-6locations/
AWS Site-to-Site VPN Now Supports IKEv2 | https://aws.amazon.com/about-aws/whats-new/2019/02/aws-site-to-site-vpn-now-supports-ikev2/
VPC Route Tables Support up to 1,000 Static Routes | https://forums.aws.amazon.com/ann.jspa?annID=6554
Topic || Compute 19:02
Announcing a 25% price reduction for Amazon EC2 X1 Instances in the Asia Pacific (Mumbai) AWS Region | https://aws.amazon.com/about-aws/whats-new/2019/02/announcing-a-25-percent-price-reduction-for-amazon-ec2-x1-instances-in-the-asia-pacific-mumbai-aws-region/
Amazon EKS Achieves ISO and PCI Compliance | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-eks-achieves-iso-and-pci-compliance/
AWS Fargate Now Has Support For AWS PrivateLink | https://aws.amazon.com/about-aws/whats-new/2019/02/aws-fargate-now-has-support-for-aws-privatelink/
AWS Elastic Beanstalk Adds Support for Ruby 2.6 | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-elastic-beanstalk-adds-support-for-ruby-26/
AWS Elastic Beanstalk Adds Support for .NET Core 2.2 | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-elastic-beanstalk-adds-support-for-net-core-22/
Amazon ECS and Amazon ECR now have support for AWS PrivateLink | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-fargate--amazon-ecs--and-amazon-ecr-now-have-support-for-aws/
GPU Support for Amazon ECS now Available | https://aws.amazon.com/about-aws/whats-new/2019/02/gpu-support-for-amazon-ecs-now-available/
AWS Batch now supports Amazon EC2 A1 Instances and EC2 G3s Instances | https://aws.amazon.com/about-aws/whats-new/2019/02/aws-batch-now-supports-amazon-ec2-a1-instances-and-ec2-g3s-insta/
Topic || Solutions 21:57
Deploy Micro Focus Enterprise Server on AWS with New Quick Start | https://aws.amazon.com/about-aws/whats-new/2019/01/deploy-micro-focus-enterprise-server-on-aws-with-new-quick-start/
AWS Public Datasets Now Available from UK Meteorological Office, Queensland Government, University of Pennsylvania, Buildzero, and Others | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-public-datasets-now-available/
Quick Start Update: Active Directory Domain Services on the AWS Cloud | https://aws.amazon.com/about-aws/whats-new/2019/02/quick-start-update-active-directory-domain-services-on-aws/
Introducing the Media2Cloud solution | https://aws.amazon.com/about-aws/whats-new/2019/01/introducing-the-media2cloud-solution/
Topic || Business Applications 23:38
Alexa for Business now offers IT admins simplified workflow to setup shared devices | https://aws.amazon.com/about-aws/whats-new/2019/01/alexa-for-business-now-offers-it-admins-simplified-workflow-to-s/
Topic || AWS Cost Management 25:07
Introducing Normalized Units Information for Amazon EC2 Reservations in AWS Cost Explorer | https://aws.amazon.com/about-aws/whats-new/2019/02/normalized-units-information-for-amazon-ec2-reservations-in-aws-cost-explorer/
Topic || Migration & Transfer 25:39
AWS Migration Hub Now Supports Importing On-Premises Server and Application Data to Track Migration Progress | https://aws.amazon.com/about-aws/whats-new/2019/01/AWSMigrationHubImport/
Topic || Application Integration 26:07
Amazon SNS Message Filtering Adds Support for Multiple String Values in Blacklist Matching | https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-sns-message-filtering-adds-support-for-multiple-string-values-in-blacklist-matching/
Topic || Management & Governance 26:32
AWS Trusted Advisor Expands Functionality With New Best Practice Checks | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-trusted-advisor-expands-functionality/
AWS Systems Manager State Manager Now Supports Management of In-Guest and Instance-Level Configuration | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-systems-manager-state-manager-now-supports-management-of-in-guest-and-instance-level-configuration/
AWS Config Increases Default Limits for AWS Config Rules | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-config-increases-default-limits-for-aws-config-rules/
Introducing AWS CloudFormation UpdateReplacePolicy Attribute | https://aws.amazon.com/about-aws/whats-new/2019/01/introducing-aws-cloudformation-updatereplacepolicy-attribute/
Automate WebSocket API Creation in Amazon API Gateway Using AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/02/automate-websocket-api-creation-in-api-gateway-with-cloudformation/
AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise Now Support AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/02/aws-opsworks-for-chef-automate-and-aws-opsworks-for-puppet-enter/
Find And Update Access Keys, Password, And MFA Settings Easily Using The AWS Management Console | https://aws.amazon.com/about-aws/whats-new/2019/01/my-security-credentials/
Amazon CloudWatch Agent Adds Support for Procstat Plugin and Multiple Configuration Files | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-cloudwatch-agent-adds-support-for-procstat-plugin-and-multiple-configuration-files/
Improve Security Of Your AWS SSO Users Signing In To The User Portal By Using Email-based Verification | https://aws.amazon.com/about-aws/whats-new/2019/01/email-based-verification-for-sso/
Topic || End User Computing 29:22
Introducing Amazon WorkLink | https://aws.amazon.com/about-aws/whats-new/2019/01/introducing-amazon-worklink/
AppStream 2.0 enables custom scripts before session start and after session termination | https://aws.amazon.com/about-aws/whats-new/2019/02/appstream-2-0-enables-custom-scripts-before-session-start-and-af/
Episode 8 – Now With Insane Magic

This week we talk about TLS support for NLB, AWS Worklink, Kubernetes metering, and retailers pushing back on AWS. Plus the lightning round and cool tools with Jonathan. #thecloudpod

Thanks to our Sponsors:
Foghorn Consulting: fogops.io/thecloudpod
Audible: audibletrial.com/thecloudpod

News
Microsoft Earnings – http://fortune.com/2019/01/30/microsoft-stock-down-slowdown-azure-cloud/
Idera acquires Travis CI – https://techcrunch.com/2019/01/23/idera-acquires-travis-ci/
Google Gives Wikipedia Millions – https://www.wired.com/story/google-wikipedia-machine-learning-glow-languages/
NLB now supports TLS Termination – https://aws.amazon.com/blogs/aws/new-tls-termination-for-network-load-balancers
Microsoft Acquires Citus Data – https://blogs.microsoft.com/blog/2019/01/24/microsoft-acquires-citus-data-re-affirming-its-commitment-to-open-source-and-accelerating-azure-postgresql-performance-and-scale/
AWS Worklink secures on-premises websites and apps – https://aws.amazon.com/blogs/aws/amazon-worklink-secure-one-click-mobile-access-to-internal-websites-and-applications/
GKE Usage Metering – https://cloud.google.com/blog/products/containers-kubernetes/gke-usage-metering-whose-line-item-is-it-anyway
Albertsons picks Azure to run cloud workloads due to Amazon being a competitor – https://www.fool.com/investing/2019/01/28/amazon-fear-is-driving-retailers-to-microsofts-clo.aspx

Lightning Round
Python Shell for AWS Glue – https://aws.amazon.com/about-aws/whats-new/2019/01/introducing-python-shell-jobs-in-aws-
Simon takes you through a nice mix of updates and new things to take advantage of - even a price drop!

Chapters:
Service Level Agreements 00:19
Price Reduction 1:15
Databases 2:09
Service Region Expansion 3:52
Analytics 5:23
Machine Learning 7:13
Compute 7:55
IoT 9:37
Management 10:43
Mobile 11:33
Desktop 12:30
Certification 13:11

Shownotes:
Topic || Service Level Agreements 00:19
Amazon API Gateway announces service level agreement | https://aws.amazon.com/about-aws/whats-new/2019/01/api-gateway-introduces-service-level-agreement/
Amazon EMR Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-emr-announces-99-9-service-level-agreement/
Amazon EFS Announces 99.9% Service Level Agreement | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-efs-announces-99-9-service-level-agreement/
AWS Direct Connect Service Level Agreement | https://aws.amazon.com/directconnect/sla/
Topic || Price Reduction 1:15
Announcing AWS Fargate Price Reduction by Up To 50% | https://aws.amazon.com/about-aws/whats-new/2019/01/announcing-aws-fargate-price-reduction-by-up-to-50-/
Topic || Databases 2:09
Introducing Amazon DocumentDB (with MongoDB compatibility) – Generally available | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-documentdb-with-mongodb-compatibility-generally-available/
AWS Database Migration Service Now Supports Amazon DocumentDB with MongoDB compatibility as a target | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-database-migration-service-adds-support-for-amazon-documentdb/
Topic || Service Region Expansion 3:52
Amazon EC2 High Memory Instances are Now Available in Additional AWS Regions | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-ec2-high-memory-instances-are-now-available-in-additional-aws-regions/
Amazon Neptune is Now Available in Asia Pacific (Sydney) | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon_neptune_is_now_available_in_sydney/
Amazon EKS Available in Seoul Region | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-eks-available-in-seoul-region/
AWS Glue is now available in the AWS EU (Paris) Region | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-glue-is-now-available-in-the-aws-eu-paris-region/
Amazon EC2 X1e Instances are Now Available in the Asia Pacific (Seoul) AWS Region | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-ec2-x1e-instances-are-now-available-in-the-asia-pacific-seoul-aws-region/
Amazon Pinpoint is now available in three additional regions | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-pinpoint-is-now-available-in-three-additional-regions/
Topic || Analytics 5:23
Amazon QuickSight Launches Pivot Table Enhancements, Cross-Schema Joins and More | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-quickSight-launche-pivot-table-enhancements-cross-schema-joins-and-more/
Topic || Machine Learning 7:13
Amazon SageMaker now supports encrypting all inter-node communication for distributed training | https://docs.aws.amazon.com/sagemaker/latest/dg/train-encrypt.html
Topic || Compute 7:55
Amazon EC2 Spot now Provides Results for the “Describe Spot Instance Request” in Multiple Pages | https://aws.amazon.com/about-aws/whats-new/2019/01/amazon-ec2-spot-now-supports-paginated-describe-for-spot-instance-requests/
Announcing Windows Server 2019 AMIs for Amazon EC2 | https://aws.amazon.com/about-aws/whats-new/2018/12/announcing-windows-server-2019-amis-for-amazon-ec2/
AWS Step Functions Now Supports Resource Tagging | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-step-functions-now-supports-resource-tagging/
Topic || IoT 9:37
AWS IoT Core Now Enables Customers to Store Messages for Disconnected Devices | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-iot-core-now-enables-customers-to-store-messages-for-disconnected-devices/
Renesas RX65N System on Chip is Qualified for Amazon FreeRTOS | https://aws.amazon.com/about-aws/whats-new/2019/01/renesas-rx65n-system-on-chip-qualified-for-amazon-freertos/
Topic || Management 10:43
AWS Config adds support for AWS Service Catalog | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-config-adds-support-for-aws-service-catalog/
AWS Single Sign-On Now Enables You to Direct Users to a Specific AWS Management Console Page | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-single-sign-on-now-enables-you-to-direct-users-to-a-specific-aws-management-console-page/
Topic || Mobile 11:33
AWS Device Farm now supports Appium Node.js and Appium Ruby | https://aws.amazon.com/about-aws/whats-new/2019/01/aws-device-farm-now-supports-appium-nodejs-and-appium-ruby/
Topic || Desktop 12:30
Deploy Citrix Virtual Apps and Desktops Service on AWS with New Quick Start | https://aws.amazon.com/about-aws/whats-new/2019/01/deploy-citrix-virtual-apps-and-desktops-service-on-aws-with-new-quick-start/
Topic || Certification 13:11
Announcing the AWS Certified Alexa Skill Builder - Specialty Beta Exam | https://aws.amazon.com/about-aws/whats-new/2019/01/announcing-aws-certified-alexa-skill-builder-beta-exam/
The AWS analytics group of services has a lot of members. These are some of the newer offerings from Amazon, but they are very effective tools for professional development and for learning more about your enterprise environment.

Amazon Athena
Query Data in S3 using SQL. Store your data in S3, define your schema on top of the data set, and run queries. The UI is not great currently, but it is a way to avoid building out a data warehouse for your needs. This serverless query service can get you analytical data back quickly. Better yet, it comes without all of the typical setup.

Amazon CloudSearch
Managed Search Service. This service provides a way for you to upload data or documents, index them, and provide a search system for that data via HTTP requests. It is flexible and allows you to custom-define the indexes, so you can upload almost any document format or data style and use the service to handle search requests.

Amazon EMR
Hosted Hadoop Framework. This service allows you to spin up a Spark or Hadoop system on top of your S3 data lake quickly. It covers the headaches of getting those environments built, and it is a cost-effective solution to your data science needs that can scale so you avoid over-buying resources.

Amazon Elasticsearch Service
Run and Scale Elasticsearch Clusters. Elasticsearch is a popular open-source search and analytics engine with a broad range of uses, including log file research, stream data analysis, application monitoring, and more. It is quick and easy to set up, so you can dive right into the analysis part of your work. The fully managed service has an API and a CLI, as you would expect, so you can automate it to your needs.

Amazon Kinesis
Work with Real-time Streaming Data. Kinesis also provides a way to analyze video streams in real time; we covered it in the media episode, so we will not spend time on it here.
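The Athena workflow described above is essentially schema-on-read: leave the files in S3, lay a table definition over them, and query with plain SQL. A minimal sketch of the two statements involved, using hypothetical bucket, database, and column names (in practice you would submit these via the Athena console or its StartQueryExecution API):

```python
# Sketch of Athena's schema-on-read model. All names here (weblogs,
# access_logs, example-bucket, the columns) are hypothetical.

def athena_ddl(database: str, table: str, s3_location: str) -> str:
    """Build a CREATE EXTERNAL TABLE statement for CSV files in S3."""
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {database}.{table} (\n"
        "  request_time string,\n"
        "  status_code int,\n"
        "  bytes_sent bigint\n"
        ")\n"
        "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','\n"
        f"LOCATION '{s3_location}'"
    )

def error_rate_query(database: str, table: str) -> str:
    """Build an ad hoc query; Athena scans the S3 objects at query time."""
    return (
        "SELECT status_code, COUNT(*) AS hits\n"
        f"FROM {database}.{table}\n"
        "WHERE status_code >= 500\n"
        "GROUP BY status_code\n"
        "ORDER BY hits DESC"
    )

ddl = athena_ddl("weblogs", "access_logs", "s3://example-bucket/logs/")
sql = error_rate_query("weblogs", "access_logs")
```

Because the table is only metadata, dropping it never touches the underlying S3 objects, which is what makes this cheaper than standing up a warehouse for occasional questions.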
Amazon Managed Streaming for Kafka
Fully Managed Apache Kafka Service. I must admit that this is not an area where I am solid, so it is best to use Amazon's own words: "Amazon Managed Streaming for Kafka (Amazon MSK) is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data. Amazon MSK provides the control-plane operations and lets you use Apache Kafka data-plane operations, such as those for producing and consuming data. It runs open-source versions of Apache Kafka. This means existing applications, tooling, and plugins from partners and the Apache Kafka community are supported without requiring changes to application code. This release of Amazon MSK supports Apache Kafka version 1.1.1."

Amazon Redshift
This service provides fast, simple, and cost-effective data warehousing. If you wonder whether there is a fully managed data warehouse solution out there, here is your answer. Redshift is fully managed, scales up to petabytes, and incorporates the security and administration tools you have come to expect from AWS. There are some excellent how-tos and tutorials to help you get started and maybe even understand warehouses more in general.

Amazon QuickSight
This is a fast business analytics service, also known as a fully managed BI solution. It is what you would expect from a BI solution, which means it requires setup and forethought to position your data. Although this is a robust service, expect to spend a few hours (at least) to get going.

AWS Data Pipeline
Next is an orchestration service for periodic, data-driven workflows. Yes, those are their words, not mine. AWS Data Pipeline is a web service that helps you reliably move data between different AWS compute and storage services. The scope includes on-premises data sources as well, so you can schedule moving all of your enterprise data to the proper destinations.
All of this includes being able to translate and manipulate the data at scale. Once you have a lot of data in AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR, this becomes critical. Thus, while it is not of much use early on, it is essential to running an enterprise.

AWS Glue
This service helps you prepare and load data. AWS Glue is a fully managed ETL (extract, transform, and load) solution, which makes it easy to prepare and load your data no matter the end goal. You can create and run an ETL job with a few clicks in the AWS Management Console. I have not used it beyond simple tests, but this may be your best solution to ETL needs. If you store your data on AWS, why not try it out? It catalogs the data and makes it easy to dive right into the ETL process.

AWS Lake Formation
This is advertised as a way to build a secure data lake in days, and I find it hard to argue against that claim. We have already seen how well AWS handles storing and cataloging (even indexing) data, so it makes sense that their data lake tool would extend from those solutions. With data lakes being a fairly new concept, you might want to see the latest news and how-tos at this link: https://aws.amazon.com/big-data/datalakes-and-analytics/what-is-a-data-lake/
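To make the extract-transform-load shape of a Glue job concrete, here is a toy local stand-in: the "source" is an in-memory CSV and the "target" is JSON Lines, so it runs anywhere. A real Glue job would instead read from the Data Catalog through the awsglue/PySpark APIs; the field names here are made up for illustration.

```python
import csv
import io
import json

# Toy ETL pass illustrating what a Glue job automates: pull records out of a
# raw format, normalize them, and write them in an analytics-friendly format.
RAW_CSV = """id,amount,currency
1,10.50,usd
2,7.25,USD
3,3.10,eur
"""

def extract(text: str) -> list:
    """Extract: parse the raw CSV into dict records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list) -> list:
    """Transform: cast types and normalize the currency code."""
    return [
        {
            "id": int(r["id"]),
            "amount": float(r["amount"]),
            "currency": r["currency"].upper(),
        }
        for r in rows
    ]

def load(rows: list) -> str:
    """Load: serialize to JSON Lines, a common data-lake target format."""
    return "\n".join(json.dumps(r) for r in rows)

output = load(transform(extract(RAW_CSV)))
```

The value of Glue is that it infers the extract step's schema for you (via crawlers and the Data Catalog) and runs the transform at Spark scale, but the job's logical structure is the same three stages.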
As data volumes grow and customers store more data on AWS, they often have valuable data that is not easily discoverable and available for analytics. Learn how AWS Glue makes it easy to build and manage enterprise-grade data lakes on Amazon S3. AWS Glue can ingest data from a variety of sources into your data lake, clean it, transform it, and automatically register it in the AWS Glue Data Catalog, making the data readily available for analytics. Learn how you can set appropriate security policies in the Data Catalog and make data available for a variety of use cases, such as running ad-hoc analytics in Amazon Athena, running queries across your data warehouse and data lake with Amazon Redshift Spectrum, running big data analysis in Amazon EMR, and building machine learning models with Amazon SageMaker and AWS Glue. Additionally, Robinhood will share how they were able to move from a world of data silos to building a robust, petabyte-scale data lake on Amazon S3 with AWS Glue. Robinhood is one of the fastest-growing brokerages, serving over five million users with an easy-to-use investment platform that offers commission-free trading of equities, ETFs, options, and cryptocurrencies. Learn about the design paradigms and tradeoffs that Robinhood made to achieve a cost-effective and performant data lake that unifies all data access, analytics, and machine learning use cases.
Organizations need to gain insight and knowledge from a growing number of IoT, API, clickstream, unstructured, and log data sources. However, organizations are often limited by legacy data warehouses and ETL processes that were designed for transactional data. In this session, we introduce the key ETL features of AWS Glue and cover common use cases, ranging from scheduled nightly data warehouse loads to near-real-time, event-driven ETL pipelines for your data lake. We also discuss how to build scalable, efficient, and serverless ETL pipelines using AWS Glue. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
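The "event-driven ETL" pattern the session describes can be sketched as a handler that receives an object-created notification and starts a job only for objects landing in the raw zone. The event shape below mimics an S3 notification, but the prefix, bucket names, and the `start_etl_job` stand-in are all hypothetical simplifications, not the real AWS Lambda or Glue APIs.

```python
# Sketch of an event-driven ETL trigger: a handler receives an S3-style
# object-created event and starts an ETL job for keys in the raw zone.

RAW_PREFIX = "raw/"   # only objects landing under this prefix trigger ETL
started_jobs = []     # stand-in for "start a Glue job run"

def start_etl_job(bucket, key):
    started_jobs.append((bucket, key))

def handle_event(event):
    """Inspect each record; start an ETL job for new raw-zone objects."""
    started = 0
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if key.startswith(RAW_PREFIX):
            start_etl_job(bucket, key)
            started += 1
    return started

event = {"Records": [
    {"s3": {"bucket": {"name": "lake"}, "object": {"key": "raw/orders/1.json"}}},
    {"s3": {"bucket": {"name": "lake"}, "object": {"key": "tmp/scratch.txt"}}},
]}
print(handle_event(event), started_jobs)  # 1 [('lake', 'raw/orders/1.json')]
```

This is the core difference from a nightly batch load: the pipeline reacts to each arriving object within seconds rather than waiting for a schedule.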
Data and data-based decisions are at the core of everything online brokerage Robinhood does. In this talk, we dive deep into how Robinhood moved away from a world where multiple systems had grown into an unwieldy data architecture, and we discuss how the company used AWS tools, such as Amazon S3, Amazon Athena, Amazon EMR, AWS Glue, and Amazon Redshift, to build a robust data lake that can operate at petabyte scale. We also discuss the design paradigms and tradeoffs made to achieve a cost-effective and performant cluster that unifies all data access, analytics, and machine learning use cases.
Flexibility is key when building and scaling a data lake. The analytics solutions you use in the future will almost certainly be different from the ones you use today, and choosing the right storage architecture gives you the agility to quickly experiment with and migrate to the latest analytics solutions. In this session, we explore best practices for building a data lake in Amazon S3 and Amazon Glacier that lets you leverage an entire array of AWS, open-source, and third-party analytics tools. We explore use cases for traditional analytics tools, including Amazon EMR and AWS Glue, as well as query-in-place tools like Amazon Athena, Amazon Redshift Spectrum, Amazon S3 Select, and Amazon Glacier Select. Complete Title: AWS re:Invent 2018: [REPEAT 1] Data Lake Implementation: Processing & Querying Data in Place (STG204-R1)
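The "query-in-place" idea is worth making concrete: instead of loading objects into a warehouse first, you push a SQL predicate down to where the data sits, the way Amazon Athena or S3 Select query objects directly in S3. The sketch below uses sqlite3's in-memory engine as a runnable local stand-in (not the actual S3 Select API); the CSV content, table, and column names are invented for illustration.

```python
# Query-in-place pattern: run SQL against data where it already lives.
# sqlite3 stands in for the S3 Select / Athena engine so this runs locally.
import csv
import io
import sqlite3

# Pretend this CSV is an object sitting in the data lake.
csv_object = "region,sales\nus-east,120\neu-west,75\nus-east,30\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, sales INTEGER)")
rows = list(csv.DictReader(io.StringIO(csv_object)))
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(r["region"], int(r["sales"])) for r in rows])

# The same shape of query S3 Select would push down to the object itself,
# returning only the matching bytes instead of the whole file.
total = conn.execute(
    "SELECT SUM(sales) FROM sales WHERE region = 'us-east'"
).fetchone()[0]
print(total)  # 150
```

The payoff of the real services is the same as this toy: the filter and aggregation run next to the data, so only a small result crosses the wire.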
In this session, learn how to better analyze your data for patterns and inform decisions by pairing relational databases with a number of AWS services, including the graph database service, Amazon Neptune. Additionally, hear about the use of AWS Glue and Apache Ranger for data cataloging and as a baseline for query and dataset resolution. Learn about the use of AWS Fargate and AWS Lambda for serverless provisioning of complex data and how to do data rights management at scale on an enterprise data lake. As a case study, hear how Change Healthcare is building an Intelligent Health Platform (IHP) using these services to help standardize and simplify a number of healthcare workflows, including payment processing, which have traditionally been both complex and disconnected from healthcare event data. Complete Title: AWS re:Invent 2018: Data Patterns & Analysis with Amazon Neptune: A Case Study in Healthcare Billing (HLC303)
Corteva Agriscience, the agricultural division of DowDuPont, produces as much DNA sequence data every six hours as existed in the entire public sphere in 2008. On-premises processing and storage could not scale to meet the business demand. Partnering with Sogeti (part of Capgemini), Corteva replatformed its existing Hadoop-based genome-processing systems onto AWS using a serverless, cloud-native architecture. In this session, learn how Corteva Agriscience met current and future data processing demands without maintaining any long-running servers by using AWS Lambda, Amazon S3, Amazon API Gateway, Amazon EMR, AWS Glue, AWS Batch, and more. This session is brought to you by AWS partner Capgemini America. Complete Title: AWS re:Invent 2018: Petabytes of Data & No Servers: Corteva Scales DNA Analysis to Meet Increasing Business Demand (ENT218-S)
Learn how you can build, train, and deploy machine learning workflows for Amazon SageMaker with AWS Step Functions. Learn how to stitch together services, such as AWS Glue, with your Amazon SageMaker model training to build feature-rich machine learning applications, and learn how to build serverless ML workflows with less code. Cox Automotive also shares how it combined Amazon SageMaker and Step Functions to improve collaboration between data scientists and software engineers. We also share some new features that let you build and manage ML workflows even faster.
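The workflow-stitching idea above can be sketched as a tiny interpreter over a Step Functions-style state machine: states chained by "Next" until an "End" state, with each task's output feeding the next task's input. Real workflows are defined in Amazon States Language JSON and executed by AWS Step Functions against real Glue and SageMaker tasks; the state names and task functions here are invented stand-ins.

```python
# A toy interpreter for a Step Functions-style workflow: Task states are
# chained by "Next" until a state marked "End", passing the payload along.

def prepare_data(data):   # stand-in for an AWS Glue ETL task
    return {"rows": data["rows"], "clean": True}

def train_model(data):    # stand-in for a SageMaker training task
    return {**data, "model": f"trained-on-{data['rows']}-rows"}

machine = {
    "StartAt": "Prepare",
    "States": {
        "Prepare": {"Type": "Task", "Resource": prepare_data, "Next": "Train"},
        "Train":   {"Type": "Task", "Resource": train_model, "End": True},
    },
}

def run(machine, payload):
    """Walk the machine from StartAt, threading the payload through tasks."""
    state_name = machine["StartAt"]
    while True:
        state = machine["States"][state_name]
        payload = state["Resource"](payload)
        if state.get("End"):
            return payload
        state_name = state["Next"]

result = run(machine, {"rows": 1000})
print(result["model"])  # trained-on-1000-rows
```

The appeal of expressing ML pipelines this way is that the orchestration (ordering, retries, branching) lives in declarative state definitions rather than in glue code inside each task.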
Simon shares a great list of new capabilities for customers! Chapters: 00:00- 00:08 Opening 00:09 - 10:50 Compute 10:51 - 25:50 Database and Storage 25:51 - 28:25 Network 28:26 - 35:01 Development 35:09 - 39:03 AI/ML 39:04 - 45:04 System Management and Operations 45:05 - 46:18 Identity 46:19 - 48:05 Video Streaming 48:06 - 49:14 Public Datasets 49:15 - 49:54 AWS Marketplace 49:55 - 51:03 YubiKey Support for MFA 51:04 - 51:18 Closing Shownotes: Amazon EC2 F1 Instance Expands to More Regions, Adds New Features, and Improves Development Tools | https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-ec2-f1-instance-expands-to-more-regions-adds-new-features-and-improves-development-tools/ Amazon EC2 F1 instances now Available in an Additional Size | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-ec2-f1-instances-now-available-in-an-additional-size/ Amazon EC2 R5 and R5D instances now Available in 8 Additional AWS Regions | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-ec2-r5-and-r5d-instances-now-available-in-8-additional-aws-regions/ Introducing Amazon EC2 High Memory Instances with up to 12 TB of memory, Purpose-built to Run Large In-memory Databases, like SAP HANA | https://aws.amazon.com/about-aws/whats-new/2018/09/introducing-amazon-ec2-high-memory-instances-purpose-built-to-run-large-in-memory-databases/ Introducing a New Size for Amazon EC2 G3 Graphics Accelerated Instances | https://aws.amazon.com/about-aws/whats-new/2018/10/introducing-a-new-size-for-amazon-ec2-g3-graphics-accelerated-instances/ Amazon EC2 Spot Console Now Supports Scheduled Scaling for Application Auto Scaling | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-ec2-spot-console-now-supports-scheduled-scaling-for-application-auto-scaling/ Amazon Linux 2 Now Supports 32-bit Applications and Libraries | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-linux-2-now-supports-32-bit-applications-and-libraries/ AWS Server Migration Service Adds Support for 
Migrating Larger Data Volumes | https://aws.amazon.com/about-aws/whats-new/2018/09/aws-server-migration-service-adds-support-for-migrating-larger-data-volumes/ AWS Migration Hub Saves Time Migrating with Application Migration Status Automation | https://aws.amazon.com/about-aws/whats-new/2018/10/aws_migration_hub_saves_time_migrating_with_application_migration_status_automation/ Plan Your Migration with AWS Application Discovery Service Data Exploration | https://aws.amazon.com/about-aws/whats-new/2018/09/plan-your-migration-with-aws-application-discovery-service-data-exploration/ AWS Lambda enables functions that can run up to 15 minutes | https://aws.amazon.com/about-aws/whats-new/2018/10/aws-lambda-supports-functions-that-can-run-up-to-15-minutes/ AWS Lambda announces service level agreement | https://aws.amazon.com/about-aws/whats-new/2018/10/aws-lambda-introduces-service-level-agreement/ AWS Lambda Console Now Enables You to Manage and Monitor Serverless Applications | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-lambda-console-enables-managing-and-monitoring/ Amazon EKS Enables Support for Kubernetes Dynamic Admission Controllers | https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-eks-enables-support-for-kubernetes-dynamic-admission-cont/ Amazon EKS Simplifies Cluster Setup with update-kubeconfig CLI Command | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-eks-simplifies-cluster-setup-with-update-kubeconfig-cli-command/ Amazon Aurora Parallel Query is Generally Available | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-aurora-parallel-query-is-generally-available/ Amazon Aurora Now Supports Stopping and Starting of Database Clusters | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-aurora-stop-and-start/ Amazon Aurora Databases Support up to Five Cross-Region Read Replicas | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-aurora-databases-support-up-to-five-cross-region-read-replicas/ Amazon RDS Now 
Provides Database Deletion Protection | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-rds-now-provides-database-deletion-protection/ Announcing Managed Databases for Amazon Lightsail | https://aws.amazon.com/about-aws/whats-new/2018/10/announcing-managed-databases-for-amazon-lightsail/ Amazon RDS for MySQL and MariaDB now Support M5 Instance Types | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-rds-for-mysql-and-mariadb-support-m5-instance-types/ Amazon RDS for Oracle Now Supports Database Storage Size up to 32TiB | https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-rds-for-oracle-now-supports-32tib/ Specify Parameter Groups when Restoring Amazon RDS Backups | https://aws.amazon.com/about-aws/whats-new/2018/10/specify-parameter-groups-when-restoring-amazon-rds-backups/ Amazon ElastiCache for Redis adds read replica scaling for Redis Cluster | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-elasticache-for-redis-adds-read-replica-scaling-for-redis-cluster/ Amazon Elasticsearch Service now supports encrypted communication between Elasticsearch nodes | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon_elasticsearch_service_now_supports_encrypted_communication_between_elasticsearch_nodes/ Amazon Athena adds support for Creating Tables using the results of a Select query (CTAS) | https://aws.amazon.com/about-aws/whats-new/2018/10/athena_ctas_support/ Amazon Redshift announces Query Editor to run queries directly from the AWS Management Console | https://aws.amazon.com/about-aws/whats-new/2018/10/amazon_redshift_announces_query_editor_to_run_queries_directly_from_the_aws_console/ Support for TensorFlow and S3 select with Spark on Amazon EMR release 5.17.0 | https://aws.amazon.com/about-aws/whats-new/2018/09/support-for-tensorflow-s3-select-with-spark-on-amazon-emr-release-517/ AWS Database Migration Service Makes It Easier to Migrate Cassandra Databases to Amazon DynamoDB | 
https://aws.amazon.com/about-aws/whats-new/2018/09/aws-dms-aws-sct-now-support-the-migration-of-apache-cassandra-databases/ The Data Lake Solution Now Integrates with Microsoft Active Directory | https://aws.amazon.com/about-aws/whats-new/2018/09/the-data-lake-solution-now-integrates-with-microsoft-active-directory/ Amazon S3 Announces Selective Cross-Region Replication Based on Object Tags | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-s3-announces-selective-crr-based-on-object-tags/ AWS Storage Gateway Is Now Available as a Hardware Appliance | https://aws.amazon.com/about-aws/whats-new/2018/09/aws-storage-gateway-is-now-available-as-a-hardware-appliance/ AWS PrivateLink now supports access over AWS VPN | https://aws.amazon.com/about-aws/whats-new/2018/09/aws-privatelink-now-supports-access-over-aws-vpn/ AWS PrivateLink now supports access over Inter-Region VPC Peering | https://aws.amazon.com/about-aws/whats-new/2018/10/aws-privatelink-now-supports-access-over-inter-region-vpc-peering/ Network Load Balancer now supports AWS VPN | https://aws.amazon.com/about-aws/whats-new/2018/09/network-load-balancer-now-supports-aws-vpn/ Network Load Balancer now supports Inter-Region VPC Peering | https://aws.amazon.com/about-aws/whats-new/2018/10/network-load-balancer-now-supports-inter-region-vpc-peering/ AWS Direct Connect now Supports Jumbo Frames for Amazon Virtual Private Cloud Traffic | https://aws.amazon.com/about-aws/whats-new/2018/10/aws-direct-connect-now-supports-jumbo-frames-for-amazon-virtual-private-cloud-traffic/ Amazon CloudFront announces two new Edge locations, including its second location in Fujairah, United Arab Emirates | https://aws.amazon.com/about-aws/whats-new/2018/10/cloudfront-fujairah/ AWS CodeBuild Now Supports Building Bitbucket Pull Requests | https://aws.amazon.com/about-aws/whats-new/2018/10/aws-codebuild-now-supports-building-bitbucket-pull-requests/ AWS CodeCommit Supports New File and Folder Actions via the CLI and SDKs | 
https://aws.amazon.com/about-aws/whats-new/2018/09/aws-codecommit-supports-new-file-and-folder-actions-via-the-cli-and-sdks/ AWS Cloud9 Now Supports TypeScript | https://aws.amazon.com/about-aws/whats-new/2018/10/aws-cloud9-now-supports-typescript/ AWS CloudFormation coverage updates for Amazon API Gateway, Amazon ECS, Amazon Aurora Serverless, Amazon ElastiCache, and more | https://aws.amazon.com/about-aws/whats-new/2018/09/aws-cloudformation-coverage-updates-for-amazon-api-gateway--amaz/ AWS Elastic Beanstalk adds support for T3 instance and Go 1.11 | https://aws.amazon.com/about-aws/whats-new/2018/09/aws-elastic-beanstalk-adds-support-for-t3-instance-and-go-1-11/ AWS Elastic Beanstalk Console Supports Network Load Balancer | https://aws.amazon.com/about-aws/whats-new/2018/10/aws_elastic_beanstalk_console_supports_network_load_balancer/ AWS Amplify Announces Vue.js Support for Building Cloud-powered Web Applications | https://aws.amazon.com/about-aws/whats-new/2018/09/aws-amplify-announces-vuejs-support-for-building-cloud-powered-web-applications/ AWS Amplify Adds Support for Securely Embedding Amazon Sumerian AR/VR Scenes in Web Applications | https://aws.amazon.com/about-aws/whats-new/2018/09/AWS-Amplify-adds-support-for-securely-embedding-Amazon-Sumerian/ Amazon API Gateway adds support for multi-value parameters | https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-api-gateway-adds-support-for-multi-parameters/ Amazon API Gateway adds support for OpenAPI 3.0 API specification | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-api-gateway-adds-support-for-openapi-3-api-specification/ AWS AppSync Launches a Guided API Builder for Mobile and Web Apps | https://aws.amazon.com/about-aws/whats-new/2018/09/AWS-AppSync-launches-a-guided-API-builder-for-apps/ Amazon Polly Adds Mandarin Chinese Language Support | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-polly-adds-mandarin-chinese-language-support/ Amazon Comprehend Extends Natural 
Language Processing for Additional Languages and Region | https://aws.amazon.com/about-aws/whats-new/2018/10/amazon_comprehend_extends_natural_language_processing_for_additional_languages_and_region/ Amazon Transcribe Supports Deletion of Completed Transcription Jobs | https://aws.amazon.com/about-aws/whats-new/2018/10/amazon_transcribe_supports_deletion_of_completed_transcription_jobs/ Amazon Rekognition improves the accuracy of image moderation | https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-rekognition-improves-the-accuracy-of-image-moderation/ Save time and money by filtering faces during indexing with Amazon Rekognition | https://aws.amazon.com/about-aws/whats-new/2018/09/save-time-and-money-by-filtering-faces-during-indexing-with-amazon-rekognition/ Amazon SageMaker Now Supports Tagging for Hyperparameter Tuning Jobs | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-sagemaker-now-supports-tagging-for-hyperparameter-tuning-/ Amazon SageMaker Now Supports an Improved Pipe Mode Implementation | https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-sagemaker-now-supports-an-improved-pipe-mode-implementati/ Amazon SageMaker Announces Enhancements to its Built-In Image Classification Algorithm | https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-sagemaker-announces-enhancements-to-its-built-in-image-cl/ AWS Glue now supports connecting Amazon SageMaker notebooks to development endpoints | https://aws.amazon.com/about-aws/whats-new/2018/10/aws-glue-now-supports-connecting-amazon-sagemaker-notebooks-to-development-endpoints/ AWS Glue now supports resource-based policies and resource-level permissions for the AWS Glue Data Catalog | https://aws.amazon.com/about-aws/whats-new/2018/10/aws-glue-now-supports-resource-based-policies-and-resource-level-permissions-and-for-the-AWS-Glue-Data-Catalog/ Resource Groups Tagging API Supports Additional AWS Services | 
https://aws.amazon.com/about-aws/whats-new/2018/10/resource-groups-tagging-api-supports-additional-aws-services/ Changes to Tags on AWS Resources Now Generate Amazon CloudWatch Events | https://aws.amazon.com/about-aws/whats-new/2018/09/changes-to-tags-on-aws-resources-now-generate-amazon-cloudwatch-events/ AWS Systems Manager Announces Enhanced Compliance Dashboard | https://aws.amazon.com/about-aws/whats-new/2018/10/aws-systems-manager-announces-enhanced-compliance-dashboard/ Conditional Branching Now Supported in AWS Systems Manager Automation | https://aws.amazon.com/about-aws/whats-new/2018/09/Conditional_Branching_Now_Supported_in_AWS_Systems_Manager_Automation/ AWS Systems Manager Launches Custom Approvals for Patching | https://aws.amazon.com/about-aws/whats-new/2018/10/AWS_Systems_Manager_Launches_Custom_Approvals_for_Patching/ Amazon CloudWatch adds Ability to Build Custom Dashboards Outside the AWS Console | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-cloudwatch-adds-ability-to-build-custom-dashboards-outside-the-aws-console/ Amazon CloudWatch Agent adds Custom Metrics Support | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-cloudwatch-agent-adds-custom-metrics-support/ Amazon CloudWatch Launches Client-side Metric Data Aggregations | https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-cloudWatch-launches-client-side-metric-data-aggregations/ AWS IoT Device Management Now Provides In Progress Timeouts and Step Timeouts for Jobs | https://aws.amazon.com/about-aws/whats-new/2018/10/aws-iot-device-management-now-provides-in-progress-timeouts-and-step-timeouts-for-jobs/ Amazon GuardDuty Provides Customization of Notification Frequency to Amazon CloudWatch Events | https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-guardduty-provides-customization-of-notification-frequency-to-amazon-cloudwatch-events/ AWS Managed Microsoft AD Now Offers Additional Configurations to Connect to Your Existing Microsoft AD | 
https://aws.amazon.com/about-aws/whats-new/2018/10/aws-managed-microsoft-ad-now-offers-additional-configurations-to-connect-to-our-existing-microsoft-ad/ Easily Deploy Directory-Aware Workloads in Multiple AWS Accounts and VPCs by Sharing a Single AWS Managed Microsoft AD | https://aws.amazon.com/about-aws/whats-new/2018/09/aws-directory-service-share-directory-across-accounts-and-vpcs/ AWS Single Sign-on Now Enables You to Customize the User Experience to Business Applications | https://aws.amazon.com/about-aws/whats-new/2018/10/aws-single-sign-on-now-enables-you-to-customize-the-user-experience-to-business-applications/ Live Streaming on AWS Now Features AWS Elemental MediaLive and MediaPackage | https://aws.amazon.com/about-aws/whats-new/2018/09/live-streaming-on-aws-now-features-aws-elemental-medialive-and-mediapackage/ AWS Elemental MediaStore Increases Object Size Limit to 25 Megabytes | https://aws.amazon.com/about-aws/whats-new/2018/10/aws-elemental-mediastore-increase-object-size-limit-to-25-megabytes/ Amazon Kinesis Video Streams now supports adding and retrieving Metadata at Fragment-Level | https://aws.amazon.com/about-aws/whats-new/2018/10/kinesis-video-streams-fragment-level-metadata-support/ AWS Public Datasets Now Available from the German Meteorological Office, Broad Institute, Chan Zuckerberg Biohub, fast.ai, and Others | https://aws.amazon.com/about-aws/whats-new/2018/10/public-datasets/ Customize Your Payment Frequency and More with AWS Marketplace Flexible Payment Scheduler | https://aws.amazon.com/about-aws/whats-new/2018/10/customize-your-payment-frequency-and-more-with-awsmarketplace-flexible-payment-scheduler/ Sign in to your AWS Management Console with YubiKey Security Key for Multi-factor Authentication (MFA) | https://aws.amazon.com/about-aws/whats-new/2018/09/aws_sign_in_support_for_yubikey_security_key_as_mfa/
Simon walks you through some great new things you can use on your projects today! Shownotes: Amazon Lightsail Announces 50% Price Drop and Two New Instance Sizes | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-lightsail-announces-50-percent-price-drop-and-two-new-instance-sizes/ Introducing Amazon EC2 T3 Instances | https://aws.amazon.com/about-aws/whats-new/2018/08/introducing-amazon-ec2-t3-instances/ Amazon EC2 M5d Instances are Now Available in Additional Regions | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-ec2-m5d-instances-are-now-available-in-additional-regions/ Amazon EC2 C5d Instances are Now Available in Tokyo and Sydney Regions | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-ec2-c5d-instances-are-now-available-in-tokyo-and-sydney-regions/ AWS Batch Now Supports z1d, r5d, r5, m5d, c5d, p3, and x1e Instance Types | https://aws.amazon.com/about-aws/whats-new/2018/09/aws-batch-now-supports-z1d-r5d-r5-m5d-c5d-p3-and-x1e-instance-types/ Amazon ElastiCache for Redis adds support for in-place version upgrades for Redis Cluster | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-elasticache-for-redis-adds-support-for-in-place-version-upgrades-for-redis-cluster/ Introducing AWS CloudFormation Macros | https://aws.amazon.com/about-aws/whats-new/2018/09/introducing-aws-cloudformation-macros/ AWS CloudFormation Now Supports AWS PrivateLink | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-cloudformation-now-supports-aws-privatelink-/ New Amazon EKS-optimized AMI and CloudFormation Template for Worker Node Provisioning | https://aws.amazon.com/about-aws/whats-new/2018/08/new-amazon-eks-optimized-ami-and-cloudformation-template-for-worker-node-provisioning/ Amazon EKS Supports GPU-Enabled EC2 Instances | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-eks-supports-gpu-enabled-ec2-instances/ Introducing Amazon EKS Platform Version 2 | 
https://aws.amazon.com/about-aws/whats-new/2018/08/introducing-amazon-eks-platform-version-2/ Amazon ECS Service Discovery Now Available in Frankfurt, London, Tokyo, Sydney, and Singapore Regions | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-ecs-service-discovery-now-available-in-frankfurt--tokyo--/ AWS Fargate Now Supports Time and Event-Based Task Scheduling | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-fargate-now-supports-time-and-event-based-task-scheduling/ Amazon Athena releases an updated JDBC driver with improved performance when retrieving results | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-athena-streaming-jdbc-driver/ AWS Key Management Service Increases API Requests Per Second Limits | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-key-management-service-increases-api-requests-per-second-limits/ Use Amazon DynamoDB Local More Easily with the New Docker Image | https://aws.amazon.com/about-aws/whats-new/2018/08/use-amazon-dynamodb-local-more-easily-with-the-new-docker-image/ Amazon DynamoDB Global Tables Available in Additional Regions | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-dynamodb-global-tables-available-in-additional-regions/ Performance Insights Supports Amazon Relational Database Service (RDS) for MySQL | https://aws.amazon.com/about-aws/whats-new/2018/08/performance-insights-supports-amazon-relational-database-service-for-mysql/ AWS Glue now supports data encryption at rest | https://aws.amazon.com/about-aws/whats-new/2018/09/aws-glue-now-supports-data-encryption-at-rest/ Deploy an AWS Cloud environment for VFX workstations with new Quick Start | https://aws.amazon.com/about-aws/whats-new/2018/08/deploy-an-aws-cloud-environment-for-vfx-workstations-with-new-quick-start/ New in AWS Deep Learning AMIs: TensorFlow 1.10, PyTorch with CUDA 9.2, and More | https://aws.amazon.com/about-aws/whats-new/2018/08/new-in-dl-amis-tensorflow1-10-pytorch-with-cuda9-2/ Amazon Rekognition 
announces the ability to more easily manage face collections | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-rekognition-announces-the-ability-to-more-easily-manage-face-collections/ Amazon SageMaker Supports TensorFlow 1.10 | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-sagemaker-supports-tensorflow-1-10/ Amazon SageMaker Supports A New Custom Header For The InvokeEndPoint API Action | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-sagemaker-supports-a-new-custom-header-for-the-invokeendp/ Amazon FreeRTOS Over-the-Air Update Feature Generally Available | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-freertos-over-the-air-update-feature-generally-available/ Announcing New Custom Analysis Features for AWS IoT Analytics with Custom Container Execution for Continuous Analysis | https://aws.amazon.com/about-aws/whats-new/2018/08/announcing-new-features-for-aws-iot-analytics-including-custom-container-execution/ AWS IoT Device Management Now Allows Thing Groups Indexing | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-iot-device-management-now-allows-thing-groups-indexing/ AWS IoT Core Adds New Endpoints Serving Amazon Trust Services (ATS) Signed Certificates to Help Customers Avoid Symantec Distrust Issues | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-iot-core-adds-new-endpoints-serving-amazon-trust-services-signed-certificates-to-help-customers-avoid-symantec-distrust-issues/ AWS WAF Launches New Comprehensive Logging Functionality | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-waf-launches-new-comprehensive-logging-functionality/ AWS Direct Connect now in Dubai | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-direct-connect-now-in-dubai/ New AWS Direct Connect locations in Paris and Taipei | https://aws.amazon.com/about-aws/whats-new/2018/08/new-aws-direct-connect-locations-paris-taipei/ Amazon Route 53 Auto Naming Available in Five Additional AWS Regions | 
https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-route-53-auto-naming-available-in-five-additional-AWS-regions/ Amazon S3 Announces New Features for S3 Select | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-s3-announces-new-features-for-s3-select/ AWS Systems Manager Automation Now Supports Calling AWS APIs | https://aws.amazon.com/about-aws/whats-new/2018/08/AWS_Systems_Manager_Automation_Now_Supports_Invoking_AWS_APIs/ AWS Serverless Application Repository Adds Sorting Functionality and Improves Search Experience | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-serverless-application-repository-adds-sorting-and-improves-search/ AWS SAM CLI Now Supports Debugging Go Functions and Testing with 50+ Events | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-sam-cli-supports-debugging-go-functions-and-testing-for-additional-events/ AWS X-Ray Adds Support for Controlling Sampling Rate from the X-Ray Console | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-xray-adds-support-for-controlling-sampling-rate-from-the-xray-console/ Amazon API Gateway Adds Support for AWS X-Ray | https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-api-gateway-adds-support-for-aws-x-ray/ AWS CodeBuild Adds Ability to Create Build Projects with Multiple Input Sources and Output Artifacts | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-codebuild-adds-ability-to-create-build-projects-with-multiple-input-sources-and-output-artifacts/ Announcing the AWS Amplify CLI Toolchain | https://aws.amazon.com/about-aws/whats-new/2018/08/annoucing-aws-amplify-cli-toolchain/ New Amazon Kinesis Data Analytics capability for time-series analytics | https://aws.amazon.com/about-aws/whats-new/2018/09/new-amazon-kinesis-data-analytics-capability-for-time-series-analytics/ Amazon Kinesis Video Streams Producer SDK Is Now Available For Microsoft Windows | https://aws.amazon.com/about-aws/whats-new/2018/08/kinesis-video-streams-producer-sdk-windows/ AWS Config 
Announces New Managed Rules | https://aws.amazon.com/about-aws/whats-new/2018/09/aws-config-announces-new-managed-rules/ Deploy Three New Amazon Connect Integrations from CallMiner, Aspect Software, and Acqueon | https://aws.amazon.com/about-aws/whats-new/2018/08/deploy-three-new-amazon-connect-integrations-from-callminer-aspect-acqueon/
Simon takes you through a great list of new services, functions and capabilities - hopefully something for everyone!
Shownotes:
AWS Global Infrastructure: https://aws.amazon.com/about-aws/global-infrastructure/
Amazon EFS Now Supports Provisioned Throughput | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-efs-now-supports-provisioned-throughput/
Amazon EFS Achieves PCI DSS Compliance | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-efs-achieves-pci-dss-compliance/
Amazon EC2 P3 instances, one of the most powerful GPU instances in the cloud, now available in 6 additional regions | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-ec2-p3-instances-now-available-in-6-additional-regions/
New SBE1 Amazon EC2 instances for AWS Snowball Edge | https://aws.amazon.com/about-aws/whats-new/2018/07/new-sbe1-instances-for-snowball-edge/
Introducing Amazon EC2 R5 Instances, the next generation of memory-optimized instances | https://aws.amazon.com/about-aws/whats-new/2018/07/introducing-amazon-ec2-r5-instances/
Introducing Amazon EC2 z1d Instances with a sustained all core frequency of up to 4.0 GHz | https://aws.amazon.com/about-aws/whats-new/2018/07/introducing-amazon-ec2-z1d-instances/
Amazon EC2 M5d Instances are Now Available in Additional Regions | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-ec2-m5d-instances-are-now-available-in-additional-regions/
Amazon EC2 C5d Instances are Now Available in Additional Regions | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-ec2-c5d-instances-are-now-available-in-additional-regions/
Amazon EC2 F1 Instances Adds New Features and Performance Improvements | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-ec2-f1-instances-adds-new-features-and-performance-improvements/
Amazon EC2 Fleet Now Supports Two New Allocation Strategies: On-Demand Prioritized List, and Lowest Price | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-ec2-fleet-now-supports-two-new-allocation-strategies/
Amazon EC2 Nitro System Based Instances Now Support Faster Amazon EBS-Optimized Instance Performance | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-ec2-nitro-system-based-instances-now-support-faster-ebs-optimized-performance/
Access Reserved Instance (RI) Purchase Recommendations for your Amazon Redshift, Amazon ElastiCache, and Amazon Elasticsearch Reservations using AWS Cost Explorer | https://aws.amazon.com/about-aws/whats-new/2018/07/reserved-instance-purchase-recommendations-redshift-elasticache-elasticsearch-reservations/
AWS Systems Manager Run Command Now Streams Output to Amazon CloudWatch Logs | https://aws.amazon.com/about-aws/whats-new/2018/07/aws-systems-manager-run-command-streams-output-to-amazon-cloudwatch-logs/
AWS Systems Manager Automation Conditional Branching for Step Failure | https://aws.amazon.com/about-aws/whats-new/2018/07/aws-systems-manager-automation-conditional-branching-for-step-failure/
Amazon EKS AMI Build Scripts Available on GitHub | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-eks-ami-build-scripts-available-on-github/
Add Scaling to Services You Build on AWS | https://aws.amazon.com/about-aws/whats-new/2018/07/add-scaling-to-services-you-build-on-aws/
Announcing Bring Your Own IP for Amazon Virtual Private Cloud (Preview) | https://aws.amazon.com/about-aws/whats-new/2018/07/announcing-bring-your-own-ip-for-amazon-virtual-private-cloud-preview/
Introducing Amazon Data Lifecycle Manager for EBS Snapshots | https://aws.amazon.com/about-aws/whats-new/2018/07/introducing-amazon-data-lifecycle-manager-for-ebs-snapshots/
Amazon S3 Announces Increased Request Rate Performance | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-s3-announces-increased-request-rate-performance/
Amazon CloudFront announces four new Edge locations, including its first location in Cape Town, South Africa | https://aws.amazon.com/about-aws/whats-new/2018/07/cloudfront-capetown-launch/
Amazon CloudFront announces nine new Edge locations globally across major cities in North America, Europe, and Asia | https://aws.amazon.com/about-aws/whats-new/2018/07/cloudfront-nine-edge-locations-july2018/
Amazon Route 53 Expands Into Africa With New Edge Locations in Cape Town and Johannesburg | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-route-53-expands-into-africa-with-new-edge-locations-in-cape-town-and-johannesburg/
Amazon API Gateway Increases API Limits | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-api-gateway-increases-api-limits/
Amazon API Gateway Usage Plans Now Support Method Level Throttling | https://aws.amazon.com/about-aws/whats-new/2018/07/api-gateway-usage-plans-support-method-level-throttling/
Amazon API Gateway Supports Request/Response Parameters and Status Overrides | https://aws.amazon.com/about-aws/whats-new/2018/07/api-gateway-supports-request-response-parameters-and-status-overrides/
Automate Amazon GuardDuty Provisioning Over Multiple Accounts and Regions with AWS CloudFormation StackSets Integration | https://aws.amazon.com/about-aws/whats-new/2018/07/automate-amazon-guardduty-provisioning-over-multiple-accounts-and-regions-with-aws-cloudformation-stacksets-integration/
AWS Secrets Manager Now Supports AWS PrivateLink | https://aws.amazon.com/about-aws/whats-new/2018/07/aws-secrets-manager-now-supports-aws-privatelink/
AWS Systems Manager Parameter Store integrates with AWS Secrets Manager, and adds labeling for easy configuration updates | https://aws.amazon.com/about-aws/whats-new/2018/07/aws-systems-manager-parameter-store-integrates-with-aws-secrets-manager-and-adds-parameter-version-labeling/
Delegate Permission Management to Employees by Using IAM Permissions Boundaries | https://aws.amazon.com/about-aws/whats-new/2018/07/delegate-permission-management-to-employees-by-using-IAM-permissions-boundaries/
AWS Lambda Supports .NET Core 2.1 | https://aws.amazon.com/about-aws/whats-new/2018/06/lambda-supports-dotnetcore-twopointone/
AWS Glue now provides additional ETL job metrics | https://aws.amazon.com/about-aws/whats-new/2018/07/aws-glue-now-provides-additional-ETL-job-metrics/
AWS Glue now supports reading from Amazon DynamoDB tables | https://aws.amazon.com/about-aws/whats-new/2018/07/aws-glue-now-supports-reading-from-amazon-dynamodb-tables/
The Data Lake Solution Now Transforms and Analyzes Data | https://aws.amazon.com/about-aws/whats-new/2018/07/the-data-lake-solution-now-transforms-and-analyzes-data/
AWS Marketplace Helps Customers Quickly Map Products in Their Existing Software Inventory | https://aws.amazon.com/about-aws/whats-new/2018/07/aws-marketplace-helps-customers-quickly-map-products-in-their-existing-software-inventory/
Amazon SageMaker Now Supports Resource Tags for More Efficient Access Control | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-sagemaker-now-supports-resource-tags-for-more-efficient-access-control/
Amazon SageMaker Supports High Throughput Batch Transform Jobs for Non-Real Time Inferencing | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-sagemaker-supports-high-throughput-batch-transform-jobs-for-non-real-time-inferencing/
Amazon SageMaker Now Supports Pipe Input Mode for Built-In TensorFlow Containers | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-sagemaker-supports-pipe-input-mode-for-built-in-tensorflow-containers/
Amazon SageMaker Now Supports k-Nearest-Neighbor and Object Detection Algorithms | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-sagemaker-supports-knn-and-object-detection-algorithms/
Amazon SageMaker Announces Several Enhancements to Built-in Algorithms and Frameworks | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-sagemaker-announces-enhancements-for-built-in-algorithms-and-frameworks/
AWS Service Catalog Now Supports Service Catalog Resources in CloudFormation | https://aws.amazon.com/about-aws/whats-new/2018/07/aws-service-catalog-now-supports-service-catalog-resources-in-cloudformation/
Kinesis Video Streams now supports HTTP Live Streaming (HLS) to playback live and recorded video from devices | https://aws.amazon.com/about-aws/whats-new/2018/07/kinesis-video-adds-hls-support/
Amazon Polly Now Lets You Define the Maximum Amount of Time for Speech to Complete | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-polly-now-lets-you-define-the-maximum-amount-of-time-for-speech-to-complete/
Amazon Polly Now Supports Input Character Limit of 100K and Stores Output Files in S3 | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-polly-now-supports-input-character-limit-of-100k-and-stores-output-files-in-s3/
Amazon Polly Adds Bilingual Indian English/Hindi Language Support | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-polly-adds-bilingual-indian-english-hindi-language-support/
Amazon Translate Adds Six New Languages | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-translate-adds-six-new-languages/
Amazon Transcribe Now Lets You Designate Your Own Amazon S3 Buckets to Store Transcription Outputs | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-transcribe-now-lets-you-designate-your-own-amazon-s3-buckets-to-store-transcription-outputs/
Amazon Comprehend Now Supports Syntax Analysis | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-comprehend-now-supports-syntax-analysis/
Amazon Rekognition Increases Accuracy of Text-in-Image | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-rekognition-increases-accuracy-of-text-in-image/
AWS AppSync releases enhanced no-code GraphQL API builder, HTTP resolvers, and new built-in scalar types | https://aws.amazon.com/about-aws/whats-new/2018/07/aws-appsync-releases-enhanced-capabilities-nocode-graphql/
Introducing the Serverless Bot Framework | https://aws.amazon.com/about-aws/whats-new/2018/04/introducing-the-serverless-bot-framework/
AWS SAM CLI Launches New Commands to Simplify Testing and Debugging Serverless Applications | https://aws.amazon.com/about-aws/whats-new/2018/04/aws-sam-cli-launches-new-commands/
AWS Device Farm Adds Integration with AWS CodePipeline | https://aws.amazon.com/about-aws/whats-new/2018/07/aws-device-farm-adds-integration-with-aws-codepipeline/
Amazon Aurora Serverless Brings Serverless Computing to Relational Databases | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-aurora-serverless-brings-serverless-computing-to-relational-databases/
Amazon RDS now Provides Best Practice Recommendations | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-rds-recommendations/
Copying Amazon RDS Encrypted Snapshots across Regions now Completes Faster with Less Storage | https://aws.amazon.com/about-aws/whats-new/2018/07/rds-crossregion-incremental-encrypted-snapshots/
Amazon RDS Performance Insights on RDS for PostgreSQL | https://aws.amazon.com/about-aws/whats-new/2018/04/rds-performance-insights-on-rds-for-postgresql/
Performance Insights is Available for Amazon Aurora with MySQL Compatibility | https://aws.amazon.com/about-aws/whats-new/2018/08/performance-insights-is-available-for-amazon-aurora-with-mysql-compatibility/
Amazon DynamoDB Accelerator (DAX) SDK Enhancements | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-dynamodb-accelerator--dax--sdk-enhancements/
Amazon DynamoDB Accelerator (DAX) Adds Support for Encryption at Rest | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-dynamodb-accelerator--dax--adds-support-for-encryption-at/
Amazon DynamoDB Global Tables Now Available in Three Additional Asia Pacific Regions | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-dynamodb-global-tables-regional-expansion/
Amazon Redshift announces free upgrade for DC1 Reserved Instances to DC2 | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon_redshift_announces_free_upgrade_for_dc1_reserved_instances_to_dc2/
Amazon Redshift now provides customized best practice recommendations with Advisor | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-redshift-now-provides-customized-best-practice-recommendations-with-advisor/
Amazon Redshift now supports current and trailing tracks for release updates | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-redshift-now-supports-current-and-trailing-tracks-for-release-updates/
Amazon Redshift announces new metrics to help optimize cluster performance | https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-redshift-announces-new-metrics-to-help-optimize-cluster-performance/
Amazon Redshift announces support for lateral column alias reference | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-redshift-announces-support-for-lateral-column-alias-reference/
Amazon Redshift automatically enables short query acceleration | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-redshift-automatically-enables-short-query-acceleration/
Amazon Redshift announces support for nested data with Redshift Spectrum | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-redshift-announces-support-for-nested-data-with-redshift-spectrum/
Elastic Load Balancing Announces Support for Redirects and Fixed Responses for Application Load Balancer | https://aws.amazon.com/about-aws/whats-new/2018/07/elastic-load-balancing-announces-support-for-redirects-and-fixed-responses-for-application-load-balancer/
AWS IoT Device Defender - Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2018/08/aws-iot-device-defender-now-generally-available/
AWS IoT Rules Engine Now Supports Step Functions Action | https://aws.amazon.com/about-aws/whats-new/2018/07/aws-iot-rules-engine-now-supports-step-functions-action/
Stream data 65% faster with 5x higher fan-out using new Kinesis Data Streams features | https://aws.amazon.com/about-aws/whats-new/2018/08/stream_data_65_faster_with_5x_higher_fan_out_using_new_kinesis_data_streams_features/
Amazon Elasticsearch Service now supports zero downtime, in-place version upgrades | https://aws.amazon.com/about-aws/whats-new/2018/08/amazon_elasticsearch_service_now_supports_zero_downtime_in-place_version_upgrades/
Announcing the New AWS Free Tier Widget on the AWS Billing Dashboard | https://aws.amazon.com/about-aws/whats-new/2018/07/aws-billing-dashboard-free-tier-widget/
New AWS Public Datasets Available from Allen Institute for Brain Science, NOAA, Hubble Space Telescope, and Others | https://aws.amazon.com/about-aws/whats-new/2018/07/new-aws-public-datasets-available/
Learn how to architect a data lake where different teams within your organization can publish and consume data in a self-service manner. As organizations aim to become more data-driven, data engineering teams have to build architectures that can cater to the needs of diverse users - from developers to business analysts to data scientists. Each of these user groups employs different tools, has different data needs, and accesses data in different ways. In this talk, we will dive deep into assembling a data lake using Amazon S3, Amazon Kinesis, Amazon Athena, Amazon EMR, and AWS Glue. The session will feature Mohit Rao, Architect and Integration lead at Atlassian, the maker of products such as JIRA, Confluence, and Stride. First, we will look at a couple of common architectures for building a data lake. Then we will show how Atlassian built a self-service data lake, where any team within the company can publish a dataset to be consumed by a broad set of users.
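The self-service pattern described above comes down to a shared catalog: producing teams publish a dataset's metadata (location, owner, schema), and consumers resolve a logical name instead of hard-coding paths. Here is a toy Python sketch of that contract - in a real deployment this role is played by the AWS Glue Data Catalog, and all names and fields below are illustrative, not Atlassian's actual design:

```python
# Toy model of a self-service data lake catalog: teams publish datasets
# (pointers to S3 locations plus a schema), and consumers discover them
# by name. Illustrative only; in practice the AWS Glue Data Catalog
# serves this purpose.
from dataclasses import dataclass


@dataclass
class Dataset:
    name: str
    s3_location: str
    owner_team: str
    schema: dict  # column name -> type name


class DataLakeCatalog:
    def __init__(self):
        self._datasets = {}

    def publish(self, dataset: Dataset) -> None:
        # Publishing is idempotent per dataset name; the last writer wins,
        # much like a crawler refreshing a table definition.
        self._datasets[dataset.name] = dataset

    def consume(self, name: str) -> Dataset:
        # Consumers resolve a logical name to a physical location, so
        # query engines (Athena, EMR) never hard-code S3 paths.
        return self._datasets[name]

    def list_datasets(self) -> list:
        return sorted(self._datasets)


catalog = DataLakeCatalog()
catalog.publish(Dataset(
    name="page_views",
    s3_location="s3://example-lake/page_views/",
    owner_team="analytics",
    schema={"page_id": "string", "viewed_at": "timestamp"},
))
ds = catalog.consume("page_views")
print(ds.s3_location)  # s3://example-lake/page_views/
```

The key design point is the indirection: any team can publish, and consumers depend only on the catalog entry, never on the producing team's storage layout.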
Organizations need to gain insight and knowledge from a growing number of data sources: Internet of Things (IoT) devices, APIs, clickstreams, and unstructured and log data. However, organizations are often limited by legacy data warehouses and ETL processes that were designed for transactional data. In this session, we introduce key ETL features of AWS Glue and cover common use cases, ranging from scheduled nightly data warehouse loads to near real-time, event-driven ETL flows for your data lake. We discuss how to build scalable, efficient, and serverless ETL pipelines using AWS Glue. Additionally, Merck will share how they built an end-to-end ETL pipeline for their application release management system and launched it in production in less than a week using AWS Glue.
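Both scheduling patterns mentioned above map onto Glue triggers. As a rough sketch, here are the two trigger definitions expressed as the keyword arguments you would pass to boto3's `glue.create_trigger` - the field names follow the Glue API, but the job and trigger names are made up, and no AWS call is made here:

```python
# Two AWS Glue trigger definitions, as create_trigger keyword arguments.
# Job/trigger names are illustrative; no AWS credentials are needed to
# build these dicts.

# 1. Scheduled: a nightly warehouse load at 02:00 UTC.
nightly_trigger = {
    "Name": "nightly-warehouse-load",
    "Type": "SCHEDULED",
    "Schedule": "cron(0 2 * * ? *)",  # Glue uses 6-field cron syntax
    "Actions": [{"JobName": "load-warehouse"}],
    "StartOnCreation": True,
}

# 2. Conditional: run the data-lake ETL as soon as an upstream job
# succeeds, approximating a near real-time, event-driven flow.
event_driven_trigger = {
    "Name": "on-ingest-success",
    "Type": "CONDITIONAL",
    "Predicate": {
        "Conditions": [{
            "LogicalOperator": "EQUALS",
            "JobName": "ingest-raw-events",
            "State": "SUCCEEDED",
        }]
    },
    "Actions": [{"JobName": "transform-data-lake"}],
}

# With credentials configured, you would create them like:
#   import boto3
#   glue = boto3.client("glue")
#   glue.create_trigger(**nightly_trigger)
```

The same job code serves both patterns; only the trigger type changes, which is what makes moving from nightly batch to event-driven flows relatively cheap.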
Sysco has nearly 200 operating companies across its multiple lines of business throughout the United States, Canada, Central/South America, and Europe. As the global leader in food services, Sysco identified the need to streamline the collection, transformation, and presentation of data produced by the distributed units and systems into a central data ecosystem. Sysco's Business Intelligence and Analytics team addressed these requirements by creating a data lake with scalable analytics and query engines leveraging AWS services. In this session, Sysco will outline their journey from a hindsight-reporting-focused company to an insights-driven organization. They will cover solution architecture, challenges, and lessons learned from deploying a self-service insights platform. They will also walk through the design patterns they used and how they designed the solution to provide predictive analytics using Amazon Redshift Spectrum, Amazon S3, Amazon EMR, AWS Glue, Amazon Elasticsearch Service, and other AWS services.
In this episode Mark is joined by ex-colleague and now Technical Advisor to Gluent, Michael Rainey, to talk about hybrid platforms and Gluent's new cloud offload capability, the Hadoop market in general, and his thoughts on data engineering and the recently released AWS Glue data integration service.
Gluent Cloud Sync – Sharing Data to Enable Analytics in the Cloud
Gluent Case Studies
Gluent Data Platform Overview
Amazon Glue
The Rise of the Data Engineer and The Downfall of the Data Engineer by Maxime Beauchemin
Drill to Detail Ep.26 'Airflow, Superset & The Rise of the Data Engineer' with Special Guest Maxime Beauchemin
Drill to Detail Ep.12 'Gluent and the New World of Hybrid Data' with Special Guest Tanel Poder
In the exciting 20th episode of AWS TechChat, hosts Dr Pete and Oli take listeners through new service announcements for AWS Migration Hub, Amazon Macie, AWS CloudTrail Event History, and AWS Glue; the launch of new edge locations for Amazon CloudFront; general availability of Lambda@Edge and VPC endpoints; and updates and information around Amazon DynamoDB, Amazon EFS, NOAA GOES-R on AWS, UK Met Office Forecast Data, New Quick Start, AWS CodeCommit, AWS SAM Local, and AWS CodeDeploy.
In this episode we talk about… the AWS Summit NYC, Roomba and data privacy, child sex dolls, Game of Thrones and Coinbase the blockchain unicorn! Breakfast from Aqua Shard, London Bridge. Where bytes and bites collide.
AWS Glue is a fully managed ETL service that makes it easy to understand your data sources, prepare the data for analytics, and load it reliably to your data stores. In this session, we will introduce AWS Glue, provide an overview of its components, and discuss how you can use the service to simplify and automate your ETL process. We will also talk about when you can try out the service and how to sign up for a preview.
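The components mentioned above break down into a crawler (understand your data sources), a Data Catalog (store table metadata), and ETL jobs (prepare and load the data). The following toy Python sketch mimics that flow purely for illustration - real Glue crawlers and jobs run as managed Apache Spark workloads, and the table and column names below are made up:

```python
# Toy illustration of AWS Glue's three main components: a crawler that
# infers a schema from raw records, a "Data Catalog" that stores table
# metadata, and an ETL job that transforms data. Illustrative only.

def crawl(records):
    """Infer a column -> Python type-name mapping (the crawler's role)."""
    schema = {}
    for rec in records:
        for col, val in rec.items():
            schema[col] = type(val).__name__
    return schema


data_catalog = {}  # the "Data Catalog": table name -> inferred schema


def etl_job(records, transform):
    """Apply a transform to every record (the ETL job's role)."""
    return [transform(rec) for rec in records]


raw = [{"user": "ann", "ms": 1200}, {"user": "bob", "ms": 450}]
data_catalog["clickstream"] = crawl(raw)  # {'user': 'str', 'ms': 'int'}
clean = etl_job(raw, lambda r: {**r, "s": r["ms"] / 1000})
print(clean[0]["s"])  # 1.2
```

The ordering mirrors how Glue is typically used: crawl first to populate the catalog, then run ETL jobs that read and write against the cataloged tables.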