Podcasts about Amazon DynamoDB

  • 32 podcasts
  • 150 episodes
  • 42m avg duration
  • Infrequent episodes
  • Latest: Sep 16, 2024

POPULARITY (2017–2024)


Best podcasts about Amazon DynamoDB

Latest podcast episodes about Amazon DynamoDB

AWS Morning Brief
The Trouble With Coding Assistant Demos

Sep 16, 2024 · 6:09


AWS Morning Brief for the week of September 16, with Corey Quinn. Links:
  • Amazon EC2 P5e instances are generally available via EC2 Capacity Blocks
  • Announcing Storage Browser for Amazon S3 for your web applications (alpha release)
  • Building a privacy preserving chatbot with Amazon Bedrock
  • Faster development with Amazon DynamoDB and Amazon Q Developer
  • Linux Support Updates for AWS CLI v2
  • Best prompting practices for using Meta Llama 3 with Amazon SageMaker JumpStart
  • Build a RAG-based QnA application using Llama3 models from SageMaker JumpStart
  • Optimizing Amazon S3 data transfers over Direct Connect
  • New whitepaper available: Building security from the ground up with Secure by Design
  • Summary of the AWS Service Event in the Northern Virginia (US-EAST-1) Region
  • Opt out from all supported AWS AI services
  • Oracle and Amazon Web Services Announce Strategic Partnership

AWS Podcast
#681: Amazon DynamoDB Deep Dive

Aug 19, 2024 · 48:56


Simon is joined by Jason Hunter, AWS Principal Specialist Solutions Architect, to dive super-deep into how to make the most of DynamoDB. Whether you are new to DynamoDB or have been using it for years, there is something in this episode for everyone! Shownotes:
  • Jason's blog posts: https://aws.amazon.com/blogs/database/author/jzhunter/
  • The Apache Iceberg blog: https://aws.amazon.com/blogs/database/use-amazon-dynamodb-incremental-export-to-update-apache-iceberg-tables/
  • Traffic spikes (on-demand vs provisioned): https://aws.amazon.com/blogs/database/handle-traffic-spikes-with-amazon-dynamodb-provisioned-capacity/
  • Cost-effective bulk actions like delete: https://aws.amazon.com/blogs/database/cost-effective-bulk-processing-with-amazon-dynamodb/
  • A deep dive on partitions: https://aws.amazon.com/blogs/database/part-1-scaling-dynamodb-how-partitions-hot-keys-and-split-for-heat-impact-performance/
  • Global tables prescriptive guidance (the 25-page deep dive): https://docs.aws.amazon.com/prescriptive-guidance/latest/dynamodb-global-tables/introduction.html
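The on-demand vs. provisioned discussion in this episode ultimately comes down to capacity-unit arithmetic. As a rough back-of-the-envelope sketch (the helper functions below are illustrative, not from the episode): one read capacity unit covers a strongly consistent read of an item up to 4 KB per second, and one write capacity unit covers a write of an item up to 1 KB per second.

```python
import math

# Back-of-the-envelope DynamoDB provisioned-capacity sizing.
# 1 RCU = 1 strongly consistent read/sec of an item up to 4 KB
#         (an eventually consistent read costs half as much).
# 1 WCU = 1 write/sec of an item up to 1 KB.

def read_capacity_units(item_size_bytes: int, reads_per_sec: int,
                        strongly_consistent: bool = True) -> int:
    units_per_read = math.ceil(item_size_bytes / 4096)
    if not strongly_consistent:
        units_per_read = units_per_read / 2
    return math.ceil(reads_per_sec * units_per_read)

def write_capacity_units(item_size_bytes: int, writes_per_sec: int) -> int:
    return writes_per_sec * math.ceil(item_size_bytes / 1024)

# 6 KB items read 100x/sec strongly consistent -> 2 RCU each -> 200 RCU.
print(read_capacity_units(6144, 100))                             # 200
# Same workload with eventually consistent reads -> 100 RCU.
print(read_capacity_units(6144, 100, strongly_consistent=False))  # 100
# 2.5 KB items written 10x/sec -> 3 WCU each -> 30 WCU.
print(write_capacity_units(2560, 10))                             # 30
```

If your math for a steady workload lands well below what on-demand would cost, provisioned capacity (with auto scaling) is usually the cheaper choice; spiky, unpredictable traffic is where on-demand earns its premium.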

Salesforce Developer Podcast
219: Salesforce and AWS Unified Developer Experience with Philippe Ozil

May 27, 2024 · 14:12


In this episode, we discuss the intricacies of integrating Salesforce with AWS alongside expert Philippe Ozil. We cover the capabilities of Salesforce Connect and its AWS adapters, which facilitate efficient data and event-driven integrations without the hassle of data replication. Hear the benefits of Amazon DynamoDB, Amazon Athena, and the versatile GraphQL adapter, each serving unique use cases within Salesforce. As we wait to hear Philippe's insights from the CzechDreamin conference in Prague, we encourage you to continue your learning journey. Thank you for joining us and sharing in our passion for the evolving world of Salesforce development. Show Highlights:
  • Overview of Salesforce Connect and its AWS adapters.
  • Benefits of virtualizing access to external data sources and their impact on developer experience.
  • Event-driven integrations using event relays to connect Salesforce events to Amazon EventBridge.
  • Upcoming features like managed subscriptions for Pub/Sub API in the Salesforce Summer '24 release.
  • Practical tips for Salesforce developers to start experimenting with AWS integrations.

AWS Morning Brief
New Cognito Pricing Dimensions, More GenAI Boosterism

May 13, 2024 · 5:20


AWS Morning Brief for the week of May 13th, 2024, with Corey Quinn. Links:
  • Announcing Amazon Bedrock Studio preview
  • Amazon Cognito introduces tiered pricing for machine-to-machine (M2M) usage
  • Amazon EC2 Inf2 instances, optimized for generative AI, now in new regions
  • AWS Amplify Gen 2 is now generally available
  • AWS Cost Anomaly Detection reduces anomaly detection latency by up to 30%
  • Amazon DynamoDB introduces configurable maximum throughput for On-demand tables
  • Creating an organizational multi-Region failover strategy
  • Reimagine customer experiences with AWS at Customer Contact Week 2024
  • List unspent transaction outputs by address on Bitcoin with Amazon Managed Blockchain Query
  • Generative AI: Getting Proofs-of-Concept to Production

AWS Morning Brief
Cancel Recent Savings Plan Purchases

Mar 25, 2024 · 5:11


AWS Morning Brief for the week of March 25, 2024, with Corey Quinn. Links:
  • Amazon DynamoDB now supports AWS PrivateLink
  • Amazon WorkMail now supports Audit Logging
  • AWS announces a 7-day window to return Savings Plans
  • AWS CodeBuild now supports custom images for AWS Lambda compute
  • EC2 Mac Dedicated Hosts now provide visibility into supported macOS versions
  • Invoke AWS Lambda functions from cross-account Amazon Kinesis Data Streams
  • Traeger Grills's Customer Experience team drives customer satisfaction significantly using Amazon QuickSight
  • Bulk update Amazon DynamoDB tables with AWS Step Functions
  • Simplify cross-account access control with Amazon DynamoDB using resource-based policies
  • How to securely provide access to centralized AWS CloudTrail Lake logs across accounts in your organization
  • How to optimize DNS for dual-stack networks
  • Introducing mTLS for Application Load Balancer
  • 6 foundational capabilities you need for generative AI
  • It's time to evolve IT procurement
  • AWS and NVIDIA extend their collaboration to advance generative AI

AWS Morning Brief
A Slightly Better Free Tier

Feb 5, 2024 · 3:37


AWS Morning Brief for the week of February 5, 2024, with Corey Quinn. Links:
  • Amazon EC2 added new price protection for attribute based instance selection
  • AWS announces a new Local Zone in Chicago, Illinois
  • AWS Free Tier now includes 750 hours of free Public IPv4 addresses, as charges for Public IPv4 begin
  • Optimize costs by automating AWS Compute Optimizer recommendations
  • A new and improved AWS CDK construct for Amazon DynamoDB tables
  • Announcing Generative AI CDK Constructs
  • AWS Marketplace now available in the AWS Secret Region
  • Building your machine learning skills from zero
  • I dipped my toes in the Machine Learning® world a while back and found an impressively great tool for it: Google Colab.
  • RHEL Pricing – Amazon Web Services
  • Incorrect RI / SP Purchase Warnings

Software Defined Talk
Episode 443: Everything is maintenance

Dec 1, 2023 · 61:54


This week, we review the major announcements from AWS re:Invent and discuss how the hyperscalers are embracing A.I. Plus, a few thoughts on children's chores. Watch the YouTube Live Recording of Episode (https://www.youtube.com/watch?v=q0xwqUis6xA) 443 (https://www.youtube.com/watch?v=q0xwqUis6xA) Runner-up Titles No Slack The Corporate Podcast. Quality of life stop Our roads diverge Eats a bag of llama Nobody wants to do a bake-off AI all the time Rundown AWS re:Invent Top announcements of AWS re:Invent 2023 | Amazon Web Services (https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2023/) Salesforce Inks Deal to Sell on Amazon Web Services' Marketplace (https://www.bloomberg.com/news/articles/2023-11-27/salesforce-to-sell-software-on-aws-marketplace-in-self-service-purchase-push#xj4y7vzkg) AWS Unveils Next Generation AWS-Designed Chips (https://press.aboutamazon.com/2023/11/aws-unveils-next-generation-aws-designed-chips) Join the preview for new memory-optimized, AWS Graviton4-powered Amazon EC2 instances (R8g) (https://aws.amazon.com/blogs/aws/join-the-preview-for-new-memory-optimized-aws-graviton4-powered-amazon-ec2-instances-r8g/) Announcing the new Amazon S3 Express One Zone high performance storage class (https://aws.amazon.com/blogs/aws/new-amazon-s3-express-one-zone-high-performance-storage-class/) AWS unveils new Trainium AI chip and Graviton 4, extends Nvidia partnership (https://www.zdnet.com/article/aws-unveils-new-trainium-ai-chip-and-graviton-4-extends-nvidia-partnership/) AI Chip - AWS Inferentia - AWS (https://aws.amazon.com/machine-learning/inferentia/) DGX Platform (https://www.nvidia.com/en-au/data-center/dgx-platform/) Foundational Models - Amazon Bedrock - AWS (https://aws.amazon.com/bedrock/) Supported models in Amazon Bedrock - Amazon Bedrock (https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html#models-supported-meta) Agents for Amazon Bedrock is now available with improved control of orchestration and 
visibility into reasoning (https://aws.amazon.com/blogs/aws/agents-for-amazon-bedrock-is-now-available-with-improved-control-of-orchestration-and-visibility-into-reasoning/) Knowledge Bases now delivers fully managed RAG experience in Amazon Bedrock (https://aws.amazon.com/blogs/aws/knowledge-bases-now-delivers-fully-managed-rag-experience-in-amazon-bedrock/) Customize models in Amazon Bedrock with your own data using fine-tuning and continued pre-training (https://aws.amazon.com/blogs/aws/customize-models-in-amazon-bedrock-with-your-own-data-using-fine-tuning-and-continued-pre-training/) Amazon Q brings generative AI-powered assistance to IT pros and developers (https://aws.amazon.com/blogs/aws/amazon-q-brings-generative-ai-powered-assistance-to-it-pros-and-developers-preview/) Improve developer productivity with generative-AI powered Amazon Q in Amazon CodeCatalyst (https://aws.amazon.com/blogs/aws/improve-developer-productivity-with-generative-ai-powered-amazon-q-in-amazon-codecatalyst-preview/) Upgrade your Java applications with Amazon Q Code Transformation (https://aws.amazon.com/blogs/aws/upgrade-your-java-applications-with-amazon-q-code-transformation-preview/) Introducing Amazon Q, a new generative AI-powered assistant (https://aws.amazon.com/blogs/aws/introducing-amazon-q-a-new-generative-ai-powered-assistant-preview/) New Amazon Q in QuickSight uses generative AI assistance for quicker, easier data insights (https://aws.amazon.com/blogs/aws/new-amazon-q-in-quicksight-uses-generative-ai-assistance-for-quicker-easier-data-insights-preview/) Amazon Managed Service for Prometheus collector provides agentless metric collection for Amazon EKS (https://aws.amazon.com/blogs/aws/amazon-managed-service-for-prometheus-collector-provides-agentless-metric-collection-for-amazon-eks/) Amazon CloudWatch Logs now offers automated pattern analytics and anomaly detection 
(https://aws.amazon.com/blogs/aws/amazon-cloudwatch-logs-now-offers-automated-pattern-analytics-and-anomaly-detection/) Use Amazon CloudWatch to consolidate hybrid, multicloud, and on-premises metrics (https://aws.amazon.com/blogs/aws/new-use-amazon-cloudwatch-to-consolidate-hybrid-multi-cloud-and-on-premises-metrics/) Amazon EKS Pod Identity simplifies IAM permissions for applications on Amazon EKS clusters (https://aws.amazon.com/blogs/aws/amazon-eks-pod-identity-simplifies-iam-permissions-for-applications-on-amazon-eks-clusters/) Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service is now available (https://aws.amazon.com/blogs/aws/amazon-dynamodb-zero-etl-integration-with-amazon-opensearch-service-is-now-generally-available/) Amazon says its first Project Kuiper internet satellites were fully successful in testing (https://www.cnbc.com/2023/11/16/amazon-kuiper-internet-satellites-fully-successful-in-testing.html) AWS takes the cheap shots (https://techcrunch.com/2023/11/28/aws-takes-the-cheap-shots/) Here's everything Amazon Web Services announced at AWS re:Invent (https://techcrunch.com/2023/11/28/heres-everything-aws-reinvent-2023-so-far/) Relevant to your Interests Oracle Cloud Made All The Right Moves In 2022 (https://moorinsightsstrategy.com/oracle-cloud-made-all-the-right-moves-in-2022/) Ransomware gang files SEC complaint over victim's undisclosed breach (https://www.bleepingcomputer.com/news/security/ransomware-gang-files-sec-complaint-over-victims-undisclosed-breach/) Keynote Highlights: Satya Nadella at Microsoft Ignite 2023 (https://www.youtube.com/watch?v=QMlUJqxhdoY) Thoma Bravo to sell about $500 million in Dynatrace stock (https://www.marketwatch.com/story/thoma-bravo-to-sell-about-500-million-in-dynatrace-stock-9d7bd0e6) FinOps Open Cost and Usage Specification 1.0-preview Released to Demystify Cloud Billing Data 
(https://www.prnewswire.com/news-releases/finops-open-cost-and-usage-specification-1-0-preview-released-to-demystify-cloud-billing-data-301990559.html?tc=eml_cleartime) AWS, Microsoft, Google and Oracle partner to make cloud spend more transparent | TechCrunch (https://techcrunch.com/2023/11/16/aws-microsoft-google-and-oracle-partner-to-make-cloud-spend-more-transparent/) Privacy is Priceless, but Signal is Expensive (https://signal.org/blog/signal-is-expensive/) Several popular AI products flagged as unsafe for kids by Common Sense Media | TechCrunch (https://techcrunch.com/2023/11/16/several-popular-ai-products-flagged-as-unsafe-for-kids-by-common-sense-media/) Amazon to sell Hyundai vehicles online starting in 2024 (https://finance.yahoo.com/news/amazon-sell-hyundai-vehicles-online-180500951.html) Amazon to launch car sales next year with Hyundai (https://news.google.com/articles/CBMiP2h0dHBzOi8vd3d3LmF4aW9zLmNvbS8yMDIzLzExLzE2L2FtYXpvbi1oeXVuZGFpLWNhcnMtc2FsZS1hbGV4YdIBAA?hl=en-US&gl=US&ceid=US%3Aen) Canonical Microcloud: Simple, free, on-prem Linux clustering (https://www.theregister.com/2023/11/16/canonical_microcloud/) Introducing the Functional Source License: Freedom without Free-riding (https://blog.sentry.io/introducing-the-functional-source-license-freedom-without-free-riding/) The Problems with Money In (Open Source) Software | Aneel Lakhani | Monktoberfest 2023 (https://www.youtube.com/watch?v=LTCuLyv6SHo) DXC Technology and AWS Take Their Strategic Partnership to the Next Level to Deliver the Future of Cloud for Customers (https://dxc.com/us/en/about-us/newsroom/press-releases/11202023) Broadcom and VMware Intend to Close Transaction on November 22, 2023 (https://www.businesswire.com/news/home/20231121379706/en/Broadcom-and-VMware-Intend-to-Close-Transaction-on-November-22-2023) Broadcom announces successful acquisition of VMware | Hock Tan (https://www.broadcom.com/blog/broadcom-announces-successful-acquisition-of-vmware) Broadcom closes $69 billion 
VMware deal after China approval (https://finance.yahoo.com/news/broadcom-closes-69-billion-vmware-133704461.html) VMware is now part of Broadcom | VMware by Broadcom (https://www.broadcom.com/info/vmware) Binance CEO Changpeng Zhao Reportedly Quits and Pleads Guilty to Breaking US Law (https://www.wired.com/story/binance-cz-ceo-quits-pleads-guilty-breaking-law/) Congrats To Elon Musk: I Didn't Think You Had It In You To File A Lawsuit This Stupid. But, You Crazy Bastard, You Did It! (https://www.techdirt.com/2023/11/21/congrats-to-elon-musk-i-didnt-think-you-had-it-in-you-to-file-a-lawsuit-this-stupid-but-you-crazy-bastard-you-did-it/) Hackers spent 2+ years looting secrets of chipmaker NXP before being detected (https://arstechnica.com/security/2023/11/hackers-spent-2-years-looting-secrets-of-chipmaker-nxp-before-being-detected/) Meet ‘Anna Boyko': How a Fake Speaker Blew up DevTernity (https://thenewstack.io/meet-anna-boyko-how-a-fake-speaker-blew-up-devternity/) IBM's Db2 database dinosaur comes to AWS (https://go.theregister.com/feed/www.theregister.com/2023/11/29/aws_launch_ibms_db2_database/) Reports of AI ending human labour may be greatly exaggerated (https://www.ecb.europa.eu/pub/economic-research/resbull/2023/html/ecb.rb231128~0a16e73d87.es.html) New Google geothermal electricity project could be a milestone for clean energy (https://apnews.com/article/geothermal-energy-heat-renewable-power-climate-5c97f86e62263d3a63d7c92c40f1330d) VMware's $92bn sale showers cash on Michael Dell and Silver Lake (https://www.ft.com/content/d01901a2-db4b-45df-8ce5-f57ff46d463e) Gartner Says Cloud Will Become a Business Necessity by 2028 (https://www.gartner.com/en/newsroom/press-releases/2023-11-29-gartner-says-cloud-will-become-a-business-necessity-by-2028) IRS starts the bidding for $1.9B IT services recompete (https://www.nextgov.com/acquisition/2023/11/irs-starts-bidding-19b-it-services-recompete/392303/) WSJ News Exclusive | Apple Pulls Plug on Goldman Credit-Card 
Partnership (https://www.wsj.com/finance/banking/apple-pulls-plug-on-goldman-credit-card-partnership-ca1dfb45) Apple employees most likely to leave to join Google shows LinkedIn (https://9to5mac.com/2023/11/23/apple-employees-next-jobs/) Ranked: Worst Companies for Employee Retention (U.S. and UK) (https://www.visualcapitalist.com/cp/ranked-worst-companies-for-employee-retention-u-s-and-uk/) Apple announces RCS support for iMessage (https://arstechnica.com/gadgets/2023/11/apple-announces-rcs-support-for-imessage/) Apple says iPhones will support RCS in 2024 (https://www.theverge.com/2023/11/16/23964171/apple-iphone-rcs-support) Today on The Vergecast: what Apple really means when it talks about RCS. (https://www.theverge.com/2023/11/17/23965656/today-on-the-vergecast-what-apple-really-means-when-it-talks-about-rcs) Nonsense Ikea debuts a trio of affordable smart home sensors (https://www.theverge.com/2023/11/28/23977693/ikea-sensors-door-window-water-motion-price-date-specs) Apple and Spotify have revealed their top podcasts of 2023 (https://www.theverge.com/2023/11/29/23981468/apple-replay-spotify-wrapped-podcasts-rogan-crime-junkie-alex-cooper) Listener Feedback Matt's Trackball: Amazon.com: Kensington Expert Trackball Mouse (K64325), Black Silver, 5"W x 5-3/4"D x 2-1/2"H : Electronics (https://amzn.to/3ujm7ct) Conferences Jan 29, 2024 to Feb 1, 2024 That Conference Texas (https://that.us/events/tx/2024/schedule/) If you want your conference mentioned, let's talk media sponsorships. SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Get a SDT Sticker! Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! 
Follow us: Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), Mastodon (https://hachyderm.io/@softwaredefinedtalk), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk), Threads (https://www.threads.net/@softwaredefinedtalk) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured). Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)! Recommendations Brandon: The Complete History & Strategy of Visa (https://www.acquired.fm/episodes/visa) Matt: Markdown in Google Docs (https://support.google.com/docs/answer/12014036) Google Docs to Markdown (https://workspace.google.com/marketplace/app/docs_to_markdown/700168918607) Coté: pork chops, preferably thin sliced. Photo Credits Header (https://unsplash.com/photos/bike-on-concrete-floor-j0zlzt40J-0) Artwork (https://unsplash.com/photos/person-holding-black-amazon-echo-dot-qQRrhMIpxPw)

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)

If you work on serverless architectures and are building Lambdas on AWS, it's highly likely you are already using DynamoDB, and if you aren't, it's only a matter of time before you realize you really ought to :) While there's no dearth of NoSQL databases, and AWS has support (to varying degrees) for a number of them, DynamoDB is a slightly unique database with a specific purpose, both in where it fits and in how well it does there. Given that, it's certainly useful to understand it a bit better. Purchase the course in one of two ways:
1. Go to https://getsnowpal.com and purchase it on the Web.
2. On your phone:
    (i) If you are an iPhone user, go to http://ios.snowpal.com and watch the course on the go.
    (ii) If you are an Android user, go to http://android.snowpal.com.
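Part of what makes DynamoDB feel "slightly unique" is its low-level wire format: every attribute in a PutItem or GetItem request is wrapped in a type descriptor such as {"S": ...}, {"N": ...}, or {"BOOL": ...}. A minimal pure-Python sketch of that marshalling (in real code boto3's TypeSerializer does this for you; the function name here is illustrative):

```python
from typing import Any

# Sketch of DynamoDB's low-level attribute-value format, i.e. the JSON
# shape the service expects for item attributes in the low-level API.

def to_attribute_value(value: Any) -> dict:
    if value is None:
        return {"NULL": True}
    if isinstance(value, bool):          # check before int: bool subclasses int
        return {"BOOL": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}         # numbers travel as strings
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, list):
        return {"L": [to_attribute_value(v) for v in value]}
    if isinstance(value, dict):
        return {"M": {k: to_attribute_value(v) for k, v in value.items()}}
    raise TypeError(f"unsupported type: {type(value)!r}")

item = {"pk": "USER#42", "age": 30, "tags": ["aws", "dynamodb"]}
marshalled = {k: to_attribute_value(v) for k, v in item.items()}
print(marshalled["age"])   # {'N': '30'}
```

Seeing this shape once makes the service's error messages, and the difference between the low-level client and the higher-level resource/document interfaces, much easier to reason about.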

AWS Morning Brief
Cloud Institute for the Criminally Underpaid

Oct 16, 2023 · 6:27


AWS Morning Brief for the week of October 16, 2023, with Corey Quinn. Links:
  • New Amazon CloudWatch metric monitors EC2 instance reachability to EBS volumes
  • Announcing AWS Lambda's support for Internet Protocol Version 6 (IPv6) for outbound connections in VPC
  • Announcing new AWS Network Load Balancer (NLB) availability and performance capabilities
  • Two billion downloads of Terraform AWS Provider shows value of IaC for infrastructure management
  • Why purpose-built artificial intelligence chips may be key to your generative AI strategy
  • How Zalando migrated their shopping carts to Amazon DynamoDB from Apache Cassandra
  • Unlocking cost-optimization: Open Raven's journey with Amazon Aurora I/O-Optimized leads to 60% savings
  • How does Cloud enable the transformation of Bank finance functions?
  • AWS Cloud Institute: Virtual training program for cloud developers

AWS Morning Brief
VirtuSwap's Giant Panda Accelerator

Sep 25, 2023 · 5:07


AWS Morning Brief for the week of September 25, 2023, with Corey Quinn. Links: Today Corey is hosting a drink-up at 6 PM in Seattle at Outer Planet Brewing. If you're in town / free, come on by; let him buy you a beer. Later this week Corey will be hosting an AMA on 9/27 @ noon PDT over on YouTube. Bring questions!
  • Accenture Extends Generative AI Capabilities to Accelerate Adoption and Value on AWS
  • New – Amazon EC2 M2 Pro Mac Instances Built on Apple Silicon M2 Pro Mac Mini Computers
  • How Chime Financial uses AWS to build a serverless stream analytics platform and defeat fraudsters
  • Centralizing management of AWS Lambda layers across multiple AWS Accounts
  • Handle traffic spikes with Amazon DynamoDB provisioned capacity
  • Streamline interstate Department of Motor Vehicles collaboration with Private Blockchain
  • How to host your Unreal Engine game for under $1 per player with Amazon GameLift
  • How United Airlines built a cost-efficient Optical Character Recognition active learning pipeline
  • How VirtuSwap accelerates their pandas ... -based trading simulations with an Amazon SageMaker Studio custom container and AWS GPU instances
  • Provision sandbox accounts with budget limits to reduce costs using AWS Control Tower
  • Reducing the Scope of Impact with Cell-Based Architecture
  • From Massage Therapist to Cloud Associate with AWS Academy

The Cloud Pod
219: The Cloud Pod Proclaims: One Does Not Just Entra into Mordor

Jul 20, 2023 · 22:57


Welcome to episode 219 of The Cloud Pod podcast - where the forecast is always cloudy! Today your hosts are Justin and Jonathan, and they discuss all things cloud, including clickstream analytics, Databricks, Microsoft Entra, virtual machines, Outlook threats, and some major changes over at the Google Cloud team. Titles we almost went with this week: TCP is not Entranced with Entra ID; The Cave you Fear to Entra, Holds the Treasure you Seek; Microsoft should rethink Entra rules for their Email. A big thanks to this week's sponsor: Foghorn Consulting, which provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and cloud backlogs as soon as next week.

Ready, Set, Cloud Podcast!
Writing the DynamoDB Book with Alex Debrie

May 5, 2023 · 26:28


Join Alex DeBrie and Allen Helton as they talk about the work that went into writing the famous DynamoDB book. Alex tells us how it all got started, how long it took to write the book, and what was easier and harder than expected about the entire process. The two also talk about imposter syndrome and how to fight back when you feel like your content isn't quite good enough. Stay tuned until the end, when Alex asks for help deciding what book he should write next! About Alex: Alex DeBrie is an AWS Data Hero who wrote The DynamoDB Book, a comprehensive guide to data modeling with Amazon DynamoDB. He has given talks at various AWS events and meetups and is passionate about spreading his love for serverless architectures. Alex has worked with a variety of clients, from product companies to government agencies, and aims to help the next generation of developers learn AWS. Prior to his current work, Alex was a corporate lawyer; he currently resides in Omaha, NE with his family. Links:
  • Twitter - https://twitter.com/alexbdebrie
  • LinkedIn - https://www.linkedin.com/in/alex-debrie
  • Alex's blog - https://www.alexdebrie.com
  • The DynamoDB Book - https://dynamodbbook.com
--- Send in a voice message: https://podcasters.spotify.com/pod/show/readysetcloud/message Support this podcast: https://podcasters.spotify.com/pod/show/readysetcloud/support
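The data-modeling approach the book is known for is single-table design: many entity types share one table, and composite partition/sort keys encode the access patterns. A tiny illustrative sketch (the "PK"/"SK" attribute names and USER#/ORDER# prefixes are conventional examples, not requirements):

```python
# Single-table design sketch: related entities share a partition so
# one Query can fetch them together.

def user_key(user_id: str) -> dict:
    # The user item is its own partition; SK repeats the PK by convention.
    return {"PK": f"USER#{user_id}", "SK": f"USER#{user_id}"}

def order_key(user_id: str, order_id: str) -> dict:
    # Orders live under the user's partition, so a single Query on
    # PK = "USER#<id>" with SK beginning "ORDER#" returns all of them.
    return {"PK": f"USER#{user_id}", "SK": f"ORDER#{order_id}"}

print(user_key("123"))               # {'PK': 'USER#123', 'SK': 'USER#123'}
print(order_key("123", "2024-001"))  # {'PK': 'USER#123', 'SK': 'ORDER#2024-001'}
```

The design choice being illustrated: because DynamoDB has no joins, you pre-join data by key layout rather than at query time.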

The Cloud Pod
209: The Cloud Pod Whispers Sweet Nothings To Our Code (**why wont you work**)

Apr 28, 2023 · 44:35


Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan and Jonathan are your hosts this week as we discuss all the latest news and announcements in the world of the cloud and AI - including Amazon's new AI, Bedrock, as well as new AI tools from other developers. We also address the new updates to AWS's CodeWhisperer, and return to our Cloud Journey Series where we discuss *insert dramatic music* - Kubernetes!  Titles we almost went with this week: ⭐I'm always Whispering to My Code as an Individual

AWS Morning Brief
Your Network Bill is Now Diamonds

Apr 10, 2023 · 5:43


AWS Morning Brief for the week of April 10, 2023, with Corey Quinn. Links:
  • Console Toolbar is now generally available for AWS CloudShell
  • Announcing CSV Export for AWS Resource Explorer Search Results
  • Announcing Utilization Notifications for EC2 On-Demand Capacity
  • Everything you need to know about AWS Billing Conductor's new pricing model
  • How to use Amazon CloudWatch to monitor Amazon DynamoDB table size and item count metrics
  • Implement resource counters with Amazon DynamoDB
  • AWS Organizations, moving an organization member account to another organization: Part 3
  • Build secure multi-account multi-VPC connectivity for your applications with Amazon VPC Lattice
  • Higher education cloud financial planning: A former CFO's perspective
  • How the Think Big for Small Business program helps small businesses win big contracts
  • Amazon started passing out Small Business labels to giant companies.
  • Perfect imperfections: how AWS is innovating on diamond materials for quantum communication with Element Six

AWS Morning Brief
Friendship Started with Microservices

Apr 3, 2023 · 4:11


AWS Morning Brief for the week of April 3, 2023, with Corey Quinn. Links:
  • Amazon Kendra launches Featured Results
  • AWS Chatbot now supports search of AWS resources and AWS content
  • AWS Copilot adds support for full customization with AWS CDK or YAML overrides
  • AWS re:Post now includes AWS Knowledge Center articles
  • New Cost Explorer users now get Cost Anomaly Detection by default
  • Introducing Data on EKS – Modernize Data Workloads on Amazon EKS
  • Friend microservices using Amazon DynamoDB and event filtering

The Cloud Pod
203: From vaporware to visual apps – AWS App Composer Generally Available

Mar 16, 2023 · 40:47


On this episode of The Cloud Pod, the team talks about the new AWS region in Malaysia, the launch of AWS App Composer, the expansion of Spanner database capabilities, the release of a vision AI by Microsoft (the Florence foundation model), and three techniques for migrating to the cloud. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights

AWS Morning Brief
Bored? See the AWS Job Board

Mar 13, 2023 · 5:53


AWS Morning Brief for the week of March 13, 2023, with Corey Quinn. Links:
  • jobs.lastweekinaws.com
  • Amazon EC2 announces the ability to create Amazon Machine Images (AMIs) that can boot on UEFI and Legacy BIOS
  • AWS Application Composer is now generally available
  • AWS CloudShell now supports the modular variant of AWS Tools for PowerShell
  • AWS Config now supports 18 new resource types
  • AWS Lambda now supports up to 10 GB of ephemeral storage for Lambda functions in 6 additional regions
  • AWS announces new competition structure for the 2023 Season
  • AWS Resource Explorer supports 12 new resource types
  • Announcing lower data warehouse base capacity configuration for Amazon Redshift Serverless
  • Meet the Newest AWS Heroes – March 2023
  • Subscribe to AWS Daily Feature Updates via Amazon SNS
  • Calculate Amazon DynamoDB reserved capacity recommendations to optimize costs
  • How to use deletion protection to enhance your Amazon DynamoDB table protection strategy
  • Push notification engagement metrics tracking
  • Build Cloud Operations skills using the new AWS Observability Training

The Cloud Pod
201: The CloudPod is assimilated and joins the Azure Collective

Feb 28, 2023 · 36:04


On this episode of The Cloud Pod, the team discusses the AWS Systems Manager default-enablement option for all EC2 instances in an account, different ideas for leveraging the Innovators Plus subscription's $500 Google Cloud credits, Azure Open Source Day, the new theme for the Oracle OCI Console, and lastly, different ways to migrate to a cloud provider. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights

The Cloud Pod
200: Now you can make bad cloud decisions like running EKS on SNOW

Feb 22, 2023 · 50:18


EKS on Snow Devices
On this episode of The Cloud Pod, the team highlights the new Graviton3-based images for users of AWS, new ways provided by Google to pay for its cloud services, the new partnership between Azure and the FinOps Foundation, as well as Oracle's new cloud banking and the automation of CCOE. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)

(Part 1/2) As part of the current API Gateway work, we are using DynamoDB and in this series, I'll share my experience with it. #snowpal #projectmanagement Manage personal and professional projects on https://snowpal.com.

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)

(Part 2/2) As part of the current API Gateway work, we are using DynamoDB and in this series, I'll share my experience with it. #snowpal #projectmanagement Manage personal and professional projects on https://snowpal.com.

AWS Morning Brief
gp3 for thee, RDS

Nov 14, 2022 · 6:30


Links: Ben Kehoe has left iRobot. And where's he going next? Presumably to re:Invent! I am too, with my re:Quinnvent nonsense.
  • Amazon Athena announces Query Result Reuse to accelerate queries
  • Amazon EC2 enables you to opt out of directly shared Amazon Machine Images
  • Amazon EC2 placement groups can now be shared across multiple AWS accounts
  • Amazon EC2 now supports specifying list of instance types to use in attribute-based instance type selection for Auto Scaling groups, EC2 Fleet, and Spot Fleet
  • Amazon Lightsail announces support for domain registration and DNS autoconfiguration
  • Amazon RDS now supports new General Purpose gp3 storage volumes
  • Announcing recurring custom line items for AWS Billing Conductor
  • AWS Lambda announces Telemetry API, further enriching monitoring and observability capabilities of Lambda Extensions
  • AWS Cost Explorer's New Look and Common Use Cases
  • A New AWS Region Opens in Switzerland - eu-central-2 is now available.
  • Introducing AWS Resource Explorer – Quickly Find Resources in Your AWS Account
  • Overview of building resilient applications with Amazon DynamoDB global tables
  • Publish Amazon DevOps Guru Insights to Slack Channel
  • Uncompressed Media over IP on AWS: Read the whitepaper
  • Enable cross-account queries on AWS CloudTrail lake using delegated administration from AWS Organizations
  • NASA and ASDI announce no-cost access to important climate dataset on the AWS Cloud

AWS Morning Brief
Getting Lost in Cloud Map

AWS Morning Brief

Play Episode Listen Later Oct 11, 2022 6:00


Links: AWS Cloud Map Updates Service Level Agreement  Amazon DevOps Guru now allows customers control over the notifications they receive  Amazon S3 Object Lambda now supports using your own code to modify the results of S3 HEAD and LIST API requests Amazon SageMaker Clarify now can provide near real-time explanations for ML predictions  AWS Lambda Functions powered by AWS Graviton2 now available in 12 additional regions The five most visited Amazon DynamoDB blog posts of 2022  Prevent account takeover at login with the new Account Takeover Insights model in Amazon Fraud Detector  Bootstrapping multiple AWS accounts for AWS CDK using CloudFormation StackSets Designing hyperscale Amazon VPC networks 

52 Weeks of Cloud
52-weeks-aws-certified-developer-lambda-serverless

52 Weeks of Cloud

Play Episode Listen Later Sep 29, 2022 24:51


All right, so I'm here with 52 Weeks of AWS, still continuing the developer certification. I'm going to go ahead and share my screen here. We are on Lambda, one of my favorite topics, so let's get right into it and talk about how to develop event-driven solutions with AWS Lambda.

One of the things serverless computing does is change the way you think about building software. In a traditional deployment environment, you would configure an instance, update an OS, install applications, build and deploy them, and load balance. That's non-cloud-native computing. With serverless, you really only need to focus on building and deploying applications, and then monitoring and maintaining them. What serverless really does is let you focus on the code for the application: you don't have to manage the operating system or the servers, or scale them, and that's a huge advantage because you don't pay for the infrastructure when the code isn't running. That's the key takeaway.

If you take a look at the AWS serverless platform, there's a bunch of fully managed services that are tightly integrated with Lambda. And this is another huge advantage of Lambda: it isn't necessarily that it's the fastest or has the most powerful execution, it's the tight integration with the rest of the AWS platform, plus developer tools like the AWS Serverless Application Model (AWS SAM) that help you simplify the deployment of serverless applications. Some of those services include Amazon S3, Amazon SNS, Amazon SQS, and the AWS SDKs.

So, in terms of Lambda: AWS Lambda is a compute service for serverless, and it lets you run code without provisioning or managing servers. It lets you trigger your code in response to events that you configure — for example, dropping an image into an S3 bucket and invoking a Lambda function that transcodes it to a different format. It also scales automatically based on demand, and it incorporates built-in monitoring and logging with Amazon CloudWatch.

Some of the things Lambda does: it enables you to bring your own code. The code you write for Lambda isn't written in some new language; you can write things in tons of different languages — Node.js, Java, Python, C#, Go, Ruby — and there are custom runtimes too, so you could do Rust or Swift or something like that. It also integrates very deeply with other AWS services, and you can invoke third-party applications as well. It has a very flexible resource and concurrency model: Lambda scales in response to events, so you just configure memory settings and AWS handles the other details, like CPU, network, and I/O throughput. You can also use the AWS Identity and Access Management service (IAM) to grant access to whatever other resources you need. This is one of the ways you control the security of Lambda — you have real guardrails around it, because you give Lambda a role scoped to whatever you need Lambda to do, talk to SQS or talk to S3, and it can only act within that role. Lambda also has built-in availability and fault tolerance: it's a fully managed, highly available service, and you don't have to do anything at all to get that. And one of the biggest things about Lambda is that you only pay for what you use. When the Lambda service is idle, you don't pay for it — versus something else, even a Kubernetes-based system, where there's still a host machine running Kubernetes that you have to pay for.

One way to think about Lambda is through its many use cases. Web apps, I think, are one of the better ones to start with: you can combine AWS Lambda with other services and build powerful web apps that automatically scale up and down, with no administrative effort at all — no backups necessary, no multi-data-center redundancy; it's done for you. Backends: you can build serverless backends that handle web, mobile, IoT, and third-party applications, and you can build those backends with Lambda and API Gateway. For data processing, you can use Lambda to run code in response to a trigger — a change in data, a shift in system state — and really, most of AWS can be orchestrated with Lambda, so it's a glue-type service. Chatbots are another great use case: Amazon Lex is a service for building conversational chatbots, and you can use it with Lambda. Lambda can also be used with voice and IT automation. These are all great use cases — in fact, I would say it's kind of the go-to automation tool for AWS.

So let's talk about how Lambda works next. The way Lambda works is that there's a function and an event source, and these are the core components. The event source is the entity that publishes events to AWS Lambda, and the Lambda function is the code you use to process the event; AWS Lambda runs that function on your behalf. A few things to consider: it really is just a little bit of code, and you can configure triggers to invoke a function in response to resource lifecycle events — for example, responding to incoming HTTP requests, consuming events from a queue like SQS, or running on a schedule. Running on a schedule is actually a really good data engineering task; you could run a function periodically to scrape a website. As a developer, when you create Lambda functions managed by the AWS Lambda service, you define the permissions for the function and specify which events actually trigger it. You also create a deployment package that includes your application code and any dependency or library necessary to run it, and you configure things like the memory, the timeout, and the concurrency. Then, when your function is invoked, Lambda provides a runtime environment based on the runtime and configuration options you selected.

Let's talk about models for invoking Lambda functions. An event source invokes a Lambda function by either a push or a pull model. In the push model, the event source directly invokes the Lambda function when the event occurs. In the pull model, the information goes into a stream or a queue, Lambda polls that stream or queue, and it invokes the function when it detects an event.

Some services can invoke the function directly. For a synchronous invocation, the other service waits for the response from the function. A good example is Amazon API Gateway, the REST-based service in front: when a client makes a request to your API, that client gets a response immediately. With this model, there's no built-in retry in Lambda. Examples include Elastic Load Balancing, Amazon Cognito, Amazon Lex, Amazon Alexa, Amazon API Gateway, AWS CloudFormation, Amazon CloudFront, and Amazon Kinesis Data Firehose. For asynchronous invocation, AWS Lambda queues the event before it passes it to your function. The other service gets a success response as soon as the event is queued, and if an error occurs, Lambda automatically retries the invocation twice. Good examples are S3, SNS, SES (the Simple Email Service), AWS CloudFormation, Amazon CloudWatch Logs, CloudWatch Events, AWS CodeCommit, and AWS Config. In both cases, you can invoke a Lambda function using the Invoke operation and specify the invocation type as either synchronous or asynchronous. When you use an AWS service as a trigger, though, the invocation type is predetermined for each service, so you have no control over the invocation type those event sources use when they invoke your Lambda.

In the polling model, the event sources put information into a stream or a queue, and AWS Lambda polls the stream or the queue. When it finds a record, it delivers the payload and invokes the function; in this model, Lambda itself is pulling data from the stream or queue for processing by the Lambda function. Stream-based event sources would be Amazon DynamoDB or Amazon Kinesis Data Streams, and those stream records are organized into shards. Lambda polls the stream for records and then attempts to invoke the function; if there's a failure, AWS Lambda won't read any new records until the failed batch of records expires or is processed successfully. In the non-streaming case, which would be SQS, Lambda polls the queue for records. If processing fails or times out, the message returns to the queue, and Lambda keeps retrying the failed message until it's processed successfully. If the message expires — which is something you can configure in SQS — it's simply discarded. You can create the mapping between an event source and a Lambda function right inside the console, and that's typically how you'd set it up manually, without infrastructure as code.

All right, let's talk about permissions. This is definitely an easy place to get tripped up when you're first using AWS Lambda. There are two types of permissions. The first is the event source's permission to trigger the Lambda function — the invocation permission. The second is the set of permissions the Lambda function needs to interact with other services — the execution permissions. Both are handled via IAM, the AWS Identity and Access Management service. An IAM resource policy tells the Lambda service which push event sources have permission to invoke the Lambda function, and these resource policies make it easy to grant access to a Lambda function across AWS accounts. A good example: if you have an S3 bucket in your account and you need to invoke a function in another account, you can create a resource policy that allows the two to interact. The resource policy for a Lambda function is called a function policy, and when you add a trigger to your Lambda function from the console, the function policy is generated automatically; it allows the event source to take the lambda:InvokeFunction action. A good example would be granting Amazon S3 permission to invoke a Lambda function called MyFirstFunction: the effect would be Allow; the principal would be the service s3.amazonaws.com; the action would be lambda:InvokeFunction; the resource would be the ARN of the Lambda function; and the condition would be the ARN of the bucket. And really, that's it in a nutshell.

The Lambda execution role grants your Lambda function permission to access AWS services and resources, and you select or create the execution role when you create a Lambda function. The IAM policy defines the actions the Lambda function is allowed to take, and the trust policy allows the Lambda service to assume the execution role. To grant AWS Lambda permission to assume a role, you have to have permission for the iam:PassRole action. A couple of examples of relevant policies for an execution role: the IAM policy we talked about earlier that lets the function interact with S3, and another that lets it interact with CloudWatch Logs — creating a log group and streaming logs to it. The trust policy gives the Lambda service permission to assume the role and invoke the Lambda function on your behalf.

Now, an overview of authoring and configuring Lambda functions. To create a Lambda function, you first need to create a deployment package, which is a .zip or .jar file consisting of your code and any dependencies. With Lambda you can use the programming language and integrated development environment you're most familiar with, and you can bring code you've already written. Lambda supports lots of languages — Node.js, Python, Ruby, Java, Go, and .NET runtimes — and you can implement a custom runtime if you want to use a different language, which is actually pretty cool. When you create a Lambda function, you specify the handler; the Lambda function handler is the entry point. A few aspects are important to pay attention to. The event object provides information about the event that triggered the Lambda function; this can be a predefined object that an AWS service generates — in the AWS console, for example, you can ask for these sample objects and it gives you the JSON structure so you can test things out. The contents of the event object include everything you need to handle the event. The context object is generated by AWS and carries runtime information, so if you need runtime details about your code — environment variables, the AWS request ID, the log stream, or the remaining time in millis, which returns the number of milliseconds left before your function times out — you can get all of that from the context object.

So what about an example in Python? Pretty straightforward, actually: you write a handler — a Python function that takes an event and a context — and you return some kind of message.

A few best practices to remember for AWS Lambda: separate the core business logic from the handler method; this makes your code more portable and lets you target unit tests without having to worry about configuration, which is always a good idea in general. Make sure you have modular, single-purpose functions — not a kitchen-sink function. Treat functions as stateless: a function does one thing, and when it's done, no state is kept anywhere. And only include what you need — you don't want huge Lambda functions, and one way to avoid that is to reduce the time it takes Lambda to unpack the deployment package and to minimize the complexity of your dependencies. You can also reuse the temporary runtime environment to improve the performance of a function: the temporary runtime environment initializes any external dependencies of the Lambda code, so make sure any externalized configuration or dependencies your code retrieves are stored and referenced locally after the initial run. That means limiting re-initialization of variables and objects on every invocation, and keeping alive and reusing connections — HTTP, database — that were established during a previous invocation. A really good example is a socket connection: if a socket connection takes two seconds to establish, you don't want every Lambda call to wait two seconds; you want to reuse that connection.

More good practices: include logging statements. This is a big one for any cloud computing operation, especially distributed ones — if you don't log it, there's no way to figure out what's going on. Add logging statements with context, so you know which particular Lambda instance they came from. Include results, so you know what happened when the Lambda ran. Use environment variables, so you can see things like which bucket it was writing to. And don't write recursive code — that's really a no-no; you want very simple functions with Lambda.

There are a few different ways to write Lambda functions. You can use the console editor, which I use all the time — I like to just play around with it. The downside is that if you need custom libraries, you can't use them beyond, say, the AWS SDK, but for simple things it's a great option. Another is to upload a package through the AWS console: you create a deployment package in an IDE — for example, in Visual Studio for .NET you can right-click and deploy directly to Lambda. Another option is to upload the entire package to an S3 bucket, and Lambda grabs it from there.

A few things to remember about configuration: the memory and the timeout are the settings that determine how the Lambda function performs, and they affect billing. One of the great things about Lambda is that it's amazingly inexpensive to run, because you're charged based on the number of requests for a function and how long your code runs. A few things to remember: memory — if you specify more memory, it increases the cost. Timeout — you control the duration of the function with the timeout, but if you make the timeout too long, it can cost you more money. So the best practices are to test the performance of your Lambda and make sure you have the optimal memory size, and to load test it so you understand how the timeouts work. In general, anything in cloud computing should be load tested.

Now, the final topic: how to deploy Lambda functions. Versions are immutable copies of the code and configuration of your Lambda function, and versioning lets you publish one or more versions of your function. As a result, you can work with different variations of your Lambda function in your development workflow — development, beta, production, and so on. When you create a Lambda function, there's only one version: the latest version, $LATEST, and you can refer to the function using its ARN, or Amazon Resource Name. When you publish a new version, AWS Lambda makes a snapshot of the latest version to create the new version.

You can also create an alias for a Lambda function. Conceptually, an alias is just a pointer to a specific version, and you can use the alias ARN to reference whichever Lambda function version is currently associated with the alias. What's nice about aliases is that you can roll back and forth between versions, which is great because if you deploy a new version and there's a huge problem with it, you just toggle it right back — there's really no big issue rolling back your code.

Now, let's look at an example where Amazon S3 is the event source that invokes your Lambda function every time a new object is created. When Amazon S3 is the event source, the information for the event source mapping is stored in the configuration for the bucket notifications, and in that configuration you identify the Lambda function ARN that Amazon S3 can invoke. In some cases, though, you'd have to update the notification configuration so Amazon S3 invokes the correct version each time you publish a new version of your Lambda function. Instead of specifying the function ARN, you can specify an alias ARN in the notification configuration. As you promote a new version of the Lambda function into production, you only need to update the prod alias to point to the latest stable version — and you don't need to update the notification configuration in Amazon S3 at all.

When you build serverless applications, it's common to have code that's shared across Lambda functions — custom code, a standard library, and so on. Before, this was a real limitation: all of the code had to be deployed together. But now, one of the really cool things is that a Lambda function can include additional code as a layer. A layer is basically a zip archive that contains a library, maybe a custom runtime, maybe even a really cool pre-trained model. With layers, you can use those libraries in your function without needing to include them in your deployment package. It's a best practice to have smaller deployment packages and share common dependencies with layers; layers also help keep your deployment package small. For Node.js, Python, and Ruby functions, you can develop your function code in the console as long as you keep the package under three megabytes, and a function can use up to five layers at a time, which is pretty incredible — basically up to 250 megabytes total. For many languages, that's plenty of space. Amazon has also published a public layer that includes really popular libraries like NumPy and SciPy, which dramatically helps data processing and machine learning.

Now, if I had to predict the future and call a massive announcement, I would say AWS could offer a GPU-enabled layer at some point that includes pre-trained models, and if they did something like that, it could really open the doors for the pre-trained model revolution. I would bet that's possible.

All right — in a nutshell, AWS Lambda is one of my favorite services, and I think it's worth the time of anyone interested in AWS to play around with it. Next week, I'm going to cover API Gateway. See you next week.

If you enjoyed this video, here are additional resources to look at: Coursera + Duke Specialization:
Building Cloud Computing Solutions at Scale Specialization: https://www.coursera.org/specializations/building-cloud-computing-solutions-at-scalePython, Bash, and SQL Essentials for Data Engineering Specialization: https://www.coursera.org/specializations/python-bash-sql-data-engineering-dukeAWS Certified Solutions Architect - Professional (SAP-C01) Cert Prep: 1 Design for Organizational Complexity:https://www.linkedin.com/learning/aws-certified-solutions-architect-professional-sap-c01-cert-prep-1-design-for-organizational-complexity/design-for-organizational-complexity?autoplay=trueEssentials of MLOps with Azure and Databricks: https://www.linkedin.com/learning/essentials-of-mlops-with-azure-1-introduction/essentials-of-mlops-with-azureO'Reilly Book: Implementing MLOps in the EnterpriseO'Reilly Book: Practical MLOps: https://www.amazon.com/Practical-MLOps-Operationalizing-Machine-Learning/dp/1098103017O'Reilly Book: Python for DevOps: https://www.amazon.com/gp/product/B082P97LDW/O'Reilly Book: Developing on AWS with C#: A Comprehensive Guide on Using C# to Build Solutions on the AWS Platformhttps://www.amazon.com/Developing-AWS-Comprehensive-Solutions-Platform/dp/1492095877Pragmatic AI: An Introduction to Cloud-based Machine Learning: https://www.amazon.com/gp/product/B07FB8F8QP/Pragmatic AI Labs Book: Python Command-Line Tools: https://www.amazon.com/gp/product/B0855FSFYZPragmatic AI Labs Book: Cloud Computing for Data Analysis: https://www.amazon.com/gp/product/B0992BN7W8Pragmatic AI Book: Minimal Python: https://www.amazon.com/gp/product/B0855NSRR7Pragmatic AI Book: Testing in Python: https://www.amazon.com/gp/product/B0855NSRR7Subscribe to Pragmatic AI Labs YouTube Channel: https://www.youtube.com/channel/UCNDfiL0D1LUeKWAkRE1xO5QSubscribe to 52 Weeks of AWS Podcast: https://52-weeks-of-cloud.simplecast.comView content on noahgift.com: https://noahgift.com/View content on Pragmatic AI Labs Website: https://paiml.com/
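As a rough illustration of the pricing discussion in the Lambda transcript above (cost driven by request count, memory size, and duration), here is a back-of-the-envelope estimator in Python. The default rates are assumptions based on typical published Lambda pricing, not figures from the episode, and the sketch ignores the free tier; check the current AWS pricing page before relying on any of these numbers.

```python
def estimate_lambda_cost(requests, memory_mb, avg_duration_ms,
                         price_per_million_requests=0.20,
                         price_per_gb_second=0.0000166667):
    """Rough monthly Lambda cost estimate (free tier ignored).

    The default prices are illustrative assumptions, not official
    figures; always check current AWS pricing for your region.
    """
    # Per-request charge, billed per million invocations.
    request_cost = (requests / 1_000_000) * price_per_million_requests
    # Compute charge: GB-seconds = invocations * memory (GB) * duration (s).
    gb_seconds = requests * (memory_mb / 1024) * (avg_duration_ms / 1000)
    compute_cost = gb_seconds * price_per_gb_second
    return request_cost + compute_cost
```

At these assumed rates, one million invocations of a 1,024 MB function averaging one second each comes out to roughly $16.87, almost all of it compute (GB-seconds) rather than per-request charges, which is why tuning memory size and timeout matters so much.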
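The versioning and alias workflow described in the transcript can be modeled with a toy in-memory class. To be clear, this is not the AWS Lambda API (there you would use operations like `publish-version` and `update-alias` via the CLI or SDK); it's just a sketch of the semantics: published versions are immutable snapshots, $LATEST is the mutable working copy, and an alias such as `prod` is a movable pointer that makes rollback a one-step operation.

```python
class ToyLambdaFunction:
    """Toy model of Lambda version/alias semantics -- NOT the AWS API."""

    def __init__(self, code):
        self.latest = code       # the mutable $LATEST working copy
        self.versions = {}       # version number -> immutable snapshot
        self.aliases = {}        # alias name -> version number
        self._next = 1

    def update_code(self, code):
        """Change the $LATEST working copy; published versions are untouched."""
        self.latest = code

    def publish_version(self):
        """Snapshot $LATEST as a new immutable numbered version."""
        num = self._next
        self._next += 1
        self.versions[num] = self.latest
        return num

    def point_alias(self, name, version):
        """Create or move an alias (e.g. 'prod') to an existing version."""
        if version not in self.versions:
            raise ValueError(f"no such version: {version}")
        self.aliases[name] = version

    def resolve(self, name):
        """Return the code the alias currently serves."""
        return self.versions[self.aliases[name]]
```

The point of the S3 example in the transcript is exactly the `point_alias` step: the bucket notification references the alias ARN once, and promoting or rolling back a release is just repointing the alias.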

Misreading Chat
#98: Dynamo: Amazon's Highly Available Key-value Store

Misreading Chat

Play Episode Listen Later Sep 14, 2022 39:20


Morita read about the ancestor of DynamoDB that Amazon built back in the day.

Misreading Chat
#96: Amazon DynamoDB: A Scalable, Predictably Performant, and Fully Managed NoSQL Database Service

Misreading Chat

Play Episode Listen Later Jul 29, 2022 47:48


Morita looked through the paper on DynamoDB, a popular AWS service.

DevZen Podcast
NoSQL Didn't Stand Still — Episode 0390

DevZen Podcast

Play Episode Listen Later Jul 23, 2022 135:33


In this episode: what we learned over the week; a review of another video: Index Concurrency Control; the subtleties of pg_stat_activity; a detailed breakdown of the Amazon DynamoDB paper; listener topics and questions. Show notes: [00:02:42] What we learned this week GitHub — petere/plxslt: XSLT procedural language for PostgreSQL https://twitter.com/JI/status/1546948817462800384 PostgreSQL: How to inherit search_path from template https://twitter.com/marcan42/status/1549672494210113536 [00:15:25] 08 — Index Concurrency Control… Read more →

AWS - Il podcast in italiano
Smeup: how to modernize mainframes by way of serverless functions (guest: Mauro Sanfilippo)

AWS - Il podcast in italiano

Play Episode Listen Later Jun 13, 2022 32:25


Why is it important to adopt the right approach when tackling a modernization project? What approaches exist, and where can you start in order to see results right away? How do you modernize the whole system end-to-end, from the business logic and integrations all the way to the CI/CD systems? In this episode I host Mauro Sanfilippo, CTO of smeup, to talk about how needs have changed (or stayed the same) over the past few decades, and to describe an innovative modernization approach for workloads based on programming languages like RPG (similar to COBOL), using serverless services such as AWS Lambda and Amazon DynamoDB. Links: smeup.

Outspoken with Shana Cosgrove
For the Back of the Room: Gerard Spivey, Senior Systems Development Engineer at Amazon Web Services.

Outspoken with Shana Cosgrove

Play Episode Listen Later Jun 7, 2022 55:55


Curiosity, Focus, and Forging a Path. In this episode of The Outspoken Podcast, host Shana Cosgrove talks to Gerard Spivey, Senior Systems Development Engineer at Amazon Web Services. Gerard speaks in detail about Amazon's interview process, giving us insight into their procedures and how he prepared himself. We also hear about Gerard's time at Amazon and the types of work he's taking on. Side hustles are a way of life for Gerard, and he speaks about his latest experiences managing his YouTube channel, Gerard's Curious Tech. Lastly, Gerard talks about his time at NYLA and how he was able to bring his full self to work thanks to NYLA's culture.

QUOTES
"I can do slow and steady, I can find my target audience, and then once I have that I can figure out what I want to parlay that into later." - Gerard Spivey [25:59]
"'I'm a Senior Director [at Intel], and I can do what I want' is basically what he told me. He's like 'the company has a 3.0 thing, but for someone like you who actually knows what they're talking about it's not a problem.' So I said, 'Ooh this is my time, they're letting me in'" - Gerard Spivey [42:07]
"You're in a good spot in your career when you're valued for the thing you're going to do next versus the thing you did previously. What you're going to do next is your competitive value - that is what you bring to the table." - Gerard Spivey [48:27]

TIMESTAMPS
[00:04] Intro
[01:31] Gerard's Wedding Ceremony
[02:32] Working at Amazon Web Services (AWS)
[05:33] Amazon's Interview Process
[12:06] Gerard's Experience with the Job Market
[15:54] Working at Amazon
[19:11] Starting a New Job During COVID
[19:43] Side Hustles
[23:21] Gerard's YouTube Channel
[31:08] Gerard's Childhood
[31:52] How Gerard Decided to Study Electrical Engineering
[34:19] Choosing a College
[45:13] Gerard's Advice to his Younger Self
[47:42] Favorite Books
[50:57] Gerard's Time at NYLA
[55:36] Outro

RESOURCES
https://aws.amazon.com/ec2/ (Amazon EC2)
https://aws.amazon.com/ec2/instance-types/ (Amazon EC2 Instance Types)
https://aws.amazon.com/dynamodb/ (Amazon DynamoDB)
https://sre.google/ (Site Reliability Engineering (SRE))
https://www.c2stechs.com/ (Commercial Cloud Services (C2S))
https://www.thebalancecareers.com/what-is-the-star-interview-response-technique-2061629 (STAR Interview Response Method)
https://www.microsoft.com/en-us/microsoft-365/exchange/email (Microsoft Exchange)
https://azure.microsoft.com/en-us/ (Microsoft Azure)
https://www.synopsys.com/glossary/what-is-cicd.html (CI/CD)
https://mlt.org/ (Management Leadership for Tomorrow (MLT))
https://www.hbs.edu/ (Harvard Business School)
https://a16z.com/ (Andreessen Horowitz)
https://www.youtube.com/ (YouTube)
https://www.nsbe.org/K-12/Programs/PCI-Programs (NSBE Pre-College Initiative Program)
https://www.jhu.edu/ (Johns Hopkins University)
https://www.abet.org/ (Accreditation Board for Engineering and Technology (ABET))
https://www.ncat.edu/ (North Carolina A&T State University)
https://www.morgan.edu/ (Morgan State University)
https://howard.edu/ (Howard University)
https://www.rit.edu/ (Rochester Institute of Technology)
https://www.psu.edu/ (Penn State University)
https://www.digitaltechnologieshub.edu.au/teach-and-assess/classroom-resources/topics/digital-systems/ (Digital Systems)
https://www.xilinx.com/products/silicon-devices/fpga/what-is-an-fpga.html (Field Programmable Gate Arrays (FPGAs))
https://www.gwu.edu/ (The George Washington University)
https://www.intel.com/content/www/us/en/homepage.html (Intel)
https://www.pcmag.com/encyclopedia/term/pci-express (PCI Express)
https://www.intel.com/content/www/us/en/io/serial-ata/serial-ata-developer.html (Serial ATA (SATA))
https://consortium.org/ (Consortium of Universities of the Washington Metropolitan Area)
https://www.amazon.com/Zero-One-Notes-Startups-Future/dp/0804139296 (Zero to One) by Peter Thiel and Blake Masters
https://www.richdad.com/...

Screaming in the Cloud
Data Analytics in Real Time with Venkat Venkataramani

Screaming in the Cloud

Play Episode Listen Later Apr 27, 2022 38:41


About Venkat

Venkat Venkataramani is CEO and co-founder of Rockset. In his role, Venkat helps organizations build, grow and compete with data by making real-time analytics accessible to developers and data teams everywhere. Prior to founding Rockset in 2016, he was an Engineering Director for the Facebook infrastructure team that managed online data services for 1.5 billion users. These systems scaled 1000x during Venkat's eight years at Facebook, serving five billion queries per second at single-digit millisecond latency and five 9's of reliability. Venkat and his team also created and contributed to many noted data technologies and open-source projects, including Facebook's TAO distributed data store, RocksDB, Memcached, MySQL, MongoRocks, and others. Prior to Facebook, Venkat worked on tools to make the Oracle database easier to manage. He has a master's in computer science from the University of Wisconsin-Madison, and a bachelor's in computer science from the National Institute of Technology, Tiruchirappalli.

Links Referenced:
Company website: https://rockset.com
Company blog: https://rockset.com/blog

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored by our friends at Revelo. Revelo is the Spanish word of the day, and it's spelled R-E-V-E-L-O. It means "I reveal." Now, have you tried to hire an engineer lately? I assure you it is significantly harder than it sounds. One of the things that Revelo has recognized is something I've been talking about for a while, specifically that while talent is evenly distributed, opportunity is absolutely not. They're exposing a new talent pool to, basically, those of us without a presence in Latin America via their platform. It's the largest tech talent marketplace in Latin America with over a million engineers in their network, which includes—but isn't limited to—talent in Mexico, Costa Rica, Brazil, and Argentina. Now, not only do they wind up screening all of their talent on English ability, as well as, you know, their engineering skills, but they go significantly beyond that. Some of the folks on their platform are hands down the most talented engineers that I've ever spoken to. Let's also not forget that Latin America has high time zone overlap with what we have here in the United States, so you can hire full-time remote engineers who share most of the workday with your team. It's an end-to-end talent service, so you can find and hire engineers in Central and South America without having to worry about, frankly, the colossal pain of cross-border payroll and benefits and compliance, because Revelo handles all of it. If you're hiring engineers, check out revelo.io/screaming to get 20% off your first three months. That's R-E-V-E-L-O dot I-O slash screaming.

Corey: This episode is sponsored in part by LaunchDarkly. Take a look at what it takes to get your code into production. I'm going to just guess that it's awful because it's always awful. No one loves their deployment process. What if launching new features didn't require you to do a full-on code and possibly infrastructure deploy? What if you could test on a small subset of users and then roll it back immediately if results aren't what you expect? LaunchDarkly does exactly this. To learn more, visit launchdarkly.com and tell them Corey sent you, and watch for the wince.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's promoted guest episode is one of those questions I really like to ask because it can often come across as incredibly, well, direct, which is one of the things I love doing. In this case, the question that I am asking is, when you look around at the list of colossal blunders that people make in the course of careers in technology and the rest, one of the most common is, "Oh, yeah. I don't like the way that this thing works, so I'm going to build my own database." That is the siren call to engineers, and it is often the prelude to horrifying disasters. Today, my guest is Venkat Venkataramani, co-founder and CEO at Rockset. Venkat, thank you for joining me.

Venkat: Thanks for having me, Corey. It's a pleasure to be here.

Corey: So, it is easy for me to sit here in my beautiful ivory tower that is crumbling down around me and use my favorite slash the best database imaginable, which is TXT records shoved into Route 53. Now, there are certainly better databases than that for most use cases. Almost anything, really, to be honest with you, because that is a terrifying pattern; good joke, terrible practice. What is Rockset as we look at the broad landscape of things that store data?

Venkat: Rockset is a real-time analytics platform built for the cloud. Let me break that down a little bit, right? I think it's a very good question when you say, does the world really need another database? Don't we have enough already? SQL databases, NoSQL databases, warehouses, and lakehouses now. So, if you really break it down, the first digital transformation that happened in the '80s was when people actually retired pen and paper records and started using a relational database to actually manage their business records and what have you instead of ledgers and books and what have you. And that was the first digital transformation. That was—and Oracle called the rows in a table 'records' for a reason. They're called records to this date.

And then, you know, 20 years later, when all businesses were doing system of record and transactions and transactional databases, then analytics was born, right? This was, like, the whole reason why I wanted to make better data-driven business decisions, and BI was born; warehouses and data lakes started becoming more and more mainstream. And there was really a second category of database management systems, because the first category was very good at being a system of record, but not really good at the complex analytics that businesses are asking for to be able to guide their decisions. Fast-forward 20 years from then, the nature of applications is changing. The world is going from batch to real-time, your data never stops coming, advent of Apache Kafka and technologies like that, 5G, IoTs, data is coming from all sorts of nooks and corners within an enterprise, and now customers in enterprises are acquiring the data in real-time at a scale that the world has never seen before.

Now, how do you get analytics out of that? And then if you look at the database market—the entire market—there are still only two large categories of databases: OLTP databases for transaction processing, and warehouses and data lakes for batch analytics. Now suddenly, you need the speed of OLTP at the scale of batch, right, in terms of, like, complexity of compute, complexity of storage. So, that is really why we thought the data management space needs that third leg, and we call it a real-time analytics platform or real-time analytics processing. And this is where the data never stops coming; the queries never stop coming. You need the speed and the scale, and it's about time we innovate and solve the problem well, because in 2015, 2016, when I was researching for this, every company that was looking to build applications that were real-time applications was building a custom Rube Goldberg machine of sorts. And it was insanely complex, it was insanely expensive.

Fast-forward now, you can build a real-time application in a matter of hours with the simplicity of the cloud using Rockset.

Corey: There's a lot to be said about the way we used to do things after the first transformation, when we got into the world of batch processing, where—in the days of punch cards, which was a bit before my time and I believe yours as well—they would drop them off and then the next day, or two days later, they would come back after the run and get the results, only to find a syntax error because you put the wrong card first or something like that. And it was maddening. In time, that got better, but still, nightly runs have become a thing to the point where even now, by default, if you look at the typical timing of a default Linux install, for example, you see that the middle of the night is when a bunch of things will rotate and various cleanup jobs get done, et cetera, et cetera. And that seemed like a weird direction to go in. One of the most famous Google April Fools' Day jokes was when they put out their white paper on MapReduce. And then Yahoo fell for it hook, line, and sinker, built out Hadoop, and we've been stuck with this idea of performing these big query jobs on top of existing giant piles of data, where ideally, you can measure it with a wall clock; in practice, you often measure it with the calendar in some cases. And as the world continues to evolve, being able to do streaming processing and understand in real-time what is going on is unlocking different approaches, at least by all accounts. Do you have an example you can give me of a problem that real-time analytics solves for a customer? Because I can sit here and talk all day about how things might theoretically work, but I have to get out of my Route 53-based ivory tower over here. What are customers seeing?

Venkat: That's a great question. And I want to one hundred percent agree. I think Google did build MapReduce, and I think it's a very nice continuation of what happened there and what is happening in the world now. They built MapReduce, and they quickly realized re-indexing the whole world [laugh] every night, as the size of the internet is exploding, is a bad idea. And you know how Google indexes now? They do real-time indexing.

That is how they index the wor—you know, web. And they look for the changes that are happening in the internet, and they only index the changes. And that is exactly the same principle behind—one of the core principles behind—Rockset's real-time analytics platform. So, what is the customer story? So, let me give you one of my favorite ones.

So, the world's number one or number two buy now, pay later company: they have hundreds of millions of users, they have 300,000-plus merchants, they operate in, like, maybe 100-plus countries, so many different payment methods—you can imagine the complexity. At any given point in time, some part of the product is broken: well, Apple Pay stopped working in Switzerland for this e-commerce merchant. Oh God, like, we got to first detect that. Forget even debugging and figuring out what happened and having an incident response team. So, what did they do as they scaled the number of payments processed in the system across the world—it's, like, in millions; first, it was millions in a day, and then there were millions in an hour—so like everybody else, they built a batch-based system.

So, they would accumulate all these payment records, and every six hours—so initially, it was a day, and then afterwards, you know, you try to see how far you can push it, and they couldn't push it beyond every six hours—some batch job would come and process through all the payments that happened, with some statistical models to detect, hey, here are some of the things that you might want to double-click and follow up on.

And as they were scaling, the batch job that they would kick off every six hours was starting to take more than six hours. So, you can see how the story goes. Now, fast-forward, they came to us and—it's almost like Rockset has, like, a big red button that says, "Real-time this."—and they're kind of like, "Can you make this real-time? Because not only are we losing millions of potential revenue dollars in a year because something stops working and we're not processing payments, and we don't find out about that until, like, three hours later, five hours later, six hours later, but our merchants are also very unhappy. We are also not able to protect our customers' business, because that is all we are about." And so fast-forward, they use Rockset, and simply using SQL, all the metrics and statistical computation that they want to do now happens in real-time, accurate up to the second. All of their anomaly detectors run every minute, and the anomaly detectors take, like, hundreds of milliseconds to run.

And so, now they've cut down the business observability, I would say. It's not metrics and machine observability—they now have business observability in real-time. And that not only saves them a lot of potential revenue loss from downtimes, it also allows them to build a better product and give their customers a better experience, because they are now telling their merchants and their customers that something is not working in some part of their e-commerce footprint before even the customers notice that something is wrong. And that allows them to build a better product and a better customer experience than their competitors.
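The episode doesn't describe the statistical models in any detail, but a minimal sketch of the kind of per-minute check such a pipeline might run is a trailing z-score over recent payment counts. The window size, threshold, and record shape here are all illustrative assumptions, not anything from the episode:

```python
from collections import deque
import math

def zscore_anomalies(series, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the trailing window's mean.

    `series` might be per-minute counts of successful payments for one
    (merchant, payment method, country) combination; a sudden crater in
    the count is the "Apple Pay stopped working in Switzerland" signal.
    """
    recent = deque(maxlen=window)  # trailing window of past values
    flagged = []
    for i, x in enumerate(series):
        if len(recent) == window:  # only score once the window is full
            mean = sum(recent) / window
            var = sum((v - mean) ** 2 for v in recent) / window
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > threshold:
                flagged.append(i)
        recent.append(x)
    return flagged
```

The same check works identically whether it runs every six hours over a huge backlog or every minute over the latest counts; the batch-to-real-time shift in the story is about how fresh the inputs are, not about the statistics.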
So, this is a very real-world example of why companies and enterprises are moving from batch to real-time.

Corey: The stories that you—and frankly, a lot of other data analytics companies—tend to fall back on all the time are stories like the ones you're telling, where you're talking about the largest buy now, pay later lender, for example. These are companies operating at massive scale who have tremendous existing transaction volume, and they're built out already. That's great, but then I wanted to try to cut to the truth of some of these things. And when I visit your pricing page at Rockset, it doesn't have what I would expect if that were the only use case. And what that would be is, "Great. Call here to conta—open up a sales quote, and we'll talk to you, et cetera, et cetera, et cetera." And the answer then is, "Okay, I know it's going to have at least two commas in it, ideally not three, but okay, great." Instead, you have a free tier where it's, "Hey, we'll give you a pile of credits, here are some limits on our free account, et cetera, et cetera." Great. That is awesome. So, it tells me that there is a use case here for folks who have not already, on some level, made a good show of starting the process of conquering the world.

Rather, someone with an idea some evening at two in the morning can wind up diving in and getting started. What is the Twitter for Pets, in-my-garage, spare-time side project story for using something like Rockset? What problem will I have as I wind up building those things out, when I don't have any user traffic or data yet, but I want to, you know, for once in my life, do the smart thing in advance rather than building an impressive tower of technical debt?

Venkat: That is the first thing we built, by the way. When we finished our product, the first thing we built was self-service. The first thing we built was a free forever tier, which has certain limits because somebody has to pay the bill, right? And then we also have compute instances that are very, very affordable, that cost you, like, approximately $1 a day. And so, we built all of that because real-time analytics is not a need that only, like, the large-scale companies have. And I'll give you a very, very simple example.

Let's say you're building a game—it's a mobile game. You can use Amazon DynamoDB and use AWS Lambdas and have a serverless stack, and, like, you're really only paying… you're kind of keeping your footprint very, very small, and you're able to build a very lively game and see if it gets [wider 00:12:16], and it's growing. And once it grows, you can have all the big-company scaling problems. But in the early days, you're just getting started. Now, if you think about DynamoDB and Lambdas and whatnot, you can build almost every part of the game except probably the leaderboard.

So, how do I build a leaderboard when thousands of people are playing and all of their individual gameplays and scores and everything is just another simple record in DynamoDB? It's all serverless. But DynamoDB doesn't give me a SQL SELECT *, order by score, limit 100, distinct by the same player. No, this is an analytical question, and it has to be updated in real-time; otherwise, you really don't have this thing where I just finished playing, I go to the leaderboard, and within a second or two, if it doesn't update, you kind of lose people along the way.
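The leaderboard query Venkat quotes ("SELECT *, order by score, limit 100, distinct by the same player") can be approximated in plain Python to show what the analytical side has to compute. The `(player_id, score)` record shape is an assumption for illustration, standing in for items scanned from a DynamoDB table:

```python
def leaderboard(plays, top_n=100):
    """Best score per player, highest first -- a plain-Python stand-in
    for the SQL query described in the episode.

    `plays` is an iterable of (player_id, score) records; each gameplay
    is one record, so the same player can appear many times.
    """
    best = {}
    for player, score in plays:
        # "distinct by the same player": keep only each player's best score.
        if player not in best or score > best[player]:
            best[player] = score
    # "order by score ... limit 100": rank descending and truncate.
    ranked = sorted(best.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_n]
```

Re-running this over every gameplay record on each page load is exactly the compute-heavy scan a key-value store isn't built to serve, which is why the episode argues for pairing the system of record with an analytics engine that keeps such a ranking fresh.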
So, this is one of actually a very popular use case, when the scale is much smaller, which is, like, Rockset augments NoSQL database like a Dynamo or a Mongo where you can continue to use that for—or even a Postgres or MySQL for that case where you can use that as your system of record and keep it small, but cover all of your compute-heavy and analytical parts of your application with Rockset.So, it's almost like kind of a CQRS pattern where you use your OLTP database as your system of record, you connect Rockset to it, and so—Rockset comes in with built-in connectors, by the way, so you don't have to write a single line of code for your inserts and updates and deletes in your transactional database to get reflected in Rockset within one to two seconds. And so now, all of a sudden you have a fully indexed, fast SQL replica of your transactional database that on which you can do all sorts of analytical queries and that's fully isolated with your transactional database. So, this is the pattern that I'm talking about. The mobile leaderboard is an example of that pattern where it comes in very handy. But you can imagine almost everybody building some kind of an application has certain parts of it that is very analytical in nature. And by augmenting your transactional database with Rockset, you can have your cake and eat it too.Corey: One of the challenges I think that at least I've run into when it comes to working with data—and let's be clear, I tend to deal with data in relatively small volumes, mostly. The stuff that's significantly large, like, oh, I don't know, AWS bills from large organizations, the format of those is mostly predefined. 
When I'm building something out, we're using, I don't know, DynamoDB or being dangerous with SQLite or whatnot, invariably I find that even at small-scale, I paint myself into a corner by data model design or how I wind up structuring access or the rest, and the thing that I'm doing that makes perfect sense today winds up being incredibly challenging to change later. And I still, in production and have a DynamoDB table that has the word ‘test' in its name because of course I do.It's not a great place to find yourself in some cases. And I'm curious as to what you've seen, as you've been building this out and watching customers, especially ones who already had significant datasets as they move to you. Do you have any guidance around how to avoid falling down that particular well?Venkat: I will say a lot of the complexity in this world is by solving the right problems using the wrong tool, or by solving the right problem on the wrong part of the stack. I'll unpack this a little bit, right? So, when your patterns change, your application is getting more complex, it is demanding more things, that doesn't necessarily mean the first part of the application you build—and let's say DynamoDB was your solution for that—was the wrong choice. That is the right choice, but now you're expanded the scope of your application and the demand that you have on your backend transactional database. And now you have to ask the question, now in the expanded scope, which ones are still more of the same category of things on why I chose Dynamo and which ones are actually not at all?And so, instead of going and abusing the GSIs and other really complex and expensive indexing options and whatnot, that Dynamo, you know, has built, and has all sorts of limitations, instead of that, what do I really need and what is the best tool for the job, right? What is the best system for that? And how do I augment? And how do I manage these things? 
And this goes to the first thing I said, which is, like, this tremendous complexity when you start to build a Rube Goldberg machine of sorts. Okay, now I'm going to start making changes to Dynamo. Oh, God, like, how do I pick up all of those things and not miss a single record? Now, replicate that to a second system that is going to be search-centric or reporting-centric, and do I have to rethink this once in a while? Do I have to build and manage these pipelines? And suddenly, instead of going from one system to two systems, you actually end up going from one system to, like, four different things with all the pipes and tubes going into the middle.

And so, this is what we really observed. And so, when you come to Rockset and you point us at your DynamoDB table, you don't write a single line of code, and Rockset will automatically scan your Dynamo tables, move that into Rockset, and in real time your changes—inserts, updates, deletes to Dynamo—will be reflected in Rockset. And this is all using the Dynamo Streams API, the Dynamo Scan API, and whatnot, behind the scenes. And this just gives you an example: if you use the right tool for the job here, when suddenly your application is demanding analytical queries on Dynamo, and you do the right research and find the right tool, your complexity doesn't explode at all, and you can still, again, continue to use Dynamo for what it is very, very good at while augmenting it with a system built for analytics, with full-featured SQL and other capabilities that I can talk about, for the parts of your application for which Dynamo is not a good fit. And so, if you use the right tool for the job, you should be in a very good place.

The other part is about the wrong part of the stack. I'll give a very kind of naive example, and then maybe you can extrapolate that to, like, other patterns—you know, accidental complexity is the worst.
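The stream-based sync described above can be sketched in miniature. This is an illustrative stand-in only, not Rockset's connector code: a real pipeline would poll the DynamoDB Streams API (DescribeStream / GetShardIterator / GetRecords) and write into an indexed store, whereas this sketch just applies DynamoDB-Streams-shaped change records to a plain in-memory replica.

```python
# Toy change-data-capture: apply DynamoDB-Streams-shaped change records
# to an in-memory read replica. Illustrative only -- a real pipeline
# would poll the Streams API and write into an indexed analytics store
# rather than a dict.

def decode(av):
    """Decode a DynamoDB attribute value ({'S': ...}, {'N': ...}, ...) to a plain value."""
    if "S" in av:
        return av["S"]
    if "N" in av:
        return float(av["N"])
    if "BOOL" in av:
        return av["BOOL"]
    if "NULL" in av:
        return None
    if "M" in av:
        return {k: decode(v) for k, v in av["M"].items()}
    if "L" in av:
        return [decode(v) for v in av["L"]]
    return av  # rarer types (binary, sets) left as-is


def apply_change(replica, record):
    """Apply one INSERT / MODIFY / REMOVE stream event to the replica dict."""
    ddb = record["dynamodb"]
    key = tuple(sorted((k, decode(v)) for k, v in ddb["Keys"].items()))
    if record["eventName"] == "REMOVE":
        replica.pop(key, None)
    else:  # INSERT and MODIFY both carry the full new image
        replica[key] = {k: decode(v) for k, v in ddb["NewImage"].items()}
    return replica
```

Feeding every event through `apply_change`, in order per key, keeps the replica trailing the source table by only the stream's propagation delay — the "fully indexed replica" pattern from the conversation, minus the indexing.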
So, let's just say you need to implement access control on your data. Let's say the best place to implement access control happens to be at the database level—that just happens to be the right thing. But this database that I picked doesn't really have role-based access control or what have you; it doesn't really give me all the security features to be able to protect the data the way I want.

So, then what I'm going to do is go look at all the places that have business logic and query the database, and I'm going to put in a whole bunch of permission management and roles and privileges, and you can just see how that will be so error-prone, so hard to maintain, and impossible to scale. And this is the worst form of accidental complexity, because if you just looked at it for that one week or two weeks—how do I get something out, the database I picked doesn't have it—then after those two weeks, you feel like you made some progress by, kind of like, putting some duct-tape if-conditions on all the access paths. But now, [laugh] you've just painted yourself into a really, really bad corner.

And so, this is another variation of the same problem, where you end up solving the right problems in the wrong part of the stack, and that just introduces a tremendous amount of accidental complexity. And so, I think, yeah, both of these are common pitfalls that people fall into. I think it's easy to avoid them. I would say there's so much research, there's so much content, and if you know how to search for these things, they're available on the internet. It's a beautiful place. [laugh]. But I guess you have to know how to search for these things. But in my experience, these are the two common pitfalls a lot of people fall into and paint themselves into a corner.

Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured, and fully managed, with built-in access via key-value, SQL, and full-text search.
Flexible JSON documents aligned to your applications and workloads. Build faster with blazing-fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing.

Corey: A question I have, though, as an extension of this—and I want to give some flavor to it—is: why is there a market for real-time analytics? And what I mean by that is, early on in my tenure of fixing horrifying AWS bills, I saw a giant pile of money being hurled at what was effectively a MapReduce cluster on Elastic MapReduce. Great. Okay, well, stream processing is kind of a thing; what about migrating to that? Well, that was a complete non-starter because it wasn't just the job running on those things; there were downstream jobs, with their own downstream jobs. There were thousands of business processes tied to that thing.

And similarly, with the idea of real-time analytics: we don't have any use for that because, oh I don't know, I only wind up pulling these reports on a once-a-week basis, and that's fine, so what do I need that updated for in real time if I'm looking at them once a week? In practice, the answer is often something aligned with, “Well, yeah, but if you had a real-time updating dashboard, you would find that more useful than those reports.” But people's expectations and business processes have shaped themselves around constraints that can now be removed, so how do you get them to see that? How do you get them to buy in on that? And then how do you untangle that enormous pile of previous constraints into something that leverages the technology that's now available, for a brighter future?

Venkat: I think [unintelligible 00:21:40] a really good question: who are the people moving to real-time analytics?
What do they see? And why can't they do it with other tech? Like, you know, as you say… EMR, you know, it's just MapReduce; can't I just run it, sort of, every twenty-four hours, every six hours, every hour? How about every five minutes? It doesn't work that way.

Corey: How about I spin up a whole bunch of parallel clusters on different timescales so I constantly—

Venkat: [laugh].

Corey: Have a new report coming in. It's real-time, except—

Venkat: Exactly.

Corey: You're constantly putting out new ones, but they're just six hours delayed every time.

Venkat: Exactly. So, you don't really want to do this. And so, let me unpack it one at a time, right? I mean, we talked about a very good example of a business team building business observability at the buy now, pay later company. That's a very clear value prop on why they want to go from batch to real-time, because it saves their company tremendous losses—potential losses—and also allows them to build a better product.

So, it could be a marketing operations team looking to get more real-time observability to see what campaigns are working well today, and how do I double down and make sure my ad budget for the day is put to good use? I don't have to mention security operations, you know, needing real-time. Don't tell me I got owned three days ago. Tell me—[laugh] somebody is, you know, breaking glass and might be, you know, entering into your house right now. And tell me then and not three days later, you know—

Corey: “Yeah, what alert system do you have for security intrusion?” “So, I read the front page of The New York Times every morning and wait to see my company's name.” Yeah, there probably are better ways to reduce that cycle time.

Venkat: Exactly, right. And so, that is really the need, right?
Like, I think more and more business teams are saying, “I need operational intelligence and not business intelligence.” Don't make me play Monday-morning quarterback.

My favorite analogy is: it's the middle of the third quarter. I'm six points down. A couple of star players in my team and my opponent's team are injured—some on offense, some on defense. What plays do I call, and how do I play the game slightly differently, to change the outcome and win this game as opposed to losing by six points? So, that I think is really what is driving businesses.

You know, I want to be more agile, I want to be more nimble, and take data-driven decision-making to another level. So that, I think, is the real force in play. So, now the real question is, why can't they do it already? Because if you go ask a hundred people, “Do you want fast analytics on real-time data or slow analytics on stale data?” how many people are going to say give me slow and stale? Zero, right? Exactly zero people.

So, but then why hasn't it happened yet? I think it goes back to this: the world has only seen two kinds of databases. Transaction processing systems—built for system of record, don't-lose-my-data kind of systems—and then batch analytics, you know, all these warehouses and data lakes. And so, in real-time analytics use cases, the data never stops coming, so you actually need a system that is running 24/7. And then what happens is, as soon as you build a real-time dashboard, like this example that you gave—I just want all of these dashboards to automatically update all the time—immediately people respond, “But I'm not going to be like Clockwork Orange, you know, toothpicks in my eyelids, staring at this 24/7.
Can you do something to alert or detect some anomalies and tap me on the shoulder when something off is going on?”

And so, now what happens is somebody—a program more than a person—is actively monitoring all of these metrics and graphs and doing some analysis, and only bringing it to your attention when it really needs to because something is off, right? So, then suddenly you went from accumulate-all-the-data-and-run-a-batch-report to [unintelligible 00:25:16]: the data never stops coming, the queries never stop coming, I never stop asking questions; it's just a programmatic way of asking those things. And at that point, you have a data app. This is not an analytics dashboard or report anymore. You have a full-fledged application.

In fact, that application is harder to build and scale than any application you've ever built before [laugh] because in those earlier situations you don't have this torrent of data coming in all the time and complex analytical questions being asked of the data 24/7, you know? And so, that I think is really why a real-time analytics platform has to be built as almost a third leg. So, this is what we call data apps, which is when your data never stops coming and your queries never stop coming. And this is really, I think, what is behind all the expensive EMR clusters, the misuse of your warehouse, the misuse of your data lakes. At the end of the day, this is what I think is blowing up your Snowflake bills, what is blowing up your warehouse bills: you somehow accidentally used the wrong tool for the job [laugh], going back to the pitfall we just talked about.

You accidentally say, “Oh, God, like, I just need some real-time.” With enough thrust, pigs can fly. Is that a good idea? Probably not, right? And so, I don't want to be building a data app on my warehouse just because I can.
You should probably use the best tool for the job, and really use something that was built from the ground up for it.

And I'll give you one technical insight about how real-time analytics platforms are different from warehouses.

Corey: Please. I'm here for this.

Venkat: Yes. So really, if you think about warehouses and data lakes, I call them storage-optimized systems. I've been building databases all my life, so if I really have to build a database for batch analytics, you just break down all of your expenses in terms of, let's say, compute and storage. What I'm burning 24/7 is storage. Compute comes and goes: when I'm doing a batch data load, or when an analyst logs in and tries to run some queries.

But what I'm actually burning 24/7 is storage, so I want to compress the heck out of the data, and I want to store it on very cheap media. I want to make the storage as cheap as possible, so I optimize the heck out of the storage use. And I want to make computation on that possible, but not necessarily efficient. I can shuffle things around and make the analysis possible, but I'm not trying to be compute-efficient. And we just talked about how, as soon as you get into real-time analytics, you very quickly get into the data app business. You're not building a real-time dashboard anymore; you're actually building your application.

So, as soon as you get into that, what happens is you start burning both storage and compute 24/7. And we all know, relatively, [laugh] compute and RAM are about a hundred to a thousand times more expensive than storage in the grand scheme of things. And so, if you actually go and look at your Snowflake bill, if you go look at your warehouse bill—BigQuery, no matter what—I bet the computational part of it is about 90 to 95% of the bill, and not the storage.
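The storage-versus-compute framing can be made concrete with some toy arithmetic. Every number below is invented for illustration (chosen so that compute dominates once it runs 24/7, echoing the "90 to 95%" observation); it is not any vendor's actual pricing.

```python
# Toy cost model: why always-on compute dominates a warehouse bill.
# All prices are made-up illustrative numbers, not real pricing.

STORAGE_PER_TB_MONTH = 25.0    # storage: cheap, burned 24/7 either way
COMPUTE_PER_NODE_HOUR = 3.0    # compute/RAM: far pricier per unit

def monthly_bill(tb_stored, nodes, hours_on_per_day):
    """Return (storage_cost, compute_cost) for one month."""
    storage = tb_stored * STORAGE_PER_TB_MONTH
    compute = nodes * COMPUTE_PER_NODE_HOUR * hours_on_per_day * 30
    return storage, compute

# Batch analytics: the cluster runs ~2 hours a day for nightly loads and reports.
batch_storage, batch_compute = monthly_bill(tb_stored=50, nodes=4, hours_on_per_day=2)

# Data app: the queries never stop, so compute runs 24/7.
rt_storage, rt_compute = monthly_bill(tb_stored=50, nodes=4, hours_on_per_day=24)

print(f"batch:     {batch_compute / (batch_storage + batch_compute):.0%} of bill is compute")
print(f"real-time: {rt_compute / (rt_storage + rt_compute):.0%} of bill is compute")
```

With these invented numbers, compute is roughly a third of the batch bill but close to 90% of the always-on bill, even though nothing changed except how many hours a day the compute stays up.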
And then, if you again break down, okay, who's spending all the compute, you'll very quickly narrow it down to all these real-time-y and data-app-y use cases where you can never turn off the compute on your warehouse or your BigQuery, and those are the ones that are blowing up your costs and complexity. On the Rockset side, we are actually not storage-optimized; we're compute-optimized.

So, we index all the data as it comes in. And so, the storage actually goes slightly higher because, you know, we store the data and also the indexes of that data automatically, but we usually cut the computational cost to a quarter of what a typical warehouse needs. So, the TCO for our customers goes down two- to four-fold, you know? It goes down to half or even a quarter of what they used to spend. Even though their storage cost goes up, in net that is a very, very small fraction of their spend.

And so really, I think, good real-time analytics platforms are all compute-optimized and not storage-optimized, and that is what allows them to be a lot more efficient at being the backend for these data applications.

Corey: As someone who spends a lot of time staring into the depths of AWS bills, I think that people also lose sight of the reality that it doesn't matter what you're spending on AWS; it invariably pales in comparison to what you're spending on people to work with these things. The reason to go to cloud is not because it is the cheapest possible way to get computers to do things; it's because it's a capability story. It's about unlocking capacity and capabilities you do not have otherwise. And that dramatically increases your feature velocity and lets you achieve things faster, sooner, with better results. And unlocking a capability is always going to be more interesting to a company than saving money on it. When a company cares first, last, and always about just save money, make the bill lower, the end—it's usually a company in decline.
Or alternately, something very strange is going on over there.

Venkat: I agree with that. One of our favorite customers told us that Rockset took their six-month roadmap and shrunk it to a single afternoon. They're a supply-chain SaaS backend for heavy construction—80% of the concrete being delivered and tracked in North America flows through their platform—and Rockset powers all of their real-time analytics and reporting. And before Rockset, what did they have? They had built a beautiful serverless stack using DynamoDB, even AWS Lambdas and what have you.

And why did they have to go all serverless? Because the entire team was two people. [laugh]. And maybe a third person they'll get once in a while, so 2.5. Brilliant people—like, you know, really pioneers of building an entire data stack on AWS in a serverless fashion; no pipes, no ETL.

And then they were like, oh God, finally, I have to do something, because my business demands it and my customers are demanding real-time reporting on all of these concrete trucks and aggregate trucks delivering stuff. And real-time reporting is the name of the game for them, so how do I power this? Do I have to build a whole bunch of pipes, deliver it to, like, some Elasticsearch or some kind of cluster that I have to keep up in real time? This will take me a couple of months, that will take me a couple of months. They came into Rockset on a Thursday, built their MVP over the weekend, and they had the first working version of their product the following Tuesday.

So—and then, you know, there was no turning back at that point. Not a single line of code was written: you just go and create an account with Rockset, point us at your Dynamo, and then off you go. You know, you can start using SQL and go start building your real-time application. So again, I think that's the tremendous value—I think a lot of customers like us, and a lot of customers love us.
And if you really ask them what is the one thing about Rockset that you really like, I think it'll come back to the same thing, which is: you gave me a lot of time back.

What I thought would take six months is now a week. What I thought would be three weeks, we got in a day. And that allows me to focus on my business. I want to spend more time with my stakeholders—you know, my CPO, my sales teams—and see what they need to grow our business and succeed, and not build yet another data pipeline and have data pipelines and other things coming out of my nose, you know? So, at the end of the day, the simplicity aspect of it is very, very important for real-time analytics, because, you know, we can't really realize our vision of real-time being the new default in every enterprise, wherever analytics is concerned, without making it very, very simple and accessible to everybody.

And so, that continues to be one of our core things. And I think you're absolutely right when you say the biggest expense is actually the people and the time and the energy they have to spend. And not having to stand up a huge data ops team that is building and managing all of these things is probably the number one reason why customers really, really like working with our product.

Corey: I want to thank you for taking so much time to talk me through what you're working on these days. If people want to learn more, where's the best place to find you?

Venkat: We are Rockset—I'll spell it out for your listeners: R-O-C-K-S-E-T—rockset.com. You can go there, you can start a free trial. There is a blog at rockset.com/blog that is prolific and very active. We have all sorts of stories there, from, you know, engineers talking about how they implemented certain things, to customer case studies.

So, if you're really interested in this space, that's one space to follow and watch. If you're interested in giving this a spin, you know, you can go to rockset.com and start a free trial.
If you want to talk to someone, there is, like, a ‘Request Demo' button there; you click it and one of our solutions people, or somebody that is more familiar with Rockset, will get in touch with you, and you can have a conversation with them.

Corey: Excellent. And links to that will of course go in the [show notes 00:34:20]. Thank you so much for your time today. I appreciate it.

Venkat: Thanks, Corey. It was great.

Corey: Venkat Venkataramani, co-founder and CEO at Rockset. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an insulting crappy comment that I will immediately see show up on my real-time dashboard.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

AWS Insiders
An Inside Look at The Planet-Scale Architecture of DynamoDB with Alex DeBrie, Author of "The DynamoDB Book" (Part Two)


Mar 8, 2022 · 20:07


On this episode, Alex dives deep into the intricacies of Amazon DynamoDB, a planet-scale NoSQL database service that supports key-value and document data structures. Alex is the author of "The DynamoDB Book," a comprehensive guide to data modeling with DynamoDB, and he has been recognized as an AWS Data Hero. Alex discusses the consistency and predictability in the design of DynamoDB's performance, and how to best utilize it.

"I think the most important thing to know about Dynamo, it has that primary key structure that we talked about, but every single item is going to have what's called a partition key, and what this partition key is doing is basically helping decide to which shard or node a particular item is going to go. So when a request comes into DynamoDB, the first thing it's going to hit is just this global request router that's deployed across the entire region. It handles all the tables within a particular region, and it gets that item, it pulls up the metadata for that table, it hashes the partition key value that was sent in for a particular item, and then based on that, it knows which node to go to in the fleet." — Alex DeBrie, Principal at DeBrie Advisory and Author of "The DynamoDB Book"

Time Stamps
- (01:50) DynamoDB scale cases
- (05:49) The architecture of DynamoDB that allows it to theoretically scale
- (09:37) The top three tips for DynamoDB customers
- (11:51) The value of doing the math for your application
- (13:14) What mistakes to avoid to keep your costs low
- (16:28) Examples of managing costs with DynamoDB

Sponsor
This podcast is presented by CloudFix. CloudFix automates easy, no-risk AWS cost savings, so your team can focus on changing the world.

Links
- Connect with Alex DeBrie on LinkedIn
- Connect with Rahul Subramaniam on LinkedIn
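The request-routing description in the quote above (hash the partition key, then look up which node owns that slice of the hash space) can be illustrated with a toy router. This is a simplified sketch of hash-based partitioning in general; DynamoDB's actual hash function, partition metadata, and partition-splitting behavior are internal details, and the node names here are made up.

```python
import hashlib

# Toy request router: hash the partition key, then map the hash into a
# fixed fleet of nodes. Illustrative only -- DynamoDB's real hash
# function and partition metadata are internal details.

NODES = ["node-a", "node-b", "node-c"]

def route(partition_key: str) -> str:
    """Return the node that owns this partition key's slice of the hash space."""
    digest = hashlib.sha256(partition_key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]
```

The property the quote relies on is that routing is deterministic: every request carrying the same partition key lands on the same node, so the router only needs the table's partition metadata, never the item itself.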

AWS Insiders
An Inside Look at The Planet-Scale Architecture of DynamoDB with Alex DeBrie, Author of "The DynamoDB Book" (Part One)


Feb 22, 2022 · 27:58


On this episode, Alex dives deep into the intricacies of Amazon DynamoDB, a planet-scale NoSQL database service that supports key-value and document data structures. Alex discusses the consistency and predictability in the design of DynamoDB's performance, and how to best utilize it.

"How is Dynamo different than a relational database? I'd say the biggest thing is, it's designed for consistency and predictability, especially around its performance and what that response time is going to be, and also how high it can scale up on different axes." — Alex DeBrie, Principal at DeBrie Advisory and Author of "The DynamoDB Book"

Time Stamps
- (02:08) How Alex discovered and got started with Amazon DynamoDB
- (04:34) The value of writing "The DynamoDB Book"
- (07:08) The underlying architecture and the unique qualities of Amazon DynamoDB
- (13:45) The advantages of single table design
- (17:48) Illustrating examples of single table design
- (21:45) Doing the math with Amazon DynamoDB for consistency and predictability

Sponsor
This podcast is presented by CloudFix. CloudFix automates easy, no-risk AWS cost savings, so your team can focus on changing the world.

Links
- Connect with Alex DeBrie on LinkedIn
- Connect with Rahul Subramaniam on LinkedIn

Melbourne AWS User Group
What's new in October 2021


Jan 17, 2022 · 69:58


A lot of things happened in October, and we talked about them all in early November. In this episode Arjen, Guy, and JM discuss a whole bunch of cool things that were released and may be a bit harsh on everything Microsoft.

News

Finally in Sydney
- Amazon EC2 Mac instances are now available in seven additional AWS Regions
- Amazon MemoryDB for Redis is now available in 11 additional AWS Regions

Serverless

Lambda
- AWS Lambda now supports triggering Lambda functions from an Amazon SQS queue in a different account
- AWS Lambda now supports IAM authentication for Amazon MSK as an event source

Step Functions
- Now — AWS Step Functions Supports 200 AWS Services To Enable Easier Workflow Automation | AWS News Blog
- AWS Batch adds console support for visualizing AWS Step Functions workflows

Amplify
- Announcing General Availability of Amplify Geo for AWS Amplify
- AWS Amplify for JavaScript now supports resumable file uploads for Storage

Other
- Accelerating serverless development with AWS SAM Accelerate | AWS Compute Blog

Containers
- Amazon EKS Managed Node Groups adds native support for Bottlerocket
- AWS Fargate now supports Amazon ECS Windows containers
- Announcing the general availability of cdk8s and support for Go | Containers
- Monitoring clock accuracy on AWS Fargate with Amazon ECS
- Amazon ECS Anywhere now supports GPU-based workloads
- AWS Console Mobile Application adds support for Amazon Elastic Container Service
- AWS Load Balancer Controller version 2.3 now available with support for ALB IPv6 targets
- AWS App Mesh Metric Extension is now generally available

EC2 & VPC
- New – Amazon EC2 C6i Instances Powered by the Latest Generation Intel Xeon Scalable Processors | AWS News Blog
- Amazon EC2 now supports sharing Amazon Machine Images across AWS Organizations and Organizational Units
- Amazon EC2 Hibernation adds support for Ubuntu 20.04 LTS
- Announcing Amazon EC2 Capacity Reservation Fleet, a way to easily migrate Amazon EC2 Capacity Reservations across instance types
- Amazon EC2 Auto Scaling now supports describing Auto Scaling groups using tags
- Amazon EC2 now offers Microsoft SQL Server on Microsoft Windows Server 2022 AMIs
- AWS Elastic Beanstalk supports Database Decoupling in an Elastic Beanstalk Environment
- AWS FPGA developer kit now supports Jumbo frames in virtual ethernet frameworks for Amazon EC2 F1 instances
- Amazon VPC Flow Logs now supports Apache Parquet, Hive-compatible prefixes and Hourly partitioned files
- Network Load Balancer now supports TLS 1.3
- New – Attribute-Based Instance Type Selection for EC2 Auto Scaling and EC2 Fleet | AWS News Blog
- Amazon Lightsail now supports AWS CloudFormation for instances, disks and databases

Dev & Ops

CLI
- AWS Cloud Control API, a Uniform API to Access AWS & Third-Party Services | AWS News Blog
- Now programmatically manage alternate contacts on AWS accounts

CodeGuru
- Amazon CodeGuru now includes recommendations powered by Infer
- Amazon CodeGuru announces Security detectors for Python applications and security analysis powered by Bandit
- Amazon CodeGuru Reviewer adds detectors for AWS Java SDK v2's best practices and features

IaC
- AWS CDK releases v1.121.0 - v1.125.0 with features for faster development cycles using hotswap deployments and rollback control
- AWS CloudFormation customers can now manage their applications in AWS Systems Manager

Other
- NoSQL Workbench for Amazon DynamoDB now enables you to import and automatically populate sample data to help build and visualize your data models
- Amazon Corretto October Quarterly Updates
- Bulk Editing of OpsItems in AWS Systems Manager OpsCenter
- AWS Fault Injection Simulator now supports Spot Interruptions
- AWS Fault Injection Simulator now injects Spot Instance Interruptions

Security

Firewalls
- AWS Firewall Manager now supports centralized logging of AWS Network Firewall logs
- AWS Network Firewall Adds New Configuration Options for Rule Ordering and Default Drop

Backups
- AWS Backup Audit Manager adds compliance reports
- AWS Backup adds an additional layer for backup protection with the availability of AWS Backup Vault Lock

Other
- AWS Security Hub adds support for cross-Region aggregation of findings to simplify how you evaluate and improve your AWS security posture
- Amazon SES now supports 2048-bit DKIM keys
- AWS License Manager now supports Delegated Administrator for Managed entitlements

Data Storage & Processing
- Goodbye Microsoft SQL Server, Hello Babelfish | AWS News Blog
- Announcing availability of the Babelfish for PostgreSQL open source project
- Announcing Amazon RDS Custom for Oracle
- AWS announces AWS Snowcone SSD
- Amazon RDS Proxy now supports Amazon RDS for MySQL Version 8.0
- Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) announces support for Cross-Cluster Replication
- Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now comes with an improved management console
- AWS Transfer Family customers can now use Amazon S3 Access Point aliases for granular and simplified data access controls
- Amazon EMR now supports Apache Spark SQL to insert data into and update Apache Hive metadata tables when Apache Ranger integration is enabled
- Amazon Neptune now supports Auto Scaling for Read Replicas
- AWS Glue Crawlers support Amazon S3 event notifications
- Amazon Keyspaces (for Apache Cassandra) now supports automatic data expiration by using Time to Live (TTL) settings
- New – AWS Data Exchange for Amazon Redshift | AWS News Blog

AI & ML

SageMaker
- Announcing Fast File Mode for Amazon SageMaker
- Amazon SageMaker Projects now supports Image Building CI/CD templates
- Amazon SageMaker Data Wrangler now supports Amazon Athena Workgroups, feature correlation, and customer managed keys

Other
- Amazon Kendra launches support for 34 additional languages
- Amazon Fraud Detector now supports event datasets
- AWS announces a price reduction of up to 56% for Amazon Fraud Detector machine learning fraud predictions
- Amazon Fraud Detector launches new ML model for online transaction fraud detection
- Amazon Transcribe now supports custom language models for streaming transcription
- Amazon Textract launches TIFF support and adds asynchronous support for receipts and invoices processing
- Announcing Amazon EC2 DL1 instances for cost efficient training of deep learning models

Other Cool Stuff
- AWS IoT Core now makes it optional for customers to send the entire trust chain when provisioning devices using Just-in-Time Provisioning and Just-in-Time Registration
- AWS IoT SiteWise announces support for using the same asset models across different hierarchies
- VMware Cloud on AWS Outposts Brings VMware SDDC as a Fully Managed Service on Premises | AWS News Blog
- AWS Outposts adds new CloudWatch dimension for capacity monitoring
- Amazon Monitron launches iOS app
- Amazon Braket offers D-Wave's Advantage 4.1 system for quantum annealing
- Amazon QuickSight adds support for Pixel-Perfect dashboards
- Amazon WorkMail adds Mobile Device Access Override API and MDM integration capabilities
- Announcing Amazon WorkSpaces API to create new updated images with latest AWS drivers
- Computer Vision at the Edge with AWS Panorama | AWS News Blog
- Amazon Connect launches API to configure hours of operation programmatically
- New region availability and Graviton2 support now available for Amazon GameLift

Sponsors
- CMD Solutions

Silver Sponsors
- Cevo
- Versent

Streaming Audio: a Confluent podcast about Apache Kafka
Using Apache Kafka as Cloud-Native Data System ft. Gwen Shapira


Dec 7, 2021 · 33:57


What does cloud native mean, and what are some design considerations when implementing cloud-native data services? Gwen Shapira (Apache Kafka® Committer and Principal Engineer II, Confluent) addresses these questions in today's episode. She shares her learnings by discussing a series of technical papers published by her team, which explain what they've done to expand Kafka's cloud-native capabilities on Confluent Cloud. Gwen leads the Cloud-Native Kafka team, which focuses on developing new features to evolve Kafka to its next stage as a fully managed cloud data platform. Turning Kafka into a self-service platform is not entirely straightforward; however, Kafka's early-day investment in elasticity, scalability, and multi-tenancy to run at a company-wide scale served as the North Star in taking Kafka to its next stage—a fully managed cloud service where users just need to send their workloads and everything else magically works.

Through examining modern cloud-native data services, such as Aurora, Amazon S3, Snowflake, Amazon DynamoDB, and BigQuery, there are seven capabilities that you can expect to see in modern cloud data systems:

- Elasticity: Adapt to workload changes to scale up and down with a click or APIs—cloud-native Kafka omits the requirement to install REST Proxy for using Kafka APIs
- Infinite scale: Kafka has the ability to scale elastically, with a behind-the-scenes process for capacity planning
- Resiliency: Ensures high availability to minimize downtime, plus disaster recovery
- Multi-tenancy: Cloud-native infrastructure needs to have isolation—data, namespaces, and performance—which Kafka is designed to support
- Pay per use: Pay for resources based on usage
- Cost-effectiveness: Cloud deployment has notably lower costs than self-managed services, which also decreases adoption time
- Global: Connect to Kafka from around the globe and consume data locally

Building around these key requirements, a fully managed Kafka as a service provides an enhanced user experience that is scalable and flexible with reduced infrastructure management costs. Based on their experience building cloud-native Kafka, Gwen and her team published a four-part thesis that shares insights on user expectations for modern cloud data services as well as technical implementation considerations to help you develop your own cloud-native data system.

EPISODE LINKS
- Cloud-Native Apache Kafka
- Design Considerations for Cloud-Native Data Systems
- Software Engineer, Cloud Native Kafka
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Intro to Event-Driven Microservices with Confluent
- Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
- Watch the video version of this podcast

AWS Bites
AWS re:Invent Day One Special

AWS Bites

Play Episode Listen Later Nov 30, 2021 30:16


In this special episode, Eoin and Luciano share their impressions of the announcements from the first day of AWS re:Invent 2021.

- AWS Lambda now supports event filtering for Amazon SQS, Amazon DynamoDB, and Amazon Kinesis as event sources: https://aws.amazon.com/about-aws/whats-new/2021/11/aws-lambda-event-filtering-amazon-sqs-dynamodb-kinesis-sources/
- Amazon CodeGuru Reviewer now detects hardcoded secrets in Java and Python repositories: https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-codeguru-reviewer-hardcoded-secrets-java-python/
- Amazon ECR announces pull through cache repositories: https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-ecr-cache-repositories/
- Introducing recommenders optimized to deliver personalized experiences for Media & Entertainment and Retail with Amazon Personalize: https://aws.amazon.com/about-aws/whats-new/2021/11/recommenders-optimized-personalized-media-entertainment-retail-amazon-personalize/
- AWS Chatbot now supports management of AWS resources in Slack (Preview): https://aws.amazon.com/about-aws/whats-new/2021/11/aws-chatbot-management-resources-slack/
- Amazon CloudWatch Evidently: https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-cloudwatch-evidently-feature-experimentation-safer-launches/
- AWS Migration Hub Refactor Spaces - Preview: https://aws.amazon.com/about-aws/whats-new/2021/11/aws-migration-hub-refactor-spaces/
- CloudWatch Real User Monitoring: https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-cloudwatch-rum-applications-client-side-performance/
- CloudWatch Metrics Insights: https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-cloudwatch-metrics-insights-preview/
- AWS Karpenter: https://github.com/aws/karpenter
- S3 Event Notifications with EventBridge: https://aws.amazon.com/blogs/aws/new-use-amazon-s3-event-notifications-with-amazon-eventbridge/
- S3 Event Notifications for S3 Lifecycle, S3 Intelligent-Tiering, object tags, and object access control lists: https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-s3-event-notifications-s3-lifecycle-intelligent-tiering-object-tags-object-access-control-lists/
- Amazon Athena ACID Transactions (Preview): https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-athena-acid-apache-iceberg/
- AWS Control Tower introduces Terraform account provisioning and customization: https://aws.amazon.com/about-aws/whats-new/2021/11/aws-control-tower-terraform/

Leave a comment here or connect with us on Twitter:
- https://twitter.com/eoins
- https://twitter.com/loige
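The Lambda event-filtering announcement above means a function can receive only the SQS, DynamoDB, or Kinesis records that match a filter pattern, instead of filtering inside the handler. A minimal sketch of what such a filter might look like with boto3; the ARN, function name, and message shape are placeholders, not from the episode:

```python
import json


def order_filter_pattern(min_total: int) -> str:
    """Build a Lambda event-source-mapping filter pattern (a JSON string).

    Matches SQS messages whose JSON body has kind == "order" and a
    total at or above min_total, using the numeric range operator
    from the Lambda/EventBridge filter-rule syntax.
    """
    return json.dumps({
        "body": {
            "kind": ["order"],
            "total": [{"numeric": [">=", min_total]}],
        }
    })


# Attaching the filter would look roughly like this (placeholder ARNs):
# import boto3
# boto3.client("lambda").create_event_source_mapping(
#     EventSourceArn="arn:aws:sqs:us-east-1:123456789012:orders",
#     FunctionName="process-orders",
#     FilterCriteria={"Filters": [{"Pattern": order_filter_pattern(100)}]},
# )

if __name__ == "__main__":
    print(order_filter_pattern(100))
```

Records that do not match the pattern are dropped before invocation, so the function is only billed for relevant events.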

Innovando con AWS
#0012: Migrando Bases de Datos Oracle

Innovando con AWS

Play Episode Listen Later Sep 16, 2021 38:46


In episode 12 of the podcast we are joined by Emilio Nestal, Solution Architecture Manager at Amazon Web Services. Today he talks about a topic many customers will relate to: Oracle databases on Amazon Web Services. Emilio explains how we can run Oracle on AWS, the advantages over unmanaged services, and how to break up that monolithic database — for reasons of cost, resilience, and ease of operation — using the Schema Conversion Tool and Database Migration Service.

Emilio Nestal - Based in Madrid, Spain. Emilio leads a Solution Architecture team in the SMB segment, helping customers with their digital transformation and innovation projects.

Rodrigo Asensio - @rasensio: Based in Barcelona, Spain. Rodrigo leads a Solution Architecture team in the Enterprise segment that helps large customers with massive cloud migrations, digital transformation, and innovation projects.

Links
- Oracle on AWS - https://aws.amazon.com/es/rds/oracle/
- Alternatives to Oracle on AWS - https://aws.amazon.com/es/rds/
- How to do ETL on AWS - https://aws.amazon.com/glue/
- Schema Conversion Tool to ease database migrations - https://aws.amazon.com/dms/schema-conversion-tool
- Database Migration Service - https://aws.amazon.com/dms/
- Amazon turned off its last Oracle database - https://aws.amazon.com/blogs/aws/migration-complete-amazons-consumer-business-just-turned-off-its-final-oracle-database/
- Amazon Keyspaces for Apache Cassandra - https://aws.amazon.com/keyspaces/
- Amazon DynamoDB, NoSQL database - https://aws.amazon.com/dynamodb/
- Database Freedom program - https://aws.amazon.com/solutions/databasemigrations/database-freedom/
- How Samsung migrated 1.1 billion users to Amazon Aurora - https://aws.amazon.com/solutions/case-studies/samsung-migrates-off-oracle-to-amazon-aurora/

Connect with Rodrigo Asensio on Twitter https://twitter.com/rasensio and LinkedIn at https://www.linkedin.com/in/rasensio/ Connect with Emilio Nestal on LinkedIn https://www.linkedin.com/in/emilio-nestal-diaz-%E2%98%81-4325438/
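The Database Migration Service discussed in the episode is driven by a table-mapping document that selects which source schemas and tables to replicate. A hedged sketch of building one in Python; the schema name and task identifiers are invented for illustration:

```python
import json


def dms_table_mapping(schema: str, table: str = "%") -> str:
    """Build a minimal AWS DMS table-mapping document as a JSON string.

    A selection rule with rule-action "include" tells DMS which
    source tables to replicate; "%" is the DMS wildcard.
    """
    return json.dumps({
        "rules": [
            {
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-schema",
                "object-locator": {"schema-name": schema, "table-name": table},
                "rule-action": "include",
            }
        ]
    })


# The mapping is passed as TableMappings when creating a replication
# task, roughly like this (all ARNs are placeholders):
# import boto3
# boto3.client("dms").create_replication_task(
#     ReplicationTaskIdentifier="oracle-to-aurora",
#     SourceEndpointArn="arn:aws:dms:...:endpoint:SRC",
#     TargetEndpointArn="arn:aws:dms:...:endpoint:TGT",
#     ReplicationInstanceArn="arn:aws:dms:...:rep:INST",
#     MigrationType="full-load-and-cdc",
#     TableMappings=dms_table_mapping("HR"),
# )

if __name__ == "__main__":
    print(dms_table_mapping("HR"))
```

Schema conversion itself (Oracle DDL to a PostgreSQL-compatible target, for example) is handled separately by the Schema Conversion Tool; DMS then moves the data according to mappings like the one above.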

The Cloud Pod
132: The Cloud Pod takes a trip down MemoryDB lane

The Cloud Pod

Play Episode Listen Later Sep 2, 2021 59:09


On The Cloud Pod this week, the results of the AWS Summit prediction draft are in. It was probably worth getting up early for — especially if you're Jonathan. A big thanks to this week's sponsors: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. JumpCloud, which offers a complete platform for identity, access, and device management — no matter where your users and devices are located.  This week's highlights

Le Podcast AWS en Français
Quoi de neuf ?

Le Podcast AWS en Français

Play Episode Listen Later May 14, 2021 15:48


Every other episode of the podcast is devoted to a brief review of the main AWS news. This week we talk about Amazon FinSpace, a service that makes life easier for financial analysts; a new panel added to AWS Systems Manager; the ability to deploy mini functions at the edges of Amazon CloudFront; a new alpha SDK for Rust developers; and a free online course on Amazon DynamoDB.

YoungCTO.Tech
IT Career Talk: Freelance Python Developer Angelica Lapastora -BackEnd Developer | Python.PH Trustee

YoungCTO.Tech

Play Episode Listen Later May 3, 2021 32:56


Guest: Ms. Angelica Lapastora, with host YoungCTO Rafi Quisumbing. Angelica Lapastora is a Digital Operations Engineer - APAC at BMAT Licensing SLU, Barcelona, Spain. She specializes in creating backend services with Python, Django, and AWS Lambda. She is also a Director of Marketing at PythonPH and a Python Lead at Women Who Code Manila. https://www.linkedin.com/in/angelica-lapastora-6585b2139/ https://python.ph/ Skills: MySQL, Amazon S3, Amazon Simple Notification Service (SNS), Django, PostgreSQL, Vue.js, Teaching, Communication, Team Leadership, Amazon EC2, Amazon DynamoDB, RabbitMQ, Celery, Amazon SQS, API Development

サーバーワークスが送るAWS情報番組「さばラジ!」
[Daily AWS #167] AWS CloudTrail adds support for Amazon DynamoDB data events, plus 13 more #サバワ

サーバーワークスが送るAWS情報番組「さばラジ!」

Play Episode Listen Later Mar 25, 2021 21:07


Catch up on the latest news while you multitask! The radio-style show "Daily AWS". Good morning, this is Sugaya, your Friday host. Today we pick up and introduce the updates released on 3/25. Share your feedback on Twitter with the hashtag #サバワ!

■ Announcements: The talk scripts used on Daily AWS are now public! [Daily AWS #167 talk script] [AWS updates 3/25] AWS CloudTrail adds support for Amazon DynamoDB data events, plus 13 more [AWS CloudTrail Adds Logging of Data Events for Amazon DynamoDB]

■ UPDATE PICKUP
- AWS CloudTrail adds support for Amazon DynamoDB data events
- AWS Elemental MediaTailor adds four features, including enhanced debug logging
- Red Hat OpenShift Service on AWS is now generally available
- AWS Toolkit for VS Code supports connecting with AWS SSO credential profiles
- Amazon Timestream supports Amazon VPC endpoints
- AWS IoT Device Defender adds an ML Detect feature
- Amazon Elasticsearch Service Auto-Tune can now optimize performance
- Amazon Forecast can now send notifications on workflow status changes
- AWS Cloud Map can now manage non-IP resources in namespaces discoverable by DNS name
- NICE DCV web client SDK version 1.00 released
- AWS Backup can now delete recovery points in bulk
- Amazon AppFlow adds Zendesk as a destination
- Amazon Kendra adds a new search connector from Perficient
- Digital training: Advanced Architecting on AWS has been updated

■ Serverworks SNS: Twitter / Facebook
■ Serverworks blog: Serverworks engineer blog

AWS - Il podcast in italiano
Catapush: high availability and disaster recovery on AWS (guest: Davide Marrone)

AWS - Il podcast in italiano

Play Episode Listen Later Mar 1, 2021 28:30


What does it mean to design an application for high availability? Which disaster recovery techniques can guarantee the continuity of systems and infrastructure during emergencies that impact availability? And how can AWS managed services help? In this episode I host Davide Marrone, CTO of Catapush, to talk about Catapush's journey on AWS, the challenges faced in offering a secure and reliable service to banks and financial institutions, and the role of multi-AZ and multi-region managed services such as Amazon RDS/Aurora, Amazon DynamoDB, and Amazon ElastiCache. Link: catapush.com. Link: Amazon RDS Multi-AZ. Link: Amazon Aurora Global Database.

School of Cloud
DynamoDB

School of Cloud

Play Episode Listen Later Jan 30, 2021 67:20


Amazon DynamoDB is a fully managed proprietary NoSQL database service that supports key-value and document data structures, offered by Amazon.com as part of the Amazon Web Services portfolio. Twitter feedback: https://twitter.com/schoolofcloud
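As the blurb says, DynamoDB is a key-value and document store. At the wire level, every attribute is wrapped in a type descriptor ("S" for string, "N" for number, "M" for map, and so on). A rough sketch of that shape; the "Music" table and item below are hypothetical examples, not from the episode:

```python
def to_attribute_value(value):
    """Convert a Python value into DynamoDB's typed attribute-value format."""
    if isinstance(value, bool):          # check bool before int: bool subclasses int
        return {"BOOL": value}
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}         # numbers travel as strings on the wire
    if isinstance(value, dict):
        return {"M": {k: to_attribute_value(v) for k, v in value.items()}}
    if isinstance(value, list):
        return {"L": [to_attribute_value(v) for v in value]}
    raise TypeError(f"unsupported type: {type(value)!r}")


# A hypothetical item as DynamoDB would represent it:
item = {k: to_attribute_value(v) for k, v in {
    "Artist": "No One You Know",
    "SongTitle": "Call Me Today",
    "Year": 2021,
}.items()}

# boto3's low-level client would accept this map directly:
# import boto3
# boto3.client("dynamodb").put_item(TableName="Music", Item=item)
```

The higher-level `boto3.resource("dynamodb")` interface hides this wrapping, but it is useful to know when reading exports, streams, or low-level API responses.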

Melbourne AWS User Group
What's New in December 2020

Melbourne AWS User Group

Play Episode Listen Later Jan 27, 2021 110:30


re:Invent arrived, and with it came a lot of announcements. Some meh, some good, some great. In this episode Arjen, Jean-Manuel, Guy, and special guest star Rob will do their best to make sense of it. Or maybe they just make it more confusing? Who knows? Our brains can't really handle the number of announcements. Which is probably also why it took far too long to edit this episode.

What's New

Finally in ANZ
- In the Works – AWS Region in Melbourne, Australia | AWS News Blog
- Amazon EMR now provides up to 30% lower cost and up to 15% improved performance for Spark workloads on Graviton2-based instances
- Amazon Aurora Serverless v1 with PostgreSQL compatibility now available in eight additional regions
- Amazon SageMaker Studio is now expanded to AWS regions worldwide

Serverless

Lambda
- New for AWS Lambda – 1ms Billing Granularity Adds Cost Savings | AWS News Blog
- New for AWS Lambda – Functions with Up to 10 GB of Memory and 6 vCPUs | AWS News Blog
- New for AWS Lambda – Container Image Support | AWS News Blog
- Using Amazon CloudWatch Lambda Insights to Improve Operational Visibility | AWS News Blog
- AWS Lambda now supports batch windows of up to 5 minutes for functions with Amazon SQS as an event source
- AWS Lambda now supports Advanced Vector Extensions 2 (AVX2)
- Announcing Code Signing, a trust and integrity control for AWS Lambda

EventBridge
- AWS Systems Manager Change Calendar integrates with Amazon EventBridge to enable automated actions based on calendar state changes
- Amazon EventBridge adds Server-Side Encryption (SSE) and increases default quotas

Step Functions
- Amazon API Gateway now supports integration with Step Functions StartSyncExecution for HTTP APIs
- AWS Step Functions now supports Synchronous Express Workflows

Amplify
- AWS Amplify announces new Admin UI

Containers

ECR
- Amazon Elastic Container Registry Public: A New Public Container Registry | AWS News Blog
- Amazon ECR announces cross region replication of images

Fargate
- New – Fully Serverless Batch Computing with AWS Batch Support for AWS Fargate | AWS News Blog

ECS
- Introducing Amazon ECS Anywhere | Containers
- Amazon ECS Announces the Preview of ECS Deployment Circuit Breaker
- Amazon ECS Cluster Auto Scaling now supports specifying a custom instance warm-up time
- Amazon ECS Capacity Providers Now Support Update Functionality
- Amazon ECS adds support for P4d instance types
- Amazon ECS Cluster Auto Scaling now offers more responsive scaling
- AWS Copilot CLI is now Generally Available

EKS
- Amazon EKS Anywhere – Amazon Web Services
- Amazon EKS Distro: The Kubernetes Distribution Used by Amazon EKS | AWS News Blog
- Simplify running Apache Spark jobs with Amazon EMR on Amazon EKS
- Amazon EKS simplifies installation and management for Kubernetes cluster add-ons
- Amazon EKS adds built-in logging support for AWS Fargate
- Amazon EKS adds support for EC2 Spot Instances in managed node groups
- Amazon EKS Console Now Includes Kubernetes Resources to Simplify Cluster Management

EC2 & VPC

EBS
- New – Amazon EBS gp3 Volume Lets You Provision Performance Apart From Capacity | AWS News Blog
- Now in Preview – Larger & Faster io2 Block Express EBS Volumes with Higher Throughput | AWS News Blog
- AWS announces tiered pricing for input/output operations per second (IOPS) charges for Amazon Elastic Block Store (EBS) io2 volume, reducing the cost of provisioning peak IOPS by 15%
- Amazon EBS reduces the minimum volume size of Throughput Optimized HDD and Cold HDD Volumes by 75%
- AWS Compute Optimizer now supports Amazon EBS volume recommendations

Instance Types
- New – Use Amazon EC2 Mac Instances to Build & Test macOS, iOS, iPadOS, tvOS, and watchOS Apps | AWS News Blog
- New EC2 M5zn Instances – Fastest Intel Xeon Scalable CPU in the Cloud | AWS News Blog
- Coming Soon – Amazon EC2 G4ad Instances Featuring AMD GPUs for Graphics Workloads | AWS News Blog
- Coming Soon – EC2 C6gn Instances – 100 Gbps Networking with AWS Graviton2 Processors | AWS News Blog
- EC2 Update – D3 / D3en Dense Storage Instances | AWS News Blog
- New – Amazon EC2 R5b Instances Provide 3x Higher EBS Performance | AWS News Blog

Other EC2
- Amazon Machine Images (AMIs) now support tag-on-create and tag-based access control
- Amazon EC2 Auto Scaling now supports attaching multiple network interfaces at launch
- Announcing Windows Server version 20H2 AMIs for Amazon EC2
- Simplify EC2 provisioning and viewing cloud resources in the ServiceNow CMDB with AWS Service Management Connector for ServiceNow

Networking
- New – VPC Reachability Analyzer | AWS News Blog
- Introducing AWS Transit Gateway Connect to simplify SD-WAN branch connectivity
- AWS Global Accelerator launches custom routing

Dev & Ops

New services
- Preview: AWS Proton – Automated Management for Container and Serverless Deployments | AWS News Blog
- AWS announces Amazon DevOps Guru in Preview, an ML-powered cloud operations service to improve application availability for AWS workloads
- Preview: Amazon Lookout for Metrics, an Anomaly Detection Service for Monitoring the Health of Your Business | AWS News Blog

Code
- New for Amazon CodeGuru – Python Support, Security Detectors, and Memory Profiling | AWS News Blog
- Amazon CodeGuru Reviewer announces Security Detectors to help improve code security
- Amazon CodeGuru Profiler adds Memory Profiling and Heap Summary
- Amazon CodeGuru Reviewer announces CodeQuality Detector to help manage technical debt and codebase maintainability
- AWS CodeArtifact now supports NuGet

Tools
- AWS IDE Toolkit now available for AWS Cloud9
- Porting Assistant for .NET adds support for .NET 5

Other
- Announcing Modules for AWS CloudFormation
- Amazon CloudWatch Synthetics now supports canary scripts in Python with Selenium framework
- AWS Systems Manager now supports Amazon Virtual Private Cloud (Amazon VPC) endpoint policies

Security

New services
- AWS Audit Manager Simplifies Audit Preparation | AWS News Blog

SSO
- New – Attribute-Based Access Control with AWS Single Sign-On | AWS News Blog
- AWS Single Sign-On enables administrators to require users to set up MFA devices during sign-in
- AWS Single Sign-On adds Web Authentication (WebAuthn) support for user authentication with security keys and built-in biometric authenticators

Other
- AWS CloudTrail provides more granular control of data event logging through advanced event selectors
- AWS Security Hub adds open source tool integrations with Kube-bench and Cloud Custodian
- AWS Transfer Family supports AWS WAF for identity provider integrations
- AWS Secrets Manager now supports 5000 requests per second for the GetSecretValue API operation

Data Storage & Processing

Aurora
- Introducing the next version of Amazon Aurora Serverless in preview
- Introducing Amazon Aurora R6g instance types, powered by AWS Graviton2 processors, in preview (includes Sydney)
- Babelfish for Amazon Aurora PostgreSQL is Available for Preview
- Amazon Aurora PostgreSQL Integrates with AWS Lambda

RDS
- Amazon RDS for Oracle supports managed disaster recovery (DR) with Amazon RDS Cross-Region Automated Backups
- PostgreSQL 13 now available in Amazon RDS Database preview environment

Lakes
- Amazon HealthLake Stores, Transforms, and Analyzes Health Data in the Cloud | AWS News Blog
- Announcing preview of AWS Lake Formation features: Transactions, Row-level Security, and Acceleration

S3
- New – Amazon S3 Replication Adds Support for Multiple Destination Buckets | AWS News Blog
- Amazon S3 Update – Strong Read-After-Write Consistency | AWS News Blog
- Amazon S3 Replication adds support for multiple destinations in the same, or different AWS Regions
- Amazon S3 now delivers strong read-after-write consistency automatically for all applications
- Amazon S3 Bucket Keys reduce the costs of Server-Side Encryption with AWS Key Management Service (SSE-KMS)
- Amazon S3 Replication adds support for two-way replication

EMR
- Amazon EMR Studio makes it easier for data scientists to build and deploy code

Redshift
- AWS announces AQUA for Amazon Redshift (preview)
- Amazon Redshift introduces data sharing (preview)
- Amazon Redshift launches RA3.xlplus nodes with managed storage
- Amazon Redshift announces Automatic Table Optimization
- Amazon Redshift now includes Amazon RDS for MySQL and Amazon Aurora MySQL databases as new data sources for federated querying (Preview)
- Amazon Redshift launches the ability to easily move clusters between AWS Availability Zones (AZs)

DynamoDB
- You now can use Amazon DynamoDB with AWS Glue Elastic Views to combine and replicate data across multiple data stores by using SQL – available in limited preview
- You now can use a SQL-compatible query language to query, insert, update, and delete table data in Amazon DynamoDB

Glue
- Announcing Amazon Elasticsearch Service support for AWS Glue Elastic Views
- Announcing AWS Glue Elastic Views Preview
- AWS Glue now supports workload partitioning to further improve the reliability of Spark applications

Other
- Amazon FSx for Lustre now enables you to grow storage on your file systems with the click of a button
- Introducing Amazon Managed Workflows for Apache Airflow (MWAA)

AI & ML

SageMaker :allthethings:
- Amazon SageMaker Simplifies Training Deep Learning Models With Billions of Parameters | AWS News Blog
- Amazon SageMaker JumpStart Simplifies Access to Pre-built Models and Machine Learning Solutions | AWS News Blog
- New – Store, Discover, and Share Machine Learning Features with Amazon SageMaker Feature Store | AWS News Blog
- New – Profile Your Machine Learning Training Jobs With Amazon SageMaker Debugger | AWS News Blog
- New – Amazon SageMaker Pipelines Brings DevOps Capabilities to your Machine Learning Projects | AWS News Blog
- Amazon SageMaker Edge Manager Simplifies Operating Machine Learning Models on Edge Devices | AWS News Blog
- New – Managed Data Parallelism in Amazon SageMaker Simplifies Training on Large Datasets | AWS News Blog
- Introducing Amazon SageMaker Data Wrangler, a Visual Interface to Prepare Data for Machine Learning | AWS News Blog
- New – Amazon SageMaker Clarify Detects Bias and Increases the Transparency of Machine Learning Models | AWS News Blog
- Amazon SageMaker Model Monitor now supports new capabilities to maintain model quality in production
- Introducing two new libraries for managed distributed training on Amazon SageMaker

Edge
- New – Amazon Lookout for Equipment Analyzes Sensor Data to Help Detect Equipment Failure | AWS News Blog
- Amazon Lookout for Vision – New ML Service Simplifies Defect Detection for Manufacturing | AWS News Blog
- AWS Panorama Appliance: Bringing Computer Vision Applications to the Edge | AWS News Blog
- Introducing Amazon Monitron, an end-to-end system to detect abnormal equipment behavior

AI Services
- Amazon Kendra adds Google Drive connector
- Amazon Kendra launches incremental learning
- Amazon Kendra launches connector library
- Announcing Amazon Forecast Weather Index – automatically include local weather to increase your forecasting model accuracy

Added ML
- Amazon announces Amazon Neptune ML: easy, fast, and accurate predictions for graphs
- AWS announces Amazon Redshift ML (preview)

Other Cool Stuff

Regions/Zones
- Announcing new AWS Wavelength Zone in Las Vegas
- Announcing Preview of AWS Local Zones in Boston, Houston, and Miami

Braket
- PennyLane on Braket + Progress Toward Fault-Tolerant Quantum Computing + Tensor Network Simulator | AWS News Blog
- Amazon Braket tensor network simulator supports 50-qubit quantum circuits
- Amazon Braket now supports manual qubit allocation

Connect
- Contact Lens for Amazon Connect launches real-time contact center analytics to detect customer issues on live calls
- Amazon Connect Wisdom provides contact center agents the information they need to quickly solve customer issues
- Amazon Connect Customer Profiles for a unified view of your customers to provide more personalized service
- Amazon Connect Voice ID provides real-time caller authentication for more secure calls
- Amazon Connect Tasks makes it easy to prioritize, assign, track, and automate contact center agent tasks
- Amazon Connect Chat now supports Apple Business Chat (Preview)

Quicksight
- Introducing Amazon QuickSight Q: ask questions about your data and get answers in seconds
- Amazon QuickSight launches new session capacity pricing options, embedding without user management and a developer portal for embedded analytics

Other
- Announcing Unified Search in the AWS Management Console
- Amazon WorkSpaces Streaming Protocol now Generally Available
- New – SaaS Lens in AWS Well-Architected Tool | AWS News Blog
- The Amazon Chime SDK now supports messaging
- AWS Batch now has integrated Amazon Linux 2 support

Nanos
- Amazon WorkDocs now supports Dark Mode on Android

Sponsors
Gold Sponsor: Innablr. Silver Sponsors: AC3, CMD Solutions, DoIT International

eCloud Radio
EP3 Cloud Frontline - Amazon DynamoDB & AWS Gateway Load Balancer

eCloud Radio

Play Episode Listen Later Dec 2, 2020 5:27


Hello everyone, and welcome to eCloud Radio! This week the AWS service we're sharing is Amazon DynamoDB, along with the recently launched AWS Gateway Load Balancer. If you'd like to learn more, check out the full article on our blog! Also, AWS re:Invent officially kicks off today, and next week we'll bring you a special re:Invent episode! This week's news link: https://pse.is/386rtj AWS re:Invent live news: https://pse.is/3a3s3y Facebook | Instagram | Spotify | Apple Podcast — search: eCloudture

Melbourne AWS User Group
What's New in November 2020

Melbourne AWS User Group

Play Episode Listen Later Nov 26, 2020 67:53


Because re:Invent is just in a couple of days, Arjen, Jean-Manuel, and Guy take an earlier than usual look at the massive number of announcements in November. And to think, this episode was recorded on 20 November so everything announced after that will be discussed in the re:Invent episode.

What's new

Finally in Sydney
- Amazon Kendra now available in Asia-Pacific (Sydney) AWS region
- IP Multicast on AWS Transit Gateway is now available in major AWS regions world wide
- Meet the newest AWS Heroes including the first DevTools Heroes! | AWS News Blog

Serverless
- Amazon EventBridge introduces support for Event Replay
- Amazon CodeGuru Profiler simplifies profiling for AWS Lambda functions
- AWS Lambda now makes it easier to send logs to custom destinations
- AWS Lambda now supports Amazon MQ for Apache ActiveMQ as an event source
- AWS Step Functions now supports Amazon API Gateway service integration
- AWS Step Functions now supports Amazon EKS service integration

Containers
- Lightsail Containers: An Easy Way to Run your Containers in the Cloud | AWS News Blog
- Amazon ECS now supports Internet Protocol Version 6 (IPv6) in awsvpc networking mode
- Amazon ECS extensions for AWS CDK is now generally available
- The AWS CDK EKS Construct Library is Now Available as a Developer Preview and Adds Support for cdk8s
- AWS Fargate for Amazon ECS launches features focused on configuration and metrics
- AWS App Mesh introduces circuit breaker capabilities
- Announcing AWS App Mesh Controller for Kubernetes Version 1.2.0
- Amazon VPC CNI plugin version 1.7 now default for Amazon EKS clusters

EC2 & VPC
- AWS Network Firewall – New Managed Firewall Service in VPC | AWS News Blog
- Deployment models for AWS Network Firewall | Networking & Content Delivery
- Introducing AWS Gateway Load Balancer – Easy Deployment, Scalability, and High Availability for Partner Appliances | AWS News Blog
- Network Load Balancer now supports IPv6
- AWS Client VPN now supports Client Connect Handler
- AWS Client VPN announces self service portal to download VPN profiles and desktop applications
- Introducing EC2 Instance rebalance recommendation for EC2 Spot Instances
- Amazon EC2 On-Demand Capacity Reservations now supports AWS Wavelength Zones
- Pause and Resume Workloads on T3 and T3a Instances with Amazon EC2 Hibernation
- Announcing AWS PrivateLink support for Amazon Braket

Dev & Ops
- AWS CloudFormation change sets now support nested stacks
- AWS Service Catalog now supports StackSet instance operations
- AWS X-Ray now supports trace context propagation for Amazon Simple Storage Service (S3)
- Amazon CloudWatch Synthetics now supports Environment Variables
- AWS Systems Manager OpsCenter now integrates with Amazon CloudWatch for easier diagnosis and remediation of alarms
- AWS CodePipeline Source Action for AWS CodeCommit Supports git clone
- Now customize the idle session timeout value and stream session logs to Amazon CloudWatch Logs for Session Manager

Security
- Encrypt your Amazon DynamoDB global tables by using your own encryption keys
- AWS KMS-based Encryption is Now Available in Amazon SageMaker Studio
- Announcing protection groups for AWS Shield Advanced
- AWS Firewall Manager now supports centralized management of AWS Network Firewall

Data Storage & Processing
- New – Export Amazon DynamoDB Table Data to Your Data Lake in Amazon S3, No Code Writing Required | AWS News Blog
- Introducing Amazon S3 Storage Lens – Organization-wide Visibility Into Object Storage | AWS News Blog
- Amazon MQ Update – New RabbitMQ Message Broker Service | AWS News Blog
- Amazon DocumentDB (with MongoDB compatibility) adds support for MongoDB 4.0 and transactions
- Amazon Athena announces availability of engine version 2
- Amazon Athena adds support for running SQL queries across relational, non-relational, object, and custom data sources
- Announcing AWS Glue DataBrew – A Visual Data Preparation Tool That Helps You Clean and Normalize Data Faster | AWS News Blog
- Amazon RDS for SQL Server now supports Database Mail
- Amazon RDS Data API supports tag-based authorization
- Amazon RDS on VMware Adds Support for Cross-Custom-Availability-Zone Read Replicas
- Amazon Aurora Global Database Expands Manageability Capabilities
- AWS Launch Wizard now supports single-instance deployments of SQL Server on Windows and Linux
- Amazon Redshift announces Open Source JDBC and Python drivers
- Amazon Redshift announces support for TIME and TIMETZ data types
- Amazon Neptune now supports Event notifications
- Amazon Neptune now supports custom endpoints to access your workload
- Amazon Elasticsearch Service now supports defining a custom name for your domain endpoint
- Amazon Elasticsearch Service adds support for hot reload of dictionary files

Storage Day
- Welcome to AWS Storage Day 2020 | AWS News Blog
- Amazon FSx for Windows File Server Now Supports Access to File Systems Using Alternate DNS Names
- AWS Storage Gateway adds schedule-based network bandwidth throttling for Tape and Volume Gateway
- Amazon S3 Replication adds support for metrics and notifications
- Amazon S3 Replication adds support for replicating delete markers
- AWS Transfer Family now supports shared services VPC environments
- Amazon S3 Intelligent-Tiering adds Archive Access Tiers — further optimizes storage costs
- AWS Backup extends centralized backup management support to Amazon FSx
- AWS Snowball Edge now supports importing virtual machine images to your deployed Snow devices
- AWS Storage Gateway simplifies in-cloud processing by adding file-level upload notifications for File Gateway
- AWS Storage Gateway enhances security by introducing access-based enumeration for File Gateway
- Amazon ECS now supports the use of Amazon FSx for persistent, shared storage for Windows containers
- AMI Lifecycle Management now available with Data Lifecycle Manager
- AWS Snowball Edge now supports Windows operating systems
- AWS Storage Gateway increases local storage cache by 4x for Tape and Volume Gateway
- AWS announces 40% price reduction for Amazon Elastic Block Store (EBS) Cold HDD (sc1) volumes
- Amazon FSx for Lustre now supports storage quotas

AI & ML
- New – GPU-Equipped EC2 P4 Instances for Machine Learning & HPC | AWS News Blog
- EFA Now Supports NVIDIA GPUDirect RDMA
- Amazon Kendra adds Confluence Cloud connector
- Amazon Kendra adds user tokens for secure search
- AWS DeepComposer launches new learning capsule on sequence modeling and Transformers
- AWS DeepComposer adds new Transformers algorithm that allows developers to extend an input melody
- Announcing AWS DeepComposer's next Chartbusters challenge, Keep Calm and Model On
- Amazon Polly launches a British English Newscaster speaking Style
- Amazon Polly launches a new Australian English neural text-to-speech voice
- Amazon Lex adds language support for French, Spanish, Italian and Canadian French
- Apply your business rules to Amazon Personalize recommendations on the fly
- Amazon Textract supports handwriting and five new languages
- Amazon SageMaker Studio now supports multi-GPU instances

Other Cool Stuff
- In the Works – AWS Region in Hyderabad, India | AWS News Blog
- In the Works – New AWS Region in Zurich, Switzerland | AWS News Blog
- AWS Backup and AWS Organizations bring cross-account backup feature
- Amazon Chime SDK now supports public switched telephone network (PSTN) audio
- Savings Plans Alerts now available in AWS Cost Management
- Introducing new visualization features in AWS IoT SiteWise: Status Charts, Scatter Plot and Trend lines
- Announcing new features for AWS IoT SiteWise
- Amazon CloudWatch launches Metrics Explorer
- Amazon Connect launches API to configure user hierarchies programmatically
- Automated ABR (Adaptive Bit Rate) Configuration now available in AWS Elemental MediaConvert
- Amazon QuickSight launches new Chart Types, Table Improvements and more
- AWS IoT Device Management enhances Secure Tunneling with new multiplexing capability, supporting multiple connections to a single device over a secure tunnel

The Nanos
- Amazon WorkDocs adds support for managing the color theme in-app on iOS
- AWS IQ launches new functionality to support firms
- Amazon Connect has just reduced its 44th telephony rate this year

Sponsors
Gold Sponsor: Innablr. Silver Sponsors: AC3, CMD Solutions, DoIT International

"さばラジ!" (Saba-Raji!), the AWS news program from Serverworks
【毎日AWS #109】AWS Copilot CLI is finally GA! Quickly spin up ECS on Fargate environments, plus 11 more updates #サバワ


Nov 24, 2020 · 9:43


Catch up on the latest AWS news while you do something else! The radio-style show "毎日AWS!" Good morning, this is Kato from Serverworks. Today I cover the 12 updates released on 11/23. Share your thoughts on Twitter with the hashtag #サバワ!
■ UPDATE lineup
AWS Copilot CLI is now generally available
AWS Single Sign-On adds WebAuthn support for biometric authentication
AWS Database Migration Service now supports Aurora PostgreSQL Serverless as a target
Amazon AppFlow expands its ServiceNow integration
Amazon CloudWatch Synthetics expands its API monitoring capabilities
AWS Security Hub integrates with AWS Organizations
AWS Systems Manager VPC endpoints now support endpoint policies
Amazon SageMaker Studio is now available in more regions, including Tokyo
Amazon DynamoDB can now be queried with a SQL-compatible language
Amazon Kinesis Data Streams for Amazon DynamoDB is released
Restoring Amazon DynamoDB tables is now faster
AWS Pricing Calculator now supports Amazon DynamoDB
■ Serverworks SNS: Twitter / Facebook
■ Serverworks blog: Serverworks engineer blog
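The lineup above mentions that DynamoDB can now be queried with a SQL-compatible language (PartiQL). A minimal sketch of what that looks like through boto3's `execute_statement`, assuming a hypothetical "Music" table with an "Artist" attribute (neither comes from the episode):

```python
# Sketch: querying DynamoDB with PartiQL via boto3's execute_statement.
# The "Music" table and "Artist" attribute are illustrative assumptions.

def build_statement(table: str, artist: str):
    """Build a parameterized PartiQL SELECT plus its parameter list."""
    statement = f'SELECT * FROM "{table}" WHERE Artist = ?'
    parameters = [{"S": artist}]  # low-level attribute-value format
    return statement, parameters

def query_items(table: str, artist: str):
    """Run the statement against DynamoDB (needs AWS credentials)."""
    import boto3  # local import so the sketch stays importable offline
    client = boto3.client("dynamodb")
    stmt, params = build_statement(table, artist)
    return client.execute_statement(Statement=stmt, Parameters=params)["Items"]

stmt, params = build_statement("Music", "Adele")
# stmt == 'SELECT * FROM "Music" WHERE Artist = ?'
```

Parameterizing with `?` placeholders keeps attribute values out of the statement string, mirroring prepared statements in SQL databases.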

"さばラジ!" (Saba-Raji!), the AWS news program from Serverworks
【毎日AWS #099】You can now export data directly from Amazon DynamoDB to Amazon S3, plus 14 more updates #サバワ


Nov 10, 2020 · 11:20


Catch up on the latest AWS news while you do something else! The radio-style show "毎日AWS!" Good morning, this is Kato from Serverworks. Today I cover the 15 updates released on 11/9. Share your thoughts on Twitter with the hashtag #サバワ!
■ UPDATE lineup
Amazon DynamoDB data can now be exported to Amazon S3
AWS Data Exchange announces private products
AWS Data Exchange data providers can now toggle auto-renewal settings
AWS Data Exchange private offers now support auto-renewal
AWS Storage Gateway's Tape and Volume Gateways can now adjust network bandwidth on a schedule
AWS Storage Gateway's Tape and Volume Gateways quadruple their local cache
AWS Storage Gateway's File Gateway adds file-level upload notifications
AWS Storage Gateway's File Gateway supports access-permission-based file listing
AWS Snowball Edge supports importing virtual machine images
AWS Snowball Edge supports Windows OS
AWS Transfer Family supports shared VPCs
Amazon S3 Replication now supports metrics and notifications
Amazon S3 Replication now supports replicating delete markers
AWS DataSync announces automated transfers between AWS storage services
Network bandwidth can now be adjusted while an AWS DataSync task is running
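The headline update here is the DynamoDB-to-S3 export, exposed in boto3 as `export_table_to_point_in_time`. A hedged sketch of the call; the table ARN, bucket, and prefix are placeholder assumptions, and the source table needs point-in-time recovery (PITR) enabled:

```python
# Sketch: exporting a DynamoDB table to S3 with export_table_to_point_in_time.
# ARN, bucket, and prefix below are placeholder assumptions.

def build_export_request(table_arn: str, bucket: str, prefix: str) -> dict:
    """Assemble the request parameters for the export call."""
    return {
        "TableArn": table_arn,
        "S3Bucket": bucket,
        "S3Prefix": prefix,
        "ExportFormat": "DYNAMODB_JSON",  # "ION" is the other option
    }

def start_export(table_arn: str, bucket: str, prefix: str) -> str:
    """Kick off the export (needs AWS credentials); returns the export ARN."""
    import boto3  # local import so the sketch stays importable offline
    client = boto3.client("dynamodb")
    resp = client.export_table_to_point_in_time(
        **build_export_request(table_arn, bucket, prefix)
    )
    return resp["ExportDescription"]["ExportArn"]
```

The export runs server-side without consuming table read capacity, which is what makes it attractive compared with scanning the table yourself.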

AWS - Il podcast in italiano
NoSQL databases on AWS (guest: Gianluca Nieri)


Sep 28, 2020 · 28:44


What are non-relational databases and how do they work? Which use cases suit them best, and which problems do they solve? Which non-relational databases are available on AWS as managed services? And what is the difference between Amazon DynamoDB and Amazon DocumentDB (with MongoDB compatibility)? In this episode my guest is Gianluca Nieri, Solutions Architect at AWS Italy. We talk about NoSQL (non-relational) databases and how we can use them to improve the availability and flexibility of our applications, but also about how many customers across more or less every industry are innovating thanks to this technology, including banking and insurance. Links: Non-relational databases on AWS.
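One concrete way to see the schema flexibility discussed in the episode: items in the same DynamoDB table can carry completely different attributes. The helper below mimics the low-level attribute-value notation used by boto3's DynamoDB client; the insurance-flavored item shapes are invented for illustration:

```python
# Illustration: DynamoDB items need no shared schema. The helper converts a
# flat Python dict into DynamoDB's attribute-value notation ("S" strings,
# "N" numbers, "BOOL" booleans); the item contents are invented examples.

def to_attribute_values(item: dict) -> dict:
    """Convert a flat Python dict into DynamoDB attribute-value notation."""
    out = {}
    for key, value in item.items():
        if isinstance(value, bool):       # check bool before int
            out[key] = {"BOOL": value}
        elif isinstance(value, (int, float)):
            out[key] = {"N": str(value)}  # numbers travel as strings
        else:
            out[key] = {"S": str(value)}
    return out

# Two differently shaped items for the same table, no schema migration:
policy = to_attribute_values({"pk": "POLICY#1", "premium": 250})
claim = to_attribute_values({"pk": "CLAIM#7", "open": True, "notes": "hail"})
```

DocumentDB would express the same flexibility with MongoDB-style JSON documents; the trade-offs between the two are exactly what the episode explores.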

Develpreneur: Become a Better Developer and Entrepreneur

The AWS analytics group of services has a lot of members. These are some of Amazon's newer offerings, but they are well worth exploring for professional development and for learning more about your enterprise environment.
Amazon Athena
Query data in S3 using SQL. Store your data in S3, define a schema on top of the data set, and run queries. The UI is fairly basic at the moment, but this serverless query service gets analytical answers back quickly without building out a data warehouse, and without all of the typical setup.
Amazon CloudSearch
Managed search service. Upload data or documents, index them, and serve search requests for that data over HTTP. It is flexible and lets you define custom indexes, so you can upload almost any document format or data style and let the service handle search requests.
Amazon EMR
Hosted Hadoop framework. This service lets you spin up a Spark or Hadoop system on top of your S3 data lake quickly, covering the headaches of building those environments yourself. It is also a cost-effective solution for data science needs that scales so you avoid over-buying resources.
Amazon Elasticsearch Service
Run and scale Elasticsearch clusters. Elasticsearch is a popular open-source search and analytics engine with a broad range of uses, including log file research, stream data analysis, and application monitoring. It is quick and easy to set up, so you can dive right into the analysis part of your work. The fully managed service has an API and CLI, as you would expect, so you can automate it to your needs.
Amazon Kinesis
Work with real-time streaming data, including analyzing video streams in real time. We covered Kinesis in the media episode, so we will not spend time on it here.
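The Athena workflow above (data in S3, schema on top, SQL queries) can be sketched with boto3's `start_query_execution`. The database name, query, and output location are illustrative assumptions:

```python
# Sketch: submitting an Athena SQL query over data in S3 with boto3.
# Database, query text, and output S3 location are illustrative assumptions.

def build_query_request(sql: str, database: str, output_s3: str) -> dict:
    """Assemble the parameters for start_query_execution."""
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

def run_query(sql: str, database: str, output_s3: str) -> str:
    """Submit the query (needs AWS credentials); returns the execution id."""
    import boto3  # local import so the sketch stays importable offline
    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        **build_query_request(sql, database, output_s3)
    )
    return resp["QueryExecutionId"]  # poll get_query_execution for status
```

Athena is asynchronous: you submit a query, poll for completion, then fetch results from the S3 output location, which is why the sketch returns an execution id rather than rows.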
Amazon Managed Streaming for Kafka
Fully managed Apache Kafka service. I must admit that this is not an area where I am solid, so it is best to use Amazon's own words: "Amazon Managed Streaming for Kafka (Amazon MSK) is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data. Amazon MSK provides the control-plane operations and lets you use Apache Kafka data-plane operations, such as those for producing and consuming data. It runs open-source versions of Apache Kafka. This means existing applications, tooling, and plugins from partners and the Apache Kafka community are supported without requiring changes to application code. This release of Amazon MSK supports Apache Kafka version 1.1.1."
Amazon Redshift
This service provides fast, simple, and cost-effective data warehousing. If you have wondered whether there is a fully managed data warehouse solution out there, here is your answer. Redshift is fully managed, scales up to petabytes, and incorporates the security and administration tools you have come to expect from AWS. There are some excellent how-tos and tutorials to help you get started and perhaps understand warehouses better in general.
Amazon QuickSight
A fast business analytics service, also known as a fully managed BI solution. It does what you would expect from a BI solution and therefore requires setup and forethought to position your data. Although this is a robust service, expect to spend a few hours (at least) to get going.
AWS Data Pipeline
Next is an orchestration service for periodic, data-driven workflows (yes, that is their wording, not mine). AWS Data Pipeline is a web service that helps you reliably move data between different AWS compute and storage services. The scope includes on-premises data sources as well, so you can schedule moving all of your enterprise data to the proper destinations.
All of this includes being able to transform and manipulate the data at scale. Once you have a lot of data in AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR, this becomes critical. Thus, while it is not of much use early on, it is essential to running an enterprise.
AWS Glue
This service helps you prepare and load data. AWS Glue is a fully managed ETL (extract, transform, and load) solution, which makes it easy to prepare and load your data no matter the end goal. You can create and run an ETL job with a few clicks in the AWS Management Console. I have not used it beyond simple tests, but it may be your best solution for ETL needs. When you already store your data on AWS, why not try it? It catalogs the data and makes it easy to dive right into the ETL process.
AWS Lake Formation
This is advertised as a way to build a secure data lake in days, and I find it hard to argue against that claim. We have already seen how well AWS handles storing and cataloging (even indexing) data, so it makes sense that their data lake tool extends from those solutions. With data lakes being a relatively new concept, you might want to see the latest news and how-tos at this link: https://aws.amazon.com/big-data/datalakes-and-analytics/what-is-a-data-lake/
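Once a Glue job exists, the "run an ETL job" step above can be driven from code as well as the console. A hedged sketch via boto3's `start_job_run`; the job name and parameters are placeholder assumptions, and note that Glue expects job parameters keyed with a "--" prefix:

```python
# Sketch: starting an AWS Glue ETL job run from boto3. The job name and
# parameters are placeholder assumptions; Glue job arguments are passed
# as "--name": "value" pairs.

def build_job_arguments(params: dict) -> dict:
    """Prefix each parameter name with '--', as Glue job arguments require."""
    return {f"--{key}": str(value) for key, value in params.items()}

def start_etl_job(job_name: str, params: dict) -> str:
    """Start the job run (needs AWS credentials); returns the run id."""
    import boto3  # local import so the sketch stays importable offline
    glue = boto3.client("glue")
    resp = glue.start_job_run(
        JobName=job_name,
        Arguments=build_job_arguments(params),
    )
    return resp["JobRunId"]
```

Wrapping the argument shaping in a helper keeps the quirky "--" convention in one place instead of scattered across every caller.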