Amazon cloud computing platform
CISA braces for widespread staffing cuts. Russian hackers target a Western military mission in Ukraine. China acknowledges Volt Typhoon. The U.S. signs on to global spyware restrictions. A lab supporting Planned Parenthood confirms a data breach. Threat actors steal metadata from unsecured Amazon EC2 instances. A critical WordPress plugin vulnerability is under active exploitation. A new analysis details a critical unauthenticated remote code execution flaw affecting Ivanti products. Does AI understand, and does that ultimately matter?
Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.
CyberWire Guest
Joining us today is Johannes Ullrich, Dean of Research at SANS Technology Institute, discussing "Vibe Security," similar to “Vibe Coding,” where security teams overly rely on AI to do their job.
Selected Reading
Trump administration planning major workforce cuts at CISA (The Record)
Cybersecurity industry falls silent as Trump turns ire on SentinelOne (Reuters)
Russian hackers attack Western military mission using malicious drive (Bleeping Computer)
China Admitted to US That It Conducted Volt Typhoon Attacks: Report (SecurityWeek)
US to sign Pall Mall pact aimed at countering spyware abuses (The Record)
US lab testing provider exposed health data of 1.6 million people (Bleeping Computer)
Amazon EC2 instance metadata targeted in SSRF attacks (SC Media)
Vulnerability in OttoKit WordPress Plugin Exploited in the Wild (SecurityWeek)
Ivanti 0-day RCE Vulnerability Exploitation Details Disclosed (Cyber Security News)
Experts Debate: Do AI Chatbots Truly Understand? (IEEE Spectrum)
Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.
Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.
The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc.
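A note on the Amazon EC2 item above: those SSRF attacks work by tricking a workload into fetching credentials from the EC2 instance metadata service (IMDS) on the attacker's behalf. A common mitigation is to require IMDSv2, whose session tokens a simple SSRF GET generally cannot obtain. A minimal boto3 sketch of enforcing that on one instance; the instance ID is a placeholder, not anything from the story:

    import boto3

    ec2 = boto3.client("ec2")

    # Require IMDSv2 session tokens and keep them from leaving the instance,
    # so a plain SSRF GET can no longer read instance credentials.
    ec2.modify_instance_metadata_options(
        InstanceId="i-0123456789abcdef0",  # placeholder instance ID
        HttpTokens="required",             # reject token-less IMDSv1 requests
        HttpPutResponseHopLimit=1,         # tokens are unusable beyond the instance
        HttpEndpoint="enabled",
    )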
Want to start your journey in AWS Cloud but not sure where to begin? In this episode of the InfosecTrain podcast, we provide a step-by-step guide to getting started with AWS, from account creation to launching your first Amazon EC2 instance.
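For readers who want to try the episode's end goal from code rather than the console, here is a minimal boto3 sketch of launching a first instance. The AMI ID and key pair name are placeholders you would swap for real values in your account and region:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one small instance; ImageId and KeyName are placeholders.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # an Amazon Linux AMI in your region
        InstanceType="t3.micro",
        KeyName="my-key-pair",            # an existing EC2 key pair
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Launched {instance_id}")

    # Terminate it when you are done to stop the meter:
    # ec2.terminate_instances(InstanceIds=[instance_id])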
I counted 86 new announcements over the past two weeks; the pace is picking up, and you can feel the re:Invent conference in Las Vegas approaching. In this episode you'll hear about what's new in DNS (Amazon Route 53), AWS AppSync and WebSockets, and hosting static websites on Amazon S3 with AWS Amplify. We also talk about macOS on Amazon EC2 and a string of AWS Lambda announcements. We review all of that and more in this episode of the podcast.
Tune in to learn about the future of AI deployments, from cost-effective CPU instances to the seamless integration of multiple models for robust AI systems, with Jeff Boudier from Hugging Face and Sudeep Sharma from Amazon EC2. Jeff shares Hugging Face's mission to democratize machine learning, highlighting the ease and affordability of using diverse models on their platform. Sudeep dives into how AWS is meeting evolving customer demands with next-generation Intel-powered instances, including 7th-generation instances built on Intel Sapphire Rapids processors, which optimize AI and machine learning tasks. Together, they discuss the challenges and innovations in serving customers with scalable, efficient solutions, and how Hugging Face and AWS are partnering to offer more choices for AI builders.
A cloud service is only as good as the team of network engineers who keep it up and running. In this episode, AWS Vice President and Distinguished Engineer Tom Scholl breaks down the security work and legwork needed to support the company's massive infrastructure. Corey picks Tom's brain while singing the praises of the AWS DDoS Protection Team, marveling at the scale of the modern internet, and looking ahead to the next generation of network engineers who could land at AWS. If you've ever wondered about the inner workings of the AWS cloud, then this is the discussion for you.
Show Highlights:
(0:00) Intro
(1:09) The Duckbill Group sponsor read
(1:42) The importance of a good network for AWS
(3:38) Evolution of networking
(6:03) Efficiency of the AWS DDoS Protection Team
(7:29) AWS Cloud and weathering DDoS attacks
(10:03) Policing network abuse
(12:08) Walking the SES tightrope and network attacks
(15:00) Ensuring the security of the internet
(17:53) The Duckbill Group sponsor read
(18:37) Scale of the modern internet
(20:47) Migrating the AWS network firewall
(21:54) Internal network scaling
(24:27) Preparing for DDoS disruption
(29:14) Finding the next generation of network engineers
(32:15) Where to learn more about AWS cloud security
About Tom Scholl:
Tom Scholl is a VP and Distinguished Engineer at Amazon Web Services (AWS) in the infrastructure organization. His role includes working on AWS's global network backbone, as well as focusing on denial-of-service detection and mitigation systems. He has been with AWS for over 13 years. Prior to AWS, Tom was a Principal Network Engineer at nLayer and AT&T Labs (formerly SBC Telecom). He also previously held network engineering roles at OptimalPATH Digital Network and ANET Internet Services.
Links Referenced:
AWS Security Blog: https://aws.amazon.com/blogs/security/
How AWS threat intelligence deters threat actors: https://aws.amazon.com/blogs/security/how-aws-threat-intelligence-deters-threat-actors/
Using AWS Shield Advanced protection groups to improve DDoS detection and mitigation: https://aws.amazon.com/blogs/security/using-aws-shield-advanced-protection-groups-to-improve-ddos-detection-and-mitigation/
AWS re:Inforce 2024 presentation on Sonaris and MadPot: https://www.youtube.com/watch?v=38Z9csvyFDg
NANOG 2023 presentation on AWS networking infrastructure: https://www.youtube.com/watch?v=0tcR-iQce7s
AWS re:Invent 2022 presentation on AWS networking infrastructure: https://www.youtube.com/watch?v=HJNR_dX8g8c
AWS re:Invent 2022 presentation on scaling network performance on next-gen Amazon EC2 instances: https://www.youtube.com/watch?v=jNYpWa7gf1A&t=1373s
IEEE paper on Scalable Reliable Datagram (SRD): https://ieeexplore.ieee.org/document/9167399
Sponsor
The Duckbill Group: https://www.duckbillgroup.com/
Today Guy and Eitan talk with a very special guest: the one and only Roni Vered!
Overview
Kito, Josh, and Danno are joined by microservices guru, author, and Java Champion Chris Richardson. They discuss spring-boot-testjars, Jakarta EE 11, OpenRewrite, Chris's Eventuate project, microservice architecture patterns, Kafka, Redpanda, AI and software development, the early days of cloud computing and Spring, and much more.
About Chris Richardson
Chris is a software architect and serial entrepreneur. He is a Java Champion, a JavaOne rock star, and the author of POJOs in Action, which describes how to build enterprise Java applications with frameworks such as Spring and Hibernate. Chris was also the founder of the original CloudFoundry.com, an early Java PaaS for Amazon EC2. Today, he is a recognized thought leader in microservices and speaks regularly at international conferences. Chris is the author of the book Microservice Patterns. He helps organizations improve agility and competitiveness through better software architecture, and delivers consulting and training that helps organizations successfully adopt and use the microservice architecture. Chris is the founder of a startup that is creating a platform that simplifies the development of transactional microservices, and he maintains a comprehensive set of resources for learning about microservices.
Global and Industry News
- Google layoffs 2024: Hundreds of employees on hardware, engineering teams lose jobs (https://www.usatoday.com/story/money/2024/01/12/google-layoffs-2024/72201031007/)
Server Side Java
- CVE-2024-22233: Spring Framework server Web DoS Vulnerability (https://spring.io/blog/2024/01/22/cve-2024-22233-spring-framework-server-web-dos-vulnerability)
- GitHub - spring-projects-experimental/spring-boot-testjars (https://github.com/spring-projects-experimental/spring-boot-testjars)
- Jakarta EE 11 Update (https://jakarta.ee/specifications/platform/11/)
- Tomcat migrator (https://github.com/apache/tomcat-jakartaee-migration)
- OpenRewrite (https://docs.openrewrite.org/)
- Eventuate (https://eventuate.io/)
- Transactional Outbox pattern (https://microservices.io/patterns/data/transactional-outbox.html); see the sketch after these notes
- Enterprise Integration Patterns (https://www.enterpriseintegrationpatterns.com/ and https://www.google.com/books/edition/Enterprise_Integration_Patterns/qqB7nrrna_sC?hl=en&gbpv=1&printsec=frontcover)
- Redpanda (https://redpanda.com/)
AI/ML
- LLMs: our future overlords are hungry and thirsty (https://microservices.io/post/generativeai/2023/10/09/our-future-overlords-are-hungry-and-thirsty.html)
Java Platform
- The One Billion Row Challenge - Gunnar Morling (https://www.morling.dev/blog/one-billion-row-challenge/)
Picks
- jChampions Conf Recordings (Josh) (https://www.youtube.com/@JChampionsCon)
- TV Show: Young Sheldon (Kito) (https://www.imdb.com/title/tt6226232/)
Other Pubhouse Network podcasts
- OffHeap (https://javaoffheap.com)
- Java Pubhouse (https://javapubhouse.com)
Events
- Devnexus 2024 - April 9-11 - Atlanta, GA, USA (https://devnexus.org/)
- Great International Developer Summit - April 23-26th - Bangalore, India (https://developersummit.com/)
- JNation 2024 - June 4-5th - Coimbra, Portugal (https://jnation.pt/)
- dev2next
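Since the show notes link Chris's Transactional Outbox pattern, here is a minimal, illustrative sketch of the idea (not Eventuate's implementation): the business row and its outgoing event are committed in one local transaction, and a separate relay later publishes pending events to the broker. SQLite stands in for the service's database, and publish is a stand-in for a Kafka/Redpanda producer call:

    import json
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    conn.execute(
        "CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, published INTEGER DEFAULT 0)"
    )

    def create_order(total):
        # The order and its event persist in the SAME transaction: both or neither.
        with conn:
            cur = conn.execute("INSERT INTO orders (total) VALUES (?)", (total,))
            event = {"type": "OrderCreated", "order_id": cur.lastrowid, "total": total}
            conn.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(event),))

    def relay(publish):
        # A separate poller forwards unpublished events to the message broker.
        rows = conn.execute("SELECT id, payload FROM outbox WHERE published = 0").fetchall()
        for row_id, payload in rows:
            publish(payload)  # stand-in for a Kafka/Redpanda producer send
            conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
        conn.commit()

    create_order(42.50)
    relay(print)  # demo: "publish" by printing the event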
Welcome to part four in the AWS Certification Exam Prep Mini-Series! Whether you're an aspiring cloud enthusiast or a seasoned developer looking to deepen your architectural acumen, you've landed in the perfect spot. In this six-part saga, we're demystifying the pivotal role of a Solutions Architect in the AWS cloud computing cosmos. In this fourth episode, Caroline and Dave chat again with Anya Derbakova, a Senior Startup Solutions Architect at AWS, known for weaving social media magic, and Ted Trentler, a Senior AWS Technical Instructor with a knack for simplifying the complex. Together, we will step into the realm of performance, where we untangle the complexities of designing high-performing architectures in the cloud. We dissect the essentials of high-performing storage solutions, dive deep into elastic compute services for scaling and cost efficiency, and unravel the intricacies of optimizing database solutions for unparalleled performance. Expect to uncover:
• The spectrum of AWS storage services and their optimal use cases, from Amazon S3's versatility to the shared capabilities of Amazon EFS.
• How to leverage Amazon EC2, Auto Scaling, and Load Balancing to create elastic compute solutions that adapt to your needs.
• Insights into serverless computing paradigms with AWS Lambda and Fargate, highlighting the shift towards de-coupled architectures.
• Strategies for selecting high-performing database solutions, including the transition from on-premise databases to AWS-managed services like RDS and the benefits of caching with Amazon ElastiCache.
• A real-world scenario where we'll navigate the challenge of processing hundreds of thousands of online votes in minutes, testing your understanding and application of high-performing AWS architectures.
Whether you're dealing with vast amounts of data, requiring robust compute power, or ensuring your architecture can handle peak loads without a hitch, we've got you covered!
Anya on LinkedIn: https://www.linkedin.com/in/annadderbakova/
Ted on Twitter: https://twitter.com/ttrentler
Ted on LinkedIn: https://www.linkedin.com/in/tedtrentler
Caroline on Twitter: https://twitter.com/carolinegluck
Caroline on LinkedIn: https://www.linkedin.com/in/cgluck/
Dave on Twitter: https://twitter.com/thedavedev
Dave on LinkedIn: https://www.linkedin.com/in/davidisbitski
AWS SAA Exam Guide: https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate_Exam-Guide.pdf
PartyRock for Exam Study: https://partyrock.aws/u/tedtrent/KQtYIhbJb/Solutions-Architect-Study-Buddy
All Things AWS Training (links to self-paced and instructor-led): https://aws.amazon.com/training/
AWS Skill Builder - Free CPE Course: https://explore.skillbuilder.aws/learn/course/134/aws-cloud-practitioner-essentials
AWS Skill Builder - Learning Badges: https://explore.skillbuilder.aws/learn/public/learning_plan/view/1044/solutions-architect-knowledge-badge-readiness-path
AWS User Group Communities: https://aws.amazon.com/developer/community/usergroups
Subscribe:
Spotify: https://open.spotify.com/show/7rQjgnBvuyr18K03tnEHBI
Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-developers-podcast/id1574162669
Stitcher: https://www.stitcher.com/show/1065378
Pandora: https://www.pandora.com/podcast/aws-developers-podcast/PC:1001065378
TuneIn: https://tunein.com/podcasts/Technology-Podcasts/AWS-Developers-Podcast-p1461814/
Amazon Music: https://music.amazon.com/podcasts/f8bf7630-2521-4b40-be90-c46a9222c159/aws-developers-podcast
Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zb3VuZGNsb3VkLmNvbS91c2Vycy9zb3VuZGNsb3VkOnVzZXJzOjk5NDM2MzU0OS9zb3VuZHMucnNz
RSS Feed: https://feeds.soundcloud.com/users/soundcloud:users:994363549/sounds.rss
This week, we review the major announcements from AWS re:Invent and discuss how the hyperscalers are embracing A.I. Plus, a few thoughts on children's chores.
Watch the YouTube Live Recording of Episode 443 (https://www.youtube.com/watch?v=q0xwqUis6xA)
Runner-up Titles
No Slack
The Corporate Podcast
Quality of life stop
Our roads diverge
Eats a bag of llama
Nobody wants to do a bake-off
AI all the time
Rundown
AWS re:Invent
Top announcements of AWS re:Invent 2023 | Amazon Web Services (https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2023/)
Salesforce Inks Deal to Sell on Amazon Web Services' Marketplace (https://www.bloomberg.com/news/articles/2023-11-27/salesforce-to-sell-software-on-aws-marketplace-in-self-service-purchase-push#xj4y7vzkg)
AWS Unveils Next Generation AWS-Designed Chips (https://press.aboutamazon.com/2023/11/aws-unveils-next-generation-aws-designed-chips)
Join the preview for new memory-optimized, AWS Graviton4-powered Amazon EC2 instances (R8g) (https://aws.amazon.com/blogs/aws/join-the-preview-for-new-memory-optimized-aws-graviton4-powered-amazon-ec2-instances-r8g/)
Announcing the new Amazon S3 Express One Zone high performance storage class (https://aws.amazon.com/blogs/aws/new-amazon-s3-express-one-zone-high-performance-storage-class/)
AWS unveils new Trainium AI chip and Graviton 4, extends Nvidia partnership (https://www.zdnet.com/article/aws-unveils-new-trainium-ai-chip-and-graviton-4-extends-nvidia-partnership/)
AI Chip - AWS Inferentia - AWS (https://aws.amazon.com/machine-learning/inferentia/)
DGX Platform (https://www.nvidia.com/en-au/data-center/dgx-platform/)
Foundational Models - Amazon Bedrock - AWS (https://aws.amazon.com/bedrock/)
Supported models in Amazon Bedrock - Amazon Bedrock (https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html#models-supported-meta)
Agents for Amazon Bedrock is now available with improved control of orchestration and visibility into reasoning (https://aws.amazon.com/blogs/aws/agents-for-amazon-bedrock-is-now-available-with-improved-control-of-orchestration-and-visibility-into-reasoning/)
Knowledge Bases now delivers fully managed RAG experience in Amazon Bedrock (https://aws.amazon.com/blogs/aws/knowledge-bases-now-delivers-fully-managed-rag-experience-in-amazon-bedrock/)
Customize models in Amazon Bedrock with your own data using fine-tuning and continued pre-training (https://aws.amazon.com/blogs/aws/customize-models-in-amazon-bedrock-with-your-own-data-using-fine-tuning-and-continued-pre-training/)
Amazon Q brings generative AI-powered assistance to IT pros and developers (https://aws.amazon.com/blogs/aws/amazon-q-brings-generative-ai-powered-assistance-to-it-pros-and-developers-preview/)
Improve developer productivity with generative-AI powered Amazon Q in Amazon CodeCatalyst (https://aws.amazon.com/blogs/aws/improve-developer-productivity-with-generative-ai-powered-amazon-q-in-amazon-codecatalyst-preview/)
Upgrade your Java applications with Amazon Q Code Transformation (https://aws.amazon.com/blogs/aws/upgrade-your-java-applications-with-amazon-q-code-transformation-preview/)
Introducing Amazon Q, a new generative AI-powered assistant (https://aws.amazon.com/blogs/aws/introducing-amazon-q-a-new-generative-ai-powered-assistant-preview/)
New Amazon Q in QuickSight uses generative AI assistance for quicker, easier data insights (https://aws.amazon.com/blogs/aws/new-amazon-q-in-quicksight-uses-generative-ai-assistance-for-quicker-easier-data-insights-preview/)
Amazon Managed Service for Prometheus collector provides agentless metric collection for Amazon EKS (https://aws.amazon.com/blogs/aws/amazon-managed-service-for-prometheus-collector-provides-agentless-metric-collection-for-amazon-eks/)
Amazon CloudWatch Logs now offers automated pattern analytics and anomaly detection (https://aws.amazon.com/blogs/aws/amazon-cloudwatch-logs-now-offers-automated-pattern-analytics-and-anomaly-detection/)
Use Amazon CloudWatch to consolidate hybrid, multicloud, and on-premises metrics (https://aws.amazon.com/blogs/aws/new-use-amazon-cloudwatch-to-consolidate-hybrid-multi-cloud-and-on-premises-metrics/)
Amazon EKS Pod Identity simplifies IAM permissions for applications on Amazon EKS clusters (https://aws.amazon.com/blogs/aws/amazon-eks-pod-identity-simplifies-iam-permissions-for-applications-on-amazon-eks-clusters/)
Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service is now available (https://aws.amazon.com/blogs/aws/amazon-dynamodb-zero-etl-integration-with-amazon-opensearch-service-is-now-generally-available/)
Amazon says its first Project Kuiper internet satellites were fully successful in testing (https://www.cnbc.com/2023/11/16/amazon-kuiper-internet-satellites-fully-successful-in-testing.html)
AWS takes the cheap shots (https://techcrunch.com/2023/11/28/aws-takes-the-cheap-shots/)
Here's everything Amazon Web Services announced at AWS re:Invent (https://techcrunch.com/2023/11/28/heres-everything-aws-reinvent-2023-so-far/)
Relevant to your Interests
Oracle Cloud Made All The Right Moves In 2022 (https://moorinsightsstrategy.com/oracle-cloud-made-all-the-right-moves-in-2022/)
Ransomware gang files SEC complaint over victim's undisclosed breach (https://www.bleepingcomputer.com/news/security/ransomware-gang-files-sec-complaint-over-victims-undisclosed-breach/)
Keynote Highlights: Satya Nadella at Microsoft Ignite 2023 (https://www.youtube.com/watch?v=QMlUJqxhdoY)
Thoma Bravo to sell about $500 million in Dynatrace stock (https://www.marketwatch.com/story/thoma-bravo-to-sell-about-500-million-in-dynatrace-stock-9d7bd0e6)
FinOps Open Cost and Usage Specification 1.0-preview Released to Demystify Cloud Billing Data (https://www.prnewswire.com/news-releases/finops-open-cost-and-usage-specification-1-0-preview-released-to-demystify-cloud-billing-data-301990559.html?tc=eml_cleartime)
AWS, Microsoft, Google and Oracle partner to make cloud spend more transparent | TechCrunch (https://techcrunch.com/2023/11/16/aws-microsoft-google-and-oracle-partner-to-make-cloud-spend-more-transparent/)
Privacy is Priceless, but Signal is Expensive (https://signal.org/blog/signal-is-expensive/)
Several popular AI products flagged as unsafe for kids by Common Sense Media | TechCrunch (https://techcrunch.com/2023/11/16/several-popular-ai-products-flagged-as-unsafe-for-kids-by-common-sense-media/)
Amazon to sell Hyundai vehicles online starting in 2024 (https://finance.yahoo.com/news/amazon-sell-hyundai-vehicles-online-180500951.html)
Amazon to launch car sales next year with Hyundai (https://news.google.com/articles/CBMiP2h0dHBzOi8vd3d3LmF4aW9zLmNvbS8yMDIzLzExLzE2L2FtYXpvbi1oeXVuZGFpLWNhcnMtc2FsZS1hbGV4YdIBAA?hl=en-US&gl=US&ceid=US%3Aen)
Canonical Microcloud: Simple, free, on-prem Linux clustering (https://www.theregister.com/2023/11/16/canonical_microcloud/)
Introducing the Functional Source License: Freedom without Free-riding (https://blog.sentry.io/introducing-the-functional-source-license-freedom-without-free-riding/)
The Problems with Money In (Open Source) Software | Aneel Lakhani | Monktoberfest 2023 (https://www.youtube.com/watch?v=LTCuLyv6SHo)
DXC Technology and AWS Take Their Strategic Partnership to the Next Level to Deliver the Future of Cloud for Customers (https://dxc.com/us/en/about-us/newsroom/press-releases/11202023)
Broadcom and VMware Intend to Close Transaction on November 22, 2023 (https://www.businesswire.com/news/home/20231121379706/en/Broadcom-and-VMware-Intend-to-Close-Transaction-on-November-22-2023)
Broadcom announces successful acquisition of VMware | Hock Tan (https://www.broadcom.com/blog/broadcom-announces-successful-acquisition-of-vmware)
Broadcom closes $69 billion VMware deal after China approval (https://finance.yahoo.com/news/broadcom-closes-69-billion-vmware-133704461.html)
VMware is now part of Broadcom | VMware by Broadcom (https://www.broadcom.com/info/vmware)
Binance CEO Changpeng Zhao Reportedly Quits and Pleads Guilty to Breaking US Law (https://www.wired.com/story/binance-cz-ceo-quits-pleads-guilty-breaking-law/)
Congrats To Elon Musk: I Didn't Think You Had It In You To File A Lawsuit This Stupid. But, You Crazy Bastard, You Did It! (https://www.techdirt.com/2023/11/21/congrats-to-elon-musk-i-didnt-think-you-had-it-in-you-to-file-a-lawsuit-this-stupid-but-you-crazy-bastard-you-did-it/)
Hackers spent 2+ years looting secrets of chipmaker NXP before being detected (https://arstechnica.com/security/2023/11/hackers-spent-2-years-looting-secrets-of-chipmaker-nxp-before-being-detected/)
Meet ‘Anna Boyko': How a Fake Speaker Blew up DevTernity (https://thenewstack.io/meet-anna-boyko-how-a-fake-speaker-blew-up-devternity/)
IBM's Db2 database dinosaur comes to AWS (https://go.theregister.com/feed/www.theregister.com/2023/11/29/aws_launch_ibms_db2_database/)
Reports of AI ending human labour may be greatly exaggerated (https://www.ecb.europa.eu/pub/economic-research/resbull/2023/html/ecb.rb231128~0a16e73d87.es.html)
New Google geothermal electricity project could be a milestone for clean energy (https://apnews.com/article/geothermal-energy-heat-renewable-power-climate-5c97f86e62263d3a63d7c92c40f1330d)
VMware's $92bn sale showers cash on Michael Dell and Silver Lake (https://www.ft.com/content/d01901a2-db4b-45df-8ce5-f57ff46d463e)
Gartner Says Cloud Will Become a Business Necessity by 2028 (https://www.gartner.com/en/newsroom/press-releases/2023-11-29-gartner-says-cloud-will-become-a-business-necessity-by-2028)
IRS starts the bidding for $1.9B IT services recompete (https://www.nextgov.com/acquisition/2023/11/irs-starts-bidding-19b-it-services-recompete/392303/)
WSJ News Exclusive | Apple Pulls Plug on Goldman Credit-Card Partnership (https://www.wsj.com/finance/banking/apple-pulls-plug-on-goldman-credit-card-partnership-ca1dfb45)
Apple employees most likely to leave to join Google shows LinkedIn (https://9to5mac.com/2023/11/23/apple-employees-next-jobs/)
Ranked: Worst Companies for Employee Retention (U.S. and UK) (https://www.visualcapitalist.com/cp/ranked-worst-companies-for-employee-retention-u-s-and-uk/)
Apple announces RCS support for iMessage (https://arstechnica.com/gadgets/2023/11/apple-announces-rcs-support-for-imessage/)
Apple says iPhones will support RCS in 2024 (https://www.theverge.com/2023/11/16/23964171/apple-iphone-rcs-support)
Today on The Vergecast: what Apple really means when it talks about RCS. (https://www.theverge.com/2023/11/17/23965656/today-on-the-vergecast-what-apple-really-means-when-it-talks-about-rcs)
Nonsense
Ikea debuts a trio of affordable smart home sensors (https://www.theverge.com/2023/11/28/23977693/ikea-sensors-door-window-water-motion-price-date-specs)
Apple and Spotify have revealed their top podcasts of 2023 (https://www.theverge.com/2023/11/29/23981468/apple-replay-spotify-wrapped-podcasts-rogan-crime-junkie-alex-cooper)
Listener Feedback
Matt's Trackball: Amazon.com: Kensington Expert Trackball Mouse (K64325), Black Silver, 5"W x 5-3/4"D x 2-1/2"H : Electronics (https://amzn.to/3ujm7ct)
Conferences
Jan 29, 2024 to Feb 1, 2024: That Conference Texas (https://that.us/events/tx/2024/schedule/)
If you want your conference mentioned, let's talk media sponsorships.
SDT news & hype
Join us in Slack (http://www.softwaredefinedtalk.com/slack).
Get a SDT Sticker! Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers!
Follow us: Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), Mastodon (https://hachyderm.io/@softwaredefinedtalk), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk), Threads (https://www.threads.net/@softwaredefinedtalk) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured).
Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total.
Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)!
Recommendations
Brandon: The Complete History & Strategy of Visa (https://www.acquired.fm/episodes/visa)
Matt: Markdown in Google Docs (https://support.google.com/docs/answer/12014036) and Google Docs to Markdown (https://workspace.google.com/marketplace/app/docs_to_markdown/700168918607)
Coté: pork chops, preferably thin sliced.
Photo Credits
Header (https://unsplash.com/photos/bike-on-concrete-floor-j0zlzt40J-0)
Artwork (https://unsplash.com/photos/person-holding-black-amazon-echo-dot-qQRrhMIpxPw)
Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)
If you want complete control over your servers, you would choose (something like) Amazon EC2 and start by creating a new Machine Image. But what if your interest primarily lay in building your app and solving your users' problems ASAP, and you didn't want to spend much, if any, time setting up and configuring servers? #snowpal aws.snowpal.com learn.snowpal.com
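To make the trade-off concrete, this is roughly what the "full control" end of that spectrum looks like in code: a boto3 sketch that bakes a configured instance into a new Machine Image. The instance ID and names are placeholders, and everything beneath the image (patching, scaling, replacement) remains your job, which is exactly the overhead the episode suggests skipping when the app is your real focus:

    import boto3

    ec2 = boto3.client("ec2")

    # Capture a configured instance as a reusable Amazon Machine Image (AMI).
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",  # placeholder: your configured instance
        Name="my-app-base-image-v1",       # placeholder image name
        Description="Base image with our app's runtime pre-installed",
        NoReboot=True,                     # don't stop the running instance
    )
    print("New AMI:", image["ImageId"])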
Tune in to listen to Jillian Forde and Jake Siddall (AWS Senior Product Manager) dive deep into a new service called Amazon EC2 Capacity Blocks for ML. EC2 Capacity Blocks enable you to reserve GPU instances in Amazon EC2 UltraClusters to run ML workloads. Amazon EC2 Capacity Blocks for ML product details: https://bit.ly/3StmbR8
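For a feel of how Capacity Blocks are driven programmatically, here is a rough boto3 sketch of finding and purchasing one. Treat the instance type, counts, and dates as placeholders, and this as a simplified reading of the EC2 DescribeCapacityBlockOfferings and PurchaseCapacityBlock APIs rather than a production recipe:

    import boto3
    from datetime import datetime, timedelta, timezone

    ec2 = boto3.client("ec2")
    now = datetime.now(timezone.utc)

    # Look for a 24-hour block of four p5.48xlarge instances in the next two weeks.
    offerings = ec2.describe_capacity_block_offerings(
        InstanceType="p5.48xlarge",   # placeholder GPU instance type
        InstanceCount=4,
        StartDateRange=now,
        EndDateRange=now + timedelta(days=14),
        CapacityDurationHours=24,
    )["CapacityBlockOfferings"]

    if offerings:
        # Reserve the first matching offering.
        purchase = ec2.purchase_capacity_block(
            CapacityBlockOfferingId=offerings[0]["CapacityBlockOfferingId"],
            InstancePlatform="Linux/UNIX",
        )
        print(purchase.get("CapacityReservation", {}).get("CapacityReservationId"))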
Welcome to InfosecTrain's comprehensive "Coffee with Prabh" session! Join us for an engaging coffee session with two AWS experts, Prabh and Krish, as they walk you through the essentials of getting started with AWS Cloud. In this informative podcast, we cover the basics of AWS account creation and dive into the fascinating world of Amazon EC2. This coffee session is perfect for anyone looking to kickstart their journey in the world of AWS Cloud or expand their existing knowledge. Grab your favorite cup of coffee, sit back, and get ready to enhance your AWS skills with Prabh and Krish. Don't forget to like, share, and subscribe for more insightful AWS content.
Simon Elisha is joined by Leif Reinert, Principal Product Manager Tech at AWS, to discuss how Amazon EC2 P5 instances with NVIDIA H100 GPUs can accelerate your ML training and HPC workloads, helping you get results faster and reduce costs. About Amazon EC2 P5 Instances: https://bit.ly/3Q0Dkie Amazon EC2 P5 Instances Powered by NVIDIA H100 Tensor Core GPUs for Accelerating Generative AI and HPC Applications (blog post): https://bit.ly/3Fp3nLc
Amazon EC2 M7a, C7a, and R7a instances -- powered by 4th Generation AMD EPYC processors -- deliver up to 50% higher performance compared to the previous gen. Today Hawn is joined by Sinem Gulbay, Senior Product Manager here at AWS, to dive into the benefits of these new instances, their technical specs, and when, why, and how you should leverage them.
Learn more about the recently launched Amazon EC2 I4g instances with AWS Nitro SSDs. AWS Nitro SSDs build on AWS silicon innovation with the AWS Nitro System and are custom-designed to deliver the best storage performance for your I/O-intensive workloads running in Amazon EC2. I4g instances offer similar memory and storage ratios to existing I4i instances and are optimized for workloads with small to medium sized datasets that perform a high mix of random read/write operations and require very low I/O latency, such as databases and real-time analytics. I4g instances deliver the best compute price performance for a storage-optimized instance, the best storage performance per TB for a Graviton-based storage instance, and up to 15% better compute performance compared to similar storage-optimized instances.
Amazon EC2 I4g instances website: https://go.aws/3YxI3vr
Blog: Amazon EC2 I4g storage-optimized instances: https://go.aws/3OoKbRK
What is AWS CLI? The AWS CLI, or Amazon Web Services Command Line Interface, is a powerful and versatile tool that lets users interact with AWS services from the command line. It provides a convenient, efficient way to manage and automate AWS resources, making it an essential component for developers, system administrators, and DevOps professionals. The AWS CLI offers a unified interface to services including Amazon S3 for storage, Amazon EC2 for virtual servers, Amazon RDS for managed databases, AWS Lambda for serverless computing, and many others. With it, users can create and manage resources, configure permissions, deploy applications, and retrieve information about their AWS infrastructure.
How does AWS CLI work? The AWS CLI is a command-line tool that talks to the same AWS APIs that back the AWS Management Console. Users install it on their local machine and configure it with their access credentials. When a command is executed, the AWS CLI generates the corresponding API request, signs it with those credentials, and sends it to the appropriate AWS service endpoint. The service processes the request and returns a response, which the AWS CLI presents to the user. This lets users manage and automate AWS resources and services from a command-line interface, improving efficiency and control.
View More: What is AWS CLI?
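To illustrate the request flow described above: a CLI command and an SDK call are two front ends to the same signed API request. A minimal sketch, with the CLI form shown as a comment and a boto3 (Python SDK) equivalent below it:

    import boto3

    # CLI form:
    #   aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"
    # Both forms sign and send the same DescribeInstances API request
    # using your configured credentials, then print the service's response.
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]

    for reservation in reservations:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"])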
How do you get the best price performance in Amazon EC2 for the most demanding machine learning training workloads? Tune in to learn how AWS Trainium-based Amazon EC2 Trn1n instances can help you train your network-intensive generative AI models at scale. Amazon EC2 Trn1n instances double the network bandwidth offered by Trn1 instances, to 1600 Gbps of EFA, and deliver up to 20% faster time-to-train than Trn1 instances. Both Trn1 and Trn1n instances deliver up to 50% savings on training costs over comparable Amazon EC2 instances. Tune in to learn more about this new launch that helps you increase performance, reduce costs, and improve energy efficiency when training your large-scale ML models.
Trn1 Website: https://go.aws/44v90ST
AWS Neuron Website: https://bit.ly/46SLyjX
AWS Trainium Website: https://bit.ly/3DqiCSM
AWS Inferentia Website: https://go.aws/44ymLA9
How can you build designs faster and predict the weather more efficiently? How can you carry out complex calculations across HPC clusters using up to tens of thousands of cores with high performance and lower costs? In this podcast, Heidi Poxon, Principal HPC Technologist, talks about using high performance computing (HPC) for a range of use cases, from building designs for aircraft and race cars to climate modeling. You'll learn about the newly launched Amazon Elastic Compute Cloud (Amazon EC2) Hpc7g instances, powered by AWS Graviton processors, which are custom Arm-based processors designed by Amazon Web Services (AWS). With these instances you can scale your HPC clusters to run compute-intensive workloads such as Computational Fluid Dynamics (CFD), weather forecasting, and molecular dynamics.
Amazon EC2 Hpc7g Instances website: https://go.aws/3NPhFtk
Amazon EC2 Hpc7g Instances Blog: https://go.aws/3raBXVc
Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan, Jonathan, and Matthew are all here this week to discuss the latest news and announcements in the world of cloud and AI - including New Relic Grok, Athena Provisioned Capacity from AWS, and updates to the Azure Virtual Desktop. Titles we almost went with this week: None! This week's title was SO GOOD we didn't bother with any alternates. Sometimes it's just like that, you know? A big thanks to this week's sponsor: Foghorn Consulting, provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.
AWS Inferentia2-based Amazon EC2 Inf2 instances can help you deploy your 100B+ parameter generative AI models at scale. Inf2 instances deliver up to 40% better price performance than comparable Amazon EC2 instances. Tune in to learn more about this new launch that helps you increase performance, reduce costs, and also improve energy efficiency when deploying your ML applications. Inf2 PDP https://go.aws/44oez5T Neuron documentation https://bit.ly/44oLmrz AWS Inferentia https://go.aws/3NAyhFr AWS Trainium https://go.aws/3nkivnH
In this session, InfosecTrain hosts a "Cloud Computing Expert Masterclass" with certified experts to help you unlock the mystery of cloud computing and its incredible benefits! If you're looking to learn more about the benefits of cloud computing, this event is definitely for you. In this Masterclass, you'll hear from top experts on the topic and explore the various benefits of cloud computing, from simplified management to improved security; this Masterclass has it all! For more details or a free demo with our expert, write to us at sales@infosectrain.com
➡️ Agenda for the Webinar
✑ Day 3: AWS Compute Services
Amazon CodeWhisperer enables software developers to get real-time code recommendations in their IDE based on their (English) comments describing the task at hand and their current coding context. With CodeWhisperer, developers can simply write a natural language comment that outlines a specific task, such as “get new files uploaded in the last 24 hours from the S3 bucket”, and CodeWhisperer automatically determines which cloud services and public libraries are best suited for the specified task and generates the code for the developer. CodeWhisperer is especially well suited to helping developers generate code that simplifies consumption of AWS services (e.g., Amazon EC2 or Amazon S3), making it an excellent coding assistant if you are developing for AWS.
Amazon CodeWhisperer https://go.aws/41K0Wwl
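Taking the episode's own prompt, "get new files uploaded in the last 24 hours from the S3 bucket", the snippet below is a hand-written sketch of the kind of boto3 code such an assistant might generate; it is illustrative rather than actual CodeWhisperer output, and the bucket name is a placeholder:

    import boto3
    from datetime import datetime, timedelta, timezone

    # get new files uploaded in the last 24 hours from the S3 bucket
    s3 = boto3.client("s3")
    cutoff = datetime.now(timezone.utc) - timedelta(hours=24)

    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="my-bucket"):  # placeholder bucket name
        for obj in page.get("Contents", []):
            if obj["LastModified"] >= cutoff:
                print(obj["Key"], obj["LastModified"])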
AWS Morning Brief for the week of April 17, 2023 with Corey Quinn. This week is RSA in San Francisco; I'll be haunting the expo hall at some point, so if you're in town say hi.
Links:
The Last Week in AWS Job Board continues to thrive; thanks for your ongoing support.
Amazon Chime SDK updates Service Level Agreement
Amazon CodeWhisperer is now generally available
Amazon Connect now enables agents to handle voice calls, chats, and tasks concurrently
Amazon EC2 Serial Console is now available on EC2 bare metal instances
Amazon RDS for MySQL now supports up to 15 read replicas for RDS Multi-AZ deployment option with two readable standby database instances
AWS Graviton2-based Amazon EC2 instances are available in additional regions
AWS Ground Station now supports Wideband Digital Intermediate Frequency
AWS Lambda adds support for Node.js 18 in the AWS GovCloud (US) Regions
Introducing AWS Lambda response streaming
Understanding Amazon DynamoDB latency
Announcing New Tools for Building with Generative AI on AWS
AWS Now Supports Credentials-fetcher for gMSA on Amazon Linux 2023
AWS investment in South Africa results in economic ripple effect
New Global AWS Data Processing Addendum
15 cool things we found inside the Spheres, Amazon's urban rainforest in downtown Seattle
AWS Puts Up a New VPC Lattice to Ease the Growth of Your Connectivity, AKA Welcome to April (how is it April already?) This week, Justin, Jonathan, and Matt are your guides through all the latest and greatest in Cloud news, including VPC Lattice from AWS, the one and only time we'll talk about Service Catalog, and an ultra premium DDoS experience. All this week on The Cloud Pod.
This week's alternate title(s):
AWS Finally makes service catalogs good with Terraform
Amazon continues to believe retailers with supply chain will give all their data to them
Azure copies your data from S3… AWS copies your data from Azure Blobs… or how I set money on fire with data egress charges
The Cloud Pod recaps all of the positives and negatives of AWS re:Invent 2022, the annual conference in Las Vegas that brings together 50,000 cloud computing professionals. This year's keynote speakers included Adam Selipsky, CEO of Amazon Web Services; Swami Sivasubramanian, Vice President of Data and Machine Learning at AWS; and Werner Vogels, Amazon's CTO. Attendees and web viewers were treated to new features and products, such as AWS Lambda SnapStart for Java functions, new QuickSight capabilities, and quality-of-life improvements to hundreds of services. Justin, Jonathan, Ryan, Peter, and special guest Joe Daly from the FinOps Foundation talk about the show and the announcements. Thank you to our sponsor, Foghorn Consulting, which provides top notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you're having trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.
Episode Highlights
⏰ AWS Pricing Calculator now supports modernization cost estimates for Microsoft workloads.
⏰ AWS re:Invent 2022 announcements and keynote updates.
Top Quote
RE:INVENT NOTICE Jonathan, Ryan and Justin will be live streaming the major keynotes starting Monday Night, followed by Adam's keynote on Tuesday, Swami's keynote on Wednesday and Wrap up our Re:Invent coverage with Werner's keynote on Thursday. Tune into our live stream here on the site or via Twitch/Twitter, etc. On The Cloud Pod this week, Amazon Time Sync is now available over the internet as a public NTP service, Amazon announces ECS Task Scale-in protection, and Private Marketplace is now in preview. Thank you to our sponsor, Foghorn Consulting, which provides top notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you're having trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week. Episode Highlights ⏰ Amazon Time Sync is now available over the internet as a public NTP service. ⏰ Amazon announces ECS Task Scale-in protection. ⏰ Private Marketplace is now in preview. Top Quote
In this episode, Ryan and Bhavin interview Alexander Mattoni, co-founder and Head of Engineering at Cycle.io, about when to use and when not to use Kubernetes. The discussion focuses on the challenges associated with Kubernetes adoption, on Day 0 and Day 2, and the alternatives available to organizations that are just looking to run their applications easily. We talk about how Cycle.io can help organizations build a simplified infrastructure stack to run their applications. Have a listen and let us know what you think about Kubernetes. Also, send us your 3-4 minute clips about your experience with Kubernetes, to be shared on future episodes.
Show Notes:
Alexander Mattoni - https://twitter.com/alexmattoni
Cycle.io - https://cycle.io/
News:
AWS Controllers for Kubernetes - ACK for Amazon EC2: https://aws.amazon.com/about-aws/whats-new/2022/11/aws-controllers-kubernetes-ack-elastic-compute-cloud-ec2-generally-available/
Removal of GlusterFS in 1.26: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/
Two possible data inconsistency issues in etcd v3.4.[20-21] and v3.5: https://groups.google.com/a/kubernetes.io/g/dev/c/sEVopPxKPDo?pli=1
KubeCon NA 2022 recordings: https://youtube.com/playlist?list=PLj6h78yzYM2O5aNpRM71NQyx3WUe1xpTn
Kubernetes Bytes season 1 on YouTube: https://youtube.com/playlist?list=PLCOmEAve4xr2lbCd6sPXMRf6XcZeWuaJ5
Kubernetes Bytes at Data On Kubernetes Day - KubeCon NA: https://youtu.be/q_K8Ma9LxWA
CloudNativeSecurityCon NA - Feb 1-2: https://events.linuxfoundation.org/cloudnativesecuritycon-north-america/
TiKV, an open-source, distributed, transactional key-value database - evolutions of TiKV: https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-on-demand-webinar-the-evolution-of-tikv
Backup and restore using the alpha Kubernetes checkpointing feature: https://martinheinz.dev/blog/85 | https://kubernetes.io/docs/reference/node/kubelet-checkpoint-api/
Links:
Ben Kehoe has left iRobot. And where's he going next? Presumably to re:Invent! I am too, with my re:Quinnvent nonsense.
Amazon Athena announces Query Result Reuse to accelerate queries
Amazon EC2 enables you to opt out of directly shared Amazon Machine Images
Amazon EC2 placement groups can now be shared across multiple AWS accounts
Amazon EC2 now supports specifying list of instance types to use in attribute-based instance type selection for Auto Scaling groups, EC2 Fleet, and Spot Fleet
Amazon Lightsail announces support for domain registration and DNS autoconfiguration
Amazon RDS now supports new General Purpose gp3 storage volumes
Announcing recurring custom line items for AWS Billing Conductor
AWS Lambda announces Telemetry API, further enriching monitoring and observability capabilities of Lambda Extensions
AWS Cost Explorer's New Look and Common Use Cases
A New AWS Region Opens in Switzerland - eu-central-2 is now available.
Introducing AWS Resource Explorer – Quickly Find Resources in Your AWS Account
Overview of building resilient applications with Amazon DynamoDB global tables
Publish Amazon DevOps Guru Insights to Slack Channel
Uncompressed Media over IP on AWS: Read the whitepaper
Enable cross-account queries on AWS CloudTrail lake using delegated administration from AWS Organizations
NASA and ASDI announce no-cost access to important climate dataset on the AWS Cloud
On The Cloud Pod this week, Amazon announces Neptune Serverless, Google introduces Google Blockchain Node Engine, and we get some cost management updates from Microsoft. Thank you to our sponsor, Foghorn Consulting, which provides top notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you're having trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week. General News [1:24]
Links:
Amazon Aurora supports cluster export to S3
Amazon Cognito now provides user pool deletion protection
Amazon Connect adds real-time schedule adherence
Amazon EC2 enables easier patching of guest operating system and applications with Replace Root Volume
Amazon Neptune Serverless is now generally available
Introducing the Amazon OpenSearch Service delivery program
Amazon SageMaker Canvas supports tags to track and allocate costs incurred by users
AWS Console Mobile Application adds support for AWS CloudShell
AWS Fault Injection Simulator now supports network connectivity disruption
AWS Nitro Enclaves is now supported on AWS Graviton
AWS Organizations console now allows users to centrally manage primary contact information on AWS accounts
AWS Private Certificate Authority introduces a mode for short-lived certificates
Announcing dark mode support in the AWS Management Console
EC2 High Memory instances with 18TiB and 24TiB of memory are now available with On-Demand and Savings Plan purchase options
How to take advantage of the AWS Free Tier
Goldman Sachs, a legacy financial services firm, transforms its operations on AWS
Reduce food waste to improve sustainability and financial results in retail with Amazon Forecast
Cost Optimization recommendations for AWS Config
Optimize your Amazon EC2 instances cost at scale by migrating from Intel to AMD using AWS Systems Manager Automation
Tune in for a replay of The Six Five Summit's #Cloud #Infrastructure Spotlight Keynote with Dave Brown, VP, Amazon EC2, AWS. AWS is constantly innovating on behalf of its customers. With more than 500 instances, Amazon EC2 has the broadest and deepest portfolio of instances in the cloud to run virtually every workload. This portfolio includes instances that are powered by Intel, AMD, and NVIDIA, as well as AWS-designed custom chips. To further increase performance, drive down cost, and accelerate innovation, AWS has invested in its own custom silicon. When it comes to silicon innovation, AWS has a long and proven history, including the Nitro System, Graviton processors, and Inferentia and Trainium chips for machine learning. In this session, Pat Moorhead and Dave will dive deep into the latest offering, the AWS Graviton3 processors that enable the best price-performance for compute-intensive workloads in Amazon EC2. The Six Five Summit is a 100% virtual, on-demand event designed to help you stay on top of the latest developments and trends in digital transformation brought to you by Futurum Research and Moor Insights & Strategy. With 12 tracks and over 70 pre-recorded video sessions, The Six Five Summit showcases an exciting lineup of leading technology experts whose insights will help prepare you for what's now and what's next in digital transformation as you continue to scale and pivot for the future. You will hear cutting-edge insights on business agility, technology-powered transformation, and thoughts on strategies to ensure business continuity and resilience, along with what's ahead for the future of the workplace. More about The Six Five Summit: https://thesixfivesummit.com/
On The Cloud Pod this week, the team chats cloud region wars to establish the true victor. Plus: AWS Storage Day offers a blockhead badge, all the fun of the Microsoft Dev Box, and Google sends people back to sleep with its Cloud Monitoring snooze alert policy. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
On The Cloud Pod this week, the team discusses why Ryan's yelling all day (hint: he's learning). Plus: Peter misses the all-important cloud earnings, AWS Skill Builder subscriptions are now available, and Google Eventarc connects SaaS platforms. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
Cloud Posse holds public "Office Hours" every Wednesday at 11:30am PST to answer questions on all things related to DevOps, Terraform, Kubernetes, and CI/CD. Basically, it's like an interactive "Lunch & Learn" session where we get together for about an hour and talk shop. These are totally free and just an opportunity to ask us (or our community of experts) any questions you may have.
You can register here: https://cloudposse.com/office-hours
Join the conversation: https://slack.cloudposse.com/
Find out how we can help your company:
https://cloudposse.com/quiz
https://cloudposse.com/accelerate/
Learn more about Cloud Posse:
https://cloudposse.com
https://github.com/cloudposse
https://sweetops.com/
https://newsletter.cloudposse.com
https://podcast.cloudposse.com/
[00:00:00] Intro
[00:01:12] GitHub Projects is now generally available
https://github.blog/2022-07-27-planning-next-to-your-code-github-projects-is-now-generally-available/
[00:03:45] Developers can now run GitHub Action Runners on their own Mac (M1)
https://github.blog/changelog/2022-08-09-github-actions-self-hosted-runners-now-support-apple-m1-hardware
[00:06:05] Checkov adds policies for GitHub Actions, GitLab Runners, CircleCI, and Argo
https://bridgecrew.io/blog/checkov-enables-ci-cd-security-with-new-supply-chain-security-policies/
[00:10:44] Slack Static Site Generator
https://saveslack.com/
[00:13:14] Stop Using CPU Limits on Kubernetes
https://home.robusta.dev/blog/stop-using-cpu-limits/
[00:14:38] CDK for Terraform Is Now Generally Available (Reddit)
https://www.hashicorp.com/blog/cdk-for-terraform-now-generally-available
[00:23:53] Single API to take crash-consistent snapshots of a subset of EBS volumes attached to an Amazon EC2 instance (see the sketch after these notes)
https://aws.amazon.com/about-aws/whats-new/2022/08/amazon-ebs-crash-consistent-snapshots-subset-ebs-volumes-attached-amazon-ec2-instance/
[00:25:42] Designing the VPCs for my org and I'm looking for insights. What would you do differently if you had this luxury? @Isaac
[00:43:49] How big of a DevOps team and how many apps to support? @Andrew
[00:56:25] Outro
#officehours, #cloudposse, #sweetops, #devops, #sre, #terraform, #kubernetes, #aws
Support the show
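On the crash-consistent snapshot item at [00:23:53] above: the EC2 API in question is CreateSnapshots, which captures the EBS volumes attached to one instance at the same point in time. A minimal boto3 sketch with a placeholder instance ID; the ExcludeBootVolume flag (and, per the announcement, an exclusion list for specific data volumes) is what lets you snapshot a subset:

    import boto3

    ec2 = boto3.client("ec2")

    # One call snapshots every attached data volume at the same instant,
    # yielding a crash-consistent set without pausing the instance.
    response = ec2.create_snapshots(
        InstanceSpecification={
            "InstanceId": "i-0123456789abcdef0",  # placeholder instance ID
            "ExcludeBootVolume": True,            # data volumes only
        },
        Description="Crash-consistent snapshot set",
        CopyTagsFromSource="volume",
    )
    for snap in response["Snapshots"]:
        print(snap["SnapshotId"], snap["VolumeId"])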
Curiosity, Focus, and Forging a Path. In this episode of The Outspoken Podcast, host Shana Cosgrove talks to Gerard Spivey, Senior Systems Development Engineer at Amazon Web Services. Gerard speaks in detail about Amazon's interview process, giving us insight into their procedures and how he prepared himself. We also hear about Gerard's time at Amazon and the types of work he's taking on. Side hustles are a way of life for Gerard, and he speaks about his latest experiences managing his YouTube channel, Gerard's Curious Tech. Lastly, Gerard talks about his time at NYLA and how he was able to bring his full self to work thanks to NYLA's culture.
QUOTES
“I can do slow and steady, I can find my target audience, and then once I have that I can figure out what I want to parlay that into later.” - Gerard Spivey [25:59]
“‘I'm a Senior Director [at Intel], and I can do what I want' is basically what he told me. He's like ‘the company has a 3.0 thing, but for someone like you who actually knows what they're talking about it's not a problem.' So I said, ‘Ooh this is my time, they're letting me in'” - Gerard Spivey [42:07]
“You're in a good spot in your career when you're valued for the thing you're going to do next versus the thing you did previously. What you're going to do next is your competitive value - that is what you bring to the table.” - Gerard Spivey [48:27]
TIMESTAMPS
[00:04] Intro
[01:31] Gerard's Wedding Ceremony
[02:32] Working at Amazon Web Services (AWS)
[05:33] Amazon's Interview Process
[12:06] Gerard's Experience with the Job Market
[15:54] Working at Amazon
[19:11] Starting a New Job During COVID
[19:43] Side Hustles
[23:21] Gerard's YouTube Channel
[31:08] Gerard's Childhood
[31:52] How Gerard Decided to Study Electrical Engineering
[34:19] Choosing a College
[45:13] Gerard's Advice to his Younger Self
[47:42] Favorite Books
[50:57] Gerard's Time at NYLA
[55:36] Outro
RESOURCES
Amazon EC2: https://aws.amazon.com/ec2/
Amazon EC2 Instance Types: https://aws.amazon.com/ec2/instance-types/
Amazon DynamoDB: https://aws.amazon.com/dynamodb/
Site Reliability Engineering (SRE): https://sre.google/
Commercial Cloud Services (C2S): https://www.c2stechs.com/
STAR Interview Response Method: https://www.thebalancecareers.com/what-is-the-star-interview-response-technique-2061629
Microsoft Exchange: https://www.microsoft.com/en-us/microsoft-365/exchange/email
Microsoft Azure: https://azure.microsoft.com/en-us/
CI/CD: https://www.synopsys.com/glossary/what-is-cicd.html
Management Leadership for Tomorrow (MLT): https://mlt.org/
Harvard Business School: https://www.hbs.edu/
Andreessen Horowitz: https://a16z.com/
YouTube: https://www.youtube.com/
NSBE Pre-College Initiative Program: https://www.nsbe.org/K-12/Programs/PCI-Programs
Johns Hopkins University: https://www.jhu.edu/
Accreditation Board for Engineering and Technology (ABET): https://www.abet.org/
North Carolina A&T State University: https://www.ncat.edu/
Morgan State University: https://www.morgan.edu/
Howard University: https://howard.edu/
Rochester Institute of Technology: https://www.rit.edu/
Penn State University: https://www.psu.edu/
Digital Systems: https://www.digitaltechnologieshub.edu.au/teach-and-assess/classroom-resources/topics/digital-systems/
Field Programmable Gate Arrays (FPGAs): https://www.xilinx.com/products/silicon-devices/fpga/what-is-an-fpga.html
The George Washington University: https://www.gwu.edu/
Intel: https://www.intel.com/content/www/us/en/homepage.html
PCI Express: https://www.pcmag.com/encyclopedia/term/pci-express
Serial ATA (SATA): https://www.intel.com/content/www/us/en/io/serial-ata/serial-ata-developer.html
Consortium of Universities of the Washington Metropolitan Area: https://consortium.org/
Zero to One by Peter Thiel and Blake Masters: https://www.amazon.com/Zero-One-Notes-Startups-Future/dp/0804139296
https://www.richdad.com/...
On The Cloud Pod this week, the team talks tactics for infiltrating the new Google Cloud center in Ohio. Plus: AWS goes sci-fi with the new Graviton3 processors, the new GKE cost estimator calculates the value of your soul, and Microsoft builds the metaverse. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
On The Cloud Pod this week, the team struggles with scheduling to get everyone in the same room for just one week. Plus, Microsoft increases pay for talent retention while changing licensing for European Cloud Providers, Google Cloud introduces AlloyDB for PostgreSQL, and AWS announces EC2 support for NitroTPM. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
About JamesJames has been part of AWS for over 15 years. During that time he's led software engineering for Amazon EC2 and more recently leads the AWS Commerce Platform group that runs some of the largest systems in the world, handling volumes of data and request rates that would make your eyes water. And AWS customers trust us to be right all the time so there's no room for error.Links Referenced:Email: jamesg@amazon.comTranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Vultr. Optimized cloud compute plans have landed at Vultr to deliver lightning-fast processing power, courtesy of third-gen AMD EPYC processors without the IO or hardware limitations of a traditional multi-tenant cloud server. Starting at just 28 bucks a month, users can deploy general-purpose, CPU, memory, or storage optimized cloud instances in more than 20 locations across five continents. Without looking, I know that once again, Antarctica has gotten the short end of the stick. Launch your Vultr optimized compute instance in 60 seconds or less on your choice of included operating systems, or bring your own. It's time to ditch convoluted and unpredictable giant tech company billing practices and say goodbye to noisy neighbors and egregious egress forever. Vultr delivers the power of the cloud with none of the bloat. “Screaming in the Cloud” listeners can try Vultr for free today with a $150 in credit when they visit getvultr.com/screaming. That's G-E-T-V-U-L-T-R dot com slash screaming. My thanks to them for sponsoring this ridiculous podcast.Corey: Finding skilled DevOps engineers is a pain in the neck! And if you need to deploy a secure and compliant application to AWS, forgettaboutit! But that's where DuploCloud can help. Their comprehensive no-code/low-code software platform guarantees a secure and compliant infrastructure in as little as two weeks, while automating the full DevSecOps lifestyle. Get started with DevOps-as-a-Service from DuploCloud so that your cloud configurations are done right the first time. Tell them I sent you and your first two months are free. To learn more visit: snark.cloud/duplo. Thats's snark.cloud/D-U-P-L-O-C-L-O-U-D. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. And I've been angling to get someone from a particular department at AWS on this show for nearly its entire run. If you were to find yourself in an Amazon building and wander through the various dungeons and boiler rooms and subterranean basements—I presume; I haven't seen nearly as many of you inside of those buildings as people might think—you pass interesting departments labeled things like ‘Spline Reticulation,' or whatnot. And then you come to a very particular group called Commerce Platform.Now, I'm not generally one to tell other people's stories for them. My guest today is James Greenfield, the VP of Commerce Platform at AWS. James, thank you for joining me and suffering the slings and arrows I will no doubt be hurling at you.James: Thanks for having me. 
I'm looking forward to it.

Corey: So, let's start at the very beginning—because I guarantee you, you're going to do a better job of giving the chapter-and-verse answer than I would from a background mired deeply in snark—what is Commerce Platform? It sounds almost like it's the retail website that sells socks, books, and underpants.

James: So, Commerce Platform actually spans a bunch of different things. And so, I'm going to try not to bore you with a laundry list of all of the things that we do—it's a much longer list than most people assume, even internal to AWS—but at its core, Commerce Platform owns all of the infrastructure and processes and software that takes the fact that you've been running an EC2 instance, or you're storing an object in S3 for some period of time, and turns it into a number at the end of the month—that is, what you owe for that service—and then proceeds to try to give you as many ways to pay us as easily as possible. There are a few other bits in there that are maybe less obvious. One is we're also responsible for protecting the platform and our customers from fraudulent activity. And then we're also responsible for helping collect all of the data that we need for internal reporting to support some of the back-end services that a business needs to do things like revenue recognition and general financial reporting.

Corey: One of the interesting aspects about the billing system is just how deeply it permeates everything that happens within AWS. I frequently say that when it comes to cloud, cost and architecture are foundationally and fundamentally the same exact thing. If your entire service goes down, a few interesting things happen. One, I don't believe a single customer is going to complain other than maybe a few accountants here and there because the books aren't reconciling, but also you've removed a whole bunch of constraints around why things are the way that they are. Like, what is the most efficient way to run this workload? Well, if all the computers suddenly become free, I don't really care about efficiency so much as, "Oh, hey. There's a fly, what do I have as a flyswatter? That's right, I'm going to drop a building on it." And those constraints breed almost everything. I've said, for example, that S3 has infinite storage because it does. They can add drives faster than we're able to fill them—at least historically; they added some more replication services—but they're going to be able to buy hard drives faster than the rest of us are going to be able to stretch our budgets. If that constraint of the budget falls away, all bets are really off, and more or less, we're talking about the destruction of the cloud as a viable business entity. No pressure or anything.

James: [laugh].

Corey: You're also a recent transplant into AWS billing as a whole, Commerce Platform in general. You spent 15 years at the company, the vast majority of that over in EC2. So, either you've been exiled to a basically digital Siberia, or it was one of those, "Okay, keeping all the EC2 servers up, this is easy. I don't see what people stress about." And they say, "Oh, ho ho, try this instead." How did you find yourself migrating over to the Commerce Platform?

James: That's a question I've had a lot from folks that I've worked with. You're right, I spent the first 15 or so years of my career at AWS in EC2, responsible for various things over there.
And when the leadership role in Commerce Platform opened up, the timing was fortuitous; I was in the process of relocating my family—we moved to Vancouver in the middle of last year—and we had an opening in the role and started talking about, potentially, me stepping into that role.

The reason that I took it—there's a few reasons, but the primary reason is that if I look back over my career, I've kind of naturally gravitated towards owning things where people only really remember that they exist when they're not working. And for some reason, you know, I enjoy the opportunity to try to keep those kinds of services ticking over to the point where people don't notice them. And so, Commerce Platform lands squarely in that space. I've always been attracted to opportunities to have an impact, and it's hard to imagine having much more of an impact than in the Commerce Platform space. It underpins everything, as you said earlier. Every single one of our customers depends on the service, whether they think about it or realize it. Every single service that we offer to customers depends on us. And so, that really is the sort of nexus within AWS. And I'm a platform guy, I've always been a platform guy. I like the force-multiplier nature of platforms, and so Commerce Platform, you know, as I kind of thought through all of those elements, really was a great opportunity to step in. And I think there's something to be said for, I've been a customer of Commerce Platform internally for a long time. And so, a chance to cross over and be on the other side of that was something that I didn't want to pass up. And so, you know, I'm digging in, learning quickly, ramping up. By no means an expert; very dependent on a very smart, talented, committed group of people within the team. That's kind of the long and short of how and why.

Corey: Let's say that I am taking on the role of an AWS product team, for the sake of argument. I know, keep the cringe down for a second, as far as, oh God, the wince is just inevitable when the idea of me working there ever comes up to anyone. But I have an idea for a service—obviously, it runs containers, and maybe it does some other things as well—going from idea to six-pager to MVP to barely-better-than-MVP day-one launch, and at some point, various things happen to that service. It gets staffed with a team, objectives and a roadmap get built, a P&L and budget, and a pricing model and the rest. One of the last things that happens, apparently, is someone picks the worst name off of a list of candidates, slaps it on the product, and ships it off there. At what point does the billing system and figuring out the pricing dimensions for a given service tend to factor in? Is that a last-minute story? Is that almost from the beginning? Where along that journey does, "Oh, by the way, we're building this thing. Maybe we should figure out, I don't know, how to make money from it," factor into the conversation?

James: There are two parts to that answer. Pretty early on, as we're trying to define what that service is going to look like, we're already typically thinking about what are the dimensions we might charge along. The actual pricing discussions typically happen fairly late, but identifying those dimensions and, sort of, the right way to present it to customers happens pretty early on. The thing that doesn't happen early enough is actually pulling the Commerce Platform team in,
but it is something that we're going to work on this year to try to get a little bit more in front of.

Corey: Have you found historically that you have a pretty good idea of how a service is going to be priced, everything is mostly thought through, a service goes to either private preview or you're discussing a launch, and then more or less, I don't know, someone like me crops up with a, "Hey, yeah, let's disregard 90% of what the service does because I see a way to misuse the remaining 10% of it as a database." And you run some mental math and realize, "Huh. We're suddenly giving, like, eight petabytes of storage per customer away for free. Maybe we should guard against that because otherwise, it's rife with misuse." It used to be that I could find interesting ways to sneak through the cracks of various services—usually in pursuit of a laugh—but those are getting relatively hard to come by and invariably a lot more trouble than they're worth. Is that just better comprehensive diligence internally, is that learning from customers, or am I just bad at this?

James: No, I mean, what you're describing is almost a variant of the Defender's Dilemma. There are way more ways to abuse something than you can imagine, and so defending against that is pretty challenging. And it's important because, you know, if you turn the economics of something upside down, then it just becomes harder for us to offer it to customers who want to use it legitimately. I would say 90% of that improvement is us learning. We make plenty of mistakes, but I think, you know, one of the things that I've always been impressed by over my time here is how intentional we are about trying to learn from those mistakes. And so, I think that's what you're seeing there. And then we try very hard to listen to customers, talk to folks like you, because one of the best ways to tackle anything that smells of the Defender's Dilemma is to harness the collective creativity of a large number of smart people, because you really are trying to cover as much ground as possible.

Corey: There was a fun joke going around a while back of what is the most expensive environment you can get running on a free tier account before someone from AWS steps in, and I think I got it to something like half a billion dollars in the first month. Now, I haven't actually tested this for reasons that mostly have to do with being relatively poor compared to, you know, being able to buy Guam. And understand as well, the fraud protections built into something like AWS are largely built around defending against getting service usage for free that, in some way, shape, or form, benefits the attacker. The easy example of that would be mining cryptocurrency, which is just super economical as long as you use someone else's AWS account to do it. Whereas a lot of my vectors are, "Yeah, ignore all of that. How do I just make the bill artificially high? What can I do to misuse data transfer? And passing a single gigabyte through, how much can I make that per-gigabyte cost be?" And, "Oh, circular replication and the Lambda-invokes-itself pattern," and basically every bad architectural decision you can possibly make, only this time, it's intentional. And that shines some really interesting light on it.
And I have to give credit where due: a lot of that didn't come from just me sitting here being sick and twisted nearly so much as it did from having seen examples of that type of misconfiguration—by mistake—in a variety of customer accounts, most notably my own, because it turns out that the way I learn things is by screwing them up first.

James: Yeah, you've touched on a couple of different things in there. So, you know, maybe the first one is, I typically try to draw a line between fraud and abuse. And fraud is essentially trying to spend somebody else's money to get something for free. And we spend a lot of time trying to shut that down, and we're getting really good at catching it. And then abuse is either intentional or unintentional. There's intentional abuse: you find a chink in our armor and you try to take advantage of it. But much more common is unintentional abuse. It's not really abuse, you know—abuse has very negative connotations—but it's unintentionally setting something up so that you run up a much larger bill than you intended. And we have a number of different internal efforts, and we're working on a bunch more this year, to try to catch those early on, because one of my personal goals is to minimize the frequency with which we surprise customers. And the least favorite kind of surprise for customers is a [laugh] large bill. And so, what you're talking about there is, in a sufficiently complex system, there's always going to be weaknesses and ways to get yourself tied up in knots. We're trying, both at the service team level and within my teams, to find ways to make it as hard as possible to accidentally do that to yourself, and to catch it when you do so that we can stop it. And even more on the intentional abuse side of things, if somebody's found a way to do something that's problematic for our services, then, you know, that's pretty much on us. But we will often reach out and engage with whoever's doing it and try to understand what they're trying to do and why. Because often, somebody's trying to do something legitimate, they've got a problem to solve, they found a creative way to solve it, and it may put strain on the service because it's just not something we designed for, and so we'll try to work with them to use that to feed into either new services, or find a better place for that workload, or just bolster what they're using. And maybe that's something that eventually becomes a fully-fledged feature that we offer to customers. We're always open to learning from our customers. They have found far more creative ways to get really cool things done with our services than we've ever imagined. And that's true today.
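As a rough illustration of the kind of baseline check James is describing (not AWS's actual detection system), a spend monitor can flag a day that deviates wildly from a trailing window. The window size and threshold here are arbitrary assumptions:

```
from statistics import mean, stdev

def flag_spend_anomalies(daily_spend, window=14, sigma=4.0):
    """Flag days whose spend deviates wildly from the trailing baseline.

    A toy sketch of baseline-deviation detection; real systems use far
    richer signals (API mix, regions, instance types, and so on).
    """
    anomalies = []
    for i in range(window, len(daily_spend)):
        baseline = daily_spend[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        # Guard against a perfectly flat baseline (sd == 0) with a floor.
        threshold = mu + sigma * max(sd, 0.05 * mu, 1.0)
        if daily_spend[i] > threshold:
            anomalies.append((i, daily_spend[i], threshold))
    return anomalies

# A steady ~$100/day account that suddenly starts cryptomining:
spend = [100.0] * 20 + [950.0]
print(flag_spend_anomalies(spend))  # flags day 20
```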
Corey: I mean, most of my service criticisms come down to the fact that you have more or less built a very late-model, high-performing iPad, and I'm out there complaining about, "What a shitty hammer this thing is, it barely works at all, and then it breaks in my hand. What gives?" I would also challenge something you said a minute ago, that the worst day for some customers is to get a giant surprise bill, but [unintelligible 00:13:53] to that is, yeah, but, on some level, that's kind of only money; you do have levers on your side to fix those issues. A worse scenario is you have a customer that exhibits fraud-like behavior, they're suddenly using far more resources than they ever did before, so let's go ahead and turn them off or throttle them significantly, and you call them up to tell them you saved them some money, and, "Our Super Bowl ad ran. What exactly do you think you're doing?" Because they don't get a second bite at that kind of apple. So, there's a parallel on both sides of this. And those are just two examples. The world is full of nuances, and at the scale that you folks operate at, the one-in-a-million events happen multiple times a second, the corner cases become common cases, and I'm surprised—to be direct—how little I see you folks dropping the ball.

James: Credit to all of the teams. I think our secret sauce, if anything, really does come down to our people. Like, a huge amount of what you see as hopefully relatively consistent, good execution comes down to people behind the scenes making sure. You know, like, some of it is software that we built and made sure is robust and tested to scale, but there's always an element of people behind the scenes, when you hit those edge cases or something doesn't quite go the way that you planned, making sure that things run smoothly. And that, if anything, is something that I'm immensely proud of and is kind of amazing to watch from the inside.

Corey: And, on some level, it's the small errors that are the bigger concern than the big ones. Back a couple years ago, when they announced gp3 volumes at re:Invent—well, great, we'll spin up a test volume and kick the tires on it for an hour. And I think it was 80 or 100 gigs or whatnot, and the next day in the bill, it showed up as about $5,000. And it was, "Okay, that's not great. Not great at all." And it turned out that it was a mispricing error by, I think, a factor of a million. And okay, at least it stood out. But there are scenarios where we were prepared to pay it because, oops, you got one over on us. Good job. That's never been the mindset I've gotten about AWS's philosophy for pricing. The better example that I love, because no one took it seriously, was a few years before that, when there was a Lightsail bug in the billing system, and it made the papers because people suddenly found that for their Lightsail instance, they were getting predicted bills of $4 billion. And the way I see it, you really only had to make that work once and then you've made your numbers for the year, so why not? Someone's going to pay for it, probably. But that was such an out-of-this-world number that no one saw it and ever thought it was anything other than a bug. It's the small pernicious things that creep in. Because the billing system is vast; I had no idea when I started working with AWS bills just how complicated it really was.

James: Yeah, I remember both of those, and there's something in there that you touched on that I think is really important. That's something that I realized pretty early on at Amazon, and it's why customer obsession is our flagship leadership principle. It's not because it's love and butterflies and unicorns; customer obsession is key for us because that's how you build a long-term sustainable business: your customers depend on you. And it drives how we think about everything that we do. And in the billing space, small errors, even if they're small errors in the customer's favor, slowly erode that trust. So, we take any kind of error really seriously and we try to figure out how we can make sure it doesn't happen again. We don't always get that right. As you said, we've built an enormous, super-complex business that's growing really quickly, and really quick growth like that always acts as kind of a multiplier on top of complexity.
And on the pricing point, we're managing millions of pricing points at the moment. And with the tools that we use internally, there's always room for improvement. It's a huge area of focus for us. We're at the beginning of looking at applying things like formal methods to make sure that we can make very hard guarantees about the correctness of some of those. But at the end of the day, people are plugging numbers in, and you need as many belts and braces as possible to make sure that you don't make mistakes there.

Corey: One of the things that struck me by surprise when I first started getting deep into this space was the fact that the finalized bill was—what does it even mean to have this be 'finalized?' It can hit the Cost and Usage Report in an S3 bucket, and it can change retroactively after the month closes, periodically. And that's when I started to have an inkling of a few things: not just the sheer scale and complexity inherent to something like the billing system that touches everything, but the sheer data retention story, where you clearly have to be able to go back and reconstruct a bill from the raw data from years ago. And I know what the output of all of those things is, in the form of Cost and Usage Reports and the billing data from our client accounts—which is the single largest expense in all of our AWS accounts; we spend thousands and thousands and thousands of dollars a year just on storing all of that data, let alone the processing piece of it—and the sheer scale is staggering. I used to wonder, why does it take a day from my using something to its showing up in the bill? And the more I learned, the more it became a how can you do that in only a day?

James: Yes, the scale is actually mind-boggling. I'm pretty sure that the core of our billing system is—I'm reasonably confident it's the largest or one of the largest data processing systems on the planet. I remember pretty early on, when I joined Commerce Platform and was still starting to wrap my head around some of these things, Googling the definition of quadrillion, because we measure the number of metering events—which is how we record usage in services—on a daily basis in the quadrillions, which is a million billions. So, it's just an absolutely staggering number. And so, the scale here is just out of this world. That's saying something, because it's not like other services across AWS are small in their own right. But I'm still reasonably sure that, being one of a handful of services that is kind of at the nexus of AWS and deals with the aggregate of AWS's scale, this is probably one of the biggest systems on the planet. And that shows up in all sorts of places. You start with that input, just the sheer volume of metering events, but that has to produce as an output pretty fine-grained, line-item-detailed information, which ultimately rolls up into the total that a customer will see in their bill. But we have a number of different systems further down the pipeline that try to do things like analyze your usage, make sensible recommendations, look for opportunities to improve your efficiency, and give you the ability to slice and dice your data and allocate it out to different parts of your business in whatever way makes sense for your business. And so, those systems have to deal with anywhere from millions to billions to—recently, we were talking about trillions—of data points themselves.
And so, I was tangentially aware of some of the scale of this, but being in the thick of it, having joined the team, really does underscore just how vast these systems are.

Corey: I think it's, on some level, more than a little unfortunate that that story isn't being told more widely, more frequently. Because when Commerce Platform has job postings that are available on the website, you read them and they're very vague. They don't tend to give hard numbers about a lot of these things, and people who don't play in these waters can easily be forgiven for thinking the way that you folks do your job is you fire up one of those 24-terabytes-of-RAM instances—you know, those monstrous things that you folks offer—and what do you do next? Well, Microsoft Excel. We have a special high-memory version that we've done some horse-trading with our friends over at Microsoft for. It's, yeah, you're several steps beyond that at this point. It's a challenging problem that every one of your customers has to deal with, on some level, as well. But we're only dealing with the output of a lot of the processing that you folks are doing first.

James: You're exactly right. And a big focus for some of my teams is figuring out how to help customers deal with that output. Because even if you're talking about a couple of orders of magnitude of reduction, you're still talking about very large numbers there. So, to help customers make sense of that, we have a range of tools that exist and that we're investing in. There's another dimension of complexity in the space that I think is also very easy to miss. I think of it as arbitrary complexity. And it's arbitrary because some of the rules that we have to box within here are driven by legislative changes. As we operate in more and more countries around the world, we want to make sure that we're tax compliant and that we help our customers be tax compliant. Those rules evolve pretty rapidly, and Country A may sit next to Country B, but that doesn't mean that they're talking to one another. They've all got their own ideas. They're trying to accomplish—

Corey: A company is picking up and relocating from India to Germany. How do we—

James: Exactly.

Corey: —change that on the AWS side and the rest? And it's, "Hoo boy, have you considered burning it all down and filing an insurance claim to start over?" And, like, there's a lot of complexity buried underneath that that just doesn't rise to the notice of 99% of your customers.

James: And the fact that it doesn't rise to the notice is something that we strive for. Like, these shouldn't be things that customers have to worry about. Because it really is about clearing away the things that, as far as possible, you don't want to have to spend time thinking about so that you can focus on the thing that your business does that differentiates you. It's getting rid of that undifferentiated heavy lifting. And there's a ton of that in this space, and if you're blissfully unaware of it, then hopefully that means that we're doing our job.

Corey: What I'm, I think, the most surprised about—and I have been for a long time; and please don't take this as an insult to various other folks, engineers and the rest, not just in other parts of AWS but throughout the industry—but talking to the people who work within Commerce Platform has always been just a fantastic experience.
The caliber of people that you have managed to attract and largely retain—we don't own people; they do matriculate out eventually—but the caliber of people that you've retained on your teams has just been out of this world. And at first, I wondered, why are these awesome people working on something as boring and prosaic as billing? And then I started learning a little bit more as I went, and, "Oh, wow. How did they learn all the stuff that they have to hold in their head, in tension, at once to be able to build things like this?" It's incredibly inspiring just watching the caliber of the people that you've been able to bring in.

James: I've been really, really excited joining this team, as I've gotten to know other folks on the team, because there are some super-smart people here. But what's really jumped out at me is how committed the team is. This is, for the most part, a team that has been in the space for many years. Many of them have—we talk about boomerangs, folks who leave AWS, go spend some time somewhere else, and come back—and there's a surprisingly high proportion of folks in Commerce Platform who have spent time somewhere else and then come back because they enjoy the space, they find it challenging; folks are attracted to the ability to have an impact because it is so foundational. But yeah, there's a super-committed core to this team. And I really enjoy working with teams where you've got that, because then you really can take the long view and build something great. And I think we have tons of opportunities to do that here.

Corey: It sounds ridiculous, but I've reached out to team members before to explain two-cent variances in my bill, and never once have I been confronted with a, "It's two cents. What do you care?" They understand the requirement that these things be accurate, not just, "Eh, take our word for it." And also, frankly, they understand that two cents on a $20 bill looks a little different on a $20 million bill. So yeah, let us figure out if this is systemic or something I have managed to break. It turns out the Cost and Usage Report processing systems don't love it when there's a cost allocation tag whose name contains an emoji. Who knew? It's the little things in life that just have this fun way of breaking when you least expect it.

James: They're also surprisingly interesting problems. So like, it turns out something as simple as rounding numbers consistently across a distributed system at this scale is a non-trivial problem. And if you don't, then you get small seventh- or eighth-decimal-place differences that add up to something that then shows up as a two-cent difference somewhere. And so, there are some really, really interesting problems in the space. And I think the team often takes these kinds of things as a personal challenge. It should be correct, and it's not, so we should go make sure it is correct. The interesting problems abound here, but at the end of the day, it's the kind of thing that any engineering team wants to go and make sure is correct, because they know that it can be.
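James's rounding point is easy to demonstrate. The following is a toy sketch, not AWS's billing code: accumulating a tiny per-request rate in binary floating point drifts in the low-order digits, while exact decimal arithmetic rounded once, at an agreed step with an agreed mode, is reproducible on every node that follows the contract. The rate and request count are hypothetical:

```
from decimal import Decimal, ROUND_HALF_EVEN

# Hypothetical rate: $0.0000000042 per request (not a real AWS price).
RATE = "0.0000000042"
REQUESTS = 10_000_000

# Naive float accumulation: every addition rounds in binary, so the
# low-order digits drift, and different summation orders drift differently.
float_total = 0.0
for _ in range(REQUESTS):
    float_total += float(RATE)

# Exact decimal arithmetic, quantized once with an explicit rounding mode.
exact = Decimal(RATE) * REQUESTS
cents = exact.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

print(f"{float_total:.17f}")  # not exactly 0.042: low-order digits drift
print(exact)                  # 0.0420000000, exactly
print(cents)                  # 0.04
```

The contract matters more than the library: if every component quantizes at the same step with the same rounding mode, totals reconcile to the cent.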
Corey: This episode is sponsored in part by our friends at EnterpriseDB. EnterpriseDB has been powering enterprise applications with PostgreSQL for 15 years. And now EnterpriseDB has you covered wherever you deploy PostgreSQL: on premises, private cloud, and they just announced a fully managed service on AWS and Azure called BigAnimal, all one word. Don't leave managing your database to your cloud vendor, because they're too busy launching another half dozen managed databases to focus on any one of them that they didn't build themselves. Instead, work with the experts over at EnterpriseDB. They can save you time and money; they can even help you migrate legacy applications, including Oracle, to the cloud. To learn more, try BigAnimal for free. Go to biganimal.com/snark, and tell them Corey sent you.

Corey: On the one hand, I love people who just round and estimate—we all do that, let's be clear; I sit there and back-of-the-envelope everything first. But then I look at some of your pricing pages and I count the digits after the zeros. Like, you're talking about trillionths of a dollar on some of your pricing points. And you add it up in the course of a given hour and it's like, oh, it's $250 a month, most months. And you work backwards to way more decimal places of precision than is required, sometimes. I'm also a personal fan of the bill that counts, for example, the number of Route 53 zones. Great. And it counts them to four decimal places of precision. Like, I don't even know what half of a Route 53 zone is at this point, let alone what the 1,000th of a zone causing this is. It's all an artifact of what the underlying systems are. Can you, by any chance, shed a little light on what the evolution of those systems has been over time? I have to imagine that anything you built in the early days—16 years ago or so from the time of this recording, when S3 launched to general availability—you probably didn't have to worry about the scope and scale of what you do now. In fact, I suspect if you tried to funnel this volume through S3 back then, the whole thing would have collapsed under its own weight. What's evolved over the time that you've had the billing system there? Because changes come slowly to your environment. And frankly, I appreciate that as a customer. I don't like surprising people in finance.

James: Yeah, you're totally right. So, I joined the EC2 team as an engineer myself, some 16 years ago, and the very first thing that I did was our billing integration. And so, my relationship with the Commerce Platform organization—what was the billing team way back when—goes back over my entire career at AWS. And at the time, the billing team was similar, you know, [unintelligible 00:28:34] eight people. And that was everything. There was none of this scale and complexity; it was all one system. And much like many of our biggest, oldest services—EC2 is very similar, S3 is as well—there's been significant growth over the last decade and a half. A lot of that growth has been rapid, and rapid growth presents its own challenges. And you live with decisions that you make early on that you didn't realize were significant decisions, which have pretty deep implications 15 years later. We're still working through some of those; they present their own challenges. Evolving an existing system to keep up with the growth of the business and a customer base that's as varied and complex as ours is always challenging. And also harder—but I also think more fun—than a clean-sheet redo at this point. Like, that's a great thought exercise: well, if we got to do this again today, what would we do now that we've learned so much over the last 15 years?
But there's this—I find it a personally fascinating—challenge with evolving a live system, where it's like, "No, no, things exist, so how do we go from there to where we want to be next?"

Corey: Turn the billing system off for 18 months, rebuild—

James: Yeah. [laugh].

Corey: —the whole thing from first principles. Light it up. I'm sure you'd have a much better billing system, and also not a company left anymore.

James: [laugh]. Exactly, exactly. I've always enjoyed that challenge. You know, even prior to AWS, my previous careers have involved similar kinds of constraints, where you've got a live system, or you've got an existing—in one case, it was an existing SDK that was deployed to tens of thousands of customers around the world, and so backwards compatibility was something that I spent the first five years of my career thinking about in way more detail than I think most people do. And it's a very similar mindset. And I enjoy that challenge. I enjoy that: how do I evolve from here to there without breaking customers along the way? And that's something that we take pretty seriously across AWS. I think SimpleDB is the poster child for we-never-turn-things-off. But that applies equally to the services that are maybe less visible to customers, and billing is definitely one of them. Like, we don't get to switch stuff off. We don't get to throw things away and start again. It's this constant state of evolution.

Corey: So, let's say that I were to find a way to route data through a series of two Managed NAT Gateways and then egress to the internet, and the sheer density of the expense of that traffic tears a hole in the fabric of space-time, it goes back 15 years, and you can make a single change to how the billing system was built. What would it be? What pisses you off the most about the current constraints that you have to work within or around?

James: I think one of the biggest challenges we've got, actually, is the concept of an account. Because an account means half a dozen different things. And way back, when it seemed like a great idea, you just needed an account; an account was your customer, and it was the same thing as the boundary that you put all your resources inside. And of course, it's the same thing that you're going to roll all of your usage up under and issue a bill against. And that has been one of the areas that's seen the most evolution and probably still has a pretty long way to go. And what's interesting about that is, that's probably something we could have seen coming, because we watched the retail business go through kind of the same evolution. They started with, well, a customer is a customer is a customer, and had to evolve to support the concept of sellers and partners. And then users are different than customers, and you want to log in, and that's a different thing. So, we saw that kind of bifurcation of a single entity into a wide range of different, related-but-separate entities, and I think if we'd looked at that, you know, thought it out 15 years ahead, then yeah, we could probably have learned something from it. But at the same time, when AWS first kicked off, we had wild ambitions for it, but there was no guarantee that it was going to be the monster that it is today. So, I'm always a little bit reluctant to—like, it's a great thought exercise, but it's easy to end up second-guessing a pretty successful 15 years, so I'm always a little bit careful to walk that line.
But I think account is one of the things that we would probably go back and think about a little bit more.

Corey: I want to be very clear with this next question that it is intentionally setting up a question I suspect you get a lot. It does not mirror my own thinking on the matter even slightly, but I get a version of it myself all the time. "AWS bills? That sounds boring as hell. Why would you choose to work on such a thing?" Now, I have a laundry list of answers that aren't nearly as interesting as I suspect yours are going to be. What makes working on this problem space interesting to you?

James: There's a bunch of different things. So, first and foremost, the scale that we're talking about here is absolutely mind-blowing. And for any engineer who wants to get stuck into problems that deal with mind-blowingly large volumes of data, incredibly rich dimensions, problems where, honestly, applying techniques like statistical reasoning or machine learning is really the only way to chip away at them—that exists in spades in this space. It's not always immediately obvious, and I think from the outside, it's easy to assume this is actually pretty simple. So, the scale is a huge part of that.

Corey: "Oh, petabytes. How quaint."

James: [laugh]. Exactly. Exactly. I mean, it's mind-blowing every time I see some of the numbers in various parts of the Commerce Platform space. I talked about quadrillions earlier. Trillions is a pretty common unit of measure. The complexity that I talked about earlier, the kind that's a result of external environments, is another one. So, complexity imposed by external entities, whether it's a government or a tax authority somewhere, or a business requirement from customers, or ourselves. I enjoy those as well. Those are a different kind of challenge. They really keep you on your toes. I enjoy thinking of them as an engineering problem, like, how do I get in front of them? And that's something we spend a lot of time doing in Commerce Platform. And when we get it right, customers are just unaware of it. And then the third one is, I personally am always attracted to the opportunity to have an impact. And this is a space where we get to, hopefully, positively impact every single customer every day. And that, to me, is pretty fulfilling. Those are the three standout reasons why I think this is actually a super-exciting space. And I think it's often an underestimated space. Once folks join the team and start to dig in, I've never heard anybody, after they've joined, telling me that what they're doing is boring. Challenging, yes. Frustrating, sometimes. Hard, absolutely. But boring never comes up.

Corey: There's almost no service, other than IAM, that I can think of that impacts every customer simultaneously. And it's easy for me to sit in the cheap seats and say, "Oh, you should change this," or, "You should change that." But every change you have is so massive in scale that it's going to break a whole bunch of companies' automations around bill processing in different ways. You have an entire category of user persona who is used to clicking a certain button in a certain place in the console to generate the report every month, and if that button moves or changes color, or has a different font, suddenly that renders their documentation invalid, and they're scrambling, because it's not their core competency—nor should it be—and every change you make is so constrained, just based upon all the different concerns that you've got to be juggling.
How do you get anything done at all? I find that to be one of the most impressive aspects of your organization, bar none.

James: Yeah, I'm not going to lie and say that it isn't a challenge, but a lot of it comes down to the talent that we have on the team. We have a super-motivated, super-smart, super-engaged team, and we spend a lot of time figuring out how to make sure that we can keep moving, keep up with the business, keep up with a world that's getting more complicated [laugh] with every passing day. So, you've kind of hit on one of the core challenges there, which is, how do we keep up with all of those different dimensions that are demanding an increasing amount of engineering and new support and new investment from us, while we keep those customers happy? And I think you touched on something else a little bit indirectly there, which is, a lot of our customers are actually pretty technical across AWS. The customers that Commerce Platform supports are often the least technical of our customers, and so they often need the most help understanding why things are the way they are, and where the constraints are.

Corey: "A big bill from Amazon? How many books did you people buy last month?"—

James: [laugh]. Exactly.

Corey: —is still very much the level of understanding in some cases. And it's not because they're dumb; far from it. It's just, imagine that some people view there as being more to life than understanding the nuances and intricacies of cloud computing. How dare they?

James: Exactly. Who would have thought?

Corey: So, as you look now over all of your domain, such as it is, what sucks the most? What are you looking to fix, as far as impactful changes that the rest of the world might experience? Because I'm not going to accept one of those answers like, "Oh, yeah, on the back-end, we have this storage subsystem for a tertiary thing that just annoys me because it wakes us up once in a whi—" no, no, I want something customer-facing. What's the painful thing you're looking at fixing next?

James: I don't like surprising customers. And free tier is, sort of, one of those buckets of surprises, but there are others. Another one that's pretty squarely in my sights is, whether we like it or not, customer accounts get compromised. Usually, it's a password that got reused somewhere or was accidentally committed into a GitHub repository somewhere. And we have pretty established, pretty effective mechanisms for finding all of those: we'll scan for passwords and credentials, alert customers to them, and help them correct that pretty quickly. We're also actually pretty good at detecting when an account does start to do something that suggests it's been compromised. Usually, the first thing that a compromised account starts to do is cryptocurrency mining. We're pretty quick to catch those; we catch those within a matter of hours—much faster, most days. What we haven't really cracked, and where I'm focused at the moment, is getting back to the customer in a way that's effective. And by that I mean specifically: we detect an account compromise super quickly, we reach out automatically, and so, you know, the customer has got some kind of contact from us usually within a couple of hours. It's not having the effect that we need it to. Customers are still being surprised a month later by a large bill. And so, we're digging into how much of that is because they never saw the contact, or they didn't know what to do with the contact.

Corey: It got buried with all the other, "Hey, we saw you spun up an S3 bucket.
Have you heard of what S3 is?" Again, that's all valuable, but you have 300-some-odd services. If you start doing that for every service, you're going to hit mail-sending limits for Gmail.

James: Exactly. It's not just enough that we detect those and notify customers; we have to reduce the size of the surprise. It's one thing to spend 100 bucks a month on average and then suddenly find that your spend has jumped to $250 because you reused a password somewhere, somebody got ahold of it, and they're cryptocurrency mining in your account. It's a whole different ballgame to spend 100 bucks a month and then, at the end of the month, discover that your bill is suddenly $2,000 or $20,000. And so, that's something that I really want to make some progress on this year.

Corey: I've really enjoyed our conversation. If people want to learn more about how you view these things, how you're approaching some of these problems, or potentially are just the right kind of warped to consider joining up, where's the best place for them to go?

James: They should drop me an email at jamesg@amazon.com. That is the most direct way to get hold of me, and I promise I will get back to you. I try to stay on top of my email as much as possible. But that will come straight to me, and I'm always happy to talk to folks about the space, talk to folks about opportunities on this team, opportunities across AWS, or just hear what's not working and make sure that it's something we're aware of and looking at.

Corey: Throughout Amazon, but particularly within Commerce Platform, I've always appreciated the response whenever I report something, no matter how ridiculous it is—and I assure you, there's an awful lot of ridiculousness in my bug reports—the response has always been the same: "Tell me more. Help me understand what it is you're trying to achieve—even if it is ridiculous—so we can look at this and see what is actually going on." Every Amazonian team tends to be great about that—or you're not at Amazon very long—but you folks have taken it to an otherworldly level. I just want to thank you for doing that.

James: I appreciate you calling that out. We try, you know, we really do. We take listening to our customers very seriously because, at the end of the day, that's what makes us better, and that's how we make sure we're in it for the long haul.

Corey: Thanks once again for being so generous with your time. I really appreciate it.

James: Yeah, thanks for having me on. I've enjoyed it.

Corey: James Greenfield, VP of Commerce Platform at AWS. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment—possibly on YouTube as well—about how you aren't actually giving this five stars at all; you have taken three trillionths of a star off of the rating.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
On The Cloud Pod this week, Peter's been suspended without pay for two weeks for not filing his vacation requests in triplicate. Plus it's earnings season once again, there's a major Google and SWIFT collaboration afoot, and MSK Serverless is now generally available, making Kafka management fairly hassle-free. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
On The Cloud Pod this week and with half the team gone fishin', Justin and Peter hash it out short and sweet. Plus Google Cloud SQL Insights, Atlassian suffers an outage, and AWS finally offers accessible Lambda Function URLs. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
Google BigLake takes the feature of the week with the ability to federate data from multiple data lakes. On The Cloud Pod this week, the team discusses the most expensive way to run a VM (Oracle wins). Plus some exciting developments, an AWS OpenSearch 1.2 update with several new features, and Azure's having a party, so bring your own IP addresses (BYOIP). A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
Tonight on GeekNights, we revisit the topic of hosting we last covered in 2006. From simple web hosting to SaaS (Software as a Service), PaaS (Platform as a Service), and even IaaS (Infrastructure as a Service), a lot has changed. Amazon EC2 didn't even exist when we did that first episode. In the news, Vimeo moves further into B2B, Twitch pauses its "porn on the front page" feature, and Amazon is reportedly censoring interesting words like "union" in its new internal app.
Support Mobycast
https://glow.fm/mobycast

Show Details
In this episode, we cover the following topics:

- Container networking
  - ECS networking mode
    - Configures the Docker networking mode to use for the containers in the task
    - Specified as part of the task definition
    - Valid values:
      - none: containers do not have external connectivity, and port mappings can't be specified in the container definition
      - bridge: utilizes Docker's built-in virtual network, which runs inside each container instance
        - Containers on an instance are connected to each other using the docker0 bridge
        - Containers use this bridge to communicate with endpoints outside of the instance, using the primary ENI of the instance they are running on
        - Containers share the networking properties of the primary ENI, including the firewall rules and IP addressing
        - Containers are addressed by the combination of the IP address of the primary ENI and the host port to which they are mapped
        - Cons:
          - You cannot address these containers with the IP address allocated by Docker (it comes from a pool of locally scoped addresses)
          - You cannot enforce finely grained network ACLs and firewall rules
      - host: bypasses Docker's built-in virtual network and maps container ports directly to the EC2 instance's NIC
        - You can't run multiple instantiations of the same task on a single container instance when port mappings are used
      - awsvpc: each task is allocated its own ENI and IP address (a registration sketch follows this list)
        - Multiple applications (including multiple copies of the same app) can run on the same port number without conflict
        - You must specify a NetworkConfiguration when you create a service or run a task with the task definition
    - The default networking mode is bridge
    - The host and awsvpc network modes offer the highest networking performance
      - They use the Amazon EC2 network stack instead of the virtualized network stack provided by bridge mode
      - They cannot take advantage of dynamic host port mappings
      - Exposed container ports are mapped directly:
        - host: to the corresponding host port
        - awsvpc: to the attached elastic network interface port
- Task networking (aka awsvpc mode networking)
  - Benefits
    - Each task has its own attached ENI, with a primary private IP address and internal DNS hostname
    - Simplifies container networking: no host port is specified; the container port is what is used by the task ENI (container ports must be unique in a single task definition)
    - Gives more control over how tasks communicate
      - With other tasks: containers share a network namespace and communicate with each other over the localhost interface (e.g., curl 127.0.0.1:8080)
      - With other services in the VPC
      - Note: containers that belong to the same task can communicate over the localhost interface
    - Take advantage of VPC Flow Logs
    - Better security through the use of security groups: you can assign different security groups to each task, which gives you more fine-grained security
  - Limitations
    - The number of ENIs that can be attached to an EC2 instance is fairly small. E.g., a c5.large may have up to 3 ENIs attached to it: 1 primary, and 2 for task networking. Therefore, you can only host 2 tasks using awsvpc mode networking on a c5.large.
    - However, you can increase ENI density using "VPC trunking"
- VPC trunking
  - Allows for overcoming ENI density limits by multiplexing data over a shared communication link
  - How it works: two ENIs are attached to the instance, a primary ENI and a trunk ENI (note that enabling trunking consumes an additional IP address per instance)
  - Your account, IAM user, or role must opt in to the awsvpcTrunking account setting
  - Benefits: up to 5x-17x more ENIs per instance. E.g., with trunking, a c5.large goes from 3 to 12 ENIs: 1 primary, 1 trunk, and 10 for task networking.
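To make the awsvpc parameters above concrete, here is a minimal sketch of registering a Fargate-compatible task definition with boto3. The family name, image, role ARNs, account ID, and region are hypothetical placeholders, not values from the episode:

```
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region is an assumption

# Register a Fargate-compatible task definition using awsvpc networking.
# Only containerPort is set: with awsvpc there is no hostPort mapping.
response = ecs.register_task_definition(
    family="my-web-app",                      # hypothetical family name
    networkMode="awsvpc",                     # each task gets its own ENI
    requiresCompatibilities=["FARGATE"],
    cpu="256",                                # task-level CPU (0.25 vCPU)
    memory="512",                             # task-level memory (MiB)
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    taskRoleArn="arn:aws:iam::123456789012:role/my-web-app-task-role",
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
            "logConfiguration": {
                "logDriver": "awslogs",       # ship stdout/stderr to CloudWatch
                "options": {
                    "awslogs-group": "/ecs/my-web-app",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "web",
                },
            },
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```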
- Migrating a container from EC2 to Fargate
  - IAM roles
    - Roles created automatically by ECS:
      - Amazon ECS service-linked IAM role, AWSServiceRoleForECS: gives permission to attach the ENI to the instance
      - Task Execution IAM Role (ecsTaskExecutionRole): needed for pulling images from ECR and pushing logs to CloudWatch
    - Create a task-based IAM role
      - Required because we don't have an ecsInstanceRole anymore
      - Create an IAM policy that gives the minimal privileges needed by the task
        - Remember the two categories of policies: AWS managed and customer managed
        - We are going to create a new customer managed policy that contains only the permissions our app needs (KMS Decrypt, S3 GETs from a specific bucket)
        - IAM -> Policies -> Create Policy -> JSON (see the IAM policy example on the episode page)
      - Create a role based on the "Elastic Container Service Task" service role
        - This service role gives permission to ECS to use STS to assume the role (sts:AssumeRole) and perform actions on its behalf
        - IAM -> Roles -> Create Role; "Select type of trusted entity": AWS Service; choose "Elastic Container Service", and then the "Elastic Container Service Task" use case
        - Next, attach the IAM policy we created to the role and save
  - Task definition file changes
    - Task-level parameters
      - Add FARGATE for requiresCompatibilities
      - Use awsvpc as the network mode
      - Specify cpu and memory limits at the task level
      - Specify the Task Execution IAM Role (executionRoleArn): allows the task to pull images from ECR and send logs to CloudWatch Logs
      - Specify the task-based IAM role (taskRoleArn): needed to give the task permissions to perform AWS API calls (such as S3 reads)
    - Container-level parameters: only specify containerPort (do not specify hostPort)
    - See the task definition example on the episode page
  - Create the ECS service (a service-creation sketch follows this list)
    - Choose the cluster
    - Specify networking: VPC, subnets
    - Create a security group for this task
      - The security group is attached to the ENI
      - Allow inbound port 80 traffic
    - Auto-assign a public IP
    - Attach to an existing application load balancer
      - Specify the production listener (port/protocol)
      - Create a new target group
        - When creating a target group, you specify a "target type": instance, IP, or Lambda function
        - For awsvpc mode (and therefore, by default, Fargate), you must use the IP target type
      - Specify the path pattern for the ALB listener, and the health check path
        - Note: you cannot specify host-based routing through the console; you can update that after creating the service through the ALB console
    - Update security groups
      - Security group for the ALB: allow outbound port 80 to the security group we attached to our ENI
      - Security group for RDS: allow inbound port 3306 from the security group for our ENI
    - Create a Route 53 record: an ALIAS pointing to our ALB
  - Log integration with Sumo Logic
    - Update the task to send logs to stdout/stderr (do not log to a file)
    - Configure containers for CloudWatch logging (the "awslogs" driver)
    - Create a Lambda function that subscribes to the CloudWatch Log Group
      - The Lambda function converts from the CloudWatch format to Sumo's, then POSTs the data to a Sumo HTTP Source
      - A DeadLetterQueue is recommended to handle/retry failed sends to Sumo

Links
- Introducing Cloud Native Networking for Amazon ECS Containers
- Task Networking in AWS Fargate
- Under the Hood: Task Networking for Amazon ECS
- Task Networking with the awsvpc Network Mode
- ECS Task Definition - Network Mode
- How many tasks using the awsvpc network mode can be launched
- Optimizing Amazon ECS task density using awsvpc network mode
- Migrating Your Amazon ECS Containers to AWS Fargate
- Sumo Logic - Collect Logs from AWS Fargate

End Song
Drifter - Roy England
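The service-creation checklist above maps directly onto the ECS API. Here is a hedged boto3 sketch; the cluster, subnet, security group, and target group identifiers are all hypothetical placeholders, and the target group is assumed to already exist with the "ip" target type that awsvpc requires:

```
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region is an assumption

# Create a Fargate service for the task definition registered earlier.
# awsvpc networking requires an explicit networkConfiguration.
ecs.create_service(
    cluster="my-cluster",                    # hypothetical cluster name
    serviceName="my-web-app",
    taskDefinition="my-web-app",             # latest ACTIVE revision
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],  # allows inbound 80
            "assignPublicIp": "ENABLED",
        }
    },
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                              "123456789012:targetgroup/my-web-app/abc123",
            "containerName": "web",
            "containerPort": 80,
        }
    ],
)
```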
For a full transcription of this episode, please visit the episode webpage.

We'd love to hear from you! You can reach us at:
- Web: https://mobycast.fm
- Voicemail: 844-818-0993
- Email: ask@mobycast.fm
- Twitter: https://twitter.com/hashtag/mobycast
- Reddit: https://reddit.com/r/mobycast
We explain the physics behind ZFS, DTrace switching to the GPL, Emacs debugging, syncookies coming to PF & FreeBSD's history on EC2.

This episode was brought to you by

Headlines

128 bit storage: Are you high? (https://blogs.oracle.com/bonwick/128-bit-storage:-are-you-high)

For people who have heard about ZFS boiling oceans and wonder where that is coming from, we dug out this old piece from 2004 on the blog of ZFS co-creator Jeff Bonwick, originally from the Sun website.

64 bits would have been plenty ... but then you can't talk out of your ass about boiling oceans, can you? Well, it's a fair question. Why did we make ZFS a 128-bit storage system? What on earth made us think it's necessary? And how do we know it's sufficient?

Let's start with the easy one: how do we know it's necessary? Some customers already have datasets on the order of a petabyte, or 2^50 bytes. Thus the 64-bit capacity limit of 2^64 bytes is only 14 doublings away. Moore's Law for storage predicts that capacity will continue to double every 9-12 months, which means we'll start to hit the 64-bit limit in about a decade. Storage systems tend to live for several decades, so it would be foolish to create a new one without anticipating the needs that will surely arise within its projected lifetime. If 64 bits isn't enough, the next logical step is 128 bits. That's enough to survive Moore's Law until I'm dead, and after that, it's not my problem. But it does raise the question: what are the theoretical limits to storage capacity?

Although we'd all like Moore's Law to continue forever, quantum mechanics imposes some fundamental limits on the computation rate and information capacity of any physical device. In particular, it has been shown that 1 kilogram of matter confined to 1 liter of space can perform at most 10^51 operations per second on at most 10^31 bits of information [see Seth Lloyd, "Ultimate physical limits to computation." Nature 406, 1047-1054 (2000)]. A fully populated 128-bit storage pool would contain 2^128 blocks = 2^137 bytes = 2^140 bits; therefore the minimum mass required to hold the bits would be (2^140 bits) / (10^31 bits/kg) = 136 billion kg.

That's a lot of gear. To operate at the 10^31 bits/kg limit, however, the entire mass of the computer must be in the form of pure energy. By E=mc^2, the rest energy of 136 billion kg is 1.2x10^28 J. The mass of the oceans is about 1.4x10^21 kg. It takes about 4,000 J to raise the temperature of 1 kg of water by 1 degree Celsius, and thus about 400,000 J to heat 1 kg of water from freezing to boiling. The latent heat of vaporization adds another 2 million J/kg. Thus the energy required to boil the oceans is about 2.4x10^6 J/kg * 1.4x10^21 kg = 3.4x10^27 J. Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.
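Bonwick's arithmetic is easy to sanity-check. This quick Python back-of-the-envelope uses the same constants quoted above (his 136 billion kg is a rounding of 2^140 / 10^31; the small difference doesn't change the conclusion):

```
# Sanity-check Bonwick's boiling-the-oceans arithmetic.
C = 3.0e8                 # speed of light, m/s
BITS_PER_KG = 1e31        # Lloyd's bound: max bits storable per kg
POOL_BITS = 2 ** 140      # fully populated 128-bit pool: 2^137 bytes

mass_kg = POOL_BITS / BITS_PER_KG          # ~1.4e11 kg of "gear"
rest_energy_j = mass_kg * C ** 2           # E = mc^2, ~1.3e28 J

ocean_mass_kg = 1.4e21                     # mass of Earth's oceans
heat_to_boil = 400_000                     # J/kg, freezing to boiling
vaporize = 2_000_000                       # J/kg, latent heat of vaporization
boil_oceans_j = (heat_to_boil + vaporize) * ocean_mass_kg  # ~3.4e27 J

print(f"pool rest energy: {rest_energy_j:.2e} J")
print(f"boil the oceans:  {boil_oceans_j:.2e} J")
print(f"ratio: {rest_energy_j / boil_oceans_j:.1f}x")  # roughly 3.7x
```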
Not explicitly mentioned in that talk, but certainly the most exciting thing, is that Oracle is doing a proper linux kernel port:

```
commit e1744f50ee9bc1978d41db7cc93bcf30687853e6
Author: Tomas Jedlicka <tomas.jedlicka@oracle.com>
Date:   Tue Aug 1 09:15:44 2017 -0400

    dtrace: Integrate DTrace Modules into kernel proper

    This changeset integrates DTrace module sources into the main kernel
    source tree under the GPLv2 license. Sources have been moved to
    appropriate locations in the kernel tree.
```

That is right, dtrace dropped the CDDL and switched to the GPL! The user space code dtrace-utils and libdtrace-ctf (a combination of GPLv2 and UPL) can be found on the DTrace Project Source Control page. The NEWS file mentions the license switch (and that it is built upon elfutils, which I personally was pleased to find out). The kernel sources (GPLv2+ for the core kernel and UPL for the uapi) are slightly harder to find because they are inside the uek kernel source tree, but following the above commit you can easily get at the whole linux kernel dtrace directory. The UPL is the Universal Permissive License, which according to the FSF is a lax, non-copyleft license that is compatible with the GNU GPL. Thank you Oracle for making everyone's life easier by waving your magic relicensing wand! Now there is lots of hard work to do to actually properly integrate this. And I am sure there are a lot of technical hurdles when trying to get this upstreamed into the mainline kernel. But that is just hard work. Which we can now start collaborating on in earnest. Like systemtap and the Dynamic Probes (dprobes) before it, dtrace is a whole system observability tool combining tracing, profiling and probing/debugging techniques. Something the upstream linux kernel hackers don't always appreciate when presented as one large system. They prefer having separate small tweaks for tracing, profiling and probing which are mostly separate from each other. It took years for the various hooks, kprobes, uprobes, markers, etc. from systemtap (and other systems) to get upstream. But these days they are. And there is now even a byte code interpreter (eBPF) in the mainline kernel as originally envisioned by dprobes, which systemtap can now target through stapbpf. So with all those techniques now available in the linux kernel it will be exciting to see if dtrace for linux can unite them all. Debugging Emacs or: How I Learned to Stop Worrying and Love DTrace (http://nullprogram.com/blog/2018/01/17/) For some time Elfeed was experiencing a strange, spurious failure. Every so often users were seeing an error (spoiler warning) when updating feeds: “error in process sentinel: Search failed.” If you use Elfeed, you might have even seen this yourself. From the surface it appeared that curl, tasked with the responsibility for downloading feed data, was producing incomplete output despite reporting a successful run. Since the run was successful, Elfeed assumed certain data was in curl's output buffer, but, since it wasn't, it failed hard. Unfortunately this issue was not reproducible. Manually running curl outside of Emacs never revealed any issues. Asking Elfeed to retry fetching the feeds would work fine. The issue would only randomly rear its head when Elfeed was fetching many feeds in parallel, under stress. By the time the error was discovered, the curl process had exited and vital debugging information was lost.
Considering that this was likely to be a bug in Emacs itself, there really wasn't a reliable way to capture the necessary debugging information from within Emacs Lisp. And, indeed, this later proved to be the case. A quick-and-dirty workaround is to use condition-case to catch and swallow the error. When the bizarre issue shows up, rather than fail badly in front of the user, Elfeed could attempt to swallow the error — assuming it can be reliably detected — and treat the fetch as simply a failure. That didn't sit comfortably with me. Elfeed had done its due diligence checking for errors already. Someone was lying to Elfeed, and I intended to catch them with their pants on fire. Someday. I'd just need to witness the bug on one of my own machines. Elfeed is part of my daily routine, so surely I'd have to experience this issue myself someday. My plan was, should that day come, to run a modified Elfeed, instrumented to capture extra data. I would have also routinely run Emacs under GDB so that I could inspect the failure more deeply. For now I just had to wait to hunt that zebra. Bryan Cantrill, DTrace, and FreeBSD Over the holidays I re-discovered Bryan Cantrill, a systems software engineer who worked for Sun between 1996 and 2010, and is most well known for DTrace. My first exposure to him was in a BSD Now interview in 2015. I had re-watched that interview and decided there was a lot more I had to learn from him. He's become a personal hero to me. So I scoured the internet for more of his writing and talks. Some interesting operating system technology came out of Sun during its final 15 or so years — most notably DTrace and ZFS — and Bryan speaks about it passionately. Almost as a matter of luck, most of it survived the Oracle acquisition thanks to Sun releasing it as open source in the nick of time. Otherwise it would have been lost forever. The scattered ex-Sun employees, still passionate about their prior work at Sun, along with some of their old customers have since picked up the pieces and kept going as a community under the name illumos. It's like an open source flotilla. Naturally I wanted to get my hands on this stuff to try it out for myself. Is it really as good as they say? Normally I stick to Linux, but it (generally) doesn't have these Sun technologies available. The main reason is license incompatibility. Sun released its code under the CDDL, which is incompatible with the GPL. Ubuntu does infamously include ZFS, but other distributions are unwilling to take that risk. Porting DTrace is a serious undertaking since it's got its fingers throughout the kernel, which also makes the licensing issues even more complicated. Linux has a reputation for Not Invented Here (NIH) syndrome, and these licensing issues certainly contribute to that. Rather than adopt ZFS and DTrace, they've been reinvented from scratch: btrfs instead of ZFS, and a slew of partial options instead of DTrace. Normally I'm most interested in system call tracing, and my go-to is strace, though it certainly has its limitations — including this situation of debugging curl under Emacs. Another famous example of NIH is Linux's epoll(2), which is a broken version of BSD kqueue(2). So, if I want to try these for myself, I'll need to install a different operating system. I've dabbled with OmniOS, an OS built on illumos, in virtual machines, using it as an alien environment to test some of my software (e.g. enchive).
OmniOS has a philosophy called Keep Your Software To Yourself (KYSTY), which is really just code for “we don't do packaging.” Honestly, you can't blame them since they're a tiny community. The best solution to this is probably pkgsrc, which is essentially a universal packaging system. Otherwise you're on your own. There's also OpenIndiana, which is a more friendly desktop-oriented illumos distribution. Still, the short of it is that you're very much on your own when things don't work. The situation is like running Linux a couple decades ago, when it was still difficult to do. If you're interested in trying DTrace, the easiest option these days is probably FreeBSD. It's got a big, active community, thorough documentation, and a huge selection of packages. Its license (the BSD license, duh) is compatible with the CDDL, so both ZFS and DTrace have been ported to FreeBSD. What is DTrace? I've done all this talking but haven't yet described what DTrace really is. I won't pretend to write my own tutorial, but I'll provide enough information to follow along. DTrace is a tracing framework for debugging production systems in real time, both for the kernel and for applications. The “production systems” part means it's stable and safe — using DTrace won't put your system at risk of crashing or damaging data. The “real time” part means it has little impact on performance. You can use DTrace on live, active systems with little impact. Both of these core design principles are vital for troubleshooting those really tricky bugs that only show up in production. There are DTrace probes scattered all throughout the system: on system calls, scheduler events, networking events, process events, signals, virtual memory events, etc. Using a specialized language called D (unrelated to the general purpose programming language D), you can dynamically add behavior at these instrumentation points. Generally the behavior is to capture information, but it can also manipulate the event being traced. Each probe is fully identified by a 4-tuple delimited by colons: provider, module, function, and probe name. An empty element denotes a sort of wildcard. For example, syscall::open:entry is a probe at the beginning (i.e. “entry”) of open(2). syscall:::entry matches all system call entry probes (see the one-liner sketch below). Unlike strace on Linux which monitors a specific process, DTrace applies to the entire system when active. To run curl under strace from Emacs, I'd have to modify Emacs' behavior to do so. With DTrace I can instrument every curl process without making a single change to Emacs, and with negligible impact to Emacs. That's a big deal. So, when it comes to this Elfeed issue, FreeBSD is much better poised for debugging the problem. All I have to do is catch it in the act. However, it's been months since that bug report and I'm not really making this connection yet. I'm just hoping I eventually find an interesting problem where I can apply DTrace. Bryan Cantrill: Talks I have given (http://dtrace.org/blogs/bmc/2018/02/03/talks/) *** News Roundup a2k18 Hackathon preview: Syncookies coming to PF (https://undeadly.org/cgi?action=article;sid=20180207090000) As you may have heard, the a2k18 hackathon is in progress. As can be seen from the commit messages, several items of goodness are being worked on. One eagerly anticipated item is the arrival of TCP syncookies (read: another important tool in your anti-DDoS toolset) in PF.
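To round off the DTrace primer above with something concrete, here is the sort of D one-liner that makes the Elfeed hunt practical: it instruments every curl process on the system, with no changes to Emacs. This is a sketch; provider details can differ slightly between FreeBSD and illumos.

```
# Count system calls made by any curl process; the aggregation
# prints when the script exits (Ctrl-C).
dtrace -n 'syscall:::entry /execname == "curl"/ { @calls[probefunc] = count(); }'
```

Run it as root, let Elfeed fetch feeds, then interrupt it to dump the per-syscall counts.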
Back to the syncookies item: Henning Brauer (henning@) added the code in a series of commits on February 6th, 2018, with this one containing the explanation:

```
syncookies for pf.

when syncookies are on, pf will blindly answer each and every SYN with a
syncookie-SYNACK. Upon reception of the ACK completing the 3WHS, pf will
reconstruct the original SYN, shove it through pf_test, where state will
be created if the ruleset permits it. Then massage the freshly created
state (we won't see the SYNACK), set up the sequence number modulator,
and call into the existing synproxy code to start the 3WHS with the
backend host.

Add an - somewhat basic for now - adaptive mode where syncookies get
enabled if a certain percentage of the state table is filled up with
half-open tcp connections. This makes pf firewalls resilient against
large synflood attacks.

syncookies are off by default until we gained more experience,
considered experimental for now.

see http://bulabula.org/papers/2017/bsdcan/ for more details.

joint work with sashan@, widely discussed and with lots of input by many
```

The first release to have this feature available will probably be the upcoming OpenBSD 6.3 if a sufficient number of people test this in their setups (hint, hint). More info is likely to emerge soon in post-hackathon writeups, so watch this space! [Pale Moon] A perfect example of how not to approach OS developers/packagers: Removed from OpenBSD Ports due to Licensing Issues (https://github.com/jasperla/openbsd-wip/issues/86) FreeBSD Palemoon branding violation (https://lists.freebsd.org/pipermail/freebsd-ports/2018-February/112455.html) MidnightBSD's response (https://twitter.com/midnightbsd/status/961232422091280386) ***
November 21, 2012: I get FreeBSD added to the AWS Marketplace. October 2, 2013: I finish merging kernel patches into the FreeBSD base system, and rework the AMI build (again) so that FreeBSD 10.0-ALPHA4 and later use bits extracted from the release ISOs for the entire system (world + kernel). FreeBSD Update can now be used for updating everything (because now FreeBSD/EC2 uses a GENERIC kernel). October 27, 2013: I add code to EC2 images so that FreeBSD 10.0-BETA2 and later AMIs will run FreeBSD Update when they first boot in order to download and install any critical updates. December 1, 2013: I add code to EC2 images so that FreeBSD 10.0-BETA4 and later AMIs bootstrap the pkg tool and install packages at boot time (by default, the "awscli" package). December 9, 2013: I add configinit to FreeBSD 10.0-RC1 and later to allow systems to be easily configured via EC2 user-data. July 1, 2014: Amazon launches the "T2" family of instances; now the most modern family for every type of EC2 instance (regular, high-memory, high-CPU, high-I/O, burstable) supports HVM and there should no longer be any need for FreeBSD users to pay the "Windows tax". November 24, 2014: I add code to FreeBSD 10.2 and later to automatically resize their root filesystems when they first boot; this means that a larger root disk can be specified at instance launch time and everything will work as expected. April 1, 2015: I integrate the FreeBSD/EC2 build process into the FreeBSD release building process; FreeBSD 10.2-BETA1 and later AMIs are built by the FreeBSD release engineering team. January 12, 2016: I enable Intel 82599-based "first generation EC2 Enhanced Networking" in FreeBSD 11.0 and later. June 9, 2016: I enable the new EC2 VGA console functionality in FreeBSD 11.0 and later. (The old serial console also continues to work.) June 24, 2016: Intel 82599-based Enhanced Networking works reliably in FreeBSD 11.0 and later thanks to discovering and working around a Xen bug. June 29, 2016: I improve throughput on Xen blkfront devices (/dev/xbd*) by enabling indirect segment I/Os in FreeBSD 10.4 and later. (I wrote this functionality in July 2015, but left it disabled by default at first because a bug in EC2 caused it to hurt performance on some instances.) July 7, 2016: I fix a bug in FreeBSD's virtual memory initialization in order to allow it to support boot with 128 CPUs; aka. FreeBSD 11.0 and later support the EC2 x1.32xlarge instance type. January 26, 2017: I change the default configuration in FreeBSD 11.1 and later to support EC2's IPv6 networking setup out of the box (once you flip all of the necessary switches to enable IPv6 in EC2 itself). May 20, 2017: In collaboration with Rick Macklem, I make FreeBSD 11.1 and later compatible with the Amazon "Elastic File System" (aka. NFSv4-as-a-service) via the newly added "oneopenown" mount option (and lots of bug fixes). May 25, 2017: I enable support for the Amazon "Elastic Network Adapter" in FreeBSD 11.1 and later. (The vast majority of the work — porting the driver code — was done by Semihalf with sponsorship from Amazon.) December 5, 2017: I change the default configuration in FreeBSD 11.2 and later to make use of the Amazon Time Sync Service (aka. NTP-as-a-service).
The current status The upcoming FreeBSD release (11.2) supports: IPv6, Enhanced Networking (both generations), Amazon Elastic File System, Amazon Time Sync Service, both consoles (serial and VGA), and every EC2 instance type (although I'm not sure if FreeBSD has drivers to make use of the FPGA or GPU hardware on those instances). Colin's Patreon page, if you'd like to support him (https://www.patreon.com/cperciva) X network transparency: X's network transparency has wound up mostly being a failure (https://utcc.utoronto.ca/~cks/space/blog/unix/XNetworkTransparencyFailure) I was recently reading Mark Dominus's entry about some X keyboard problems, in which he said in passing (quoting himself): I have been wondering for years if X's vaunted network transparency was as big a failure as it seemed: an interesting idea, worth trying out, but one that eventually turned out to be more trouble than it was worth. [...] My first reaction was to bristle, because I use X's network transparency all of the time at work. I have several programs to make it work very smoothly, and some core portions of my environment would be basically impossible without it. But there's a big qualification on my use of X's network transparency, namely that it's essentially all for text. When I occasionally go outside of this all-text environment of xterms and emacs and so on, it doesn't go as well. X's network transparency was not designed as 'it will run xterm well'; originally it was to be something that should let you run almost everything remotely, providing a full environment. Even apart from the practical issues covered in Daniel Stone's slide presentation, it's clear that it's been years since X could deliver a real first-class environment over the network. You cannot operate with X over the network in the same way that you do locally. Trying to do so is painful and involves many things that either don't work at all or perform so badly that you don't want to use them. In my view, there are two things that did in general X network transparency. The first is that networks turned out to not be fast enough even for ordinary things that people wanted to do, at least not the way that X used them. The obvious case is web browsers; once the web moved to lots of images and worse, video, that was pretty much it, especially with 24-bit colour. (It's obviously not impossible to deliver video across the network with good performance, since YouTube and everyone else does it. But their video is highly encoded in specialized formats, not handled by any sort of general 'send successive images to the display' system.) The second is that the communication facilities that X provided were too narrow and limited. This forced people to go outside of them in order to do all sorts of things, starting with audio and moving on to things like DBus and other ways of coordinating environments, handling sophisticated configuration systems, modern fonts, and so on. When people designed these additional communication protocols, the result generally wasn't something that could be used over the network (especially not without a bunch of setup work that you had to do in addition to remote X). Basic X clients that use X properties for everything may be genuinely network transparent, but there are very few of those left these days. (Not even xterm is any more, at least if you use XFT fonts. XFT fonts are rendered in the client, and so different hosts may have different renderings of the same thing, cf.)
What remains of X's network transparency is still useful to some of us, but it's only a shadow of what the original design aimed for. I don't think it was a mistake for X to specifically design it in (to the extent that they did, which is less than you might think), and it did help X out pragmatically in the days of X terminals, but that's mostly it. (I continue to think that remote display protocols are useful in general, but I'm in an unusual situation. Most people only ever interact with remote machines with either text mode SSH or a browser talking to a web server on the remote machine.) PS: The X protocol issues with synchronous requests that Daniel Stone talks about don't help the situation, but I think that even with those edges sanded off X's network transparency wouldn't be a success. Arguably X's protocol model committed a lesser version of part of the NeWS mistake. X's network transparency was basically free at the time (https://utcc.utoronto.ca/~cks/space/blog/unix/XFreeNetworkTransparency) I recently wrote an entry about how X's network transparency has wound up mostly being a failure for various reasons. However, there is an important flipside to the story of X's network transparency, and that is that X's network transparency was almost free at the time and in the context it was created. Unlike the situation today, in the beginning X did not have to give up lots of performance or other things in order to get network transparency. X originated in the mid 1980s and it was explicitly created to be portable across various Unixes, especially BSD-derived ones (because those were what universities were mostly using at that time). In the mid to late 1980s, Unix had very few IPC methods, especially portable ones. In particular, BSD systems did not have shared memory (it was called 'System V IPC' for the obvious reasons). BSD had TCP and Unix sockets, some System V machines had TCP (and you could likely assume that more would get it), and in general your safest bet was to assume some sort of abstract stream protocol and then allow for switchable concrete backends. Unsurprisingly, this is exactly what X did; the core protocol is defined as a bidirectional stream of bytes over an abstracted channel. (And the concrete implementation of $DISPLAY has always let you specify the transport mechanism, as well as allowing your local system to pick the best mechanism it has.) Once you've decided that your protocol has to run over abstracted streams, it's not that much more work to make it network transparent (TCP provides streams, after all). X could have refused to make the byte order of the stream clear or required the server and the client to have access to some shared files (e.g. for fonts), but I don't think either would have been a particularly big win. I'm sure that it took some extra effort and care to make X work across TCP from a different machine, but I don't think it took very much. (At the same time, my explanation here is probably a bit ahistorical. X's initial development seems relatively strongly tied to sometimes having clients on different machines than the display, which is not unreasonable for the era. But it doesn't hurt to get a feature that you want anyway for a low cost.) I believe it's important here that X was intended to be portable across different Unixes. If you don't care about portability and can get changes made to your Unix, you can do better (for example, you can add some sort of shared memory or process-to-process virtual memory transfer).
I'm not sure how the 1980s versions of SunView worked, but I believe they were very SunOS dependent. Wikipedia says SunView was partly implemented in the kernel, which is certainly one way to both share memory and speed things up. PS: Sharing memory through mmap() and friends was years in the future at this point and required significant changes when it arrived. Beastie Bits Grace Hopper Celebration 2018 Call for Participation (https://www.freebsdfoundation.org/news-and-events/call-for-papers/grace-hopper-celebration-2018-call-for-participation/) Google Summer of Code: Call for Project Ideas (https://www.freebsdfoundation.org/blog/google-summer-of-code-call-for-project-ideas/) The OpenBSD Foundation 2018 Fundraising Campaign (https://undeadly.org/cgi?action=article;sid=20180129190641) SSH Mastery 2/e out (https://blather.michaelwlucas.com/archives/3115) AsiaBSDcon 2018 Registration is open (https://2018.asiabsdcon.org/) Tarsnap support for Bitcoin ending April 1st; and a Chrome bug (http://mail.tarsnap.com/tarsnap-announce/msg00042.html) Feedback/Questions Todd - Couple Questions (http://dpaste.com/195HGHY#wrap) Seth - Tar Snap (http://dpaste.com/1N7NQVQ#wrap) Alex - sudo question (http://dpaste.com/3D9P1DW#wrap) Thomas - FreeBSD on ARM? (http://dpaste.com/24NMG47#wrap) Albert - Austria BSD User Group (http://dpaste.com/373CRX7#wrap)
FreeBSD 11.1-RELEASE is out, we look at building a BSD home router, how to be your own OpenBSD VPN provider, and find that glob matching can be simple and fast. This episode was brought to you by Headlines FreeBSD 11.1-RELEASE (https://www.freebsd.org/releases/11.1R/relnotes.html) FreeBSD 11.1 was released on July 26th (https://www.freebsd.org/releases/11.1R/announce.asc) You can download it as an ISO or USB image, a prebuilt VM Image (vmdk, vhd, qcow2, or raw), and it is available as a cloud image (Amazon EC2, Microsoft Azure, Google Compute Engine, Vagrant) Thanks to everyone, including the release engineering team who put so much time and effort into managing this release and making sure it came out on schedule, all of the FreeBSD developers who contributed the features, the companies that sponsored that development, and the users who tested the betas and release candidates. Support for blacklistd(8) has been added to OpenSSH The cron(8) utility has been updated to add support for including files within /etc/cron.d and /usr/local/etc/cron.d by default. The syslogd(8) utility has been updated to add the include keyword which allows specifying a directory containing configuration files to be included in addition to syslog.conf(5). The default syslog.conf(5) has been updated to include /etc/syslog.d and /usr/local/etc/syslog.d by default. The zfsbootcfg(8) utility has been added, providing one-time boot.config(5)-style options The efivar(8) utility has been added, providing an interface to manage UEFI variables. The ipsec and tcpmd5 kernel modules have been added; these can now be loaded without having to recompile the kernel A number of new IPFW modules have been added, including Network Prefix Translation for IPv6 as defined in RFC 6296, stateless and stateful NAT64, and a module to modify the TCP-MSS of packets A huge array of driver updates and additions The NFS client now supports the Amazon® Elastic File System™ (EFS) The new ZFS Compressed ARC feature was added, and is enabled by default The EFI loader has been updated to support TFTPFS, providing netboot support without requiring an NFS server For a complete list of new features and known problems, please see the online release notes and errata list, available at: FreeBSD 11.1-RELEASE Release Notes (https://www.freebsd.org/releases/11.1R/relnotes.html) FreeBSD 11.1-RELEASE Errata (https://www.freebsd.org/releases/11.1R/errata.html) For more information about FreeBSD release engineering activities, please see: Release Engineering Information (https://www.freebsd.org/releng/) Availability FreeBSD 11.1-RELEASE is now available for the amd64, i386, powerpc, powerpc64, sparc64, armv6, and aarch64 architectures. FreeBSD 11.1-RELEASE can be installed from bootable ISO images or over the network. Some architectures also support installing from a USB memory stick. The required files can be downloaded as described in the section below. SHA512 and SHA256 hashes for the release ISO, memory stick, and SD card images are included at the bottom of this message.
PGP-signed checksums for the release images are also available at: FreeBSD 11.1 Release Checksum Signatures (https://www.freebsd.org/releases/11.1R/signatures.html) A PGP-signed version of this announcement is available at: FreeBSD 11.1-RELEASE Announcement (https://www.FreeBSD.org/releases/11.1R/announce.asc) *** Building a BSD home router - ZFS and Jails (https://eerielinux.wordpress.com/2017/07/15/building-a-bsd-home-router-pt-8-zfs-and-jails/) Part of a series of posts about building a router: Part 1 (https://eerielinux.wordpress.com/2017/05/30/building-a-bsd-home-router-pt-1-hardware-pc-engines-apu2/) -- discussing why you want to build your own router and how to assemble the APU2 Part 2 (https://eerielinux.wordpress.com/2017/06/03/building-a-bsd-home-router-pt-2-the-serial-console-excursion) -- some Unix history and an explanation of what a serial console is Part 3 (https://eerielinux.wordpress.com/2017/06/10/building-a-bsd-home-router-pt-3-serial-access-and-flashing-the-firmware/) -- demonstrating serial access to the APU and covering firmware update Part 4 (https://eerielinux.wordpress.com/2017/06/15/building-a-bsd-home-router-pt-4-installing-pfsense/) -- installing pfSense Part 5 (https://eerielinux.wordpress.com/2017/06/20/building-a-bsd-home-router-pt-5-installing-opnsense/) -- installing OPNsense instead Part 6 (https://eerielinux.wordpress.com/2017/06/30/building-a-bsd-home-router-pt-7-advanced-opnsense-setup/) -- comparison of pfSense and OPNsense Part 7 (https://eerielinux.wordpress.com/2017/06/30/building-a-bsd-home-router-pt-7-advanced-opnsense-installation/) -- advanced installation of OPNsense After the advanced installation in part 7, the tutorial covers converting an unused partition into swap space, and converting the system to ZFS After creating a new pool using the set-aside partition, some datasets are created, and the log files, ports, and obj ZFS datasets are mounted The tutorial then goes on to cover how to download the ports tree, and install additional software on the router I wonder what part 9 will be about. *** Be your own VPN provider with OpenBSD (v2) (https://networkfilter.blogspot.com/2017/04/be-your-own-vpn-provider-with-openbsd-v2.htm) This article covers how to build your own VPN server with some advanced features including: Full Disk Encryption (FDE) Separate CA/signing machine (optional) Multiple DNSCrypt proxy instances for failover OpenVPN: Certificate Revocation List/CRL (optional) OpenVPN: TLS 1.2 only OpenVPN: TLS cipher based on AES-256-GCM only OpenVPN: HMAC-SHA512 instead of HMAC-SHA1 OpenVPN: TLS encryption of control channel (makes it harder to identify OpenVPN traffic) (see the directive sketch below) The article starts with an explanation of the differences between OpenVPN and IPSEC. In the end the author chose OpenVPN because you can select the port it runs on, and it has a better chance of working from hotel or coffee shop WiFi. The guide then walks through doing an installation on an encrypted disk, with a caution about the limitations of encrypted disks on virtual machines hosted by other parties.
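For a taste of what the OpenVPN hardening bullets above translate to, here is a sketch of the relevant server directives, assuming OpenVPN 2.4 syntax (the article's exact configuration may differ in its details):

```
# Control channel: require TLS 1.2 and a single strong suite
tls-version-min 1.2
tls-cipher TLS-ECDHE-RSA-WITH-AES-256-GCM-SHA384

# Data channel: AES-256-GCM, with SHA512 HMAC instead of the SHA1 default
cipher AES-256-GCM
auth SHA512

# Encrypt the control channel, making OpenVPN traffic harder to identify
tls-crypt ta.key

# Honor a certificate revocation list (optional)
crl-verify crl.pem
```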
The guide then locks down the newly installed system, configuring SSH for keys only, adding some PF rules, and configuring doas. Then networking is configured, including enabling IP forwarding since this machine is going to act as the VPN gateway. Then a large set of firewall rules is created that NATs the VPN traffic out of the gateway, except for DNS requests, which are redirected to the gateway's local unbound. Then some Python scripts are provided to block brute-force attempts. We will use DNSCrypt to make our DNS requests encrypted, and Unbound to have a local DNS cache. This will allow us to avoid using our VPS provider's DNS servers, and will also be useful to your future VPN clients which will be able to use your VPN server as their DNS server too Before configuring Unbound, which is the local DNS cache which will make requests to dnscrypt_proxy, we can configure an additional dnscrypt instance, as explained in the pkg readme. Indeed, dnscrypt DNS servers being public ones, they often go into maintenance, go offline, or become temporarily unreachable. To address this issue, it is possible to set up multiple dnscrypt instances. Below are the steps to follow to add one, but you can add more if you wish Then a CA and certificate are created for OpenVPN. OpenVPN is installed and configured as a server. Configuration is also provided for a client, and a mobile client. Thanks to the author for this great tutorial. You might also want to check out this section from their 2015 version of this post: Security vs Anonymity (https://networkfilter.blogspot.nl/2015/01/be-your-own-vpn-provider-with-openbsd.html#security_anonymity) *** Essen Hackathon Trip - Benedict Reuschling (https://www.freebsdfoundation.org/blog/2017-essen-hackathon-trip-report-benedict-reuschling/) Over on the FreeBSD Foundation Blog, Benedict provides a detailed overview of the Essen Hackathon we were at a few weeks ago. Head over there and give it a read, and get a feel for what these smaller types of community events are like. Hopefully you can attend, or better yet, organize, a similar event in your area. News Roundup Blog about my self-hosted httpd blog (https://reykfloeter.com/posts/blog-about-my-blog) I really like Twitter because it allows me to share short messages, we have a great community, and 140 characters are enough for everybody. And this statement was exactly 140 characters, but sometimes I want to say more than that. And that's why I finally created this new blog. I was never really into blogging because I barely had time or the audience to write long articles. I sometimes wrote short stories for sites like undeadly.org, I collected some of them here, but my own blog was hosted on tumblr and never saw any activity. I want to try it again, and this time I decided to create a self-hosted blog. Something that runs on my own server and with httpd, the web server that I wrote for OpenBSD. So I was looking for potential blogging tools that I could use to run my own blog. Besides the popular and heavyweight ones such as WordPress, there are countless other options: I looked at blogs from fellow developers, such as Ted Unangst's flak (I like the fact that it is written in Lua, but the implementation is a bit over my head), or Pelican that is used by Peter Hessler for bad.network (but, sorry, I don't like Python), and finally Kristaps Dzonsons' sblg that is used for all of his projects and blogs. I decided to use sblg. Kristaps keeps on releasing very useful free software.
Most well-known is mandoc, at least everyone is using it for manpages these days, but there is also his BCHS (beaches) web stack which strongly advertises OpenBSD's httpd. Great. I also use kcgi whenever I have to write small CGIs. So sblg seemed like the right choice to me. Let me quickly iterate over my current Makefile. I keep on tweaking this file, so it might have been changed by the time you are reading this article. Please note that the Makefile is written for OpenBSD's make, a distant derivative of pmake which is not like GNU make. I'm not a designer or web developer, but I appreciate good-looking web pages. I wanted to have something that is responsive, works on desktops and mobiles, looks somewhat modern, works without JavaScript, but doesn't disqualify me for all the eye candy from a geek point of view. I bootstrapped the theme by creating a simple grid layout with a fairly typical blog style: banner, top menu, middle text, sidebar. In 2017, bootstrap is probably a vintage (or retro) framework but it makes it very easy to create responsive pages with a proper layout and without caring about all the CSS and HTML5 madness too much. I also use Font Awesome because it is awesome, provides some fancy icons, and was suggested in sblg's example templates (let's blame Kristaps for it). I do not include any JavaScript which prevents me from using bootstrap's responsive hamburger menu. I have to admit that "reykfloeter" is not an ideal name for a blog. My actual name is "Reyk Flöter", and I normally just use my first name "reyk" as a user- and nickname, but it was taken when I registered my Twitter account and the related domain. So I picked reykfloeter in a few places. I'm aware that my German last name is nearly unpronounceable for others, so "reykfloeter" appears like a random concatenation of letters. Like most of us, I own a number of domains and maybe I should move the blog to bsd.plumbing (which is used as a home for relayd and httpd), arc4random.com (but I intended to use it as a fine OpenBSD-powered Entropy-as-a-Service for poor Linuxers), or even copper.coffee? In addition to the domain, I also need a good blog name or tag line. A very memorable example in the BSD world is Peter Hansteen's THAT GRUMPY BSD GUY blog. So what should I use? "Reyk Flöter's blog"; "OpenBSD hacker. Coffee nerd. Founder."; "Ask Reyk (imaginary how-tos and 10 step guides)"; "Sewage, Drainage and BSD Plumbing" (bsd.plumbing/blog); "A Replacement Call for Random" (arc4random.com); or "Coffee with Reyk" (copper.coffee)? For now it will just be reykfloeter - blog iXsystems releases the X10 (https://www.ixsystems.com/blog/serverenvy-truenas-x10/) TrueNAS X10 is the 3rd generation of the TrueNAS unified storage line. The X10 is the first of a new TrueNAS series, and will be expandable to up to 360TB with the TrueNAS ES12 expansion shelf. The X10 is cost effective, at a 30% lower price point than the Z20, making it an effective addition to your backup/DR infrastructure. The street price of a 20TB non-HA model falls under $10K. It's designed to move, with six predefined configurations that match common use cases. The dual controllers for high availability are an optional upgrade to ensure business continuity and avoid downtime. The X10 boasts 36 hot-swap SAS drives using two expansion shelves, for up to 360TB of storage, allowing you to back up thousands of VMs or share tens of thousands of files. One of the use cases for TrueNAS X10 is for backup, so users can upgrade the X10 to two ports of blazing 10GigE connectivity.
The 20TB non-HA model enables you to back up over 7,000 VDI VMs for under $3.00 per VM. Overall, the X10 is a greener solution than the TrueNAS Z product line, with the non-HA version drawing only 138 watts of power and taking up only 2U of space. Best of all, the TrueNAS X10 starts at $5,500 street. You can purchase a 120TB configuration today for under $20K street. Glob Matching Can Be Simple And Fast Too (https://research.swtch.com/glob) Here's a straightforward benchmark. Time how long it takes to run ls (a*)^n b in a directory with a single file named a^100, compared to running ls | grep (a.*)^n b. Superscripts denote string repetition and parentheses are for grouping only, so that when n is 3, we're running ls a*a*a*b in a directory containing the single file aaa…aaa (100 a's), compared against ls | grep a.*a.*a.*b in the same directory. The exception seems to be the original Berkeley csh, which runs in linear time (more precisely, time linear in n). Looking at the source code, it doesn't attempt to perform glob expansion itself. Instead it calls the C library implementation glob(3), which runs in linear time, at least on this Linux system. So maybe we should look at programming language implementations too. Most programming languages provide some kind of glob expansion, like C's glob. Let's repeat the experiment in a variety of different programming languages: Perhaps the most interesting fact evident in the graph is that GNU glibc, the C library used on Linux systems, has a linear-time glob implementation, but BSD libc, the C library used on BSD and macOS systems, has an exponential-time implementation. PHP is not shown in the graph, because its glob function simply invokes the host C library's glob(3), so that it runs in linear time on Linux and in exponential time on non-Linux systems. (I have not tested what happens on Windows.) All the languages shown in the graph, however, implement glob matching without using the host C library, so the results should not vary by host operating system. The netkit ftpd runs quickly on Linux because it relies on the host C library's glob function. If run on BSD, the netkit ftpd would take exponential time. ProFTPD ships a copy of the glibc glob, so it should run quickly even on BSD systems. Ironically, Pure-FTPd and tnftpd take exponential time on Linux because they ship a copy of the BSD glob function. Presumably they do this to avoid assuming that the host C library is bug-free, but, at least in this one case, the host C library is better than the one they ship. Additional Reading This post is an elaboration of an informal 2012 Google+ post showing that most shells used exponential-time glob expansion. At the time, Tom Duff, the author of Plan 9's rc shell, commented that, “I can confirm that rc gets it wrong. My excuse, feeble as it is, is that doing it that way meant that the code took 10 minutes to write, but it took 20 years for someone to notice the problem. (That's 10 ‘programmer minutes', i.e. less than a day.)” I agree that's a reasonable decision for a shell. In contrast, a language library routine, not to mention a network server, today needs to be robust against worst-case inputs that might be controlled by remote attackers, but nearly all of the code in question predates that kind of concern. I didn't realize the connection to FTP servers until I started doing additional research for this post and came across a reference to CVE-2010-2632 in FreeBSD's glob implementation.
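You can feel the blowup for yourself without a BSD libc at hand. Python's fnmatch, for instance, translates the glob into a regular expression, and the regex engine's backtracking shows the same worst-case growth the article describes. A rough sketch (absolute timings will vary by machine; the growth with k is the point):

```python
import fnmatch
import time

# Match the file name a^40 against patterns (a*)^k b, for k = 1..6.
# Each pattern fails to match (there is no trailing 'b'), forcing the
# matcher to try every way of splitting the a's among the stars.
name = "a" * 40
for k in range(1, 7):
    pattern = "a*" * k + "b"
    start = time.perf_counter()
    fnmatch.fnmatchcase(name, pattern)
    print(f"k={k}: {time.perf_counter() - start:.4f}s")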
BSD VPS Providers Needed (https://torbsd.github.io/blog.html#bsd-vps) One of TDP's recent projects is accumulating a list of virtual private server (VPS) services that provide a BSD option. VPSes are generally inexpensive services that enable the user to only concern themselves with software configuration, and not be bothered with hardware or basic operating system setup. In the pre-Cloud era, VPS providers were the “other people's computers” that users outsourced their systems to. The same shortcomings of cloud services apply to VPS providers. You don't control the hardware. Your files are likely viewable by users up the directory hierarchy. The entropy source or pool is a single source for multiple systems. The same time drift applies to all time-keeping services. Nevertheless, VPS services are often cheap and provide a good spread in terms of geography. All a provider really needs is a few server-grade computers and a decent network connection. VPSes are still a gateway drug to bare-metal servers, although it seems more and more of these gateway users stop at stage one. Cheap systems with a public IP are also a great way to tinker with a new operating system. For this reason, TDP created this list of BSD VPS providers. Some explicitly deny running Tor as a server. Some just reference vague “proxy services.” Others don't mention Tor or proxies at all. The list is a start, with just under 70 VPS providers currently listed. Input through various channels has already started, and TDP intends to update the list over the coming months. A first draft of an email and an open letter to the providers has been written, and we are looking to speak directly to at least some of the better-known BSD VPS providers. We may be able to convince a few to allow public Tor relays, or at least published bridges. These providers could be new BSD users' gateway drug into the world of BSD Tor nodes. Running a Tor relay shouldn't be considered a particularly risky activity. Maybe we can adjust that perception. Let us know any input via email or GitHub, and we'll be glad to make updates.
This week on the show, we've got all sorts of goodies to discuss. Starting with vmm, vkernels, Raspberry Pi, and much more! Some iX folks are visiting from out of town. This episode was brought to you by Headlines vmm enabled (http://undeadly.org/cgi?action=article&sid=20161012092516&mode=flat&count=15) VMM, the OpenBSD hypervisor, has been imported into -current. It has similar hardware requirements to bhyve: an Intel Nehalem or newer CPU with the hardware virtualization features enabled in the BIOS. AMD support has not been started yet. OpenBSD is the only supported guest. It would be interesting to hear from viewers who have tried it: how it does, and what still needs more work *** vkernels go COW (http://lists.dragonflybsd.org/pipermail/commits/2016-October/624675.html) The DragonflyBSD feature, vkernels, has gained a new Copy-On-Write functionality. Disk images can now be mounted RO or RW, but changes will not be written back to the image file. This allows multiple vkernels to share the same disk image. “Note that when the vkernel operates on an image in this mode, modifications will eat up system memory and swap, so the user should be cognizant of the use-case. Still, the flexibility of being able to mount the image R+W should not be underestimated.” This is another feature we'd love to hear about from viewers who have tried it out. *** Basic support for the RPI3 has landed in FreeBSD-CURRENT (https://wiki.freebsd.org/arm64/rpi3) The long-awaited bits to allow FreeBSD to boot on the Raspberry Pi 3 have landed. There is still a bit of work to be done, some of it mentioned in Oleksandr's blog post: Raspberry Pi support in HEAD (https://kernelnomicon.org/?p=690) “Raspberry Pi 3 limited support was committed to HEAD. Most of drivers should work with upstream dtb, RNG requires attention because callout mode seems to be broken and there is no IRQ in upstream device tree file. SMP is work in progress. There are some compatibility issue with VCHIQ driver due to some assumptions that are true only for ARM platform. “ This is exciting work. No HDMI support (yet), so if you plan on trying this out make sure you have your USB->Serial adapter cables ready to go. Full instructions to get started with your RPI 3 can be found on the FreeBSD Wiki (https://wiki.freebsd.org/arm64/rpi3) Relatively soon, I imagine there will be a RaspBSD build for the RPI3 to make it easier to get started. Eventually there will be official FreeBSD images as well. *** OpenBSD switches softraid crypto from PKCS5 PBKDF2 to bcrypt PBKDF. (https://github.com/openbsd/src/commit/2ba69c71e92471fe05f305bfa35aeac543ebec1f) After the discussion a few weeks ago when a user wrote a tool to brute-force their forgotten OpenBSD Full Disk Encryption password (from a password list of possible variations of their password), it was discovered that OpenBSD defaulted to using just 8192 iterations of PKCSv5 for the key derivation function with a SHA1-HMAC. The number of iterations can be manually controlled by the user when creating the softraid volume. By comparison, FreeBSD's GELI full disk encryption used a benchmark to pick a number of iterations that would take more than 2 seconds to complete, generally resulting in a number of iterations over 1 million on most modern hardware.
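The benchmark-and-scale idea is easy to sketch. Here is an illustration in Python using hashlib's PBKDF2 (GELI and softraid implement their KDF loops in C with their own primitives, so this is only the calibration logic, not their code): keep increasing the iteration count until one derivation costs roughly the target time per guess.

```python
import hashlib
import os
import time

def calibrate_pbkdf2_iterations(target_seconds=1.0):
    """Scale the PBKDF2-HMAC-SHA512 iteration count until a single
    derivation takes roughly target_seconds on this machine."""
    salt = os.urandom(16)
    iterations = 100_000
    while True:
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha512", b"passphrase", salt, iterations)
        elapsed = time.perf_counter() - start
        if elapsed >= target_seconds:
            return iterations
        # Grow proportionally toward the target; the 1.2 floor guarantees progress.
        iterations = int(iterations * max(1.2, target_seconds / elapsed))

print(calibrate_pbkdf2_iterations())
```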
GELI's algorithm is based on a SHA512-HMAC. However, inefficiency in the implementation of PKCSv5 in GELI resulted in it being 50% slower than some other implementations, meaning the effective security was only about 1 second per attempt, rather than the intended 2 seconds. The improved PKCSv5 implementation is out for review currently. This commit to OpenBSD changes the default key derivation function to be based on bcrypt and a SHA512-HMAC instead. OpenBSD also now uses a benchmark to pick a number of iterations that will take approximately 1 second per attempt. “One weakness of PBKDF2 is that while its number of iterations can be adjusted to make it take an arbitrarily large amount of computing time, it can be implemented with a small circuit and very little RAM, which makes brute-force attacks using application-specific integrated circuits or graphics processing units relatively cheap. The bcrypt key derivation function requires a larger amount of RAM (but still not tunable separately, i. e. fixed for a given amount of CPU time) and is slightly stronger against such attacks, while the more modern scrypt key derivation function can use arbitrarily large amounts of memory and is therefore more resistant to ASIC and GPU attacks.” The upgrade to bcrypt, which has proven to be quite resistant to cracking by GPUs, is a significant enhancement to OpenBSD's encrypted softraid feature *** Interview - Josh Paetzel - email@email (mailto:email@email) / @bsdunix4ever (https://twitter.com/bsdunix4ever) MeetBSD ZFS Panel FreeNAS - graceful network reload Pxeboot *** News Roundup EC2's most dangerous feature (http://www.daemonology.net/blog/2016-10-09-EC2s-most-dangerous-feature.html) Colin Percival, FreeBSD's unofficial EC2 maintainer, has published a blog post about “EC2's most dangerous feature” “As a FreeBSD developer — and someone who writes in C — I believe strongly in the idea of "tools, not policy". If you want to shoot yourself in the foot, I'll help you deliver the bullet to your foot as efficiently and reliably as possible. UNIX has always been built around the idea that systems administrators are better equipped to figure out what they want than the developers of the OS, and it's almost impossible to prevent foot-shooting without also limiting useful functionality. The most powerful tools are inevitably dangerous, and often the best solution is to simply ensure that they come with sufficient warning labels attached; but occasionally I see tools which not only lack important warning labels, but are also designed in a way which makes them far more dangerous than necessary. Such a case is IAM Roles for Amazon EC2.” “A review for readers unfamiliar with this feature: Amazon IAM (Identity and Access Management) is a service which allows for the creation of access credentials which are limited in scope; for example, you can have keys which can read objects from Amazon S3 but cannot write any objects. IAM Roles for EC2 are a mechanism for automatically creating such credentials and distributing them to EC2 instances; you specify a policy and launch an EC2 instance with that Role attached, and magic happens making time-limited credentials available via the EC2 instance metadata. This simplifies the task of creating and distributing credentials and is very convenient; I use it in my FreeBSD AMI Builder AMI, for example.
Despite being convenient, there are two rather scary problems with this feature which severely limit the situations where I'd recommend using it.” “The first problem is one of configuration: The language used to specify IAM Policies is not sufficient to allow for EC2 instances to be properly limited in their powers. For example, suppose you want to allow EC2 instances to create, attach, detach, and delete Elastic Block Store volumes automatically — useful if you want to have filesystems automatically scaling up and down depending on the amount of data which they contain. The obvious way to do this would be to "tag" the volumes belonging to an EC2 instance and provide a Role which can only act on volumes tagged to the instance where the Role was provided; while the second part of this (limiting actions to tagged volumes) seems to be possible, there is no way to require specific API call parameters on all permitted CreateVolume calls, as would be necessary to require that a tag is applied to any new volumes being created by the instance.” “As problematic as the configuration is, a far larger problem with IAM Roles for Amazon EC2 is access control — or, to be more precise, the lack thereof. As I mentioned earlier, IAM Role credentials are exposed to EC2 instances via the EC2 instance metadata system: In other words, they're available from http://169.254.169.254/. (I presume that the "EC2ws" HTTP server which responds is running in another Xen domain on the same physical hardware, but that implementation detail is unimportant.) This makes the credentials easy for programs to obtain... unfortunately, too easy for programs to obtain. UNIX is designed as a multi-user operating system, with multiple users and groups and permission flags and often even more sophisticated ACLs — but there are very few systems which control the ability to make outgoing HTTP requests. We write software which relies on privilege separation to reduce the likelihood that a bug will result in a full system compromise; but if a process which is running as user nobody and chrooted into /var/empty is still able to fetch AWS keys which can read every one of the objects you have stored in S3, do you really have any meaningful privilege separation? To borrow a phrase from Ted Unangst, the way that IAM Roles expose credentials to EC2 instances makes them a very effective exploit mitigation mitigation technique.” “To make it worse, exposing credentials — and other metadata, for that matter — via HTTP is completely unnecessary. EC2 runs on Xen, which already has a perfectly good key-value data store for conveying metadata between the host and guest instances. It would be absolutely trivial for Amazon to place EC2 metadata, including IAM credentials, into XenStore; and almost as trivial for EC2 instances to expose XenStore as a filesystem to which standard UNIX permissions could be applied, providing IAM Role credentials with the full range of access control functionality which UNIX affords to files stored on disk.
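To make the quoted point concrete: on an instance with a Role attached, any process that can open an outgoing HTTP connection, whatever its UID or chroot, can read the role credentials. A minimal sketch in Python against the original, unauthenticated metadata service the post describes:

```python
from urllib.request import urlopen

# The instance metadata service answers on a fixed link-local address.
BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

role = urlopen(BASE).read().decode().strip()   # name of the attached IAM role
creds = urlopen(BASE + role).read().decode()   # JSON with AccessKeyId, SecretAccessKey, Token
print(creds)
```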
Of course, there is a lot of code out there which relies on fetching EC2 instance metadata over HTTP, and trivial or not it would still take time to write code for pushing EC2 metadata into XenStore and exposing it via a filesystem inside instances; so even if someone at AWS reads this blog post and immediately says "hey, we should fix this", I'm sure we'll be stuck with the problems in IAM Roles for years to come.” “So consider this a warning label: IAM Roles for EC2 may seem like a gun which you can use to efficiently and reliably shoot yourself in the foot; but in fact it's more like a gun which is difficult to aim and might be fired by someone on the other side of the room snapping his fingers. Handle with care!” *** Open-source storage that doesn't suck? Our man tries to break TrueNAS (http://www.theregister.co.uk/2016/10/18/truenas_review/) The storage reviewer over at TheRegister got their hands on a TrueNAS and gave it a try. “Data storage is difficult, and ZFS-based storage doubly so. There's a lot of money to be made if you can do storage right, so it's uncommon to see a storage company with an open-source model deliver storage that doesn't suck.” “To become TrueNAS, FreeNAS's code is feature-frozen and tested rigorously. Bleeding-edge development continues with FreeNAS, and FreeNAS comes with far fewer guarantees than does TrueNAS.” “iXsystems provided a Z20 hybrid storage array. The Z20 is a dual-controller, SAS-based, high-availability, hybrid storage array. The testing unit came with a 2x 10GbE NIC per controller and retails around US$24k. The unit shipped with 10x 300GB 10k RPM magnetic hard drives, an 8GB ZIL SSD and a 200GB L2ARC SSD. 50GiB of RAM was dedicated to the ARC by the system's autotune feature.” The review tests the performance of the TrueNAS, which they found acceptable for spinning rust, but they also tested the HA features. While the look of the UI didn't impress them, the functionality and built-in help did: “The UI contains truly excellent mouseover tooltips that provide detailed information and rationale for almost every setting. An experienced sysadmin will be able to navigate the TrueNAS UI with ease. An experienced storage admin who knows what all the terms mean won't have to refer to a wiki or the more traditional help manual, but the same can't be said for the uninitiated.” “After a lot of testing, I'd trust my data to the TrueNAS. I am convinced that it will ensure the availability of my data to within any reasonable test, and do so as a high availability solution. That's more than I can say for a lot of storage out there.” “iXsystems produce a storage array that is decent enough to entice away some existing users of the likes of EMC, NetApp, Dell or HP. Honestly, that's not something I thought possible going into this review. It's a nice surprise.” *** OpenBSD now officially on GitHub (https://github.com/openbsd) Got a couple of new OpenBSD items to bring to your attention today. First up, for those who didn't know, OpenBSD development has (always?) taken place in CVS, similar to NetBSD and previously FreeBSD. However today, Git fans can rejoice, since there is now an “official” read-only GitHub mirror of their sources for public consumption. Since this is read-only, I will assume (unless told otherwise) that pull-requests and whatnot aren't taken. But this will come in handy for the “git-enabled” among us who need an easier way to check out OpenBSD sources. There is also not yet a guarantee about the stability of the exporter.
If you base a fork on the GitHub branch and something goes wrong with the exporter, the data may be re-exported with different hashes, making it difficult to rebase your fork.

How to install LibertyBSD or OpenBSD on a libreboot system (https://libreboot.org/docs/bsd/openbsd.html)

For the second part of our OpenBSD stories, we have a pretty detailed document posted over at libreboot.org with details on how to bootstrap OpenBSD (or LibertyBSD) using their open-source BIOS replacement. We've covered blog posts and other tidbits about this process in the past, but this seems to be the definitive version (so far) to reference. Some of the niceties include instructions on getting the USB image formatted not just on OpenBSD, but also FreeBSD, Linux and NetBSD. Instructions on how to boot without full-disk encryption are provided, with a mention that Libreboot + GRUB does not yet support FDE. I would imagine somebody will need to port over the OpenBSD FDE crypto support to GRUB, as was done with GELI at some point. Lastly, some instructions on how to configure GRUB, and how to troubleshoot if something goes wrong, round out this story. Give it a whirl, let us know if you run into issues.

Editorial Aside - Personally I find the libreboot stuff fascinating. It really is one of the last areas where we don't have full open-source control of our systems. With the growth of EFI, it seems we rely on a closed-source binary / mini-OS of sorts just to boot our open-source solutions, which needs to be addressed. Hats off to the LibreBoot folks for taking on this important challenge.

***

FreeNAS 9.10 – LAGG & VLAN Overview (https://www.youtube.com/watch?v=wqSH_uQSArQ)

A video tutorial on FreeNAS's official YouTube channel. Covers the advanced networking features, Link Aggregation and VLANs. Covers what the features do, and in the case of LAGG, how each of the modes works and when you might want to use it.
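FreeNAS drives all of this through its UI, but underneath it is FreeBSD's lagg(4) and vlan(4). A hand-rolled sketch of the same setup at the shell (the interface names em0/em1, the VLAN tag 100, and the address are placeholders of ours, not from the video) looks something like:

    # Bundle two NICs into a LACP lagg, then stack a tagged VLAN on top
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport em0 laggport em1 up
    ifconfig vlan100 create
    ifconfig vlan100 vlan 100 vlandev lagg0 inet 192.0.2.10/24 up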
***

Beastie Bits

Remote BSD Developer Position is up for grabs (https://www.cybercoders.com/bsd-developer-remote-job-305206)
Isilon is hiring for a FreeBSD Security position (https://twitter.com/jeamland/status/785965716717441024)
Google has ported the networked real-time multi-player BSD game "hunt" to the web (https://github.com/google/web-bsd-hunt)
A bunch of OpenBSD tips (http://www.vincentdelft.be)
The last OpenBSD 6.0 Limited Edition CD has sold (http://www.ebay.com/itm/-/332000602939)
Dan spots George Neville-Neil on TV at the airport (https://twitter.com/DLangille/status/788477000876892162)
gnn on CNN (https://www.youtube.com/watch?v=h7zlxgtBA6o)
SoloBSD releases v6.0, built upon OpenBSD (http://solobsd.blogspot.com/2016/10/release-solobsd-60-openbsd-edition.html)
Upcoming KnoxBug looks at PacBSD - Oct 25th (http://knoxbug.org/content/2016-10-25)

Feedback/Questions

Morgan - Ports and Packages (http://pastebin.com/Kr9ykKTu)
Mat - ZFS Memory (http://pastebin.com/EwpTpp6D)
Thomas - FreeBSD Path Length (http://pastebin.com/HYMPtfjz)
Cy - OpenBSD and NetHogs (http://pastebin.com/vGxZHMWE)
Lars - Editors (http://pastebin.com/5FMz116T)

***

The wait is over: FreeBSD 11.0 has (officially) launched. We'll have coverage of this, plus a couple of looks back at UNIX history, and a crowd-favorite guest today. This episode was brought to you by

Headlines

FreeBSD 11.0-RELEASE Now Available (https://lists.freebsd.org/pipermail/freebsd-announce/2016-October/001760.html)

FreeBSD 11.0-RELEASE is now officially out. A last-minute reroll to pick up OpenSSL updates and a number of other security fixes meant the release was a little behind schedule, and shipped as 11.0-RELEASE-p1, but the release is better for it.

Improved support for 802.11n and various WiFi drivers.
Support for the AArch64 (arm64) architecture has been added.
Native graphics support has been added to the bhyve(8) hypervisor.
A new flag, "onifconsole", has been added to /etc/ttys. This allows the system to provide a login prompt via serial console if the device is an active kernel console; otherwise it is equivalent to off.
The xz(1) utility has been updated to support multi-threaded compression.
A number of kernel panics related to VNET have been fixed.
The IMAGACT_BINMISC kernel configuration option has been enabled by default, which enables application execution through emulators, such as QEMU, via binmiscctl(8).
The GENERIC kernel configuration has been updated to include the IPSEC option by default.
The kern.osrelease and kern.osreldate sysctls are now configurable jail(8) parameters.
A new sysctl(8), kern.racct.enable, has been added, which when set to a non-zero value allows using rctl(8) with the GENERIC kernel. A new kernel configuration option, RACCT_DISABLED, has also been added.
The minimum (vfs.zfs.arc_min) and maximum (vfs.zfs.arc_max) values for the ZFS adaptive replacement cache can be modified at runtime.

Changes to watch out for: OpenSSH DSA key generation has been disabled by default. It is important to update OpenSSH keys prior to upgrading. Additionally, Protocol 1 support has been removed. By default, the ifconfig(8) utility will set the default regulatory domain to FCC on wireless interfaces. As a result, newly created wireless interfaces with default settings will have less chance of violating country-specific regulations.

An issue was discovered with Amazon® EC2™ images which would cause the virtual machine to hang during boot when upgrading from previous FreeBSD versions. New EC2™ installations are not affected, but existing installations running earlier releases are advised to wait until the issue is resolved in an Errata Notice before upgrading. An Errata Notice to address this is planned following the release.
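A few of those new knobs in action, as a quick sketch of our own (the file name, ARC size, and jail name are made up for illustration):

    # Multi-threaded xz compression, one worker per core
    xz -T0 dump.sql
    # Resize the ZFS ARC cap at runtime (here: 8 GiB)
    sysctl vfs.zfs.arc_max=8589934592
    # Per-jail osrelease override via the new jail(8) parameters
    jail -c name=test path=/jails/test osrelease=10.3-RELEASE command=/bin/sh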
***

process listing consistency (http://www.tedunangst.com/flak/post/process-listing-consistency)

Ted Unangst asks: how consistent is the output of ps(1)? If processes are starting and exiting constantly, and you run ps(1), is the output guaranteed to reflect that exact moment in time, or might it include some processes that have gone away before ps(1) exited, and include some processes that did not exist when ps(1) was started? Ted provides a little example chicken/egg program to try to create such an inconsistency, so you can test out your OS.

On OpenBSD, ps(1) was switched away from reading kernel memory directly, and instead uses the KERN_PROC_ALL sysctl. Thus sysctl can iterate over the entire process list, copying out information to ps(1), without blocking. If we prevent processes from forking or exiting during this time, we get a consistent snapshot. The snapshot may be stale, but it will never show us a viewpoint that never happened.

So, OpenBSD will always be consistent, or will it? Is there a way to trick ps on OpenBSD? Not everything is consistent. There's a separate sysctl, KERN_PROC_ARGV, that reads the command line arguments for a process, but it only works on one process at a time. Processes can modify their own argv at any time. A second test program changes the process title of both the chicken and the egg, and if you run ps(1), you can get back a result that never actually happened. The argv of the first program is read by ps(1), and in the meantime, it changes to a different value. The second program also changes its value, so now when ps(1) reads it, it sees the new value, not the original value from when ps(1) was started. So the output is not that consistent, but is it worth the effort to try to make it so?

DragonFlyBSD - if_iwm - Add basic powermanagement support via ifconfig wlan0 powersave (http://lists.dragonflybsd.org/pipermail/commits/2016-October/624673.html)

WiFi can often be one of the biggest drains on your laptop battery, so anything we can do to improve the situation should be embraced. Imre Vadász over at the DragonFly project has done just that, porting over a new set of power management support from Linux to the if_iwm driver.

"if_iwm - Add basic powermanagement support via ifconfig wlan0 powersave. The DEVICE_POWER_FLAGS_CAM_MSK flag was removed in the upstream iwlwifi in Linux commit ceef91c89480dd18bb3ac51e91280a233d0ca41f. Add sc_ps_disabled flag to struct iwm_softc, which corresponds to mvm->ps_disabled in struct iwl_mvm in Linux iwlwifi. Adds a hw.iwm.power_scheme tunable which corresponds to the power_scheme module parameter in Linux iwlwifi. Set this to 1 for completely disabling power management, 2 (default) for balanced power management, and 3 for low-power mode (which does DTIM period skipping). Imports the constants.h file from iwlwifi as if_iwm_constants.h. This doesn't allow changing the power management setting while connected; also, one can only choose between enabled and disabled powersaving with ifconfig (so switching between balanced and low-power mode requires rebooting to change the tunable). After any changes to power management (i.e. "ifconfig wlan0 powersave" to enable power management, or "ifconfig wlan0 -powersave" for disabling power management), one has to disconnect and reconnect to the access point for the change to take effect."

Good stuff! These positive changes need to happen more often and sooner, so we can all eke out every drop of power from our respective laptops.
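Putting the commit message together into a hypothetical usage sketch (the tunable name is our reconstruction from the commit, with underscores restored): pick the scheme at boot, then toggle powersave from userland.

    # Select the power scheme at boot; per the commit it cannot be
    # changed at runtime: 1 = power management off, 2 = balanced
    # (default), 3 = low-power / DTIM period skipping
    echo 'hw.iwm.power_scheme=3' >> /boot/loader.conf
    # After boot, toggle power management per interface
    ifconfig wlan0 powersave      # enable
    ifconfig wlan0 -powersave     # disable
    # Then disconnect and reconnect to the access point to apply it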
***

Helping out an Internet Friend…Dual boot OpenBSD (https://functionallyparanoid.com/2016/10/03/helping-out-an-internet-friend/)

Dual-booting OpenBSD and Linux, via UEFI. A year ago we wouldn't be discussing this, but today we have an article where somebody has done exactly that. This journey was undertaken by Brian Everly (Indiana Bug), partly due to a friend who wanted to dual-boot his laptop, which already has an existing UEFI install on it. As a proof of concept, he began by replicating the setup in VMware with UEFI. He started by throwing Ubuntu into the VM, with some special attention paid to partitioning to ensure enough room left over for OpenBSD later.

"I created a 64MB EFI partition at the front of the disk. Next, I created a 20GB primary partition at the beginning of the space, mounted as the root (/) filesystem. I then added a 4096MB swap partition for Ubuntu. Finally, I used the rest of the free space to create a Reserved BIOS Boot Area FAT32 partition that was not associated with a mount point – this is where I will be installing OpenBSD."

With that done, he wrapped up the Ubuntu installation and then turned to the OpenBSD side. Some manual partitioning was required to install to the "Reserved FAT32" partition.

"I mashed through the defaults in the OpenBSD installer until I got to the disk partitioning. Since I told VMware to make my hard drive an IDE one, I knew I was playing around with wd0 and not sd0 (my USB key). I dumped into fdisk by selecting to (E)dit the partition scheme and saw my setup from Linux. First was the EFI partition (I am guessing I'll have to copy my bootx64.efi file to that at some point), second was the Linux ext4 partition, third was my Linux swap partition and fourth was a weird looking one that is the "Reserved BIOS Boot" partition. That's the one I'll fiddle with. Issuing the command "edit 3" allowed me to fiddle with partition #3 (remember, we start counting at zero). I set its type to "A6" (OpenBSD) and then took the defaults, with the exception of naming it "OpenBSD". A quick "write" followed by a "quit" allowed me to update my new partition and get back to the installer."

Once the installation was wrapped up (OpenBSD helpfully already created the /boot/EFI partition with the correct EFI loader installed), he was able to reboot and select between the two systems at the UEFI BIOS screen. For kicks, he lastly went into Ubuntu and grabbed rEFInd. Installing rEFInd provided a fancy graphical selector between the two systems without too much trouble. Next step will be to replicate this process on his friend's laptop. Wishing you luck with that journey!

Interview - Bryan Cantrill, CTO of Joyent

***

News Roundup

After 22 Years, 386BSD Gets An Update (https://bsd.slashdot.org/story/16/10/09/0230203/after-22-years-386bsd-gets-an-update)

Slashdot brings us an interesting mention this week: after 22 years, we now have an update to 386BSD. 386BSD was last released back in 1994 with a series of articles in Dr. Dobb's Journal -- but then developers for this BSD-based operating system started migrating to both FreeBSD and NetBSD. An anonymous Slashdot reader writes: "The last known public release was version 0.1. Until Wednesday, when Lynne Jolitz, one of the co-authors of 386BSD, released the source code to version 1.0 as well as 2.0 on GitHub. 386BSD takes us back to the days when you could count every file in your Unix distribution and, more importantly, read and understand all of your OS source code. 386BSD is also the missing link between BSD and Linux. One can find fragments of Linus Torvalds's math emulation code in the source code of 386BSD. To quote Linus: 'If 386BSD had been available when I started on Linux, Linux would probably never had happened.'" Though it was designed for Intel 80386 microprocessors, there are already instructions for launching it under the QEMU emulator. There you have it! Go grab the new hotness that is 386BSD and run it in 2016! Or perhaps you want FreeBSD 11, but to each their own.

Progress of the OpenBSD Limited Edition Signed CD set (http://undeadly.org/cgi?action=article&sid=20160929230557&mode=expanded)

An update to a story from last week! We mentioned the "very" limited edition OpenBSD 6.0 signed CD sets that had gone up for auction on eBay.
(With proceeds going to support the OpenBSD Foundation.) As of today, here's where we stand:

CD set #1 (Sep 29th + 5 days) sold for $4200 (http://www.ebay.com/itm/-/331985953783)
CD set #2 (Oct 4th + 3 days) sold for $3000 (http://www.ebay.com/itm/-/331990536246)
CD set #3 (Oct 8th + 3 days) sold for $817 (http://www.ebay.com/itm/-/331994217419)
CD set #4 (Oct 11th + 3 days) is currently up for bidding (http://www.ebay.com/itm/-/331997031152)

There you have it! The 4th set has almost wrapped up bidding, and the 5th and last set is not far behind. Be sure to grab your piece of BSD history before it's gone!

PROTOTYPE FreeBSD Jail/ZFS based implementation of the Application Container Specification (https://github.com/3ofcoins/jetpack)

"Jetpack is an experimental and incomplete implementation of the App Container Specification for FreeBSD. It uses jails as an isolation mechanism, and ZFS for layered storage."

"This document uses some language used in Rocket, the reference implementation of the App Container Specification. While the documentation will be expanded in the future, currently you need to be familiar at least with Rocket's README to understand everything."

A standard with multiple implementations that allows substitution of components, such as FreeBSD jails instead of docker/lxc etc., and ZFS instead of overlayfs etc., is very exciting.
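This isn't Jetpack's own CLI (see its README for that), just our rough sketch of the FreeBSD primitives it builds on: a ZFS clone as a cheap copy-on-write root filesystem, and a jail as the isolation boundary. The dataset and jail names are made up.

    # Snapshot a prepared base image, then clone it for a new container
    zfs snapshot zroot/jails/base@gold
    zfs clone -o mountpoint=/jails/app1 zroot/jails/base@gold zroot/jails/app1
    # Start an isolated process tree rooted in the clone
    jail -c name=app1 path=/jails/app1 host.hostname=app1 command=/bin/sh

The appeal of the design is that the clone costs almost nothing until the container diverges from the base image, which is the same trick overlayfs plays for Docker.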
Microsoft's Forgotten Unix-based Operating System (https://fossbytes.com/xenix-history-microsoft-unix-operating-system/)

Do you remember the good old days? You know, when Microsoft was the driving force behind UNIX? "Wait, what?" you may be thinking. It's true, so let's sit back and let FossBytes tell us a tale of what once was reality. The story begins sometime in the late 1970's: "Turning back the pages to the late 1970's, Microsoft entered into an agreement with AT&T Corporation to license Unix from AT&T. While the company didn't sell the OS to the public, it licensed it to other OEM vendors like Intel, SCO, and Tandy. As Microsoft had to face legal trouble due to the 'Unix' name, the company renamed it and came up with its own Unix distribution. So, AT&T licensed Unix to Redmond, and it was passed on to other OEMs as Xenix." It's interesting to recall a time when Microsoft enabled people to run Unix — an operating system originally designed for large and multiuser systems — on a microcomputer. Even though it came first, Unix was probably more powerful than MS-DOS. So whatever happened to this Microsoft-flavored UNIX, you may ask? Sadly, it was ditched for DOS due to $REASONS: "In the early 1980's, IBM was looking for an OS to power its PC. As IBM didn't want to maintain any ties with the recently split AT&T, Xenix was automatically rejected. To fulfill the tech giant's demand, Microsoft bought 86-DOS from Seattle Computer Products and managed to convince IBM to use it in their systems. Slowly, Microsoft started losing interest in Xenix and traded the full rights to Xenix to SCO, a Xenix partner company. That company filed for bankruptcy in 2007, after taking the Xenix legacy into the 21st century in the form of Open Server, previously known as SCO Unix and SCO Open Desktop." An interesting chapter in UNIX history to be sure, and funnily enough, it may come full circle someday with Microsoft beginning to show interest in UNIX and BSD once again.

***

Beastie Bits

Ohio LinuxFest 2016 wrap-up (http://blather.michaelwlucas.com/archives/2791)
Learn X in Y minutes, where X=zfs (https://learnxinyminutes.com/docs/zfs/)
Add touchscreen support for the official 7" RPi touch display (https://svnweb.freebsd.org/base?view=revision&revision=306430)
64-bit U-Boot on Raspberry Pi 3 (https://kernelnomicon.org/?p=682)
SNIA SDC 2016 Recap: Michael Dexter (https://www.ixsystems.com/blog/snia-sdc-2016-recap-michael-dexter/)
OpenZFS: Stronger than ever (https://www.ixsystems.com/blog/openzfs-devsummit-2016/)
Accurate, Traceable, and Verifiable Time Synchronization for World Financial Markets (http://nvlpubs.nist.gov/nistpubs/jres/121/jres.121.023.pdf)
ON HOLY WARS AND A PLEA FOR PEACE (https://www.ietf.org/rfc/ien/ien137.txt)

Feedback/Questions

Morgan - Zero-filling a VM (http://pastebin.com/CYcqmW7P)
Charlie - ZFS Bit-Rot (http://pastebin.com/12mNW57h)
Matias - TrueOS / Launchd (http://pastebin.com/NfYWt2cu)
Dale - DO Feedback (http://pastebin.com/UvKh2WcF)
James - DO / FreeBSD Locks? (http://pastebin.com/0cdMc88U)

***