Martin Mao is the co-founder and CEO of Chronosphere, an observability platform built for the modern containerized world. Before Chronosphere, Martin led the observability team at Uber, tackling the unique challenges of large-scale distributed systems, and before that he was a technical lead at AWS, building scalable and reliable infrastructure. In this episode, he shares the story behind Chronosphere, its approach to cost-efficient observability, and the future of monitoring in the age of AI.

What you'll learn:
- The specific observability challenges that arise when transitioning to containerized environments and microservices architectures, including increased data volume and new sources of problems.
- How Chronosphere addresses wasteful data storage with features that identify and optimize useful data, so customers only pay for valuable insights.
- Chronosphere's strategy for competing with observability solutions from major cloud providers like AWS, Azure, and Google Cloud by focusing on a specialized, end-to-end product.
- How Chronosphere's products, including its observability platform and telemetry pipeline, improve the process of detecting and resolving problems.
- How Chronosphere is leveraging AI and knowledge graphs to normalize unstructured data, enhance its analytics engine, and provide more effective insights to customers.
- Why targeting early adopters and tech-forward companies benefits product innovation, providing valuable feedback for further improvements and new features.
- How observability requirements are changing with the rise of AI and LLM-based applications, and the distinct data collection and evaluation criteria needed for GPUs.

Takeaways:
- Chronosphere originated from the observability challenges faced at Uber, where existing solutions couldn't handle the scale and complexity of a containerized environment.
- Cost efficiency is a major differentiator for Chronosphere; it offers a significantly better cost-benefit ratio than other solutions, making it attractive for companies operating at scale.
- The company's telemetry pipeline product can be used alongside existing observability solutions such as Splunk and Elastic to reduce costs without requiring a full platform migration.
- Chronosphere's architecture is deliberately single-tenant to minimize coupled infrastructure, ensuring reliability and continuous monitoring even when core components go down.
- AI-driven insights for observability may not benefit from LLMs trained on private business data, which can be so diverse that models overfit to a specific case.
- Many tech-forward companies use the platform to monitor model training, which involves GPU clusters and evaluation criteria unlike those for general CPU workloads.
- The company sees large potential in cleaning diverse data and building knowledge graphs that serve as a source of useful information when problems are detected.

Subscribe to Startup Project for more engaging conversations with leading entrepreneurs!
→ Email updates: https://startupproject.substack.com/

#StartupProject #Chronosphere #Observability #Containers #Microservices #Uber #AWS #Monitoring #CloudNative #CostOptimization #AI #ArtificialIntelligence #LLM #MLOps #Entrepreneurship #Podcast #YouTube #Tech #Innovation
In this special episode of De Nederlandse Kubernetes Podcast, recorded at SRE Day 2024, Ronald Kers (CNCF Ambassador) and Jan Stomphorst (Solutions Architect at ACC ICT) dive into the world of observability, cloud-native monitoring, and cost savings together with Erik Schabell (Director Technical Marketing & Evangelism at Chronosphere).

Observability is an essential part of modern IT infrastructure, but many organizations struggle with the explosive growth of data and the costs that come with it. Erik shares experiences from his long career in open source and cloud technologies, and how he moved from Red Hat to Chronosphere to help companies optimize their observability stack.

We discuss:
- How Chronosphere helps organizations keep costs and data explosion under control.
- Why much of the data that is collected is never used, yet still carries high storage and processing costs.
- The shift from traditional monitoring tools to cloud-native observability solutions.
- How SRE teams can avoid being flooded by redundant alerts and metrics.
- The psychological impact of on-call shifts and how tooling can help SREs reduce stress.

We also cover AI, vendor lock-in, and the future of Kubernetes. Containers and Kubernetes have transformed the IT world, but what will be the next big innovation? AI? WebAssembly? Serverless? Erik shares his vision of where the industry is heading and how observability remains the key to more efficient and reliable IT systems.

Send us a message.

ACC ICT - Specialist in IT continuity: business-critical applications and data securely available, independent of third parties, anytime and anywhere.

Like and subscribe! It helps out a lot.

You can also find us on:
De Nederlandse Kubernetes Podcast - YouTube
Nederlandse Kubernetes Podcast (@k8spodcast.nl) | TikTok
De Nederlandse Kubernetes Podcast

Where can you meet us: Events

This Podcast is powered by: ACC ICT - IT-Continuïteit voor Bedrijfskritische Applicaties | ACC ICT
In order to clear their names in the eyes of Sigil's more powerful factions, the detectives of Open and Shut must steal from the Dead themselves. Information is gathered, every boon is called in, and the Mortuary looms as the Chronosphere beckons. They cannot fail, but do they have the brains to succeed?

Our show contains fantasy violence (and the occasional foul language), treat us like a PG-13 program!

Kyana and Finbar Pin Pack Available Now! Presale has been extended for one week, preorder for yourself at: https://crowdmade.com/collections/rolling-with-difficulty

Special Thank You to Our Friends at 5 GMs in a Trenchcoat! Check Them Out:
https://www.5gmsinatrenchcoat.com/
Instagram: @5gmsinatrenchcoat
Threads: @5gmsinatrenchcoat
Twitter: @5GMsOfficial
BlueSky: @5gmsofficial.bsky.social
Tik Tok: @5gmsinatrenchcoat

Special Thank You to Our Friends at Twice Rolled Tales:
https://linktr.ee/twicerolledtales
https://www.twicerolledtales.com/
YouTube: https://www.youtube.com/@twicerolledtales
BlueSky: @twicerolledtales.bsky.social

Rolling with Difficulty Patreon: patreon.com/rollingwithdifficulty
Rolling with Difficulty Discord: https://discord.gg/6uAycwAhy6
Merch: Redbubble: https://www.redbubble.com/people/RWDPodcast/shop?asc=u

Contact the Pod:
rollwithdifficulty@gmail.com
Twitter: @rollwdifficulty
Instagram: @rollwithdifficulty
RSS Feed: https://rollingwithdifficultypod.transistor.fm/
Youtube: https://www.youtube.com/c/RollingwithDifficulty
Tik Tok: @rollwithdifficulty
BlueSky: @rollwithdifficulty.bsky.social

Cast:
Dungeon Master - Austin Funk
Twitter: @atthefunk
The Set's Journal of Faerun: https://www.dmsguild.com/product/345568/The-Sets-Journal-of-Faerun-Vol-1?term=the+set
BlueSky: @atthefunk.bsky.social

Bileyg - OSP Red
Twitter: @OSPyoutube
Instagram: @overly.sarcastic.productions
Overly Sarcastic Productions: https://www.youtube.com/c/OverlySarcasticProductionsChannel/

Ingrid - Sophia Ricciardi
Twitter: @sophie_kay_
Instagram: @_sophie_kay
Moviestruck: https://moviestruck.transistor.fm/
Patreon: https://www.patreon.com/moviestruck
BlueSky: @sophiekay.bsky.social

Fally - Linnie
Instagram: @linnieschell
@linnieschell on all platforms

Oobtaglor - Noir
Twitter: @NoirGalaxies
BlueSky: @noirgalaxies.bsky.social

Want to send us snail mail? Use this Address:
Austin Funk
1314 5th Ave
PO Box # 1163
Bay Shore NY 11706

Character Art by @stuckinspace
Background Art by @tanukimi.s
Music by: Dominic Ricciardi
https://soundcloud.com/dominicricciardimusic

Featured Tracks:
Open and Shut Theme
Big Downtime
Investigation Theme
Mysterious Theme
Tense Moment
Final Battle
City of Doors Theme

★ Support this podcast on Patreon ★
Deep Energy 2.0 - Music for Sleep, Meditation, Relaxation, Massage and Yoga
Background Music for Sleep, Meditation, Relaxation, Massage, Yoga, Studying and Therapy.

Hi everyone, this is Jim Butler and welcome to the Deep Energy Podcast 1929 to 1932 - Chronosphere - Parts 1 - 4.

Please remember to turn on automatic downloads, like and subscribe, tell a friend, share with your family and leave a review. All of those things help build the podcast. Thank you so much!

This podcast is ad supported. We try the best we can to keep all of the ads at the front and the back of the podcast, but depending on the length of the podcast, there may be ads in the middle. Please check my Bandcamp page for ad-free podcasts: www.jimbutler.bandcamp.com

Watch me play live on TikTok @jimbutlermusic or on the Insight Timer app as Jim Butler.

Links for all of the podcasts in the Deep Energy Podcast Network:

Deep Energy Podcast (Current Episodes)
https://podcasts.apple.com/us/podcast/deep-energy-podcast-music-for-sleep-meditation-yoga/id511265415
https://open.spotify.com/show/1DhN56DzDKc0FhQqR23v9c

Deep Energy Classics - All of the ORIGINAL Episodes
https://podcasts.apple.com/us/podcast/deep-energy-classics-original-episodes/id1734274408
https://www.spreaker.com/podcast/deep-energy-classics-original-episodes--6108618
https://open.spotify.com/show/7BjEFnqcyKWUkHcYdtFS25?si=05aeab39b5bc4a00

Deep Energy Daily Affirmations - Daily Affirmations to get you through the day
https://podcasts.apple.com/us/podcast/deep-energy-daily-affirmations/id1729162791
https://open.spotify.com/show/0oaA8dRsWDQLkqeXmykGvu?si=461c7b47417b4e55

Deep Energy Guided Meditations - Guided Meditations to help through your day
https://podcasts.apple.com/us/podcast/deep-energy-guided-meditations-with-michelle-davis-jim/id1732674561
https://open.spotify.com/show/1Kg2LTaFux10Ul94phybp8?si=ebbbf33757d64c13
https://www.spreaker.com/podcast/deep-energy-guided-meditations-with-michelle-davis-jim-butler--6098026

Slow Piano for Sleep - Solo Piano Pieces
https://podcasts.apple.com/us/podcast/slow-piano-for-sleep-music-for-sleep-meditation-and/id1626828397
https://www.spreaker.com/podcast/slow-piano-for-sleep-music-for-sleep-meditation-and-relaxation--5572963

www.jimbutlermusic.com
jimbutlermusic@gmail.com
All social media (FB - IG - YT - TT): @jimbutlermusic
Merch: www.deepenergy.threadless.com
Bandcamp Monthly No Ads Subscription/Patreon: www.jimbutler.bandcamp.com
Custom Made Music: jimbutlermusic@gmail.com

Thank you for listening. All music is created, performed and composed by Jim Butler. AI IS NEVER USED TO CREATE MY MUSIC. Until the next time, please be kind to one another, peace, bye.

Original Image by the Dream App (not sponsored) or Canva (not sponsored).

Become a supporter of this podcast: https://www.spreaker.com/podcast/deep-energy-podcast-music-for-sleep-meditation-yoga-and-studying--4262945/support.
This special episode, recorded live at KubeCon Salt Lake City last November, is with Martin Mao, CEO and co-founder at Chronosphere.

We talked about how M3 was foundational to the early history of Chronosphere, and how Martin and his co-founder were able to leverage M3, which they had written while still working at Uber. One of the most important aspects of this story is that M3 is the foundation Chronosphere is built on: because it had already been developed over four years at Uber, while they were still on Uber's payroll, they could get to market dramatically faster than would otherwise have been possible once they decided to build a company.

Chronosphere's core platform is a proprietary SaaS product, but it still has a significant relationship with two other projects: Perses, which was developed at Chronosphere and donated to the CNCF in 2024, and Fluent Bit, a CNCF graduated project that was originally developed by Calyptia and became part of Chronosphere when it acquired Calyptia.

We talked about:
- The pros and cons of donating projects to the CNCF, from the perspective of the company creating the project as well as the interests of the community and the project itself
- Why Chronosphere's core platform isn't open source itself
- How a company can end up getting financial advantages from being the steward of a large open source community, even if the connection doesn't always seem obvious
- How product roadmaps are managed for the two projects versus how they're managed for Chronosphere's proprietary products

If you're building a company around an open source project and aren't sure how to manage the relationship between the project and product, you might want to work with me or come to Open Source Founders Summit this May.
Segments: Early Adopters, News, Freebies, Backlog Blog, White Elephant, One Last Thing

Don't forget to follow/reach us at:
Twitter: @SuperGGRadio
Email: SuperGGRadio@gmail.com
Wordpress: www.superggradio.wordpress.com
Facebook: https://www.facebook.com/SuperGGRadio
Twitch: Twitch.tv/superggradio
Youtube: http://tiny.cc/1xr07y

Games Discussed:
One Bear Army https://store.steampowered.com/app/2646480/One_Bear_Army/
Enter the Chronosphere https://store.steampowered.com/app/1969810/Enter_the_Chronosphere/
Driving is Hard https://store.steampowered.com/app/3175860/Driving_Is_Hard/
Dragon Age Veilguard Interaction Isn't Explicit

Music Credits:
Introduction: I Got A Story by DJ Quads
Intermission: Chill Jazzy Lofi Hip Hop by DJ Quads
End Credits: Show Me by DJ Quads

Mixed by Joel DeWitte
Edited by Joel DeWitte
Jeff and Christian welcome Lucy James from GameSpot and Giant Bomb to the show this week to discuss a new report claiming gamers spend more time watching others play than playing themselves, Apple's plan to support PSVR 2 motion controllers on Vision Pro, and the huge PC Gamer Most Wanted showcase.

The Playlist:
Lucy: Indiana Jones and the Great Circle, After Inc
Christian: Indiana Jones and the Great Circle
Jeff: Path of Exile 2

VR Talk:
Christian: Skydance's Behemoth
Jeff: Skydance's Behemoth

Parting Gifts!
Chronosphere Fiction is a storytelling podcast where writers' creations come to life with sound effects and music. Our Narrator sets the scenes from a big empty theater. An earthquake in a British tourist cave exposes the skeleton of a murder victim. However, a forensics examination reveals further details so shocking that baffled police must close off the investigation to the public and hastily summon leading scientists. Learn more about your ad choices. Visit megaphone.fm/adchoices
From elementary school music teacher to Senior Cloud Engineer at Defiance Digital, Mike Gray has lived quite a few lives. He hit it off with Corey during the AWS New York Summit this past summer. What brought them together? Their mutual frustration with what dominated the discourse of the event: the current fascination with GenAI. Although Mike has his qualms with AI, he also enjoys working with it quite a bit; as a matter of fact, he uses it to help automate his home and appliances. From exploring what goes into consulting customers on cloud products to the nightmare of having your kids hijack your Alexa with an endless stream of children's music, this episode features twists and turns, leaving no stone unturned.

Show Highlights:
(0:00) Intro
(0:40) Chronosphere sponsor read
(1:14) The responsibilities of a Senior Cloud Engineer at Defiance Digital
(2:07) Cloud product consulting
(3:27) The challenges of working with Kubernetes
(7:50) Mike's problems with AI
(9:33) Challenges with home automation
(15:38) Chronosphere sponsor read
(16:13) The joys of home automation
(18:34) Preferred hardware for home automation
(20:10) Home automation and the impact on your relationships and kids
(23:43) Going from teaching kids to the world of tech
(28:42) Where you can find more from Mike

About Mike Gray
Mike Gray is a technologist, currently employed as a Senior Cloud Engineer, with a focus on Amazon Web Services and Google Cloud Platform. In previous roles, he has worked with companies of every size, from single-digit-employee startups to Fortune 500 companies. In a past life, Mike worked as a professional musician and music educator. Mike is also an active open source contributor, splitting his time between OpenVoiceOS and Neon AI. Think of it as open source Alexa, but all your data stays at home.

Links
Mike's website: https://graywind.org
Mike's email: mike@graywind.org
Mike's Twitter: https://x.com/saxmanmike

Sponsor
Chronosphere: https://chronosphere.io/?utm_source=duckbill-group&utm_medium=podcast
It turns out you don't need to step outside to observe the clouds. On this episode, we're joined by Chronosphere Field CTO Ian Smith. He and Corey delve into the innovative solutions Chronosphere offers, share insights from Ian's experience in the industry, and discuss the future of cloud-native technologies. Whether you're a seasoned cloud professional or new to the field, this conversation with Ian Smith is packed with valuable perspectives and actionable takeaways.

Show Highlights:
(0:00) Intro
(0:42) Chronosphere sponsor read
(1:53) The role of Chief of Staff at Chronosphere
(2:45) Getting recognized in the Gartner Magic Quadrant
(4:42) Talking about the buying process
(8:26) The importance of observability
(10:18) Guiding customers as a vendor
(12:19) Chronosphere sponsor read
(12:46) What should you do as an observability buyer
(16:01) Helping orgs understand observability
(19:56) Avoiding toxically positive endorsements
(24:15) Being transparent as a vendor
(27:43) The myth of "winner take all"
(30:02) Short term fixes vs. long term solutions
(33:54) Where you can find more from Ian and Chronosphere

About Ian Smith
Ian Smith is Field CTO at Chronosphere, where he works across sales, marketing, engineering, and product to deliver better insights and outcomes to observability teams supporting high-scale cloud-native environments. Previously, he worked with observability teams across the software industry in pre-sales roles at New Relic, Wavefront, PagerDuty, and Lightstep.

Links
Chronosphere: https://chronosphere.io/?utm_source=duckbill-group&utm_medium=podcast
Ian's Twitter: https://x.com/datasmithing
Ian's LinkedIn: https://www.linkedin.com/in/ismith314159/

Sponsor
Chronosphere: https://chronosphere.io/?utm_source=duckbill-group&utm_medium=podcast
After years of trying, Corey has finally convinced a TAM to come on the show! In this lively episode, AWS Senior Technical Account Manager Shlomo Dubrowin takes the mic to share his fascinating experiences dealing with cloud complexities. Listen in as Shlomo recounts building AWS Reasonable Account Defaults from scratch, stresses the importance of writing a solid application, and shares the benefits of leveraging GenAI to help maintain his work. Don't miss this entertaining and insightful conversation that could save you a few bucks!

Show Highlights:
(0:00) Intro
(0:42) Chronosphere sponsor read
(1:15) Finally getting a TAM on the show
(2:24) Providing quality customer service as a TAM
(5:31) AWS Reasonable Account Defaults
(11:01) What went into crafting AWS Reasonable Account Defaults
(12:20) Chronosphere sponsor read
(12:54) Writing a program that won't break easily
(17:25) Optimizing billing data
(19:53) Transparency in costs
(21:27) Expanding AWS Reasonable Account Defaults
(23:34) Further optimizing AWS Reasonable Account Defaults in the future
(26:18) Building with GenAI
(29:01) Where you can find more from Shlomo

About Shlomo Dubrowin
Shlomo Dubrowin has been a TAM for over six years, supporting AWS customers from startups through to Fortune 100 companies. He has spoken at re:Invent twice and specializes in cost optimization. Shlomo has been in the tech industry since 1994, and he lives with his wife, son, and two dogs.

Links
Clouded Torah: https://www.clouded-torah.org/

Sponsor
Chronosphere: https://chronosphere.io/?utm_source=duckbill-group&utm_medium=podcast
In this Screaming in the Cloud Replay, we revisit a spirited debate between Corey and the VP of Product and Industry Marketing at Google Cloud, Brian Hall. The topic: how much time should one spend in a job? Thankfully, their conversation doesn't limit itself to just that. Corey and Brian chat about how social media's failure to capture nuance and context can lead to some unfortunate misinterpretations. Brian offers some insight on his significant amount of time spent at Microsoft in various roles. He gives his perspective on how one should optimize their career path for where they want to go, and not just follow the money. Tune in to see how Corey and Brian let the dust settle and develop what was a disagreement into a well-rounded conversation.

Show Highlights:
(0:00) Intro
(1:02) Chronosphere sponsor read
(1:36) Job hopping vs. job loyalty
(6:14) Being in the right place at the right time
(9:57) Investing in the job vs. the job investing in you
(13:31) Weighing the cost of job hopping
(20:14) Chronosphere sponsor read
(20:47) Changing jobs to get a raise
(24:02) How to attract people as a cloud employer
(26:31) Changing paths into the industry
(30:14) What's ahead for Brian
(32:33) Where you can find more from Brian

About Brian Hall
Brian Hall leads the Google Cloud Product and Industry Marketing team, focused on accelerating the growth of Google Cloud. Before joining Google, he spent more than 25 years in different forms of product marketing or engineering.

Brian is the father of three children who are all named after trees in different ways. He met his wife Edie at the beginning of their first year at Yale University, where he studied math, econ, and philosophy and was the captain of the Swim and Dive team his senior year. Edie has a PhD in forestry and runs a sustainability and forestry consulting firm she started, aptly named "Three Trees Consulting". They love the outdoors, tennis, running, and adventures in Brian's 1986 Volkswagen van, his first and only car, which he can't bring himself to get rid of.

Links:
Twitter: https://twitter.com/IsForAt
LinkedIn: https://www.linkedin.com/in/brhall/
Episode 10: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/episode-10-education-is-not-ready-for-teacherless/
Original Episode: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/letting-the-dust-settle-on-job-hopping-with-brian-hall/

Sponsor
https://chronosphere.io/?utm_source=duckbill-group&utm_medium=podcast
Much of the discourse surrounding GenAI has centered on replacement, but what if tools focused on harmony instead? In this episode of Screaming in the Cloud, Kubiya CEO Amit Eyal Govrin explains why his company is flipping the script on AI. Amit and Corey discuss the perks and shortcomings of today's automation, how Kubiya functions as a teammate alongside its human counterparts, and the GenAI trends that aren't getting the attention they deserve. If you're worrying about your job security in the current AI climate, this discussion may help put your fears at ease.

Show Highlights:
(0:00) Intro
(0:47) Chronosphere sponsor read
(1:21) What Amit and Kubiya are building
(5:34) Pros and cons of automation
(9:10) Building a virtual teammate
(12:39) Implementing AI with nuance
(16:16) Real world applications of the tech
(18:09) Firefly ad read
(18:43) The value of human review in the world of AI
(21:10) Complexities (or lack thereof) of GenAI
(24:36) What people are sleeping on when it comes to GenAI
(28:08) Where you can learn more about Kubiya

About Amit Eyal Govrin:
Amit is the CEO of Kubiya, helping the industry break through the time-to-automation paradox. He is an early pioneer in the FinOps domain, having held an executive position at Cloudyn (now Azure Cost Management), served as an advisor and early investor at Zesty, and led DevOps partnerships at AWS.

Links Referenced:
Kubiya: kubiya.ai

Sponsor
Chronosphere: https://chronosphere.io/?utm_source=duckbill-group&utm_medium=podcast
On this episode of DevOps Dialogues: Insights & Innovations, I am joined by Chronosphere's CEO, Martin Mao, for a discussion of the partnership with CrowdStrike and the recent acquisition of Calyptia.

Our discussion covers:
- Chronosphere's recently unveiled "Logs powered by CrowdStrike," a comprehensive log storage and visualization solution, and how embedding CrowdStrike Falcon LogScale within Chronosphere's cloud-native observability platform helps clients
- Chronosphere's recent acquisition of Calyptia, founded by the original creators of the Fluent ecosystem
- What's next for Chronosphere, and the next set of advancements in the observability and security space
- Cloud native coming off KubeCon

These topics reflect ongoing discussions, challenges, and innovations within the DevOps community.
Pass The Controller Podcast: A Video Game & Nerd Culture Show
The Pass The Controller Podcast is a show where a couple of best friends dive into the latest in gaming and nerd culture. In this episode, Brenden, Mike, Todd, and Dom sit down and chat about what they've been playing, including Dragon Quest 1 (Dragon Warrior), Princess Peach: Showtime!, and FFXIV. Brenden also had the chance to preview Visions of Mana, and the gang recaps their favorite PAX East 2024 games and moments. Some highlights include: LUCID, Enter the Chronosphere, and Fretless! Please get vaccinated. Be sure to SUBSCRIBE and LEAVE A REVIEW on Apple Podcasts, Spotify, or wherever you listen to the show!

passthecontroller.io
twitter.com/passcontroller
This is the PAX East 2024 retrospective and run-through of all of the games I played! Woooo!

Games featured:
Animal Well (https://store.steampowered.com/app/813230/ANIMAL_WELL/)
The Art of Flight (https://store.steampowered.com/app/1706800/The_Art_Of_Flight/)
Duck Paradox (https://store.steampowered.com/app/2058150/Duck_Paradox/)
Enter the Chronosphere (https://store.steampowered.com/app/1969810/Enter_the_Chronosphere/)
Fallen Aces (https://store.steampowered.com/app/1411910/Fallen_Aces/)
Fretless (https://store.steampowered.com/app/2429050/Fretless__The_Wrath_of_Riffson/)
Lucid (https://store.steampowered.com/app/1717730/LUCID/)
So to Speak (https://store.steampowered.com/app/1779030/So_to_Speak/)
Sorry, We're Closed (https://store.steampowered.com/app/1796580/Sorry_Were_Closed/)
Tape to Tape (https://store.steampowered.com/app/1566200/Tape_to_Tape/)
Throne of Bone (https://store.steampowered.com/app/1866630/Throne_of_Bone/)

YouTube VoD of the Nintendo DS panel I participated in! (https://youtu.be/exJGKqBZ9T4?si=1YzUCp0FvvuHz6Fc)

Support Tales from the Backlog on Patreon! (https://patreon.com/realdavejackson) or buy me a coffee on Ko-fi (https://ko-fi.com/realdavejackson)!

Join the Tales from the Backlog Discord server! (https://discord.gg/V3ZHz3vYQR)

Social Media:
Instagram (https://www.instagram.com/talesfromthebacklog/)
Twitter (https://twitter.com/tftblpod)
Facebook (https://www.facebook.com/TalesfromtheBacklog/)

Cover art by Jack Allen - find him at https://www.instagram.com/jackallencaricatures/ and his other pages (https://linktr.ee/JackAllenCaricatures)

Music composed by Bert Cole - find more music at bitbybitsound.com (www.bitbybitsound.com)
Patreon: patreon.com/bitbybitsound
Twitter: @BitByBitSound

Listen to A Top 3 Podcast on Apple (https://podcasts.apple.com/us/podcast/a-top-3-podcast/id1555269504), Spotify (https://open.spotify.com/show/2euGp3pWi7Hy1c6fmY526O?si=0ebcb770618c460c) and other podcast platforms (atop3podcast.fireside.fm)!
Cloud-native is the new gold rush for businesses seeking speed, efficiency, and innovation. But are you getting the most out of your investment? Legacy troubleshooting and observability tools can be hidden anchors, dragging down your developers' productivity. The result? You're not reaping the full benefits of cloud-native, and your competitors are leaving you in the dust. Chronosphere, a leader in modern observability, can empower your developers and unlock the true potential of cloud-native. Buckle up and get ready to discover how observability can become your secret weapon for unleashing developer agility and innovation.

In this episode of the EM360 Podcast, VP of Research at Eckerson Group, Kevin Petrie, speaks to Ian Smith, Field CTO at Chronosphere, to discuss:
- Observability
- Cloud-native software developers
- Developer inefficiency
- The Chronosphere solution
- Data pipeline best practices
Chronosphere, a startup that offers a cloud native observability platform, today announced that it has acquired Calyptia. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of The New Stack Makers, Rob Skillington, co-founder and CTO of Chronosphere, discusses the challenges engineers face in building tools for their organizations. Skillington emphasizes that the "build or buy" decision oversimplifies the issue of tooling and suggests that understanding the abstractions of a project is crucial. Engineers should consider where to build and where to buy, creating solutions that address the entire problem. Skillington advises against short-term thinking, urging innovators to consider the long-term landscape.

Drawing from his experience at Uber, Skillington highlights the importance of knowing the audience and customer base, even when they are colleagues. He shares a lesson learned when building a visualization platform for engineers at Uber, where understanding user adoption as a key performance indicator upfront could have improved the project's outcome.

Skillington also addresses the "not invented here syndrome," noting its prevalence in organizations like Microsoft and its potential impact on tool adoption. He suggests that younger companies, like Uber, may be more inclined to explore external solutions rather than building everything in-house. The conversation provides insights into Skillington's experiences and the considerations involved in developing internal tools and platforms.

Learn more from The New Stack about Software Engineering, Observability, and Chronosphere:
Cloud Native Observability: Fighting Rising Costs, Incidents
A Guide to Measuring Developer Productivity
4 Key Observability Best Practices
Rachel Dines, Head of Product and Technical Marketing at Chronosphere, joins Corey on Screaming in the Cloud to discuss why creating a cloud-native observability strategy is so critical, and the challenges that come with both defining and accomplishing that strategy to fit your current and future observability needs. Rachel explains how Chronosphere is taking an open-source approach to observability, and why it's more important than ever to acknowledge that the stakes and costs are much higher when it comes to observability in the cloud.

About Rachel
Rachel leads product and technical marketing for Chronosphere. Previously, Rachel wore lots of marketing hats at CloudHealth (acquired by VMware), and before that, she led product marketing for cloud-integrated storage at NetApp. She also spent many years as an analyst at Forrester Research. Outside of work, Rachel tries to keep up with her young son and hyper-active dog, and when she has time, enjoys crafting and eating out at local restaurants in Boston where she's based.

Links Referenced:
Chronosphere: https://chronosphere.io/
LinkedIn: https://www.linkedin.com/in/rdines/

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's featured guest episode is brought to us by our friends at Chronosphere, and they have also brought us Rachel Dines, their Head of Product and Solutions Marketing. Rachel, great to talk to you again.

Rachel: Hi, Corey. Yeah, great to talk to you, too.

Corey: Watching your trajectory has been really interesting, just because starting off, when we first started, I guess, learning who each other were, you were working at CloudHealth which has since become VMware. And I was trying to figure out, huh, the cloud runs on money. How about that? It feels like it was a thousand years ago, but neither one of us is quite that old.

Rachel: It does feel like several lifetimes ago. You were just this snarky guy with a few followers on Twitter, and I was trying to figure out what you were doing mucking around with my customers [laugh]. Then [laugh] we kind of both figured out what we're doing, right?

Corey: So, speaking of that iterative process, today, you are at Chronosphere, which is an observability company. We would have called it a monitoring company five years ago, but now that's become an insult after the observability war dust has settled. So, I want to talk to you about something that I've been kicking around for a while because I feel like there's a gap somewhere. Let's say that I build a crappy web app—because all of my web apps inherently are crappy—and it makes money through some mystical form of alchemy. And I have a bunch of users, and I eventually realize, huh, I should probably have a better observability story than waiting for the phone to ring and a customer telling me it's broken.

So, I start instrumenting various aspects of it that seem to make sense. Maybe I go too low level, like looking at all the discs on every server to tell me if they're getting full or not, like they're ancient servers. Maybe I just have a Pingdom equivalent of is the website up enough to respond to a packet?
And as I wind up experiencing different failure modes and getting yelled at by different constituencies—in my own career trajectory, my own boss—you start instrumenting for all those different kinds of breakages, you start aggregating the logs somewhere and the volume gets bigger and bigger with time. But it feels like it's sort of a reactive process as you stumble through that entire environment.And I know it's not just me because I've seen this unfold in similar ways in a bunch of different companies. It feels to me, very strongly, like it is something that happens to you, rather than something you set about from day one with a strategy in mind. What's your take on an effective way to think about strategy when it comes to observability?Rachel: You just nailed it. That's exactly the kind of progression that we so often see. And that's what I really was excited to talk with you about today—Corey: Oh, thank God. I was worried for a minute there that you'd be like, “What the hell are you talking about? Are you just, like, some sort of crap engineer?” And, “Yes, but it's mean of you to say it.” But yeah, what I'm trying to figure out is there some magic that I just was never connecting? Because it always feels like you're in trouble because the site's always broken, and oh, like, if the disk fills up, yeah, oh, now we're going to start monitoring to make sure the disk doesn't fill up. Then you wind up getting barraged with alerts, and no one wins, and it's an uncomfortable period of time.Rachel: Uncomfortable period of time. That is one very polite way to put it. I mean, I will say, it is very rare to find a company that actually sits down and thinks, “This is our observability strategy. This is what we want to get out of observability.” Like, you can think about a strategy and, like, the old school sense, and you know, as an industry analyst, so I'm going to have to go back to, like, my roots at Forrester with thinking about, like, the people, and the process, and the technology.But really what the bigger component here is like, what's the business impact? What do you want to get out of your observability platform? What are you trying to achieve? And a lot of the time, people have thought, “Oh, observability strategy. Great, I'm just going to buy a tool. That's it. Like, that's my strategy.”And I hate to bring it to you, but buying tools is not a strategy. I'm not going to say, like, buy this tool. I'm not even going to say, “Buy Chronosphere.” That's not a strategy. Well, you should buy Chronosphere. But that's not a strategy.Corey: Of course. I'm going to throw the money by the wheelbarrow at various observability vendors, and hope it solves my problem. But if that solved the problem—I've got to be direct—I've never spoken to those customers.Rachel: Exactly. I mean, that's why this space is such a great one to come in and be very disruptive in. And I think, back in the days when we were running in data centers, maybe even before virtual machines, you could probably get away with not having a monitoring strategy—I'm not going to call it observability; it's not we call the back then—you could get away with not having a strategy because what was the worst that was going to happen, right? It wasn't like there was a finite amount that your monitoring bill could be, there was a finite amount that your customer impact could be. Like, you're paying the penny slots, right?We're not on the penny slots anymore. 
We're at the $50 craps table, and it's Las Vegas, and if you lose the game, you're going to have to run down the street without your shirt. Like, the game and the stakes have changed, and we're still pretending like we're playing penny slots, and we're not anymore.Corey: That's a good way of framing it. I mean, I still remember some of my biggest observability challenges were building highly available rsyslog clusters so that you could bounce a member and not lose any log data because some of that was transactionally important. And we've gone beyond that to a stupendous degree, but it still feels like you don't wind up building this into the application from day one. More's the pity because if you did, and did that intelligently, that opens up a whole world of possibilities. I dream of that changing where one day, whenever you start to build an app, oh, and we just push the button and automatically instrument with OTel, so you instrument the thing once everywhere it makes sense to do it, and then you can do your vendor selection and, as you said, make those decisions later in time. But these days, we're not there.Rachel: Well, I mean, and there's also the question of just the legacy environment and the tech debt. Even if you wanted to, the—actually I was having a beer yesterday with a friend who's a VP of Engineering, and he's got his new environment that they're building with observability instrumented from the start. How beautiful. They've got OTel, they're going to have tracing. And then he's got his legacy environment, which is a hot mess.So, you know, there's always going to be this bridge of the old and the new. But this is where it comes back to: no matter where you're at, you can stop and think, like, “What are we doing and why?” What is the cost of this? And not just cost in dollars, which I know you and I could talk about very deeply for a long period of time, but like, the opportunity costs. Developers are working on stuff when they could be working on something that's more valuable.Or like the cost of making people work round the clock, trying to troubleshoot issues when there could be an easier way. So, I think it's like stepping back and thinking about cost in terms of dollars and cents, time, opportunity, and then also impact, and starting to make some decisions about what you're going to do in the future that's different. Once again, you might be stuck with some legacy stuff that you can't really change that much, but [laugh] you got to be realistic about where you're at.Corey: I think that that is a… it's a hard lesson, to be very direct, in that companies need to learn it the hard way, for better or worse. Honestly, this is one of the things that I always noticed in startup land, where you had a whole bunch of, frankly, relatively early-career engineers in their early-20s, if not younger. But then the ops person was always significantly older because the thing you actually want to hear from your ops person, regardless of how you slice it, is, “Oh, yeah, I've seen this kind of problem before. Here's how we fixed it.” Or even better, “Here's the thing we're doing, and I know how that's going to become a problem. 
Let's fix it before it does.” It's the, “What are you buying by bringing that person in?” “Experience, mostly.”Rachel: Yeah, that's an interesting point you make, and it kind of leads me down this little bit of a side note, but a really interesting antipattern that I've been seeing in a lot of companies is that more seasoned ops person, they're the one who everyone calls when something goes wrong. Like, they're the one who, like, “Oh, my God, I don't know how to fix it. This is a big hairy problem,” I call that one ops person, or I call that very experienced person. That experienced person then becomes this huge bottleneck into solving problems that people don't really—they might even be the only one who knows how to use the observability tool. So, if we can't find a way to democratize our observability tooling a little bit more so, like, just day-to-day engineers, like, more junior engineers, newer ones, people who are still ramping, can actually use the tool and be successful, we're going to have a big problem when these ops people walk out the door, maybe they retire, maybe they just get sick of it. We have these massive bottlenecks in organizations, whether it's ops or DevOps or whatever, that I see often exacerbated by observability tools. Just a side note.Corey: Yeah. On some level, it feels like a lot of these things can be fixed with tooling. And I'm not going to say that tools aren't important. You ever tried to implement observability by hand? It doesn't work. There have to be computers somewhere in the loop, if nothing else.And then it just seems to devolve into a giant swamp of different companies, doing different things, taking different approaches. And, on some level, whenever you read the marketing or hear the stories any of these companies tell, you also have to normalize it, translating from whatever marketing language they've got into something that comports with the reality of your own environment and seeing if they align. And that feels like it is so much easier said than done.Rachel: This is a noisy space, that is for sure. And you know, I think we could go out to ten people right now and ask those ten people to define observability, and we would come back with ten different definitions. And then if you throw a marketing person in the mix, right—guilty as charged, and I know you're a marketing person, too, Corey, so you got to take some of the blame—it gets mucky, right? But like I said a minute ago, the answer is not tools. Tools can be part of the strategy, but if you're just thinking, “I'm going to buy a tool and that's going to solve my problem,” you're going to end up like this company I was talking to recently that has 25 different observability tools.And not only do they have 25 different observability tools, what's worse is they have 25 different definitions for their SLOs and 25 different names for the same metric. And to be honest, it's just a mess. I'm not saying, like, go be Draconian and, you know, tell all the engineers, like, “You can only use this tool [unintelligible 00:10:34] use that tool,” you got to figure out this kind of balance of, like, hands-on, hands-off, you know? How much do you centralize, how much do you push and standardize? Otherwise, you end up with just a huge mess.Corey: On some level, it feels like it was easier back in the days of building it yourself with Nagios because there's only one answer, and it sucks, unless you want to start going down the world of HP OpenView. Which step one: hire a 50-person team to manage OpenView. 
Okay, that's not going to solve my problem either. So, let's get a little more specific. How does Chronosphere approach this?Because historically, when I've spoken to folks at Chronosphere, there isn't that much of a day one story, in that, “I'm going to build a crappy web app. Let's instrument it for Chronosphere.” There's a certain, “You must be at least this tall to ride,” implicit expectation built into the product just based upon its origins. And I'm not saying that doesn't make sense, but it also means there's really no such thing as a greenfield build out for you either.Rachel: Well, yes and no. I mean, I think there's no green fields out there because everyone's doing something for observability, or monitoring, or whatever you want to call it, right? Whether they've got Nagios, whether they've got the Dog, whether they've got something else in there, they have some way of introspecting their systems, right? So, one of the things that Chronosphere is built on, that I actually think this is part of something—a way you might think about building out an observability strategy as well, is this concept of control and open-source compatibility. So, we only can collect data via open-source standards. You have to send this data via Prometheus, via Open Telemetry, it could be older standards, like, you know, statsd, Graphite, but we don't have any proprietary instrumentation.And if I was making a recommendation to somebody building out their observability strategy right now, I would say open, open, open, all day long because that gives you a huge amount of flexibility in the future. Because guess what? You know, you might put together an observability strategy that seems like it makes sense for right now—actually, I was talking to a B2B SaaS company that told me that they made a choice a couple of years ago on an observability tool. It seemed like the right choice at the time. They were growing so fast, they very quickly realized it was a terrible choice.But now, it's going to be really hard for them to migrate because it's all based on proprietary standards. Now, of course, a few years ago, they didn't have the luxury of Open Telemetry and all of these, but now that we have this, we can use these to kind of future-proof our mistakes. So, that's one big area that, once again, both my recommendation and happens to be our approach at Chronosphere.Corey: I think that that's a fair way of viewing it. It's a constant challenge, too, just because increasingly—you mentioned the Dog earlier, for example—I will say that for years, I have been asked whether or not at The Duckbill Group, we look at Azure bills or GCP bills. Nope, we are pure AWS. Recently, we started to hear that same inquiry specifically around Datadog, to the point where it has become a board-level concern at very large companies. And that is a challenge, on some level.I don't deviate from my typical path of I fix AWS bills, and that's enough impossible problems for one lifetime, but there is a strong sense of you want to record as much as possible for a variety of excellent reasons, but there's an implicit cost to doing that, and in many cases, the cost of observability becomes a massive contributor to the overall cost. Netflix has said in talks before that they're effectively an observability company that also happens to stream movies, just because it takes so much effort, engineering, and raw computing resources in order to get that data do something actionable with it. 
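To make the open-standards point above concrete, here is a minimal instrumentation sketch in Python using the prometheus_client library. The metric names, labels, and port are illustrative assumptions; the point is that the exposition format is an open standard, so any Prometheus-compatible backend (Chronosphere included, per Rachel's description) can collect this data without a proprietary agent.

```python
# Minimal, vendor-neutral instrumentation sketch using the open
# Prometheus exposition format. Any Prometheus-compatible backend
# can scrape this endpoint; nothing here is proprietary.
# (Metric names, labels, and the port are illustrative.)
from prometheus_client import Counter, Histogram, start_http_server
import random
import time

REQUESTS = Counter(
    "checkout_requests_total",
    "Total checkout requests handled",
    ["status"],
)
LATENCY = Histogram(
    "checkout_request_seconds",
    "Checkout request latency in seconds",
)

def handle_request():
    start = time.monotonic()
    ok = random.random() > 0.05  # stand-in for real request handling
    LATENCY.observe(time.monotonic() - start)
    REQUESTS.labels(status="ok" if ok else "error").inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for any scraper
    while True:
        handle_request()
        time.sleep(0.1)
```

Because the data leaves the application in an open format, swapping backends later becomes a configuration change on the collection side rather than a re-instrumentation project, which is the future-proofing Rachel is arguing for.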
It's a hard problem.Rachel: It's a huge problem, and it's a big part of why I work at Chronosphere, to be honest. Because when I was—you know, towards the tail end at my previous company in cloud cost management, I had a lot of customers coming to me saying, “Hey, when are you going to tackle our Dog or our New Relic or whatever?” Similar to the experience you're having now, Corey, this was happening to me three, four years ago. And I noticed that there is definitely a correlation between people who are having these really big challenges with their observability bills and people that were adopting, like Kubernetes, and microservices and cloud-native. And it was around that time that I met the Chronosphere team, which is exactly what we do, right? We focus on observability for these cloud-native environments where observability data just goes, like, wild.We see 10X, 20X as much observability data and that's what's driving up these costs. And yeah, it is becoming a board-level concern. I mean, and coming back to the concept of strategy, like if observability is the second or third most expensive item in your engineering bill—like, obviously, cloud infrastructure, number one—number two and number three is probably observability. How can you not have a strategy for that? How can this be something the board asks you about, and you're like, “What are we trying to get out of this? What's our purpose?” “Uhhhh… troubleshooting?”Corey: Right because it turns into business metrics as well. It's not just about is the site up or not. There's a—like, one of the things that always drove me nuts not just in the observability space, but even in cloud costing is where, okay, your costs have gone up this week so you get a frowny face, or it's in red, like traffic light coloring. Cool, but for a lot of architectures and a lot of customers, that's because you're doing a lot more volume. That translates directly into increased revenues, increased things you care about. You don't have the position or the context to say, “That's good,” or, “That's bad.” It simply is. And you can start deriving business insight from that. And I think that is the real observability story that I think has largely gone untold at tech conferences, at least.Rachel: It's so right. I mean, spending more on something is not inherently bad if you're getting more value out of it. And it's definitely a challenge on the cloud cost management side. “My costs are going up, but my revenue is going up a lot faster, so I'm okay.” And I think some of the plays, like you know, we put observability in this box of, like, it's for low-level troubleshooting, but really, if you step back and think about it, there's a lot of larger, bigger picture initiatives that observability can contribute to in an org, like digital transformation. I know that's a buzzword, but, like that is a legit thing that a lot of CTOs are out there thinking about. Like, how do we, you know, get out of the tech debt world, and how do we get into cloud-native?Maybe it's developer efficiency. God, there's a lot of people talking about developer efficiency. Last week at KubeCon, that was one of the big, big topics. I mean, and yeah, what [laugh] what about cost savings? To me, we've put observability in a smaller box, and it needs to bust out.And I see this also in our customer base, you know? Customers like DoorDash use observability, not just to look at their infrastructure and their applications, but also to look at their business. 
At any given minute, they know how many Dashers are on the road, how many orders are being placed, cut by geos, down to the—actually down to the second, and they can use that to make decisions.Corey: This is one of those things that I always found a little strange coming from the world of running systems in large [unintelligible 00:17:28] environments to fixing AWS bills. There's nothing that even resembles a fast, reactive response in the world of AWS billing. You wind up with a runaway bill, they're going to resolve that over a period of weeks, on Seattle business hours. If you wind up spinning something up that creates a whole bunch of very expensive drivers behind your bill, it's going to take three days, in most cases, before that starts showing up anywhere that you can reasonably expect to get at it. The idea of near real time is a lie unless you want to start instrumenting everything that you're doing to trap the calls and then run cost extrapolation from there. That's hard to do.Observability is a very different story, where latencies start to matter, where being able to get leading indicators of certain events—be a technical or business—start to be very important. But it seems like it's so hard to wind up getting there from where most people are. Because I know we like to talk dismissively about the past, but let's face it, conference-ware is the stuff we're the proudest of. The reality is the burning dumpster of regret in our data centers that still also drives giant piles of revenue, so you can't turn it off, nor would you want to, but you feel bad about it as a result. It just feels like it's such a big leap.Rachel: It is a big leap. And I think the very first step I would say is trying to get to this point of clarity and being honest with yourself about where you're at and where you want to be. And sometimes not making a choice is a choice, right, as well. So, sticking with the status quo is making a choice. And so, like, as we get into things like the holiday season right now, and I know there's going to be people that are on-call 24/7 during the holidays, potentially, to keep something that's just duct-taped together barely up and running, I'm making a choice; you're make a choice to do that. So, I think that's like the first step is the kind of… at least acknowledging where you're at, where you want to be, and if you're not going to make a change, just understanding the cost and being realistic about it.Corey: Yeah, being realistic, I think, is one of the hardest challenges because it's easy to wind up going for the aspirational story of, “In the future when everything's great.” Like, “Okay, cool. I appreciate the need to plant that flag on the hill somewhere. What's the next step? What can we get done by the end of this week that materially improves us from where we started the week?” And I think that with the aspirational conference-ware stories, it's hard to break that down into things that are actionable, that don't feel like they're going to be an interminable slog across your entire existing environment.Rachel: No, I get it. And for things like, you know, instrumenting and adding tracing and adding OTEL, a lot of the time, the return that you get on that investment is… it's not quite like, “I put a dollar in, I get a dollar out,” I mean, something like tracing, you can't get to 60% instrumentation and get 60% of the value. You need to be able to get to, like, 80, 90%, and then you'll get a huge amount of value. 
So, it's sort of like you're trudging up this hill, you're charging up this hill, and then finally you get to the plateau, and it's beautiful. But that hill is steep, and it's long, and it's not pretty. And I don't know what to say other than there's a plateau near the top. And those companies that do this well really get a ton of value out of it. And that's the dream, that we want to help customers get up that hill. But yeah, I'm not going to lie, the hill can be steep.Corey: One thing that I find interesting is there's almost a bimodal distribution in companies that I talk to. On the one side, you have companies like, I don't know, a Chronosphere is a good example of this. Presumably you have a cloud bill somewhere and the majority of your cloud spend will be on what amounts to a single application, probably in your case called, I don't know, Chronosphere. It shares the name of the company. The other side of that distribution is the large enterprise conglomerates where they're spending, I don't know, $400 million a year on cloud, but their largest workload is 3 million bucks, and it's just a very long tail of a whole bunch of different workloads, applications, teams, et cetera.So, what I'm curious about from the Chronosphere perspective—or the product you have, not the ‘you' in this metaphor, which gets confusing—is, it feels easier to instrument a Chronosphere-like company that has a primary workload that is the massive driver of most things and get that instrumented and start getting an observability story around that than it does to try and go to a giant company and, “Okay, 1500 teams need to all implement this thing that are all going in different directions.” How do you see it playing out among your customer base, if that bimodal distribution holds up in your world?Rachel: It does and it doesn't. So, first of all, for a lot of our customers, we often start with metrics. And starting with metrics means Prometheus. And Prometheus has hundreds of exporters. It is basically built into Kubernetes. So, if you're running Kubernetes, getting Prometheus metrics out, actually not a very big lift. So, we find that we start with Prometheus, we start with getting metrics in, and we can get a lot—I mean, customers—we have a lot of customers that use us just for metrics, and they get a massive amount of value.But then once they're ready, they can start instrumenting for OTEL and start getting traces in as well. And yeah, in large organizations, it does tend to be one team, one application, one service, one department that kind of goes at it and gets all that instrumented. But I've even seen very large organizations, when they get their act together and decide, like, “No, we're doing this,” they can get OTel instrumented fairly quickly. So, I guess it's, like, a lining up. It's more of a people issue than a technical issue a lot of the time.Like, getting everyone lined up and making sure that like, yes, we all agree. We're on board. We're going to do this. But it's usually, like, it's a start small, and it doesn't have to be all or nothing. We also just recently added the ability to ingest events, which is actually a really beautiful thing, and it's very, very straightforward.It basically just—we connect to your existing other DevOps tools, so whether it's, like, a Buildkite, or a GitHub, or, like, a LaunchDarkly, and then anytime something happens in one of those tools, that gets registered as an event in Chronosphere. And then we overlay those events over your alerts. 
So, when an alert fires, then first thing I do is I go look at the alert page, and it says, “Hey, someone did a deploy five minutes ago,” or, “There was a feature flag flipped three minutes ago,” I solved the problem right then. I don't think of this as—there's not an all or nothing nature to any of this stuff. Yes, tracing is a little bit of a—you know, like I said, it's one of those things where you have to make a lot of investment before you get a big reward, but that's not the case in all areas of observability.Corey: Yeah. I would agree. Do you find that there's a significant easy, early win when customers start adopting Chronosphere? Because one of the problems that I've found, especially with things that are holistic, and as you talk about tracing, well, you need to get to a certain point of coverage before you see value. But human psychology being what it is, you kind of want to be able to demonstrate, oh, see, the Meantime To Dopamine needs to come down, to borrow an old phrase. Do you find that some of there's some easy wins that start to help people to see the light? Because otherwise, it just feels like a whole bunch of work for no discernible benefit to them.Rachel: Yeah, at least for the Chronosphere customer base, one of the areas where we're seeing a lot of traction this year is in optimizing the costs, like, coming back to the cost story of their overall observability bill. So, we have this concept of the control plane in our product where all the data that we ingest hits the control plane. At that point, that customer can look at the data, analyze it, and decide this is useful, this is not useful. And actually, not just decide that, but we show them what's useful, what's not useful. What's being used, what's high cardinality, but—and high cost, but maybe no one's touched it.And then we can make decisions around aggregating it, dropping it, combining it, doing all sorts of fancy things, changing the—you know, downsampling it. We can do this, on the trace side, we can do it both head based and tail based. On the metrics side, it's as it hits the control plane and then streams out. And then they only pay for the data that we store. So typically, customers are—they come on board and immediately reduce their observability dataset by 60%. Like, that's just straight up, that's the average.And we've seen some customers get really aggressive, get up to, like, in the 90s, where they realize we're only using 10% of this data. Let's get rid of the rest of it. We're not going to pay for it. So, paying a lot less helps in a lot of ways. It also helps companies get more coverage of their observability. It also helps customers get more coverage of their overall stack. So, I was talking recently with an autonomous vehicle driving company that recently came to us from the Dog, and they had made some really tough choices and were no longer monitoring their pre-prod environments at all because they just couldn't afford to do it anymore. It's like, well, now they can, and we're still saving the money.Corey: I think that there's also the downstream effect of the money saving to that, for example, I don't fix observability bills directly. But, “Huh, why is your CloudWatch bill through the roof?” Or data egress charges in some cases? It's oh because your observability vendor is pounding the crap out of those endpoints and pulling all your log data across the internet, et cetera. 
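Chronosphere's actual rule language isn't shown in the conversation, so the snippet below is only a conceptual sketch, in Python, of the aggregation step Rachel describes: rolling per-pod samples up to the service level so a high-cardinality label is never persisted. The metric names, labels, and values are assumptions; at real cardinality (thousands of pods), this kind of pre-storage decision is where reductions like the 60% figure she mentions come from.

```python
# Conceptual sketch of pre-storage aggregation: roll per-pod samples up
# to the service level so the high-cardinality "pod" label is never stored.
# This illustrates the idea described above; it is not Chronosphere's
# actual rule language or API.
from collections import defaultdict

samples = [
    # (metric, labels, value) as they arrive from collection
    ("http_requests_total", {"service": "checkout", "pod": "checkout-7f9c-a"}, 112.0),
    ("http_requests_total", {"service": "checkout", "pod": "checkout-7f9c-b"}, 98.0),
    ("http_requests_total", {"service": "cart", "pod": "cart-5d2b-a"}, 41.0),
]

def aggregate(samples, drop_label="pod"):
    """Sum samples after removing one high-cardinality label."""
    rolled = defaultdict(float)
    for metric, labels, value in samples:
        kept = tuple(sorted((k, v) for k, v in labels.items() if k != drop_label))
        rolled[(metric, kept)] += value
    return rolled

for (metric, labels), value in aggregate(samples).items():
    print(metric, dict(labels), value)
# Three incoming series become two stored series here; with thousands of
# pods per service, the stored volume shrinks dramatically.
```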
And that tends to mean, oh, yeah, it's not just the first-order effect; it's the second and third and fourth-order effects this winds up having. It becomes almost a holistic challenge. I think that trying to put observability in its own bucket, on some level—when you're looking at it from a cost perspective—starts to be a, I guess, a structure that makes less and less sense in the fullness of time.Rachel: Yeah, I would agree with that. I think that just looking at the bill from your vendor is one very small piece of the overall cost you're incurring. I mean, all of the things you mentioned, the egress, the CloudWatch, the other services, it's impacting, what about the people?Corey: Yeah, it sure is great that your team works for free.Rachel: [laugh]. Exactly, right? I know, and it makes me think a little bit about that viral story about that particular company with a certain vendor that had a $65 million per year observability bill. And that impacted not just them, but, like, it showed up in both vendors' financial filings. Like, how did you get there? How did you get to that point? And I think this all comes back to the value in the ROI equation. Yes, we can all sit in our armchairs and be like, “Well, that was dumb,” but I know there are very smart people out there that just got into a bad situation by kicking the can down the road on not thinking about the strategy.Corey: Absolutely. I really want to thank you for taking the time to speak with me about, I guess, the bigger picture questions rather than the nuts and bolts of a product. I like understanding the overall view that drives a lot of these things. I don't feel I get to have enough of those conversations some weeks, so thank you for humoring me. If people want to learn more, where's the best place for them to go?Rachel: So, they should definitely check out the Chronosphere website. Brand new beautiful spankin' new website: chronosphere.io. And you can also find me on LinkedIn. I'm not really on the Twitters so much anymore, but I'd love to chat with you on LinkedIn and hear what you have to say.Corey: And we will, of course, put links to all of that in the [show notes 00:28:26]. Thank you so much for taking the time to speak with me. It's appreciated.Rachel: Thank you, Corey. Always fun.Corey: Rachel Dines, Head of Product and Solutions Marketing at Chronosphere. This has been a featured guest episode brought to us by our friends at Chronosphere, and I'm Corey Quinn. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry and insulting comment that I will one day read once I finished building my highly available rsyslog system to consume it with.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.
Observability software helps teams to actively monitor and debug their systems, and these tools are increasingly vital in DevOps. However, it's not uncommon for the volume of observability data to exceed the amount of actual business data. This creates two challenges – how to analyze the large stream of observability data, and how to keep The post Chronosphere with Martin Mao appeared first on Software Engineering Daily.
Today, John Duisberg sits down with Seth Bartholomew, the Head of Employee Experience at Chronosphere. With a rich background in People leadership roles at top companies such as Huly, Disney, and FabFitFun, Seth has a unique perspective on how to create thriving company cultures and empower employees to truly flourish. Discover how Chronosphere operationalizes company values across different geographies and time zones, while maintaining a welcoming, remote-first environment. You'll also learn about the importance of intentionality in acquiring and retaining talent, the role of data in crafting better employee experiences, and the critical best practices shared in Chronosphere's Remote-First Playbook. Tune in to hear Seth and John's conversation about helping your organization and people flourish in a remote-first work environment. Don't forget to join our leadership community at thegreatretention.com to stay informed about upcoming events and other helpful content designed to help you go further as a people-first leader and develop a winning culture, everywhere your leadership influence reaches. Resources related to this episode Visit https://chronosphere.io/ Follow Seth Bartholomew at https://www.linkedin.com/in/sethbartholomew/ Referenced during today's episode: https://www.donut.com/ Credits Theme Music
PromCon, the flagship yearly event of the Prometheus community, took place in Berlin 28-29 September 2023, and we're here to bring you the highlights from the Prometheus ecosystem, including the pivotal decision on Prometheus 3.0! Brace yourselves for some exciting announcements! We also delved into the latest addition to the ecosystem, Perses project, which promises to revolutionize the world of dashboard visualization and monitoring. This new open source project, now part of the Linux Foundation, aims to become the GitOps-friendly standard dashboard visualization tool for Prometheus and other data sources. On this episode I hosted Augustin Husson, Prometheus maintainer and the creator of the Perses project, at the heels of his PromCon announcement of the Perses release. Augustin is also principal engineer at Amadeus, a technology vendor for travel agencies. Augustin joined Amadeus to create a new internal monitoring system based on Prometheus, and he will also share his end-user journey and insights. The episode was live-streamed on 4 October 2023 and the video is available at https://www.youtube.com/watch?v=MzQZagfgIKk OpenObservability Talks episodes are released monthly, on the last Thursday of each month and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and chime in with your comments and questions on the live chat. https://www.youtube.com/@openobservabilitytalks https://www.twitch.tv/openobservability Show Notes: 00:00 - show, episode and guest intro 05:46 - OpenTelemetry support in Prometheus 11:02 - Green IT use case with Prometheus 14:12 - scrape sharding support in Prometheus operator 19:33 - scaling out Alerts and alert sharding 24:45 - Windows Exporter is released 27:50 - revamping the Prometheus UI with React 30:48 - Prometheus 3.0 and DevDay updates 41:04 - Perses project origins at Amadeus 47:10 - Perses joining open source foundation 49:58 - embedding Perses in Red Hat OpenShift and in Chronosphere 54:05 - Perses current release 59:32 - Perses roadmap 1:03:11 - Perses joining the Linux Foundation and the CNCF 1:07:47 - how to get involved in Perses 1:10:03 - episode outro Resources: Perses on GitHub: https://github.com/perses/perses The CoreDash Project: https://github.com/coredashio/community Perses overview talk at PromCon 2023: https://promcon.io/2023-berlin/talks/... Prometheus support for OpenTelemetry Metrics in OTLP: https://horovits.medium.com/83f85878e46a Socials: Twitter: https://twitter.com/OpenObserv YouTube: https://www.youtube.com/@openobservabilitytalks Dotan Horovits ============ Twitter: @horovits LinkedIn: in/horovits Mastodon: @horovits@fosstodon Augustin Husson =============== Twitter: https://twitter.com/nexucis LinkedIn: https://fr.linkedin.com/in/augustin-husson-69a050a1 Mastodon: https://hachyderm.io/@nexucis
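For listeners curious what the OpenTelemetry-to-Prometheus path discussed in the episode looks like from application code, here is a small sketch using the OpenTelemetry Python SDK with the OTLP/HTTP exporter. The endpoint shown is the standard OTLP/HTTP default for a local collector; recent Prometheus versions can also accept OTLP directly (behind a feature flag, so check your version's documentation), in which case only the target URL changes. The meter and metric names are illustrative.

```python
# Emit a metric over OTLP/HTTP using the OpenTelemetry Python SDK.
# The endpoint below is the standard OTLP/HTTP default for a local
# collector; point it at whatever OTLP-capable backend you run.
# Meter and metric names are illustrative.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter

exporter = OTLPMetricExporter(endpoint="http://localhost:4318/v1/metrics")
reader = PeriodicExportingMetricReader(exporter)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("promcon.example")
jobs = meter.create_counter("jobs_processed", description="Jobs processed by the worker")

def process(job_id: str) -> None:
    # ... real work would happen here ...
    jobs.add(1, {"queue": "billing"})

process("job-42")
```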
Observability in multi-cloud environments is becoming increasingly complex, as highlighted by Martin Mao, CEO and co-founder of Chronosphere. This challenge has two main components: a rise in customer-facing incidents, which demand significant engineering time for debugging, and the ineffectiveness and high cost of existing tools. These issues are creating a problematic return on investment for the industry.Mao discussed these observability challenges on The New Stack Makers podcast with host Heather Joslyn, emphasizing the need to help teams prioritize alerts and encouraging a shift left approach for security responsibility among developers. With the adoption of distributed cloud architectures, organizations are not only dealing with a surge in data but also facing a cultural shift towards DevOps, where developers are expected to be more accountable for their software in production.Historically, operations teams handled software in production, but in the cloud-native world, developers must take on these responsibilities themselves. Many current observability tools were designed for centralized operations teams, which creates a gap in addressing developer needs.Mao suggests that cloud-native observability tools should empower developers to run and maintain their software in production, providing insights into the complex environments they work in. Moreover, observability tools can assist developers in understanding the intricacies of their software, such as its dependencies and operational aspects.To streamline the data obtained from observability efforts and manage costs, Chronosphere introduced the "Observability Data Optimization Cycle." This framework starts with establishing centralized governance to set budgets for teams generating data. The goal is to optimize data usage to extract value without incurring unnecessary costs. This approach applies financial operations (FinOps) concepts to the observability space, helping organizations tackle the challenges of cloud-native observability.Learn more from The New Stack about Observability and Chronosphere:Observability Overview, News and Trends4 Key Observability Best PracticesTop Ways to Reduce Your Observability CostsTop 4 Factors for Cloud Native Observability Tool Selection
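As a rough illustration of the governance step in the "Observability Data Optimization Cycle" described above, the sketch below compares each team's ingested telemetry volume against an assigned budget. The team names, budgets, and volumes are hypothetical, and this is not Chronosphere product behavior; it is just the shape of the FinOps-style check the episode describes.

```python
# Hypothetical per-team budget check for the governance step described
# above. Team names, budgets, and volumes are illustrative only.
budgets_gb_per_day = {"payments": 500, "search": 300, "growth": 150}
ingested_gb_per_day = {"payments": 430, "search": 612, "growth": 149}

for team, budget in budgets_gb_per_day.items():
    used = ingested_gb_per_day.get(team, 0)
    status = "OK" if used <= budget else "OVER BUDGET - review top metrics"
    print(f"{team:>9}: {used:>4} GB/day of {budget} GB/day budget -> {status}")
```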
Guest Gabriela Serret-Campos is the Global Head of People Strategy and Talent at Chronosphere, a cloud company that raised $200 million on a $1+ billion valuation in 2021. She is a seasoned professional with a rich background in shaping organizational strategies and fostering talent to drive success. As the modern workplace changes, Serret-Campos is an expert navigator of the evolving shifts and new challenges in recruiting and culture building. She possesses invaluable insights into how venture-backed companies can quickly scale a team with an emphasis on employee retention. Prior to Chronosphere, Serret-Campos served as the SVP of People Operations at Drizly, where she played a pivotal role leading up to the company's acquisition by Uber for $1.1 billion in 2021. Adjacent to her roles, she founded Lead by Design, an advisory consultancy specializing in leadership development, strategic planning, and DEI initiatives. Serret-Campos is a founding member of the POPS Advisory Board, championing a people-first approach and promoting the idea that employees deserve care and investment. She's also a member of the Underscore VC Core Community. Her earlier experience included running boutique leadership development firm Intelligent.ly, which was focused on partnering with hyper growth tech companies to help them align their people to purpose to drive productivity. She did similar work at Infusionsoft and EDMC-OHE, highlighting her consistent dedication to fostering leadership development. She designed innovative programs and initiatives that helped emerging leaders thrive and succeed. We were honored to sit down with Serret-Campos at Startup Boston Week to discuss all of her experiences and how they have shaped her ability to navigate the future of work.
Martin Mao, CEO & Cofounder at Chronosphere, joins Corey on Screaming in the Cloud to discuss the trends he sees in the observability industry. Martin explains why he feels measuring observability costs isn't nearly as important as understanding the velocity of observability costs increasing, and why he feels efficiency is something that has to be built into processes as companies scale new functionality. Corey and Martin also explore how observability can now be used by business executives to provide top line visibility and value, as opposed to just seeing observability as a necessary cost. About MartinMartin is a technologist with a history of solving problems at the largest scale in the world and is passionate about helping enterprises use cloud native observability and open source technologies to succeed on their cloud native journey. He's now the Co-Founder & CEO of Chronosphere, a Series C startup with $255M in funding, backed by Greylock, Lux Capital, General Atlantic, Addition, and Founders Fund. He was previously at Uber, where he led the development and SRE teams that created and operated M3. Previously, he worked at AWS, Microsoft, and Google. He and his family are based in the Seattle area, and he enjoys playing soccer and eating meat pies in his spare time.Links Referenced: Chronosphere: https://chronosphere.io/ LinkedIn: https://www.linkedin.com/in/martinmao/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Human-scale teams use Tailscale to build trusted networks. Tailscale Funnel is a great way to share a local service with your team for collaboration, testing, and experimentation. Funnel securely exposes your dev environment at a stable URL, complete with auto-provisioned TLS certificates. Use it from the command line or the new VS Code extensions. In a few keystrokes, you can securely expose a local port to the internet, right from the IDE.I did this in a talk I gave at Tailscale Up, their first inaugural developer conference. I used it to present my slides and only revealed that that's what I was doing at the end of it. It's awesome, it works! Check it out!Their free plan now includes 3 users & 100 devices. Try it at snark.cloud/tailscalescream Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted guest episode is brought to us by our friends at Chronosphere. It's been a couple of years since I got to talk to their CEO and co-founder, Martin Mao, who is kind enough to subject himself to my slings and arrows today. Martin, great to talk to you.Martin: Great to talk to you again, Corey, and looking forward to it.Corey: I should probably disclose that I did run into you at Monitorama a week before this recording. So, that was an awful lot of fun to just catch up and see people in person again. But one thing that they started off the conference with, in the welcome-to-the-show style of talk, was the question about benchmarking: what observability spend should be as a percentage of your infrastructure spent. 
And from my perspective, that really feels a lot like a question that looks like, “Well, how long should a piece of string be?” It's always highly contextual.Martin: Mm-hm.Corey: Agree, disagree, or are you hopelessly compromised because you are, in fact, an observability vendor, and it should always be more than it is today?Martin: [laugh]. I would say, definitely agree with you from a exact number perspective. I don't think there is a magic number like 13.82% that this should be. It definitely depends on the context of how observability is used within a company, and really, ultimately, just like anything else you pay for, it really gets derived from the value you get out of it. So, I feel like if you feel like you're getting the value out of it, it's sort of worth the dollars that you put in.I do see why a lot of companies out there and people are interested because they're trying to benchmark, to trying to see, am I doing best practice? So, I do think that there are probably some best practice ranges that I'd say most typical organizations out there that we see. This is one thing I'd say. The other thing I'd say when it comes to observability costs is one of the concerns we've seen talking with companies is that the relative proportion of that cost to the infrastructure is rising over time. And that's probably a bad sign for companies because if you extrapolate, you know, if the relative cost of observability is growing faster than infrastructure, and you extrapolate that out a few years, then the direction in which this is going is bad. So, it's probably more the velocity of growth than the absolute number that folks should be worried about.Corey: I think that that is probably a fair assessment. I get it all the time, at least in years past, where companies will say, “For every 1000 daily active users, what should it cost to service them?” And I finally snapped in one of my talks that I gave at DevOps Enterprise Summit, and said, I think it was something like $7.34.Martin: [laugh]. Right, right.Corey: It's an arbitrary number that has no context on your business, regardless of whether those users are, you know, Twitter users or large banks you have partnerships with. But now you have something to cite. Does it help you? Not really. But we'll it get people to leave you alone and stop asking you awkward questions?Martin: Right, right.Corey: Also not really, but at least now you have a number.Martin: Yeah, a hundred percent. And again, like I said, there's no—and glad magic numbers weren't too far away from each other. But yeah, I mean, there's no exact number there, for sure. One pattern I've been seeing more recently is, like, rather than asking for the number, there's been a lot more clarity in companies on figuring out, “Well, okay, before even pick what the target should be, how much am I spending on this per whatever unit of efficiency is?” Right?And generally, that unit of efficiency, I've actually seen it being mapped more to the business side of things, so perhaps to the number of customers or to customer transactions and whatnot. And those things are generally perhaps easier to model out and easier to justify as opposed to purely, you know, the number of seats or the number of end-users. But I've seen a lot more companies at least focus on the measurement of things. 
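A worked example of the kind of measurement Martin describes, using entirely hypothetical figures: the absolute cost per transaction matters less than whether observability spend is growing faster than the business activity and the infrastructure underneath it.

```python
# Hypothetical unit-economics check for observability spend.
# All figures are made up; the point is the trend, not the numbers.
quarters = ["Q1", "Q2", "Q3", "Q4"]
observability_cost = [120_000, 150_000, 195_000, 260_000]   # $/quarter
infrastructure_cost = [1_000_000, 1_150_000, 1_300_000, 1_450_000]
transactions = [40e6, 48e6, 55e6, 60e6]

for i, q in enumerate(quarters):
    per_txn = observability_cost[i] / transactions[i]
    share = observability_cost[i] / infrastructure_cost[i]
    line = f"{q}: ${per_txn * 1000:.2f} per 1k txns, {share:.1%} of infra"
    if i > 0:
        obs_growth = observability_cost[i] / observability_cost[i - 1] - 1
        txn_growth = transactions[i] / transactions[i - 1] - 1
        # Flag the "velocity" problem: spend growing faster than the business.
        if obs_growth > txn_growth:
            line += f"  <-- obs +{obs_growth:.0%} vs txns +{txn_growth:.0%}"
    print(line)
```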
And again, it's been more about this sort of, rather than the absolute number, the relative change in number because I think a lot of these companies are trying to figure out, is my business scaling in a linear fashion or sub-linear fashion or perhaps an exponential fashion, if it's—the costs are, you know, you can imagine growing exponentially, that's a really bad thing that you want to get ahead of.Corey: That I think is probably the real question people are getting at is, it seems like this number only really goes up and to the right, it's not something that we have any real visibility into, and in many cases, it's just the pieces of it that rise to the occasion. A common story is someone who winds up configuring a monitoring system, and they'll be concerned about how much they're paying that vendor, but ignore the fact that, well, it's beating up your CloudWatch API charges all the time on this other side as well, and data egress is not free—surprise, surprise. So, it's the direct costs, it's the indirect costs. And the thing people never talk about, of course, is the cost of people to feed and maintain these systems.Martin: Yeah, a hundred percent, you're spot on. There's the direct costs, there's the indirect costs. Like you mentioned, in observability, network egress is a huge indirect cost. There's the people that you mentioned that need to maintain these systems. And I think those are things that companies definitely should take into account when they think about the total cost of ownership there.I think what's more in observability actually is, and this is perhaps a hard thing to measure, as well, is often we ask companies, “Well, what is the cost of downtime?” Right? Like if you're, if your business is impacted and your customers are impacted and you're down, what is the cost of each additional minute of downtime, perhaps, right? And then the effectiveness of the tool can be evaluated against that because you know, observability is one of these, it's not just any other developer tool; it's the thing that's giving you insight into, is my business or my product or my service operating in the way that I intend. And, you know, is my infrastructure up, for example, as well, right? So, I think there's also the piece of, like, what is the tool really doing in terms of, like, a lost revenue or brand impact? Those are often things that are sort of quite easily overlooked as well.Corey: I am curious to see whether you have noticed a shifting in the narrative lately, where, as someone who sells AWS cost optimization consulting as a service, something that I've noticed is that until about a year ago, no one really seemed to care overly much about what the AWS bill was. And suddenly, my phone's been ringing off the hook. Have you found that the same is true in the observability space, where no one really cared what the observability cost, until suddenly, recently, everyone does or has this been simmering for a while?Martin: We have found that exact same phenomenon. And what I tell most companies out there is, we provide an observability platform that's targeted at cloud-native platforms. So, if your—a cloud-native architecture, so if you're running microservices-oriented architecture on containers, that's the type of architecture that we've optimized our solution for. 
And historically, we've always done two things to try to differentiate: one is, provide a better tool to solve that particular problem in that particular architecture, and the second one is to be a more cost-efficient solution in doing so. And not just cost-efficient, but a tool that shows you the cost and the value of the data that you're storing.So, we've always had both sides of that equation. And to your point, in conversations in the past years, they've generally been led with, “Look, I'm looking for a better solution. If you just happen to be cheaper, great. That's a nice cherry on top.” Whereas this year, the conversations have flipped 180, in which case, most companies are looking for a more cost-efficient solution. If you just happen to be a better tool at the same time, that's more of a nice-to-have than anything else. So, that conversation has definitely flipped 180 for us. And we found a pretty similar experience to what you've been seeing out in the market right now.Corey: Which makes a tremendous amount of sense. I think that there's an awful lot of—oh, we'll just call it strangeness, I think. That's probably the best way to think about it—in terms of people waking up to the grim reality that not caring about your bills was functionally a zero-interest-rate phenomenon in the corporate sense. Now, suddenly, everyone really has to think about this in some unfortunate and some would say displeasing ways.Martin: Yeah, a hundred percent. And, you know, it was a great environment for tech for over a decade, right? So, it was an environment that I think a lot of companies and a lot of individuals got used to, and perhaps a lot of folks that have entered the market in the last decade don't know of another situation or another set of conditions where, you know, efficiency and cost really do matter. So, it's definitely top of mind, and I do think it's top of mind for good reason. I do think a lot of companies got fairly inefficient over the last few years chasing that top-line growth.Corey: Yeah, that has been—and I think it makes sense in the context with which people were operating. Because before a lot of that wound up hitting, it was, well grow, grow, grow at all costs. “What do you mean you're not doing that right now? You should be doing that right now. Are you being irresponsible? Do we need to come down there and talk to you?”Martin: A hundred percent.Corey: Yeah, it's like eating your vegetables. Now, it's time to start paying attention to this.Martin: Yeah, a hundred percent. It's always a trade-off, right? It's like in an individual company and individual team, you only have so many resources and prioritization. I do think, to your point, in a zero interest environment, trying to grow that top line was the main thing to do, and hence, everything was pushed on how quickly can we deliver new functionality, new features, or grow that top line. Whereas, the efficiency is always something I think a lot of companies looked at as something I can go deal with later on and go fix. And you know, I feel like that that time has now just come.Corey: I will say that I somewhat recently had the distinct privilege of working with a company whose observability story was effectively, “We wait for customers to call and tell us there's a problem and then we go looking in into it.” And on the one hand, my immediate former SRE reflexes kicked in, and I recoiled, but this company has been in this industry longer than I have. 
They clearly have a model that is working for them and for their customers. It's not the way I would build something, but it does seem that for some use cases, you absolutely are going to be okay with something like that. And I should probably point out, they were not, for example, a bank where yeah, you kind of want to get some early warning on things that could destabilize the economy.Martin: Right, right. I mean, to your point, depending on the context, and the company, it could definitely make sense, and depending on how they execute it as well, right? So, you know, you called out an example already, where if they were a bank or if any correctness or timeliness of a response was important to that business, perhaps not the best thing to do to have your customers find out, especially if you have a ton of customers at the same time. But however, you know, if it's a different type of business where, you know, the responses are perhaps more asynchronous or you don't have a lot of users encountering at the same time or perhaps you have a great A/B experimentation platform and testing platform, you know, there are definitely conditions in which that could be potentially a viable option.Especially when you weigh up the cost and the benefit, right? If the cost to having a few bad customers have a bad experience is not that much to the business and the benefit is that you don't have to spend a ton on observability, perhaps that's a trade-off that the company is willing to make. In most of the businesses that we've been working with, I would say that probably not been the case, but I do think that there's probably some bias and some skew there in the sense that you can imagine a company that cares about these things, perhaps it's more likely to talk to an observability vendor like us to try to fix these problems.Corey: When we spoke a few years back, you definitely were focused on the large, one would say, almost hyperscale style of cloud-native build-out. Is that still accurate or has the bar to entry changed since we last spoke? I know you've raised an awful lot of money, which good for you. It's a sign of a healthy, robust VC ecosystem. What the counterpoint to that is, they're probably not investing in a company whose total addressable market is, like, 15 companies that must be at least this big.Martin: [laugh]. Yeah, a hundred percent. A hundred percent. So, I'll tell you that the bar to entry definitely has changed, but it's not due to a business decision on our end. If you think about how we started and, you know, the focus area, we're really targeting accounts that are adopting cloud-native technology.And it just so happens that the large tech, [decacorns 00:12:35], and the hyperscalers were the earliest adopters of cloud-native. So containerization, microservices, they were the earliest adopters of that, so hence, there was a high correlation in the companies that had that problem and the companies that we could serve. Luckily, for us, the trend has been that more of the rest of the industry has gone down this route as well. And it's not just new startups; you can imagine any new startup these days probably starts off cloud-native from day one, but what we're finding is the more established, larger enterprises are doing this shift as well. And I think the folks out there like Gartner have studied this and predicted that, you know, by about 2028, I believe was the date, about 95% of applications are going to be containerized in large enterprises. 
So, it's definitely a trend that the rest of the industry will go on. And as they continue down that trend, that's when, sort of, our addressable market will grow because the amount of use cases where our technology shines will grow along with that as well.Corey: I'm also curious about your description of being aimed at cloud-native companies. You gave one example of microservices powered by containers, if I understood correctly. What are the prerequisites for this? When you say that it almost sounds like you're trying to avoid defining a specific architecture that you don't want to deal well with or don't want to support for a variety of reasons? Is that what it is, or is it that you must be built in these ways or the product does not work super well for you? What is it you're trying to say with that, is what I'm trying to get at here.Martin: Yeah, a hundred percent. If you look at the founding story here, it's really myself and my co-founder: we found Uber going through this transition of both a new architecture, in the sense that, you know, they were going to containers, they were building a microservices-oriented architecture there, and were also adopting a DevOps mentality as well. So, it was just a new way of building software, almost. And what we found is that when you develop software in this particular way—so you can imagine when you're developing a tiny piece of functionality as a microservice and you're an individual developer, and you're—you know, you can imagine rolling that out into production multiple times a day, in that way of developing software, what we found was that the traditional tools, the application performance monitoring tools, the IT monitoring tools that used to exist pre this way of both architecture and way of developing software just weren't a good fit.So, the whole reason we exist is that we had to figure out a better way of solving this particular problem for the way that Uber built software, which was more of a cloud-native approach. And again, it just so happens that the rest of the industry is moving down this path as well and hence, you know, that problem is larger for a larger portion of the companies out there. You know, I would say some of the things when you look into why the existing solutions can't solve these problems well, you know, if you look at an application performance monitoring tool, an APM tool, it's really focused on introspecting into that application and its interaction with the operating system or the underlying hardware. And yet, these days, that is less important when you're running inside the container. Perhaps you don't even have access to the underlying hardware, or the operating system, and what you care about—you can imagine—is how that piece of functionality interacts with all the other pieces of functionality out there, over a network call.So, just the architecture and the conditions ask for a different type of observability, a different type of monitoring, and hence, you just need a different type of solution to go solve for this new world. Along with this, which is sort of related to the cost as well, is that, you know, as we go from virtual machines onto containers, you can imagine the sheer volume of data that gets produced now because everything is much smaller than it was before and a lot more ephemeral than it was before, and hence, every small piece of infrastructure, every small piece of code, you can imagine still needs as much monitoring and observability as it did before, as well. 
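To put rough numbers on that data-volume point, here is a back-of-envelope sketch with assumed figures: the same capacity re-platformed from long-lived VMs to short-lived pods, where every restart or reschedule mints new label values and therefore new time series.

```python
# Back-of-envelope: why the same workload produces far more series once
# it is containerized. All numbers are assumptions for illustration.
metrics_per_target = 300            # typical node/app exporter output

# Before: long-lived VMs
vms = 200
vm_series = vms * metrics_per_target

# After: the same capacity as pods, plus churn. Each restart or reschedule
# mints new label values (pod name, container id), i.e. new series.
pods = 1_000                        # smaller units, more of them
restarts_per_pod_per_day = 2        # deploys, autoscaling, evictions
pod_series_per_day = pods * (1 + restarts_per_pod_per_day) * metrics_per_target

print(f"VM fleet:   {vm_series:,} active series")
print(f"Pod fleet:  {pod_series_per_day:,} distinct series touched per day")
print(f"Multiplier: {pod_series_per_day / vm_series:.0f}x")
```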
So, just the sheer volume of data is so much larger for the same amount of infrastructure, for the same amount of hardware that you used to have, and that's really driving a huge problem in terms of being able to scale for it and also being able to pay for these systems as well.Corey: Tired of Apache Kafka's complexity making your AWS bill look like a phone number? Enter Redpanda. You get 10x your streaming data performance without having to rob a bank. And migration? Smoother than a fresh jar of peanut butter. Imagine cutting as much as 50% off your AWS bills. With Redpanda, it's not a dream, it's reality. Visit go.redpanda.com/duckbill. Redpanda: Because Kafka shouldn't cause you nightmares.Corey: I think that there's a common misconception in the industry that people are going to either have ancient servers rotting away in racks, or they're going to build something greenfield, the way that we see done on keynote stages all the time by companies that have been around with this architecture for less than 18 months. In practice, I find it's awfully frequent that this is much more of a spectrum, on a case-by-case, per-workload basis. I haven't met too many data center companies where everything's the disaster that the cloud companies like to paint it as, and vice versa, I also have never yet seen an architecture that really existed as described in a keynote presentation.Martin: A hundred percent agree with you there. And you know, it's not clean-cut from that perspective. And you're also forgetting the messy middle as well, right? Like, often what happens is, there's a transition. If you don't start off cloud-native from day one, you do need to transition there from your monolithic applications, from your VM-based architectures, and often the use case can't transform over perfectly.What ends up happening is you start moving some functionality and containerizing some functionality and that still has dependencies between the old architecture and the new architecture. And companies have to live in this middle state, perhaps for a very long time. So, it's definitely true. It's not a clean-cut transition. But you can think about that middle state as actually one that a lot of companies struggle with because all of a sudden, you only have a partial view of the world, of what's happening, with your old tools; they're not well suited for the new environments. Perhaps you've got to start bringing new tools and new ways of doing things in your new environments, and they're not perhaps the best suited for the old environment as well.So, you do actually end up in this middle state where you need a good solution that can really handle both because there are a lot of interdependencies between the two. And one of the things that we strive to do here at Chronosphere is to help companies through that transition. So, it's not just all of the new use cases and it's not just all of your new environments. Actually helping companies through this transition is pretty critical as well.Corey: My question for you is, given that you have, I don't want to say a preordained architecture that your customers have to use, but there are certain assumptions you've made based upon both their scale and the environment in which they're operating. How heavy of a lift is it for them to wind up getting Chronosphere into their environments? Just because it seems to me that it's not that hard to design an architecture on a whiteboard that can meet almost any requirement. 
The messy part is figuring out how to get something that resembles that into place on a pre-existing, extant architecture.Martin: Yeah. I'd say it's something we spent a lot of time on. The good thing for the industry overall, for the observability industry, is that open-source standards have now been created and exist when they didn't before. So, if you look at the APM-based view, it was all proprietary agents producing the data themselves that would only really work with one vendor's product, whereas if you look at a modern environment, the production of the data has actually been shifted from the vendor down to the companies themselves, and they'll be producing these pieces of data in open-source standard formats like OpenTelemetry for distributed traces, or perhaps Prometheus for metrics.So, the good thing is that for all of the new environments, there's a standard way to produce all of this data and you can send all that data to whichever vendor you want on the back end. So, it just makes the implementation for the new environments so much easier. Now, for the legacy environments, or if a company is shifting over from an existing tool, there is actually a messy migration there because often you're trying to replace proprietary formats and proprietary ways of producing data with open-source standard ones. So, that's something that we as Chronosphere come in and view as a particular problem that we need to solve, and we take the responsibility of solving it for a company because what we're trying to sell companies is not just a tool, what we're really trying to sell them is the solution to the problem, and the problem is they need an observability solution end to end. So, this often involves us coming in and helping them, you can imagine, not just convert the data types over but also move over existing dashboards, existing alerts.There's a huge piece of lift there that perhaps every developer in a company would have to do if we didn't come in and do it on behalf of those companies. So, it's just an additional responsibility. It's not an easy thing to do. We've built some tooling that helps with it, and we just spend a lot of manual hours going through this, but it's a necessary one in order to help a company transition. Now, the good thing is, once they have transitioned into the new way of doing things and they are dependent on open-source standard formats, they are no longer locked in. So, you know, you can imagine future transitions will be much easier, however, the current one does have to go through a little bit of effort.Corey: I think that's probably fair. And then there's no such thing, in my experience, as an easy deployment for something that is large enough to matter. And let's be clear, people are not going to be deploying something as large-scale as Chronosphere on a lark. This is going to be when they have a serious application with serious observability challenges. So, it feels like, on some level, that even doing a POC is a tricky proposition, just due to the instrumentation part of it. Something I've seen is that very often, enterprise sales teams will decide that by the time that they can get someone to successfully pull off a POC, at that point, the deal win rate is something like 95% just because no one wants to try that in a bake-off with something else.Martin: Yeah, I'd say that we do see high pilot conversion rates, to your point. 
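To make the earlier point about open-source standard formats concrete: an application can expose metrics in the Prometheus exposition format with a few lines of the Python prometheus_client library, and any compatible backend, Chronosphere or otherwise, can scrape them. The metric names, labels, port, and traffic simulation below are made-up examples for illustration, not anything vendor-specific.

```python
# Minimal sketch: exposing application metrics in the Prometheus text format.
# Metric and label names are hypothetical; any Prometheus-compatible backend
# can scrape the /metrics endpoint this starts.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter(
    "checkout_requests_total",            # hypothetical metric name
    "Total checkout requests handled",
    ["status"],
)
LATENCY = Histogram(
    "checkout_request_duration_seconds",
    "Checkout request latency in seconds",
)

def handle_request() -> None:
    """Simulate one request and record its latency and outcome."""
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))
    status = "ok" if random.random() > 0.05 else "error"
    REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```

Because the data leaves the application in an open format, swapping the backend later is a configuration change rather than a re-instrumentation project, which is the lock-in point being made here.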
For us, it's perhaps a little bit easier than other solutions out there, in the sense that I think with our type of observability tooling, the good thing is, an individual team could pick this up for their one use case and they could get value out of it. It's not that every team across an environment or every team in an organization needs to adopt it. So, while generally, we do see that, you know, a company would want to pilot and it's not something you can play around online with by yourself because it does need a particular deployment, it does need a little bit of setup, generally one single team can come and perform that and see value out of the tool. And that sort of value can be extrapolated and applied to all the other teams as well. So, you're correct, but it hasn't been a huge lift. And you know, these processes end to end, we've seen be as short as perhaps 30-something days, which is generally a pretty fast-moving process there.Corey: Now, I guess, on some level, I'm still trying to wrap my head around the idea of the scale that you operate at, just because as you mentioned, this came out of Uber—which is beyond imagining for most people—and you take a look at a wide variety of different use cases. And in my experience it's never been, "Holy crap, we have no observability and we need to fix that." It's, "There are a variety of systems in place that just are not living up to the hopes, dreams, and potential that they had when they were originally deployed." Either due to growth or due to lack of product fit, or the fact that it turns out in a post zero-interest-rate world, most people don't want to have a pipeline of 20 discrete observability tools.Martin: Yep, yep. No, a hundred percent. And, to your point there, ultimately, it's our goal and, you know, in many companies we're replacing up to six to eight tools in a single platform. And so, it's a great thing to do. That definitely doesn't happen overnight. It takes time.You know, you can imagine in a pilot or when you're looking at it, we're picking a few of the use cases to demonstrate what our tool could do across many other use cases, and then generally during the onboarding time, or perhaps over a period of months or perhaps even a year plus, we then go and onboard these use cases piece by piece. So, it's definitely not a quick overnight process there, but, you know, you can imagine something that can help each end developer in that particular company be more effective and it's something that can really help move the bottom line in terms of far better price efficiency. These things are generally not things that are quick fixes; these are generally things that do take some time and a little bit of investment to achieve the results.Corey: So, a question I do have for you, given that I just watched an awful lot of people talking about observability for three days at Monitorama, what are people not talking about? What did you not see discussed that you think should be?Martin: Yeah, one thing I think often gets overlooked, and especially in today's climate, is that I think observability gets relegated to a cost center. It's something that every company must have, every company has today, and it's often looked at as a tool that gives you insights about your infrastructure and your applications and it's a backend tool, something you have to have, something you have to pay for, and it doesn't really directly move the needle for the business's top line. And I think that's often something that companies don't talk about enough. 
And you know, from our experience at Uber and through most of the companies that we work with here at Chronosphere, yes, there are infrastructure problems and application level problems that we help companies solve, but ultimately, the more mature organizations, or when it comes to observability, are often starting to get real-time insights into the business more than the application layer and the infrastructure layer.And if you think about it, companies that are cloud-native architected, there's not one single endpoint or one single application that fulfills a single customer request. So, even if you could look at all the individual pieces, the actual what we have to do for customers in our products and services span across so many of them that often you need to introduce a new view, a view that's just focused on your customers, just focused on the business, and sort of apply the same type of techniques on your backend infrastructure as you do for your business. Now, this isn't a replacement for your BI tools, you still need those, but what we find is that BI tools are more used for longer-term strategic decisions, whereas you may need to do a lot of sort of tactical, more tactical, business operational functions based on having a live view of the business. So, what we find is often observability is only ever thought about for infrastructure, it's only ever thought about as a cost center, but ultimately observability tooling can actually add a lot directly to your top line by giving you visibility into the products and services that make up that top line. And I would say the more mature organizations that we work with here at Chronosphere all had their executives looking at, you know, monitoring dashboards to really get a good sense of what's happening in their business in real-time. So, I think that's something that hopefully a lot more companies evolve into over time and they really see the full benefit of observability and what it can do to a business's top line.Corey: I think that's probably a fair way of approaching it. It seems similar, in some respects, to what I tend to see over in the cloud cost optimization space. People often want to have something prescriptive of, do this, do that do the other thing, but it depends entirely what the needs of the business are internally, it depends upon the stories that they wind up working with, it depends really on what their constraints are, what their architectures are doing. Very often it's a let's look and figure out what's going on and accidentally, they discover they can blow 40% off their spend by just deleting things that aren't in use anymore. That becomes increasingly uncommon with scale, but it's still one of those questions of, “What do we do here and how?”Martin: Yep, a hundred percent.Corey: I really want to thank you for taking the time to speak with me today about what you're seeing. If people want to learn more, where's the best place for them to find you?Martin: Yeah, the best place is probably going to our website Chronosphere.io to find out more about the company, or if you want to chat with me directly, LinkedIn is probably the best place to come find me, via my name.Corey: And we will, of course, put links to both of those things in the [show notes 00:28:49]. Thank you so much for suffering the slings and arrows I was able to throw at you today.Martin: Thank you for having me Corey. Always a pleasure to speak with you, and looking forward to our next conversation.Corey: Likewise. 
Martin Mao, CEO and co-founder of Chronosphere. This promoted guest episode has been brought to us by Chronosphere, here on Screaming in the Cloud. And I'm Cloud Economist Corey Quinn. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an insulting comment that I will never notice because I have an observability gap.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Tom Krazit, Editor in Chief at Runtime, joins Corey on Screaming in the Cloud to discuss what it's like being a journalist in tech. Corey and Tom discuss how important it is to find your voice as a media personality, and Tom explains why he feels one should never compromise their voice for sponsor approval. Tom reveals how he's covering tech news at his new publication, Runtime, and how he got his break in the tech journalism industry. Tom also talks about why he decided to build his own publication rather than seek out a corporate job, the value of digging deeper for stories, and why he feels it's so valuable to be able to articulate the issues engineers care about in simple terms. About Tom: Tom Krazit has written and edited stories about the information technology industry for over 20 years. For the last ten years he has focused specifically on enterprise technology, including all three as-a-service models developed around infrastructure, platform, and enterprise software technologies, security, software development techniques and practices, as well as hardware and chips.Links Referenced: Runtime: https://www.runtime.news/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense. Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn, and people sometimes confuse me for a journalist. I am most assuredly not one of those. I'm just loud and have opinions and every once in a while I tell people things they didn't already know. That's not journalism. My guest today, however, is a journalist: Tom Krazit, the Editor in Chief of the just-launched Runtime. Tom, thank you for joining me.Tom: Thanks, Corey. It's a long-time listener, first-time guest.Corey: We've been talking for years now and I'm sort of embarrassed I haven't had you on the show before now. But journalists have always felt, to me at least, like they're a step apart from the typical, you know, rank and file of those of us working in industry. You folks are different from us, and inviting you all just feels like a faux pas, even though it's very clearly not. Well, how did you get here, I guess is the short version. I know that you're at Runtime now; you were at Protocol until its demise recently. Before that, when I first started tracking you, you were over at GeekWire. Where do you come from?Tom: [laugh]. Well, I've been doing this for 20 years, which is a long, long time, and it's amazing how much has changed in that time. I started off doing consumer stuff, I was covering Apple during the launch of the iPhone, I was covering Google as they sort of turned into the Borg. And then I joined GigaOm in 2012 and I joined them as an editor. 
And it became pretty clear that I needed to learn this enterprise stuff real fast because that was like the largest part of GigaOm's business at the time.And so, I kind of just threw myself into it and realized that I actually liked it, you know, which I think is [laugh] hard for some people to understand. But like, I've actually always found it really interesting how these large systems work, and how people build in a variety of ways based on their needs, and, you know, just the dramatic change that we've seen in this industry over the past ten years. So, you know, I've really been doing that ever since.Corey: There's a lot to be said for journalism in the space. And I know a lot of tech companies are starting to… well, that's starting. This is, I guess, a six-year-old phenomenon, at least. But a lot of these small companies were built, and well, we're just going to not talk to the press because we've had bad experiences doing that before, so we're just going to show instead of tell. And that works to a point, but then you hit a certain point of scale where you're a multi-trillion dollar company and, “We don't talk to the press,” no longer becomes tenable. With success comes increased scrutiny, and deservedly so. I feel that there's a certain lack of awareness of that fact in the tech industry versus other large industries that have come before.Tom: I think it's always important to remember how, like, new a lot of this really still is, you know, when compared to, like, other American industries and businesses. Like, tech as a discipline, you know, it's only really in the last ten years that it's been elevated to the extent of, like, sports, or, like, a top-tier news category. And so, I think a lot of people who make those decisions, you know, grew up in a different environment where, you know, you didn't really want to talk about what you were doing because you were worried about competitive things or you were just worried—you wanted to have a ground-up story. And like, yeah, the world is very different now. And I think that, you know, a lot of companies are starting to get that and starting to change the way they think about it.I mean, I also would argue that I think a lot of enterprise tech companies see better value in running ads alongside golf tournaments than actually talking to people about what they really do because I think a lot of them don't really want people to understand [laugh] what they do. They want them to think that they're, you know, the wizard behind the curtain, solving all your digital transformation needs and not actually get into the details of that.Corey: I used to think that I was, as an engineer, much smarter than any of the marketers who were doing these things that obviously make no sense. Like, why would you have a company's logo in an airport for an enterprise software ad, but no URL or way to go buy something? Aren't those people foolish? Yeah, it turns out no. People are not just-fell-off-the-turnip-truck level of sophistication.It's a brand awareness story where you wind up going in and pitching to the board of some big company someday and they already know who you are. That's the value of brand awareness, as I've learned the fun way because I accidentally became something of a marketer. I have this platform—Tom: [crosstalk 00:04:46], Corey—Corey: In the newsletters, but—Tom: Come one. You're totally a brand. You're a brand.Corey: Oh, absolutely. 
And breakfast cereal.Tom: [laugh].Corey: But I was surprised to realize that people not only cared about what I had to say but would pay me cash money in order to have their product mentioned in the thing that I do. And, “Can you give me money? Of course you can give me money.” But it was purely accidental along the way. So, I have to ask, given that you seem to be a fan of, you know, not starving to death, why would you start a media company in 2023?Tom: Uh, well I needed to do something, Corey. You know, like [laugh] [crosstalk 00:05:22]—Corey: You had a bright career in corporate communications if you want to go over to the dark side. Like, “I'm tired of talking to the audience about truth, I'd rather spin things now because I know how the story gets told.”Tom: I mean, that may come down the road for me at some point, but I wasn't quite ready for that just yet. I have really felt very strongly for a long time that this particular corner of the world needs better journalism. I just, I feel like a lot of what is served up to the people who have to make decisions about this incredibly complicated part of the world, you know, it's either really, really product-oriented, like, “So-and-so introduced the new thing today. It costs this much and it does these three things that they told us under embargo,” you know, or you get, like, real surface-level coverage from, like, the big financial business publications, you know, who understand the importance of things like cloud and things like enterprise software, but haven't really invested the time to understand the technological complexities behind it and how, you know, easy narratives don't necessarily, you know, play in this world.So, there's a middle ground there that I think we at Protocol Enterprise found pretty fertile. And, you know, I think that, for this, for Runtime, you know, I'm really just continuing to carry that work forward and to give people content they need to make decisions about using technology in their businesses that business people can understand without an engineering degree, but that engineers will take a look at it and they'll go, “You know what? He did that right. He did his homework, he got the details right.” And I think that's rare, unfortunately, and then that's a gap I hope to fill.Corey: Something that really struck me as being aligned with how I tend to view things is—to be clear, our timing is a little weird because to my understanding, the inaugural issue is going out later today after we record—Tom: That's correct.Corey: But that would have already happened and have landed in the industry by the time people listen to this. So, I'm really hoping, first off, that the first issue isn't horrifying to a point where, “Oh God, distance myself from it. What have I done?” But you've been in this industry enough that I doubt that's going to be how you play it. But I am curious to know how it winds up finding its voice over the coming weeks and months. Even when you've done this before, as you have I think that every publication starts to have a different area of focus, a different audience, and focus on different aspects of this, which is great because I don't want to see the same take from fifteen different journalist publications.Tom: Totally. I mean, you know, I think a lot of what Protocol Enterprise was, was my voice and, you know, how I thought about this industry and wanted to bring it forward. And so, I think that, you know, off the bat, a lot of what Runtime is will be similar to that. 
But to your point, I think everything changes. The market changes, what people want changes, I mean, like, look, just the last six months, the rise of all this generative AI discussion has dramatically changed a lot of what software—you know, how it's discussed and how it's thought about, and those are things that, you know, six months ago, we were talking about, maybe, here and there, but we certainly weren't talking about them to degree than we are now.So like, those changes will happen over the coming months. And you know, you just have to sort of keep up with them and make sure—my job is to make sure I am talking to the right people who can put those things into context for the people who need to understand them in order to make their own decisions. You know, I mean, I think we talk a lot about the top-tier decision makers, you know, of companies who need information, but I think there's, like, a whole other, I don't want to call them an underclass, but like, you know, there's a lot of other people within companies who advise those people and who genuinely need help trying to understand the pace and the degree to which things have changed and whether or not it's worth it for them to invest, you know, hundreds of thousands, if not millions, of dollars in some of these new technologies. So, you know, that's kind of the voice I want to bring forward is to represent the buyer, to represent the person who has to make sense of all this and decide whether or not, you know, the sparkly magic beans coming down from the cloud providers and others are really what it's cracked up to be.Corey: The thing that really throws me is that when I started talking to you and other journalists where you speak generally to a tech-savvy audience, but for whatever reason, that audience and you by extension are not as deeply involved in every nuance of the AWS ecosystem or the cloud computing ecosystem as I am. So, I can complain for five minutes straight to you about the Managed NAT Gateways and their pricing and then you'll finally say, “Yeah, I don't know what any of the words Managed, NAT, or Gateway mean in this context. Can you distill that down for me?” It's, “Oh, right. Talking about what I mean in a way that someone who isn't me with my experience can understand it.” I mean, that is such a foreign concept to so many engineers that speaking clearly about what they mean is now being called prompt engineering, instead of, “Describe what you want in plain English.”Tom: Yeah. I think that's a lot of what I hope to accomplish, actually, is to be able to talk to really smart engineers who are really driving this industry forward from their contributions and be able to articulate, like, what it is that they're concerned about, like, what it is that they think is exciting, and to put that into context for people who, you know, who don't know what a gateway is, let alone, like, any of [laugh] those other words you used. So, you know, like, I think there's a real opportunity to do that and that's the kind of thing I get excited about.Corey: I am curious, given that you are just launching at this point, and you have the express intention of being sponsor-supported, as opposed to a subscriber-driven model, which I've thought about a lot over the past, however many years you want to wind up describing I've been doing this. 
The problem that I've got here is that I have always found that whenever I'm doing something that aligns with making money and taking a sponsor message and putting it out to the world, how do I keep that from informing the coverage? And I've had to go a fair bit out of my way to avoid that. For example, this podcast is going to have ads inserted into it. I don't know what they are, I don't know who these companies are, and that only gets done after I've recorded this episode, so I'm not being restrained by, “Ohh, have to say something nice about Company X because they're sponsoring this episode.” It stays away.Conversely, if I want to criticize Company X, I don't feel that I can't do that because well, they are paying the bills around here. You're still in a very early stage where it is you, primarily. How are you avoiding that, I guess, sense of vendor capture?Tom: You have to be very intentional about it from day one. You have to make it clear when you're talking to sponsors from the business side where the lines are drawn. And you have to, I think from the editorial side, just be fearless and be willing to speak the truth. And if you get negative reaction from sponsors over something you've said, they were never going to be a good long-term partner for you anyway. And I've seen that over the years.Like, companies that get annoyed about coverage because they're sponsors are insecure companies. It's almost a tell, you know, like when you attempt to put pressure on editorial organizations because you're a sponsor and you don't like the way that they're covering something [laugh], it's a deep, deep tell about the state of your business and how you see it. So, like for me, those are almost like signals to use and then go deeper, you know? And then, you know, I do think that there are enough companies that feel strongly about wanting to support the kind of work that I do without impugning the way I think about it, or the way I write about it. Because I mean, like, there's just no other way to do what I do without pulling punches.And I think you would agree, you know, in terms of what you do, like, the voice that you have, the authenticity that you have, is your selling point. And if you compromise that, people know. It's pretty obvious when you are bending your coverage to suit your sponsors. And there's examples of it every day in enterprise tech coverage. And you know, I feel like my track record speaks for itself on that.Corey: I would agree. I don't like everything you write. That's kind of the point. I think that if you look at anyone who's been even moderately prolific and you like everything that they're writing, are they actually doing journalism or are they catering to your specific viewpoint? Now, that doesn't mean that well, I don't like this particular journalist. It's, well, “Oh, because you don't agree with what they say?” “No, because they're editorially sloppy, they take shortcuts, and they apparently peddle misinformation gleefully.”Yeah, I don't like a lot of that type of coverage. I've never seen that from you. And you've had takes I don't agree with, you've had articles that I thought were misleading at times, but I've never gotten the sense at all that they were written in bad faith. And when I run into that, it often makes me question my own biases as well, which is sort of a good thing.Tom: I mean, it's really tough because there are people out there in journalism and media who are operating in bad faith. Like, there's just no… there's no other way to dance around that. 
That is a fact of life in the 21st century. And I mean, all I can really do is do what I do every day and put it out there and, you know, let people judge it for what it is. And you know, like, I feel like, I have a pretty strong sense of what I will, you know, what I'll cover and how I'll cover it and where I'll go with it, and I think that that sort of governs, you know, every editorial decision that I've ever made. For me, there's just no other way to do it. And if I get to a point where I have to make those compromises in order to have a business, like, I'll just go do something else. I don't need this that much.Corey: When I was starting the Duckbill Group, one of the problems that I had was—it's hard to start a company for a variety of reasons, but one that is not particularly sympathetic is that everything is hard when you're just starting out. You don't know where any business is going to come from if it ever does. And at any point, I looked around, and I have an engineering skill set and I live in San Francisco, and I look around and say, it's Wednesday. I could have a job at a big tech company for hundreds of thousands of dollars a year by Friday if I just go out and say yes. And it's resisting that siren call while building something myself that was really hard.You have that challenge as well, I'd have to imagine because there are always people that various companies are looking to build out their PR and corporate comms groups, and people who understand the industry and know how to tell a story, which you clearly qualify, are always in demand, regardless of the macroeconomic conditions. So, at any point, you have the sort of devil on your shoulder saying, it doesn't need to be this hard. There's an easier, more lucrative path instead of struggling to get something off the ground yourself. Do you find that that becomes a tempting thing that you want to give into, or is it, “Mmm, not today, Satan?”Tom: The latter. I mean, I've had offers from companies I respect and from people I would, you know, be happy to work with under other circumstances. But I mean, I sort of feel like I'm just wired this way. And then that's, like, what I enjoy getting out of bed every day to do, is this. And, you know, like, it's not to say that I couldn't find, long-term, some kind of role inside one of those types of companies that you just mentioned, but I'm not ready for that yet.And, you know, I think I'd bring more value to the industry this way than I would jump in on some pre-IPO rocket ship kind of thing right now. I will say that, like, a lot of this business is a young person's game, so like, that equation changes as you get older. I always tell everybody that, like, journalism over the last 20 years has been, like, one of the slowest-moving games of musical chairs that you'll ever play. And, you know, I've [laugh] been pretty lucky over the past number of years to keep getting a chair, you know, in every single one of those downturns. But, you know, I'm not naive enough to think that my luck would run out one day either. But I mean, if I build my own business, hopefully, I can control that.Corey: There are a lot of tech publications out there and I'm curious as to what direction you plan to take Runtime in, given that it is just you, and you presumably, you know, sleep sometimes, it's probably not breaking news with the first take on absolutely everything, which just, frankly, sounds exhausting. One of the internal models we have here is the best take, not the first take. 
So, where does your coverage intend to start? Where does it intend to stop? And how fixed is that?Tom: Well, at the moment, you know, what we really want to do is tell the stories that the herd is not telling. And you know, we're making a very deliberate decision to avoid a lot of the embargoed product training—I —I don't know how many of your listeners actually know how the sausage is made, but like, so many PR departments and marketing departments in tech really like to tell news through these embargoed product announcement things. And they'll email you a couple days ahead of time and they'll say, “Hey, Tom, we've got a new thing coming up in our, whatever, cloud storage services area. You know, are you interested in learning more under embargo?” And then a lot of people just say, “Sure,” and take a briefing and write up a story.And like, there's nothing inherently wrong with that. It is news and it is—if you think it's interesting enough to bring out to people, like, great. There's a lot of limitations to it, though, you know, in that you can't really get context around that story because you sort of by definition, if you agree to not tell anybody about this thing that the company told you, you can't go out and ask a third-party expert what they think about it. So, you know, I think that it's a way to control the narrative without really getting the proper story out there. And the hook is that you'll be first.And so, I think what we're trying to do is to step away from that and to really tell more impactful stories that take more time to put together. And I mean, I've been on all sides of the news business and when you get on the hamster wheel, you really don't have time to tell those stories because you're too busy trying to deal with the output you've already committed to. And so, like, one thing that Runtime will be doing right off the bat is taking the time to do those stories to interview the people who don't get talked to as much, who don't have twenty-five PR people on staff to blast the world about their accomplishments, you know, to really go out and find the stories that aren't being told, and to elevate the voices that aren't being heard, and to shine a light on some of the, you know, more complex technological things that others simply don't take the time to figure out.Corey: Well, do you have an intended publication schedule at this point or is it going to be when it makes sense? Because one of the things that drove me nuts that I would go back and change if I could is Last Week in AWS inherently has a timeliness to it and covers things over a certain timeframe as well. I don't get to take two weeks off and pre-write this stuff.Tom: Yeah. So the primary vehicle right now is an email newsletter for Runtime and that'll come out three times a week on Tuesdays, Thursdays, and Saturday mornings. You know, I'll also be publishing stories alongside those newsletters. That will be a little more ad hoc. You know, I'd like to have that line up with the newsletters, but you know, sometimes that's not, you know, a schedule you can really adhere to.But the newsletter is a three times a week operation at the moment and that, you know, is just basically based on—you know, at Protocol, we did five times a week with a staff of six. And that was a big effort. So, I decided that was probably not the best thing for me to tackle right off the bat here. 
So, one thing I really would like to do with Runtime is to get back to that place where there's a staff, there's beat reporters, there's people who can really take the time to dig into these different areas, you know, across cloud infrastructure, AI, or security, or software development, you know, like, who can really, really plunge themselves into that, and then we can bring a broad product to the market. You know, it'll take some time to get there, but that's the goal.Corey: How do you intend to measure success? I mean, there's obviously financial ways of doing it, but it's also one of those areas of, like, one of the things that drove me nuts is that I'll do something exhaustively researched that takes me forever to get out, and no one seems to notice or care. And then I'll just slap something off eleventh hour, and it goes around the internet three times. And I always find that intensely frustrating. How do you measure whether you've succeeded or whether you failed?Tom: Well, I mean, welcome to the internet, Corey. That's just how it works. I think I will be able to measure that, you know, by how sustainable of venture this is, and like, whether or not I can get back to that point where, you know, we can support a small team to do this because I, you know, I sort of feel that that's the best—that's really what this part of the world needs is that kind of broader coverage from subject matter experts who can really dive into things. I mean, I know a lot about a lot, but I can't spend all my time talking to security people to really understand what's happening in that market, and the same for any other, you know, one of these disciplines that we talk about. So, you know, if a year from now, I come on this show for the one-year anniversary of the launch, and we've got sustainable runway, we've got, you know, a few people on board, I'll be thrilled. That'll be great, you know?And like, one thing that I think will really be helpful, for me, at least in terms of determining how successful this is, is just how things travel, and not necessarily like traffic in terms of how things travel, I think that's an easy trap to fall into, but whether or not you know, the stuff that we write about is circulating in the right places and also showing up in the coverage that some of those broader business financial publications actually wind up doing. You know, if you can show that, like, the work you've been doing is influencing the conversation of some of these topics on a broader national and global scene, then for me, that'll be a home run.Corey: Taking a step back, what advice would you give someone who's toying with the idea of entering the media space in this era, whether that be starting their own publication or becoming a journalist through more traditional means? Because as you said, you've been doing this for 20 years; you've seen a lot of change. How would you get started today?Tom: It was a lot easier. It was smaller. It was just a much smaller industry when I first started doing this, and… there wasn't social media. The big challenge, I think, for a lot of people who are just starting now and trying to break through is just how many voices there are and, like, trying to get a foothold among a much, much bigger pond. 
Like, it was just a much [laugh] smaller pond when I started, and so you know, it was easier to stand out, I guess.I started in the trade magazine world; I started with IDG and I started—you know, which is a real, great bedrock system of knowledge for people to really get their footing in this industry on. And you know, you can count on many, many hands the number of people who started at IDG and have gone on to, you know, a very successful tech and media careers throughout. So, you know, for me, that was a big, that was a big thing. But that was a moment in time. And like, you know, the world now is so different.The only thing that has ever worked, though, is to just write, to just start, to just get out there and do what you're doing and develop a voice and find a way to get it to the people who you want to read it. And you know, if you keep at it, you can start to break through. And, like, it's a slog, I'm not going to pretend otherwise, but yeah, if it's a career you really want to do, the best way to do it is just to start. And the nice thing about the modern era, actually is, like, there's never been easier ways to get up and running. I mean you look at like things like Substack, or I'm using Ghost, you know, like, the tools are there in a way that they weren't 20 years ago when I started.Corey: Step one: learn how to build a web server is no longer your thing. No, I think that that's valuable. One of the things that I find at least is people are so focused on the nuts and bolts, the production quality. People reach out to me all the time and say, “What microphone should I get? What my audio setup should I use? What tools should I do for the rest of this?”And it's, realize that it doesn't matter how much you invest in production quality; if the content isn't interesting and the story you have to tell doesn't grip people, it doesn't matter. No one cares. You have to get their attention first and then, then you can scale up on the production quality. I think I'm on generation six or so of my current AV setup. But that happened as a result of basically, more or less recording into a string can when I first started doing this stuff. Focus on the important part of the story, the differentiated parts.Tom: The best piece of advice, I got when starting Runtime was just to start. Like, don't worry about building a perfect website, don't worry about, you know, getting everything all dialed in exactly the way you want it. Just get out there and do the work that you're doing. And it's also a weird time right now, obviously, with the [laugh] the demise of Twitter as a vehicle for a lot of this stuff. Like, I think a lot of journalists are really having to figure out what their new social media distribution strategies are and I don't think anybody's really settled on anything definitive just yet.So, that's going to be an interesting wrinkle over this year. And then I think, you know, there's also still a lot of concern about the broader economy, you know, advertising is always one of those things that can be the first to go when businesses start to look at the bottom line a little bit more closely. But those things always come around, you know, and when the economy does start to get a little bit better, I think, you know, we've seen a little bit more, maybe [unintelligible 00:26:28] of the market over the last couple of weeks, you know, with some of the earnings results that we've seen. 
So, you know, like, I mean, those are famous last words, obviously, but I think that looking forward into the second half of the year, people are starting to get a little more confident.Corey: I sure hope so. I really want to thank you for being so generous with your time. If people want to learn more, and—as they should—subscribe to see how Runtime plays out, where can they find you?Tom: runtime.news.Corey: Excellent. We'll, of course, put a link to that in the [show notes 00:26:54]. I'm really looking forward to getting the first issue in a few hours myself. Thanks again for your time. I really appreciate it.Tom: Thanks, Corey.Corey: Tom Krazit, Editor in Chief at Runtime. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment telling us that your company's product is being dramatically misunderstood and to please issue a correction.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
A common refrain you'll hear these days is that servers should be scaled out, easy to replace, and interchangeable—cattle, not pets. But for the ops folks who run those servers, the opposite is true. You can't just throw any of them into an incident where they may not know the stack or system and expect everything to work out. Every operator has a set of skills that they've built up through research or experience, and teams should value them as such. They're people, not pets, and certainly not cattle—you can't just get a new one when you burn out your existing ones. On this episode of the podcast—sponsored by Chronosphere—we talk with Paige Cruz, Senior Developer Advocate at Chronosphere, about how teams can reduce the cognitive load on ops, the best ways to prepare for inevitable failures, and where the worst place to page Paige is. Episode notes: Chronosphere provides an observability platform for ops people, so naturally, the company has an interest in the happiness of those people. If you're interested in the history of the pets vs. cattle concept, this covers it pretty well. Previously, we spoke with the CEO of Chronosphere about making incidents easier to manage. We've covered this topic on the blog before, and two articles came up during our conversation with Paige. You can connect with Paige on Twitter, where she has a pretty apropos handle. Congrats to Stellar Question badge winner Bruno Rocha for asking How can I read large text files line by line, without loading them into memory?, which at least 100 users liked enough to bookmark.
Darko Mesaroš, Senior Developer Advocate at AWS, joins Corey on Screaming in the Cloud to discuss all the weird and wonderful things that can be done with old hardware, as well as the necessary skills for being a successful Developer Advocate. Darko walks through how he managed to deploy Kubernetes on a computer from 1986, as well as the trade-offs we've made in computer technology as hardware has progressed. Corey and Darko also explore the forgotten art of optimizing when you're developing, and how it can help to cut costs. Darko also shares what he feels is the key skill every Developer Advocate needs to have, and walks through how he has structured his presentations to ensure he is captivating and delivering value to his audience.About Darko: Darko is a Senior Developer Advocate based in Seattle, WA. His goal is to share his passion and technological know-how with Engineers, Developers, Builders, and tech enthusiasts across the world. If it can be automated, Darko will definitely try to do so. Most of his focus is towards DevOps and Management Tools, where automation, pipelines, and efficient developer tools are the name of the game – click less and code more so you do not repeat yourself! Darko also collects a lot of old technology and tries to make it do what it should not. Like deploy AWS infrastructure through a Commodore 64.Links Referenced: AWS: https://aws.amazon.com/ Blog post RE deploying Kubernetes on a TRS-80: https://www.buildon.aws/posts/i-deployed-kubernetes-with-a-1986-tandy-102-portable-computer AWS Twitch: https://twitch.tv/aws Twitter: https://twitter.com/darkosubotica Mastodon: https://hachyderm.io/@darkosubotica TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense. Corey: Do you wish your developers had less permanent access to AWS? Has the complexity of Amazon's reference architecture for temporary elevated access caused you to sob uncontrollably? With Sym, you can protect your cloud infrastructure with customizable, just-in-time access workflows that can be set up in minutes. By automating the access request lifecycle, Sym helps you reduce the scope of default access while keeping your developers moving quickly. Say goodbye to your cloud access woes with Sym. Go to symops.com/corey to learn more. That's S-Y-M-O-P-S.com/corey. Corey: Welcome to Screaming in the Cloud. I'm Cloud Economist Corey Quinn and my guest today is almost as bizarre as I am, in a somewhat similar direction. Darko Mesaroš is a Senior Developer Advocate at AWS. And instead of following my path of inappropriately using things as databases that weren't designed to be used that way, he instead uses the latest of technology with the earliest of computers. 
Darko, thank you for joining me.Darko: Thank you so much, Corey. First of all, you know, you tell me, Darko is a senior developer advocate. No, Corey. I'm a system administrator at heart. I happen to be a developer advocate these days, but I was born in the cold, cold racks of a data center. I maintained systems, I've installed packages on Linux systems. I even set up Solaris Zones a long time ago. So yeah, but I happen to yell into the camera these days, [laugh] so thank you for having me here.Corey: No, no, it goes well. I started my career as a sysadmin. And honestly, my opinion, if you asked me—which no one does, but I share it anyway—is that the difference between an SRE and a sysadmin is about a 40% salary bump.Darko: Exactly.Corey: That's about it. It is effectively the same job. The tools are different, the approach we take is different, but the fundamental mandate of 'keep the site up' has not materially changed.Darko: It has not. I don't know, like, what the modern SREs do, but like, I used to also semi-maintain AC units. Like, you have to walk around with a screwdriver nonetheless, so sometimes, besides just installing the freshest packages on your Red Hat 4 system, you have to also change the filters in the AC. So, not sure if that belongs in the SRE manifesto these days.Corey: Well, the reason that I wound up inviting you onto the show was a recent blog post you put up where you were able to deploy Kubernetes from the best computer from 1986, which is the TRS-80, or the Trash-80. For the record, the worst computer from 1986 was—and remains—IBM Cloud. But that's neither here nor there.What does it mean to deploy Kubernetes? Because, to be direct, the way that I tend to deploy anything these days, if, you know, I'm sensible and being grown up about it, is a Git push and then the automation takes it away from there. I get the sense you went a little bit deep.Darko: So, when it comes to deploying stuff from an old computer, like, you know, you kind of said the right thing here, like, I have the best computer from 1986. Actually, it's a portable version of the best computer from 1986; it's a TRS-80 Model 102. It's a portable, basically a little computer intended for journalists and people on the go to write stuff and send emails or whatever it was back in those days. And I deployed Kubernetes through that system. Now, of course, I cheated a bit because the way I did it is I just used it as a glorified terminal.I just hooked up the RS-232, the wonderful serial connection, to a Raspberry Pi somewhere out there and it just showed the stuff from a Raspberry Pi onto the TRS-80. So, the TRS-80 didn't actually know how to run kubectl—or 'kube cuddle,' what they call it—it just asked somebody else to do it. But that's kind of the magic of it.Corey: You could have done a Lambda deployment then just as easily.Darko: Absolutely. Like that's the magic of, like, these old hunks of junk is that when you get down to it, they still do things with numbers and transmit electrical signals through some wires somewhere out there. So, if you're capable enough, if you are savvy, or if you just have a lot of time, you can take any old computer and have it do modern things, especially now. Like, and I will say 15 years ago, we could not have done anything like this because 15 years ago, a lot of the stuff, at least that I was involved with, which was Microsoft products, was click-only. I couldn't, for the love of me, deploy a bunch of stuff on an Active Directory domain by using a command line. 
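For a sense of what that 'glorified terminal' setup looks like in practice, here is a rough reconstruction, not Darko's actual code: the old machine only sends keystrokes over RS-232, and a Raspberry Pi on the other end runs whatever command arrives (kubectl included) and writes the output back. It assumes the pyserial library, a USB-to-serial adapter at /dev/ttyUSB0, and a 19200 baud link.

```python
# Illustrative sketch of bridging a serial terminal to a shell on a Raspberry Pi.
# A reconstruction of the general idea only; the device path, baud rate, and
# prompt are assumptions, not details from the blog post.
import subprocess

import serial  # pyserial

PORT = "/dev/ttyUSB0"   # assumed USB-to-RS-232 adapter on the Pi
BAUD = 19200            # assumed rate the 1986 machine can keep up with

def main() -> None:
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        link.write(b"ready> ")
        buffer = b""
        while True:
            byte = link.read(1)
            if not byte:
                continue
            link.write(byte)            # echo so the terminal shows the typing
            if byte in (b"\r", b"\n"):
                command = buffer.decode(errors="replace").strip()
                buffer = b""
                if command:
                    # Run whatever was typed, e.g. "kubectl get nodes"
                    result = subprocess.run(
                        command, shell=True, capture_output=True, text=True
                    )
                    link.write((result.stdout + result.stderr).encode())
                link.write(b"\r\nready> ")
            else:
                buffer += byte

if __name__ == "__main__":
    main()
```

The Pi does all of the real work; the vintage machine only ever sees plain text over the wire, which is why the trick generalizes to deploying Lambda, Terraform, or anything else with a CLI.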
PowerShell was not a thing back then. You could use VB Script, but sort of.Corey: Couldn't you wind up using something that would effect, like, Selenium or whatnot that winds up emulating a user session and moving the mouse to certain coordinates and clicking and then waiting some arbitrary time and clicking somewhere else?Darko: Yes.Corey: Which sounds like the absolute worst version of automation ever. That's like, “I deployed Kubernetes using a typewriter.” “Well, how the hell did you do that?” “Oh, I use the typewriter to hit the enter key. Problem solved.” But I don't think that counts.Darko: Well, yeah, so actually even back then, like, just thinking of, like, a 10, 12-year step back to my career, I automated stuff on Windows systems—like Windows 2000, and Windows 2003 systems—by a tool called AutoIt. It would literally emulate clicks of a mouse on a specific location on the screen. So, you were just really hoping that window pops up at the same place all the time. Otherwise, your automation doesn't work. So yeah, it was kind of like that.And so, if you look at it that way, I could take my Trash-80, I could write an AutoIt script with specific coordinates, and I could deploy Windows things. So actually, yeah, you can deploy anything with these days, with an old computer.Corey: I think that we've lost something in the world of computers. If I, like, throw a computer at you these days, you're going to be pretty annoyed with me. Those things are expensive, it'll probably break, et cetera. If I throw a computer from this era at you, your family is taking bereavement leave. Like, those things where—there would be no second hit.These things were beefy. They were a sense of solidity to them. The keyboards were phenomenal. We've been chasing that high ever since. And, yeah, they were obnoxiously heavy and the battery life was 20 seconds, but it was still something that—you felt like it is computer time. And now, all these things have faded into the background. I am not protesting the march of progress, particularly in this particular respect, but I do miss the sense of having keyboards didn't weren't overwhelmingly flimsy plastic.Darko: I think it's just a fact of, like, we have computers as commodities these days. Back then computers were workstations, computers were something you would buy to perform a specific tasks. Today, computer is anything from watching Twitch to going on Twitter, complaining about Twitter, to deploying Kubernetes, right? So, they have become such commodities such… I don't want to call them single-use items, but they're more becoming single-use items as time progresses because they're just not repairable anymore. Like, if you give me a computer that's five years old, I don't know what to do with it. I probably cannot fix it if it's broken. But if you give me a computer that's 35 years old, I bet you can fix it no matter what happened.Corey: And the sheer compute changes have come so fast and furious, it's easy to lose sight of them, especially with branding being more or less the same. But I saved up and took a additional loan out when I graduated high school to spend three grand on a Dell Inspiron laptop, this big beefy thing. And for fun, I checked the specs recently, and yep, that's a Raspberry Pi these days; they're $30, and it's not going to work super well to browse the web because it's underpowered. 
And I'm sitting here realizing wait a minute, even with a modern computer—forget the Raspberry Pi for a second—I'm sitting here and I'm pulling up web pages or opening Slack, or God forbid, Slack and Chrome simultaneously, and the fan spins up and it sounds incredibly anemic. And it's, these things are magical supercomputers from the future. Why are they churning this hard to show me a funny picture of a cat? What's going on here?Darko: So, my theory on this is… because we can. We can argue about this, but we currently—Corey: Oh, I think you're right.Darko: We have unlimited compute capacity in the world. Like, you can come up with an idea, you're probably going to find a supercomputer out there, you're probably going to find a cloud vendor out there that's going to give you all of the resources you need to perform this massive computation. So, we didn't really think about optimization as much as we used to do in the past. So, that's it: we can. Chrome doesn't care. You have 32 gigs of RAM, Corey. It doesn't care that it takes 28 gigs of that because you have—Corey: I have 128 gigs on this thing. I bought the Mac studio and maxed it out. I gave it the hostname of us-shitpost-1 and we run with it.Darko: [laugh]. There you go. But like, I did some fiddling around, like, recently with—and again, this is just the torture myself—I did some 6502 Assembly for the Atari 2600. 6502 is a CPU that's been used in many things, including the Commodore 64, the NES, and even a whole lot of Apple IIs, and whatnot. So, when you go down to the level of a computer that has 1.19 megahertz and it has only 128 bytes of RAM, you start to think about, okay, I can move these two numbers in memory in the following two ways: “Way number one will require four CPU cycles. Way number two will require seven CPU cycles. I'll go with way number one because it will save me three CPU cycles.”Corey: Oh, yeah. You take a look at some of the most advanced computer engineering out there and it's for embedded devices where—Darko: Yeah.Corey: You need to wind up building code to run in some very tight constraints, and that breeds creativity. And I remember those days. These days, it's well my computer is super-overpowered, what's it matter? In fact, when I go in and I look at customers' AWS bills, very often I'll start doing some digging, and sure enough, EC2 is always the number one expense—we accept that—but we take a look at the breakdown and invariably, there's one instance family and size that is the overwhelming majority, in most cases. You often a—I don't know—a c5.2xl or something or whatever it happens to be.Great. Why is that? And the answer—[unintelligible 00:10:17] to make sense is, “Well, we just started with that size and it seemed to work so we kept using it as our default.” When I'm building things, because I'm cheap, I take one of the smallest instances I possibly can—it used to be one of the Nanos and I'm sorry, half a gig or a gig of RAM is no longer really sufficient when I'm trying to build almost anything. Thanks, JavaScript. So okay, I've gone up a little bit.But at that point, when I need to do something that requires something beefier, well, I can provision those resources, but I don't have it as a default. 
That forces me to, at least in the back of my mind, have a little bit of a sense that I should be parsimonious with what it is that I'm provisioning out there, which is apparently anathema to every data scientist I've ever met, but here we are. Darko: I mean, that's the thing, like, because we're so used to just having those resources, we don't really care about optimizations. Like, I'm not advocating that you all should go and just do assembly language. You should never do that, like, unless you're building embedded systems or you're working for something— Corey: If you need to use that level of programming, you know. Darko: Exactly. Corey: You already know, and nothing you are going to talk about here is going to impact what people in that position are doing. Mostly you need to know assembly language because that's a weeder class in a lot of comp-sci programs, and if you don't pass it, you don't graduate. That's the only reason to really know assembly language most of the time. Darko: But you know, like, it's also a thing, like, as a developer, right, think about the person using your thing, right? And they may have the 128 gig—what is it you called it? Us-shitpost-1, right?—that kind of power, kind of, the latest and greatest M2 Max Ultra Apple computer that just does all of the stuff. You may have a big ol' double Xeon workstation that does a thing. Or you just may have a Chromebook. Think about us with Chromebooks. Like, can I run your website properly? Did you really need all of those animations? Can you think about reducing the amount of animations depending on screen size? So, there's a lot of things that we need to kind of think about. Like, it goes back to the thing where ‘it works on my machine.' Oh, of course it works on your machine. You spent thousands of dollars on your machine. It's the best machine in the world. Of course, it runs smoothly. Corey: Wait 20 minutes and they'll release a new one, and now, “Who sold me this ancient piece of crap?” Honestly, the most depressing thing is watching an Apple Keynote because I love my computer until I watch the Apple Keynote and it's like, oh, like, “Look at this amazing keyboard,” and the keyboard I had was fine. It's like, “Who sold me this rickety piece of garbage?” And then we saw how the Apple butterfly keyboard worked out for everyone and who built that rickety piece of garbage. Let's go back again. And here we are. Darko: Exactly. So, that's kind of the thing, right? You know, like, your computer is the best. And if you develop for it, that's great, but you always have to think about other people who use it. Hence, containers are great to fix one part of that problem, but not all of the problems. So, there's a bunch of stuff you can do. And I think, like, for all of the developers out there, it's great what you're doing, you're building us so many tools, but always take a step back and optimize stuff. Optimize both for the end-user—by the amount of JavaScript you're going to throw at me—and also for the back-end: think about if you have to run your web server on a Pentium III server, could you do it? And if you could, how bad would it be? And you don't have to run it on a Pentium III, but like, try to think about what's the bottom 5% of the capacity you need. So yeah, it's just—you'll save money. That's it. You'll save money, ultimately. Corey: So, I have to ask: what you do day to day is you're a senior developer advocate, which is, hmm, some words, yes.
You spend a lot of your free time and public time talking about running ancient computers, but you also talk to customers who are looking forward, not back. How do you reconcile the two? Darko: So, I like to mix the two. There's a whole reason why I like old computers. Like, I grew up in Serbia. Like, when I was young in the '90s, I didn't have any of these computers. Like, I could only see, like, what was like a Macintosh from 1997 on TV and I would just drool. Like, I wouldn't even come close to thinking about getting that, let alone something better. So, I kind of missed all of that part. But now that I started collecting all of those old computers and just everything from the '80s and '90s, I've actually realized, well, these things are not that different from something else. So, I like to always make comparisons between, like, an old system and a new system. What does it actually do? How does it compare to a new system? So, I love to mix and match in my presentations. I like to mix and match in my videos. You saw my blog posts on deploying stuff. So, I think it's just a fun way to kind of create a little contrast. I do think we should still be moving forward. I do think that technology is getting better and better and it's going to help people do so many more things faster, hopefully cheaper, and hopefully better. So, I do think that we should definitely keep on moving forward. But I always have this nostalgic feeling about, like, old things and… sometimes I don't know why, but I miss the world without the internet. And more than the world without the internet, I think I miss the world with dial-up internet. Because back then you would go on the internet for a purpose. You have to do a thing, you have to wait for a while, you have to make sure nobody's on the phone. And then— Corey: God forbid you dial into a long-distance call. And you have to figure out which town and which number would be long distance versus not, at least where I grew up, and your parents would lose their freaking minds because that was an $8 phone call, which, you know, back in the '80s and early '90s was significant. And yeah, great. Now, I still think it's a great prank opportunity to teach kids or something that it costs more to access websites that are far away, which I guess in theory, it kind of does, but not to the end-user. I digress. Darko: I have a story about this, and I'm going to take a little sidestep. But long-distance phone calls. Like, in the '80s, the World Wide Web was not yet a thing. Like, the www, the websites, all of it—just the general-purpose internet was not yet a thing. We had things called BBSes, or Bulletin Board Systems. That was the extreme version of a dial-up system. You don't dial into the internet; you dial into a website. Imagine if you have the sole intent of visiting only one website and the cost of visiting such a website would depend on where that website currently is. If the website is in Germany and you're calling from Serbia, it's going to cost you a lot of money because you're calling internationally. I had a friend back then. The best software you could get was from American BBSes, but calling America from Serbia back then would have been prohibitively expensive, like, just insanely expensive. So, what this friend used to do, he figured out if he would be connected to a BBS six hours a day, it would actually reset the counter of his phone bill. It would loop the mechanical counter all the way around, from whatever number it started at back to that same number.
So, it would take around six and some hours to complete that full loop of the phone meter—whatever they used back in the '80s to charge your bill—so it effectively cost him zero money back then. So yeah, it was more expensive, kids, back then to call websites, the further away the websites were. [midroll 00:17:11] Corey: So, developer advocates do a lot of things. And I think it is unfair, but also true, that people tend to shorthand all of those things to getting on stage and giving conference talks because that at least is the visible part of it. People see that and it's viscerally understood that that takes work and a bit of courage for those who are not deep into public speaking—and those who are know it takes a lot of courage. Whereas writing a blog post, “Well, I have a keyboard and say dumb things on the internet all the time. I don't see why that's hard.” So, there's a definite perception story there. What's your take on giving technical presentations? Darko: So, yeah. Just as you said, like, I think being a DA, even in my head, was always represented as, like, oh, you're just on stage, you're traveling, you're doing presentations, you're doing all those things. But it's actually quite a lot more than that, right? We do a lot more. But still, we are the developer advocates. We are the front-facing thing towards you, the wonderful developers listening to this. And we tend to be on stage, we tend to do podcasts with wonderful internet personalities, we tend to do live streams, we tend to do videos. And I think one of the key skills that a DA needs to have—a Developer Advocate needs to have—is presentations, right? You need to be able to present a technical message in the best possible way. Now, being a good technical presenter doesn't mean you're funny, doesn't mean you're entertaining; that doesn't have to be a thing. You just need to take a complex technical message and deliver it in the best way possible so that everybody who has just given you their time can get it fully. And this means—well, it means a lot of things, but it means taking this complicated topic, distilling it down so it can be digested within 30 to 45 minutes, and it also needs to be… it needs to be interesting. Like, we can talk about the most interesting topic, but if I don't make it interesting, you're just going to walk out. So, I also lead, like, a coaching class internally, like, to teach people how to speak better, and I'm working with, like, really good speakers there, but a lot of the stuff I say applies no matter if you're a top-level speaker or if you're, like, just starting out. And my challenge to all of you speakers out there, like, anybody who's listening to this and has a plan to deliver a video, a keynote, a live stream, or speak at a summit somewhere, is: get outside of that box. Get outside of that PowerPoint box. I'm not saying PowerPoint is bad. I think PowerPoint is a wonderful tool, but I'm just saying you don't have to present in the way everybody else presents. The more memorable your presentation is, the more outside of that box it is, the more people will remember it. Again, you don't have to be funny. You don't have to be entertaining. You just have to take a thing you are really passionate about and deliver it to us in the best possible way. What that best possible way is, well, it really depends.
Like a lot of things, there is no concrete answer to this thing. Corey: One of the hard parts I found is that people will see a certain technical presenter that they like and want to emulate, and they'll start trying to do what they do. And that works to a point. Like, “Well, I really enjoy how that presenter doesn't read their slides.” Yeah, that's a good thing to pick up. But past a certain point, other people's material starts to fit as well as other people's shoes, and you've got to find your own path. My path has always been getting people's attention first via humor, but it's certainly not the only way. In many contexts, it's not even the most effective way. It works for me in the context in which I use it, but I assure you that when I'm presenting to clients, I don't start off with slapstick comedy. Usually. There are a couple of noteworthy exceptions because clients expect that from me, in some cases. Darko: I think one of the important things is that emulating somebody is okay, as you said, to an extent—like, just trying to figure out what the good things are, but good, very objectively good things. Never try to be funny if you're not funny. That's the thing where you can try comedy, but it's very difficult to—it's very difficult to do comedy if you're not that good at it. And I know that's very much a given, but a lot of people try to be funny when they're obviously not funny. And that's okay. You don't have to be funny. So, there are many ways to get people's attention besides, again, just throwing out a joke. What I did once on stage, I threw a bottle at the floor. I was just—I said, I said a thing and threw a bottle at the floor. Everybody started paying attention to me all of a sudden. I don't know why. So, it's going to be that. It can be something—it can be a shocking statement. When I say shocking, I mean something, well, not bad, but something that's potentially controversial. Like, for example, emacs is better than vim. I don't know, maybe— Corey: “Serverless is terrible.” Darko: Serverl—yeah. Corey: Like, it doesn't matter. It depends on the audience. Darko: It depends on the audience. Corey: “The cloud is a scam.” I gave a talk once called, “The Cloud is A Scam,” and it certainly got people's attention. Darko: Absolutely. So, breaking up the normal flow: because as a participant of a show, of a presentation, you go there and you expect, look, I'm going to sit down, Corey's going to come on stage and Corey says, “Hi, my name is Corey Quinn. I'm the CEO of The Duckbill Group. This is what I do. And welcome to my talk about blah.” Corey: Technically, my business partner, Mike, is the CEO. I don't want to step too close to that fire, let's be clear. Darko: Oh, okay [laugh]. Okay. Then, “Today's agenda is this. And slide one, slide two, slide three.” And that's the expectation of the audience. And the audience comes in in this very autopilot way, like, “Okay, I'm just going to sit there and just nod my head as Corey speaks.” But then if Corey does a weird thing and Corey comes out in a bathtub. Just the bathtub and Corey. And Corey starts talking about how bathtubs are amazing, it's the best place to relax. “Oh, by the way, managing costs in the cloud is so easy, I can do it from a bathtub.” Right? All of a sudden, whoa [laugh], wait a second, this is something that's interesting. And then you can go through the rest of your conversation. But you just made a little—you ticked the box in our head, like, “Oh, this is something weird. This is different.
I don't know what to expect anymore,” and people start paying more attention.Corey: “So, if you're managing AWS costs from your bathtub, what kind of computer do you use?” “In my case, a toaster.”Darko: [laugh]. Yes. But ultimately, like, some of those things are very good and they just kind of—they make you as a presenter, unpredictable, and that's a good thing. Because people will just want to sit on the edge of the seat and, like, listen to what you say because, I don't know what, maybe he throws that toaster in, right? I don't know. So, it is like that.And one of the things that you'll notice, Corey, especially if you see people who are more presenting for a longer time, like, they've been very common on events and people know them by name and their face, then that turns into, like, not just presenting but somebody comes, literally not because of the topic, but because they want to hear Corey talk about a thing. You can go there and talk about unicorns and cats, people will still come and listen to that because it's Corey Quinn. And that's where you, by getting outside of that box, getting outside of that ‘this is how we present things at company X,' this is what you get in the long run. People will know who you are people will know, what not to expect from your presentations, and they will ultimately be coming to your presentations to enjoy whatever you want to talk about.Corey: That is the dream. I really want to thank you for taking the time to talk so much about how you view the world and the state of ancient and modern technologies and the like. If people want to learn more, where's the best place for them to find you?Darko: The best way to find me is on twitch.tv/aws these days. So, you will find me live streaming twice a week there. You will find me on Twitter at @darkosubotica, which is my Twitter handle. You will find me at the same handle on Mastodon. And just search for my name Darko Mesaroš, I'm sure I'll pop up on MySpace as well or whatever. So, I'll post a lot of cloud-related things. I posted a lot of old computer-related things, so if you want to see me deploy Kubernetes through an Atari 2600, click that subscribe button or follow or whatever.Corey: And we will, of course, include a link to this in the show notes. Thank you so much for being so generous with your time. I appreciate it.Darko: Thank you so much, Corey, for having me.Corey: Darko Mesaroš, senior developer advocate at AWS, Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry and insulting comment that you compose and submit from your IBM Selectric typewriter.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
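A minimal sketch of the "glorified terminal" trick Darko describes in this episode, for anyone who wants to picture the plumbing: the vintage machine only ships keystrokes over RS-232, and the Raspberry Pi on the other end runs the real commands (kubectl included) and echoes the output back. Everything specific here is an assumption—the pyserial dependency, the /dev/ttyUSB0 device path, the 9600 baud rate—and Darko's actual setup likely just exposed a serial login console, so treat this as an illustration of the idea rather than a reconstruction of his deployment.

```python
import subprocess

import serial  # pyserial; assumed to be installed on the Raspberry Pi

# Assumed device path and speed -- adjust for whatever the RS-232 adapter exposes.
port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)

while True:
    line = port.readline().decode(errors="ignore").strip()
    if not line:
        continue
    # The vintage terminal never runs kubectl itself; it only sends the typed
    # command here. The Pi executes it and ships the output back down the wire.
    result = subprocess.run(line, shell=True, capture_output=True, text=True)
    port.write((result.stdout + result.stderr).encode() + b"\r\n")
```

In practice, enabling a serial getty on the Pi accomplishes the same thing with no code at all; the sketch just makes the division of labor explicit.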
In this episode Rich speaks with Paige Cruz, a Developer Advocate at Chronosphere. Topics include: How Paige got started with software development and Kubernetes, how much complexity should we expose developers to, using events and traces when troubleshooting, tracing for Kubernetes (added in 1.22), OpenTelemetry, and the burdens of SRE.Show notes:Paige's Twitter | Paige's MastodonRich's Twitter | Rich's MastodonKube Cuddle TwitterLinks:Kubernetes Up and RunningKubernetes the Hard WayDORATraces for KubernetesOpenTelemetryMonitoramaAlex Jones's talk about OpenFeatureKspanPaige's blogEpisode TranscriptLogo by the amazing Emily Griffin.Music by Monplaisir.Thanks for listening. ★ Support this podcast on Patreon ★
Raj Bala, Founder of Perspect, joins Corey on Screaming in the Cloud to discuss all things generative AI. Perspect is a new generative AI company that is democratizing the e-commerce space, by making it possible to place images of products in places that would previously require expensive photoshoots and editing. Throughout the conversation, Raj shares insights into the legal questions surrounding the rise of generative AI and potential ramifications of its widespread adoption. Raj and Corey also dig into the question, “Why were the big cloud providers beaten to the market by OpenAI?” Raj also shares his thoughts on why company culture has to be organic, and how he's hoping generative AI will move the needle for mom-and-pop businesses. About RajRaj Bala, formerly a VP, Analyst at Gartner, led the Magic Quadrant for Cloud Infrastructure and Platform Services since its inception and led the Magic Quadrant for IaaS before that. He is deeply in-tune with market dynamics both in the US and Europe, but also extending to China, Africa and Latin America. Raj is also a software developer and is capable of building and deploying scalable services on the cloud providers to which he wrote about as a Gartner analyst. As such, Raj is now building Perspect, which is a SaaS offering at the intersection of AI and E-commerce.Raj's favorite language is Python and he is obsessed with making pizza and ice cream. Links Referenced:Perspect: https://perspect.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Thinkst Canary. Most folks find out way too late that they've been breached. Thinkst Canary changes this. Deploy Canaries and Canarytokens in minutes and then forget about them. Attackers tip their hand by touching 'em giving you one alert, when it matters. With 0 admin overhead and almost no false-positives, Canaries are deployed (and loved) on all 7 continents. Check out what people are saying at canary.love today!Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsor ing my ridiculous nonsense. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Back again, after a relatively brief point in time since the last time he was on, is Raj Bala. Formerly a VP analyst at Gartner, but now instead of talking about the past, we are talking, instead, about the future. Raj, welcome back. You're now the Founder at Perspect. What are you doing over there?Raj: I am indeed. I'm building a SaaS service around the generative AI space at the intersection of e-commerce. So, those two things are things that I'm interested in. And so, I'm building a SaaS offering in that space.Corey: This is the first episode in which we're having an in-depth discussion about generative AI. 
It's mostly been a topic that I've avoided because until now, relatively recently, it's all been very visual. And it turns into sort of the next generation's crappy version of Instagram, where, “Okay. Well, Instagram's down, so can you just describe your lunch to me?” It's not compelling to describe a generated image on an audio-based podcast. But with the advent of things like ChatGPT, where suddenly it's muscling into my realm, which is the written word, suddenly it feels like there's a lot more attention and effort being paid to it in a bunch of places where it wasn't getting a lot of coverage before, including this one. So, when you talk about generative AI, are you talking in the sense of visual, in terms of the written word, in terms of all of the above, and more? Where's your interest lie?Raj: I think it's all of the above and more. My interest is in all of it, but my focus right now is on the image aspect of things. I've been pretty knee-deep in stable diffusion and all the things that it entails, and it is largely about images at this point.Corey: So, talk to me more about how you're building something that stands between the intersection of e-commerce and generative AI. Because when I go to perspect.com, I'm not staring at a web store in the traditional sense. I'm looking at something that—again, early days, I'm not judging you based upon the content of your landing page—but it does present as a bit more of a developer tool and a little bit less of a “look how pretty it is.”Raj: Yeah. It's very much a developer-focused e-commerce offering. So, as a software developer, if you want programmatic access to all things e-commerce and generative AI that are related to e-commerce, you can do that on perspect.com. So, yeah. It is about taking images of products and being able to put them in AI-generated places essentially.Corey: Got it. So, effectively you're trying to sell, I don't know, titanium jewelry for the sake of argument. And you're talking about now you can place it on a generated model's hand to display this rather than having to either fake it or alternately have a whole bunch of very expensive product shoots and modeling sessions.Raj: Exactly. Exactly. If you want to put this piece of jewelry in front of the Eiffel Tower or the Pyramids of Giza, you can do that in a few seconds as opposed to the expensive photo shoots that were once required.Corey: On some level, given that I spend most of my time kicking around various SaaS products, I kind of love the idea of stock photography modeling, I don't know, Datadog or whatnot. I don't know how that would even begin to pan out, but I'm totally here for it.Raj: That's funny.Corey: Now, the hard part that I'm seeing right now is—I mean, you used to work at Gartner for years.Raj: I did.Corey: You folks are the origin of the Gartner-hype cycle. And given all the breathless enthusiasm, massive amounts of attention, and frankly, hilarious, more than a little horrifying, missteps that we start seeing in public, it feels like we are very much in the heady early days of hype around generative AI.Raj: No doubt about it. No doubt about it. But just thinking about what's possible and what is being accomplished even week to week in this space is just mind-blowing. I mean, this stuff didn't exist really six months ago. And now, the open-source frameworks are out there. The ecosystems are developing around it. A lot of people have compared generative AI to the iPhone. And I think it's actually maybe bigger than that. 
It's more internet-scale disruption, not just a single device like the iPhone. Corey: It's one of those things that I have the sneaking suspicion is going to start showing up in a whole bunch of different places, manifesting in a whole host of different ways. I've been watching academia, largely, freak out about the idea that, “Well, kids are using it to cheat on their homework.” Okay. I understand the position that they're coming from. But it seems like whenever a new technology is unleashed on the world, that is the immediate, instantaneous, reflexive blowback—not necessarily picking on academics in particular—but rather, the way that we've always done something is now potentially very easy thanks to this advance in technology. “Oh, crap. What do we do?” And there's usually a bunch of scurrying around in futile attempts to put the genie back in the bottle, which, frankly, never works. And you also see folks trying to sprint to sort of keep up with this. And it never really pans out. Society adapts, adjusts, and evolves. And I don't think that that's an inherently terrible thing. Raj: I don't think so either. I mean, the same thing existed with the calculator, right? Do you remember, in the early days in school, they said you can't use a calculator, right? And— Corey: Because remember, you will not have a calculator in your pocket as you go through life. Well, that was a lie. Raj: But during the test—during the test that you have to take, you will not have a calculator. And when the rubber meets the road in person during that test, you're going to have to show your skills. And the same thing will happen here. We'll just have to have ground rules and ways to check and balance whether people are cheating or not and adapt, just like you said. Corey: On some level, you're only really cheating yourself past a certain point. Raj: Exactly. Corey: There's value in being able to tell a story in a cohesive, understandable way. Raj: Absolutely. Corey: Oh, the computer will do it for me. And I don't know that you can necessarily depend on that. Raj: Absolutely. Absolutely. You have to understand more than just the inputs and outputs. You have to understand the black box in between enough to show that you understand the subject. Corey: One thing that I find interesting is the position of the cloud providers in this entire— Raj: Mm-hmm. Corey: —space. We have Google, who has had a bunch of execs talking about how they've been working on this internally for years. Like, you get how that makes you look worse instead of better, right? Like, they're effectively tripping over one another on LinkedIn to talk about how they've been working on this for such a long time, and they have something just like it. Well, yeah. Okay. You got beaten to market by a company less than a decade old. Azure has partnered with OpenAI and brought a lot of this to Bing so rapidly they didn't have time to update their Bing app away from the “Use Bing and earn Microsoft coins” nonsense. It's just wow. Talk about a—being caught flat-footed on this. And Amazon, of course, has said effectively nothing. The one even slightly generative AI service that they have come out with that I can think of, that anyone could be forgiven for having missed, is—they unleashed this one year at re:Invent's Midnight Madness where they had Dr. Matt Wood get on stage with the DeepComposer and play a little bit of a song. And it would, in turn, iterate on that. And that was the last time any of us ever really heard anything about the DeepComposer.
I've got one on my shelf. And I do not hear it mentioned even in passing other than in trivia contests. Raj: Yeah. It's pretty fascinating. Amazon with all their might, and AWS in particular—I mean, AWS has Alexa, and so they've—the thing you give to Alexa is a prompt, right? I mean, it is generative AI in a large way. You're feeding it a prompt and saying do something. And it spits out something tokenized to you. But the fact that OpenAI has upended all of these companies I think is massive. And it tells us something else about Microsoft too, which is that they didn't have the wherewithal internally to really compete themselves. They had to do it with someone else, right? They couldn't muster up the effort to really build this themselves. They had to use OpenAI. Corey: On some level, it's a great time to be a cloud provider because all of these experiments are taking place on top of a whole bunch of very expensive, very specific compute. Raj: Sure. Corey: But that is necessary but not sufficient as we look at it across the board. Because even with AWS's own machine-learning powered services, it's only relatively recently that they seemed to have gotten above the “Step one, get a PhD in this stuff. Step two, here's all the nuts and bolts you have to understand about how to build machine-learning models.” Whereas the thing that's really caused OpenAI's stuff to proliferate in the public awareness is, “Okay. You go to a webpage, and you tell it what to draw, and it draws the thing.” Or “go ahead and rename an AWS service if the naming manager had a sense of creativity and a slight bit of whimsy.” And it comes out with names that are six times better than anything AWS has ever come out with. Raj: So funny. I saw your tweet on that, actually. Yeah. If you want to do generative AI on AWS today, it is hard. Oh, my gosh. That's if you can get the capacity. That's if you can get the GPU capacity. That's if you can put together the ML ops necessary to make it all happen. It is extremely hard. Yeah, so putting stuff into a chat interface is 1,000 times easier. I mean, doing something like containers on GPUs is just massively difficult in the cloud today. Corey: It's hard to get them in many cases as well. I had customers that asked, “Okay. What special preferential treatment can we get to get access to more GPUs?” It's like, can you break the laws of physics or change the global supply chain? Because if so, great. You've got this unlocked. Otherwise, good luck. Raj: I think us-east-2, a couple weeks ago, was out of the necessary GPU capacity for, like, the entire week. Corey: I haven't been really tracking a lot of the GPU-specific stuff. Do you happen to know what a lot of OpenAI's stuff is built on top of from a vendoring perspective? Raj: I mean, it's got to be Nvidia, right? Is that what you're asking me? Corey: Yeah. I'm—I don't know a lot of the—again, this is not my area. Raj: Yeah, yeah. Corey: I am not particularly deep in the differences between the various silicon manufacturers. I know that AWS has their Inferentia chipset that's named after something that sounds like what my grandfather had. You've got a bunch of AMD stuff coming out. You have—Intel's been in this space for a while. But Nvidia has sort of been the gold standard based upon GPU stories. So, I would assume it's Nvidia.
So, in fact, OpenAI—them and Facebook—they are kind of leading some—a bunch of the open-source right now. So, it's—Stability AI, Hugging Face, OpenAI, Facebook, and all their stuff is dependent on Nvidia. None of it—if you look through the source code, none of it really relies on Inferentia or Trainium or AMD or Intel. It's all Nvidia. Corey: As you look across the current landscape—at least—let me rephrase that. As I look across the current landscape, I am very hard-pressed to identify any company except probably OpenAI itself as doing anything other than falling all over itself, having been caught—what feels like—completely flat-footed. We've seen announcements rushed. We've seen people talking breathlessly about things that are not yet actively available. When does that stop? When do we start to see a little bit of thought and consideration put into a lot of the things that are being rolled out, as opposed to “We're going to roll this out as an assistant now to our search engine” and then having to immediately turn it off because it becomes deeply and disturbingly problematic in how it responds to a lot of things? Raj: You mean Sam Altman saying he's got a lodge in Montana with a cache of firearms in case AI gets out of control? You mean that doesn't alarm you in any way? Corey: A little bit. Just a little bit. And like, even now you're trying to do things that—to be clear, I am not trying to push the boundaries of these things. But all right. Write a limerick about Elon Musk hurling money at things that are ridiculous. Like, I am not going to make fun of individual people. It's like, I get that. But there is a punching-up story around these things. Like, you also want to make sure that it's not “Write a limerick about the disgusting habit of my sixth-grade classmate.” Like, you don't want to, basically, automate the process of cyber-bullying. Let's be clear here. But finding that nuance, it's a societal thing to wrestle with, on some level. But I don't think that we're anywhere near having cohesive ideas around it. Raj: Yeah. I mean, this stuff is going to be used in nefarious ways. And it's beyond just cyberbullying, too. I think nation-states are going to use this stuff to—as a way to create disinformation. I mean, if we saw a huge flux of disinformation in 2020, just imagine what's going to happen in 2024 with AI-generated disinformation. That's going to be off the charts. Corey: It'll be at a point where you fundamentally have to go back to explicitly trusted sources as opposed to, “Well, I saw a photo of it or a video of it” or someone getting onstage and dancing about it. Well, all those things can be generated now for, effectively, pennies. Raj: I mean, think about evidence in a courtroom now. If I can generate an image of myself holding a gun to someone's head, you have to essentially dismiss all sorts of photographic evidence or video evidence soon enough in court because you can't trust the authenticity of it. Corey: It makes provenance and chain-of-custody issues far more important than they were before. And it was already a big deal. Photoshop has been around for a while. And I remember thinking when I was younger, “I wonder how long it'll be until videos become the next evolution of this.” Because there was—we got to a point fairly early on in my life where you couldn't necessarily take a photograph at face value anymore because—I mean, look at some of the special effects we see in movies. Yeah, okay.
Theoretically, someone could come up with an incredibly convincing fake of whatever it is that they're trying to show. But back then, it required massive render farms and significant investment to really want to screw someone over. Now, it requires drinking a couple glasses of wine, going on the internet late at night, navigating to the OpenAI webpage, and typing in the right prompt. Maybe you iterate on it a little bit, and it spits it out for basically free. Raj: That's one of the sectors, actually, that's going to adopt this stuff the soonest. It's happening now, the film and movie industry. Stability AI actually has a film director on staff. And his job is to be sort of the liaison to Hollywood. And they're going to help build AI solutions into films and so forth. So, yeah. But that's happening now. Corey: One of the more amazing things that I've seen has been the idea of generative voice, where it used to be that in order to get an even halfway acceptable model of someone's voice, they had to read a script for the better part of an hour. That—and they had to make sure that they did it with certain inflection points and certain tones. Now, you can train these things on, “All right. Great. Here's this person just talking for ten minutes. Here you go.” And the reason I know this—maybe I shouldn't be disclosing this as publicly as I am, but the heck with it. We've had one of me on backup that we've used intermittently on those rare occasions when I'm traveling, don't have my recording setup with me, and this needs to go out in a short time period. And we've used it probably a dozen times over the course of the 400 and some odd episodes we've done. Exactly one person has ever noticed. Raj: Wow. Corey: Now, having a conversation going back and forth, start pairing some of those better models with something like ChatGPT, and basically, you're building your own best friend. Raj: Yeah. I mean, soon enough you'll be able to do video AI—a completely AI-generated version of your podcast, perhaps. Corey: That would be kind of wild, on some level. Like, now we're going to animate the whole thing. Raj: Yeah. Corey: Like, I still feel like we need more action sequences. Don't know about you, but I don't have quite the flexibility I did when I was younger. I can't very well even do a pratfall without wondering if I just broke a hip. Raj: You can have an action sequence where you kick off a CloudFormation task. How about that? Corey: One area where I have found that generative text AI, at least, has been lackluster, has been “write a parody of the following song around these particular dimensions.” Their meter is off. Their—the cleverness is missing. Raj: Hmm. Corey: They at least understand what a parody is and they understand the lyrics of the song, but they're still a few iterative generations away. That said, I don't want to besmirch the work of the people who put work into these things. They are basically— Raj: Mm-hmm. Corey: —magic. Raj: For sure. Absolutely. I mean, I'm in wonderment of some of the artwork that I'm able to generate with generative AI. I mean, it is absolutely awe-inspiring. No doubt about it. Corey: So, what's gotten you excited about pairing this specifically with e-commerce? That seems like an odd couple to wind up smashing together. But you have had one of the best perspectives on this industry for a long time now. So, my question is not, “What's wrong with you?” But rather, “What are you seeing that I'm missing?” Raj: I think it's the biggest opportunity from an impact perspective.
Generating AI avatars of yourself is pretty cool. But ultimately, I think that's a pretty small market. I think the biggest market you can go after right now is e-commerce in the generative AI space. I think that's the one that's going to move the needle for a lot of people. So, it's a big opportunity for one. I think there are interesting things you can do in it. The technical aspects are equally interesting. So, you know, there are a handful of compelling things that draw me to it.Corey: I think you're right. There's a lot of interest and a lot of energy and a lot of focus built around a lot of the neat, flashy stuff. But it's “Okay. How does this windup serving problems that people will pay money for?” Like right now to get early access to ChatGPT and not get rate-limited out, I'm paying them 20 bucks a month which, fine, whatever. I am also in a somewhat privileged position. If you're generating profile photos that same way, people are going to be very hard-pressed to wind up paying more than a couple bucks for it. That's not where the money is. But solving business problems—and I think you're onto something with the idea of generative photography of products that are for sale—that has the potential to be incredibly lucrative. It tackles what to most folks is relatively boring, if I can say that, as far as business problems go. And that's often where a lot of value is locked away.Raj: I mean, in a way, you can think of generative AI in this space as similar to what cloud providers themselves do. So, the cloud providers themselves afforded much smaller entities the ability to provision large-scale infrastructure without high fixed costs. And in some ways, I know the same applies to this space too. So, now mom-and-pop shop-type people will be able to generate interesting product photos without high fixed costs of photoshoots and Photoshop and so forth. And so, I think in some ways it helps to democratize some of the bigger tools that people have had access to.Corey: That's really what it comes down to is these technologies have existed in labs, at least, for a little while. But now, they're really coming out as interesting, I guess, technical demos, for lack of a better term. But the entire general public is having access to these things. There's not the requirement that we wind up investing an enormous pile of money in custom hardware and the rest. It feels evocative of the revolution that was cloud computing in its early days. Where suddenly, if I have an idea, I don't need either build it on a crappy computer under my desk or go buy a bunch of servers and wait eight weeks for them to show up in a rack somewhere. I can just start kicking the tires on it immediately. It's about democratizing access. That, I think, is the magic pill here.Raj: Exactly. And the entry point for people who want to do this as a business, so like me, it is a huge hurdle still to get this stuff running, lots of jagged edges, lots of difficulty. And I think that ultimately is going to dissuade huge segments of the population from doing it themselves. They're going to want completed services. They're going to want finish product, at least in some consumable form, for their persona.Corey: What do you think the shaking out of this is going to look like from a cultural perspective? I know that right now everyone's excited, both in terms of excited about the possibility and shrieking that the sky is falling, that is fairly normal for technical cycles. 
What does the next phase look like? Raj: The next phase, unfortunately, is probably going to be a lot of litigation. I think there's a lot of that on the horizon already. Right? Stability AI's being sued. I think the courts are going to have to decide, is this stuff above board? You know, the fact that these models have been trained on otherwise copyrighted data—copyrighted images and music and so forth, that amounts to billions of parameters. How does that translate—how does that affect ages of intellectual property law? I think that's a question that—it's an open question. And I don't think we know. Corey: Yeah. I wish, on some level, that we could avoid a lot of the unpleasantness. But you're right. It's going to come down to a lot of litigation, some of which clearly has a point, on some level. Raj: For sure. Corey: But it's a—but that is, frankly, a matter for the courts. I'm privileged that I don't have to sit here and worry about this in quite the same way because I am not someone who makes the majority of my income through my creative works. And I am also not on the other side of it where I've taken a bunch of other people's creative output and used that to train a bunch of models. So, I'm very curious to know how that is going to shake out as a society. Raj: Yeah. Corey: I think that regulation is also almost certainly on the horizon, on some level. I think that tech has basically burned through 25 years of goodwill at this point. And nobody trusts them to self-regulate. And based upon their track record, I don't think they should. Raj: And interestingly, I think that's actually why Google was caught so flat-footed. Google was so afraid of the ramifications of being first and the downside optics of that, that they got a little complacent. And so, they weren't sure how the market would react to saying, “Here's this company that's known for doing lots of, you know, kind of crazy things with people's data. And suddenly they come out with this AI thing that has these huge superpowers.” And how does that reflect poorly on them? But it ended up reflecting poorly on them anyway because they ended up being viewed as being very, very late to market. So, yeah. They got pie on their face one way or the other. Corey: For better or worse, that's going to be one of those things that haunts them. This is the classic example of the innovator's dilemma. By becoming so focused on avoiding downside risk and on revenue protection, they effectively let their lunch get eaten. I don't know that there was another choice that they could've made. I'm not sitting here saying, “That's why they're foolish.” But it's painful. If I'm—I'm in the same position right now. If I decide I want to launch something new and exciting, my downside risk is fairly capped. The world is my theoretical oyster. Same with most small companies. I don't know about you and what you do right now as a founder, but over here at The Duckbill Group, at no point in the entire history of this company, going back six years now, have we ever sat down for a discussion around, “Okay. If we succeed at this, what are the antitrust implications?” It has never been on our roadmap. It's—that's very firmly in the category of great problems to have. Raj: Really confident companies will eat their own lunch. So, you in fact see AWS do this all the time. Corey: Yes. Raj: They will have no problem disrupting themselves. And there are lots of data points we can talk about to show this.
They will disrupt themselves first because they're afraid of someone else doing it before them. Corey: And it makes perfect sense. Amazon has always had a—I'd call it a strange culture, but that doesn't do it enough of a service, just because it feels like, compared to virtually any other company on the planet, they may as well be an alien organism that has been dropped into the world here. And we see a fair number of times where folks have left Amazon, and they wind up being so immersed in that culture that they go somewhere else, and “Ah, I'm just going to bring the leadership principles with me.” And it doesn't work that way. A lot of them do not pan out outside of the very specific culture that Amazon has fostered. Now, I'm not saying that they're good or that they're bad. But it is a uniquely Amazonian culture that they have going on there. And those leadership principles are a key part of it. You can't transplant that in the same way to other companies. Raj: Can I tell you one of the funniest things one of these cloud providers has said to me? I'm not going to mention the cloud provider. You may be able to figure out which one anyway, though. Corey: No. I suspect I have a laundry list of these various, ridiculous things I have heard from companies. Please, I am always willing to add to the list. Hit me with it. Raj: So, a cloud provider—a big cloud provider, mind you—told me that they wanted Amazon's culture so bad that they began a thing where during a meeting—before each meeting, everyone would sit quietly and read a paper that was written by someone in the room so they all got on the same page. And that is distinctly an Amazon thing, right? And this is not Amazon that is doing this. This is some other cloud provider. So, people are so desperate for that bit of weirdness that you mentioned inside of Amazon that they're willing to replicate some of the movements and the behaviors whole cloth, hoping that they can get that same level of culture. But it has to be—it has to be organic. And it has to be at the root. You can't just take an arm and stick it onto a body and expect it to work, right? Corey: My last real job before I started this place was at a small, scrappy startup for three months. And then we were bought by an enormous financial company. And one of their stated reasons for doing it was, “Well, we really admire a lot of your startup culture, and we want to, basically, socialize that and adopt that where we are.” Spoiler. This did not happen. It was more or less coming from a perspective of, “Well, we visited your offices, and we saw that you had bikes in the entryway and dogs in the office. And well, we went back to our office, and we threw in some bikes and added some dogs, but we didn't get any different result. What's the story here?” It's—you cannot cargo cult bits and pieces of a culture. It has to be something holistic. And let's be clear, you're only ever going to be second best at being another company. They're going to be first place. We saw this a lot in the early 2000s with “We're going to be the next Yahoo.” It's—why would I care about that? We already have the original Yahoo. The fortunes faded, but here we are. Raj: Yeah. Agreed. Corey: On our last recording, you mentioned that you would be building this out on top of AWS. Is that how it's panned out? You still are? Raj: For the most part. For the most part. I've dipped my toes into trying to use GPU capacity elsewhere, using things like ECS Anywhere, which is an interesting route.
There's some promise there, but there's still lots of jagged edges there too. But for the most part, there's not another cloud provider that really has everything I need from GPU capacity to serverless functions at the edge, to CDNs, to SQL databases. That's actually a pretty disparate set of things. And there's not another cloud provider that has literally all of that except AWS at this point.Corey: So far, positive experience or annoying? Let's put it that way.Raj: Some of it's really, really hard. So, like doing GPUs with generative AI, with containers for instance, is still really, really hard. And the documentation is almost nonexistent. The documentation is wrong. I've actually submitted pull requests to fix AWS documentation because a bunch of it is just wrong. So, yeah. It's hard. Some of it's really easy. Some it's really difficult.Corey: I really want to thank you for taking time to speak about what you're up to over at Perspect. Where can people go to learn more?Raj: www.perspect.com.Corey: And we will of course put a link to that in the [show notes 00:30:02]. Thank you so much for being so generous with your time. I appreciate it.Raj: Any time, Corey.Corey: Raj Bala, Founder at Perspect. I'm Cloud Economist Corey Quinn. And this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas, if you haven't hated this podcast, please, leave a five-star review on your podcast platform of choice along with an angry, insulting comment that you got an AI generative system to write for you.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
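For readers who want a concrete picture of the product-in-a-generated-scene idea Raj describes in this episode, here is a minimal sketch using the open-source Stable Diffusion inpainting pipeline (via Hugging Face's diffusers library), the family of models he mentions being knee-deep in. This illustrates the general technique only, not Perspect's actual implementation; the model checkpoint, file names, prompt, and the assumption of a CUDA-capable GPU are all mine.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Assumed public inpainting checkpoint -- illustrative only.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical local files: the product shot, and a mask that is white
# everywhere the model may repaint (the background) and black over the product.
product = Image.open("ring.png").convert("RGB").resize((512, 512))
mask = Image.open("background_mask.png").convert("L").resize((512, 512))

scene = pipe(
    prompt="product photo of a titanium ring on a stone ledge in front of the Eiffel Tower",
    image=product,
    mask_image=mask,
).images[0]
scene.save("ring_in_paris.png")
```

The product pixels stay fixed while the masked background is regenerated from the prompt, which is the rough shape of replacing a photo shoot with a few seconds of GPU time.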
AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create a truly free open-source software, and how his partnership with Amazon has been beneficial. About ABAB Periasamy is the co-founder and CEO of MinIO, an open source provider of high performance, object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu where he serves on the board to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART).AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling of the commodity cluster computing to supercomputing class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” code, which, at the time was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.Links Referenced: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy LinkedIn: https://www.linkedin.com/in/abperiasamy/ Email: mailto:ab@min.io TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsor ing my ridiculous nonsense. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those. 
To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back.AB: Yes, it's wonderful to be here again, Corey.Corey: So, one thing that I want to start with is defining terms. Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use.And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find it oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not.One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do?AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath the underlying technology is called object-store. MinIO is a software and it's also open-source and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS, we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world.Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet. When you say open-source, it is actually open-source; you're AGPL, not source available, or, “We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money,” as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition.The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that?AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right? 
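(A quick concrete aside on the drop-in claim above: the sketch below points boto3, the standard AWS SDK for Python, at either backend, and the only things that change are the endpoint and the credentials. The MinIO endpoint URL, the keys, and the bucket name are placeholders invented for this example, and the bucket is assumed to already exist.)

```python
# Minimal sketch: the same boto3 calls against AWS S3 or a MinIO server.
# Only endpoint_url and credentials differ; the endpoint, keys, and bucket
# below are placeholder assumptions, and the bucket must already exist.
import boto3

def make_s3_client(endpoint_url=None, access_key=None, secret_key=None):
    # endpoint_url=None lets boto3 default to AWS S3 and the normal credential chain.
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )

aws = make_s3_client()
minio = make_s3_client(
    endpoint_url="http://minio.internal:9000",   # hypothetical MinIO endpoint
    access_key="PLACEHOLDER_ACCESS_KEY",
    secret_key="PLACEHOLDER_SECRET_KEY",
)

for client in (aws, minio):
    # Identical calls against either backend: store an object, read it back.
    client.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello")
    print(client.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```

The hard part, as AB explains next, is everything beyond these basic calls: real applications reach the API through SDKs that each interpret it a little differently.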
Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard.And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application. And if one thing breaks—today, if I commit a code and it introduced a regression, I will immediately hear from a whole bunch of community what I broke.There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, they need works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model.And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the region and the S3 API broke, they will actually go fix it. They will never argue, “Why are you using MinIO SDK?” Their engineers, they do everything by reason. That's the reason why they gained credibility.Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing.AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right? Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs will send us in certain type of casing and they expect the case to be—the response to be same way. And that's not HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of community to go fix that application. And Amazon's problem are our problems too. We have to carry that baggage.But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control list, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove.So, we have been pedantic about, like, how, like, certain things that if it's a good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that S3 API is no longer simple, but at least it's not like POSIX, right? 
POSIX is a rich set of APIs, but it doesn't do the useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system.So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, rather than trying to create an API Bible.Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas. I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while, or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity.There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance and control standpoint: make sure this doesn't run away from me, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website.AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—it is good for their business; they want all the customers' data, like unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set a quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to because the Amazon S3 API doesn't talk about quota, but the enterprise community wanted this so badly.And eventually we [unintelligible 00:09:54] it and we gave. But there is one issue to be aware of, right? The problem with quota is that you as an object storage administrator, you set a quota, let's say this bucket, this application, I don't see more than 20TB; I'm going to set a 100TB quota. And then you forget it. And then you think in six months, they will reach 20TB. The reality is, in six months they reach 100TB.And then when nobody expected it—everybody has forgotten that there was a quota set in a certain place—suddenly applications start failing. And when it fails, it doesn't—even though the S3 API responds back saying that there's insufficient space, the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually the object storage that ran out of space, they've lost time and it's a downtime. So, as long as they have proper observability—because, I mean, I would also ask of observability that it can alert you that you are only going to run out of space soon. If you have those systems in place, then go for quota. If not, I would agree with the S3 API's stance: it is not about cost, it's about operational, unexpected accidents.
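(To make that observability framing concrete: a soft quota is just a usage measurement that alerts instead of rejecting writes. The sketch below is one way to do it with boto3's ListObjectsV2 paginator; the bucket name, the soft limit, and the 70 and 90 percent thresholds are assumptions for illustration, and at petabyte scale you would read a storage metric rather than walk every object.)

```python
# Sketch of a "soft quota": measure bucket usage and alert at thresholds
# instead of enforcing a hard limit. Bucket name, limit, and thresholds are
# illustrative assumptions; listing every object is slow for huge buckets,
# so in practice you would pull a storage metric and feed the same number
# into alerting and per-bucket chargeback.
import boto3

def bucket_usage_bytes(client, bucket):
    # Sum object sizes by paginating ListObjectsV2.
    total = 0
    for page in client.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            total += obj["Size"]
    return total

def soft_quota_status(client, bucket, soft_limit_bytes, warn_at=0.70, page_at=0.90):
    used = bucket_usage_bytes(client, bucket)
    ratio = used / soft_limit_bytes
    if ratio >= page_at:
        return f"PAGE: {bucket} at {ratio:.0%} of soft limit ({used} bytes used)"
    if ratio >= warn_at:
        return f"WARN: {bucket} at {ratio:.0%} of soft limit ({used} bytes used)"
    return f"OK: {bucket} at {ratio:.0%} of soft limit"

s3 = boto3.client("s3")  # or any S3-compatible endpoint, as in the earlier sketch
print(soft_quota_status(s3, "demo-bucket", soft_limit_bytes=100 * 1024**4))  # 100 TiB
```

The same per-bucket number can feed the chargeback model AB describes a little later in the conversation.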
Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, you wind up with a runaway log or whatnot, you have a chance to catch it, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well, it got full, so oops-a-doozy.On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it. But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard limit aspect.AB: Actually, that is the right way to do it. That's what I would recommend customers to do. Even though there is a hard quota, I will tell them, don't use it, but use soft quota. And the soft quota, instead of even a soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually it shows up in the month-end bills.On MinIO, when it's deployed in these large data centers, where it's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually [unintelligible 00:12:27] have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team.And you measure it: instead of setting a hard limit, you actually charge them: based on the usage of your bucket, you're going to pay for it. And this is an observability problem. And you can call it soft quotas, but it has to be able to trigger an alert in observability. It's an observability problem. But it actually is interesting to hear it framed as soft quotas, which makes a lot of sense.Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who, “Oh, yeah, you're going to run into a quota storage problem.” Yeah, we all find that out the first time we smack into something and live to regret it. Now, we can talk a lot about the nuances and implementation and low-level detail of this stuff, but let's zoom out of it. What are you folks up to these days? What is the bigger picture that you're seeing of object storage and the ecosystem?AB: Yeah. So, when we started, right, our idea was that the world is going to produce an incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that, and today it is still true. Every year, you say ten years from now and it will still be valid, right?That was the reason for us to play this game. And we saw that every one of these cloud players was incompatible with the others. It's like the early Unix days, right? Like a bunch of operating systems, everything was incompatible, and applications were beginning to adopt this new standard, but they were stuck. 
And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud players' game was bring all the world's data into the cloud.And that actually requires an enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right and that's the right way to build large-scale infrastructure. If we stick to the Amazon S3 API instead of introducing another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in November 2014, really 2015, it was laughable. People thought that there won't be a need for MinIO because the whole world will basically go to AWS S3 and they will be the world's data store. Amazon is capable of doing that; the race is not over, right?Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, usurious data egress charges. The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload that people should not consider the big three cloud providers as the place where that data should live because you're never getting it back.AB: Spot on, right? Even if network is free, right, Amazon makes, like, okay, zero egress-ingress charge, the data we're talking about, like, most of MinIO deployments, they start at petabytes. Like, one to ten petabytes; the small ones feel like 100 terabytes. Even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it?Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was that we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like AWS's S3-compatible object store. We took a very different path. But now, when I say the same story that we started with on day one, it is no longer laughable, right?People believe that yes, MinIO is there because our market footprint is now larger than Amazon S3. And as it goes to production, customers are now realizing it's basically growing inside as shadow IT, and eventually businesses realize the bulk of their business-critical data is sitting on MinIO, and that's how it's surfacing up. So now, what we are seeing, this year particularly, all of these customers are hugely concerned about cost optimization. And as part of the journey, there are also multi-cloud and hybrid-cloud initiatives. They want to make sure that their application can run on any cloud, or the same software can run on their colos like Equinix, or a bunch of, like, Digital Realty, anywhere.And MinIO's software, this is what we set out to do. MinIO can run anywhere inside the cloud, all the way to the edge, even on a Raspberry Pi. 
It's now—whatever we started with has now become reality; the timing is perfect for us.Corey: One of the challenges I've always had with the idea of building an application with the idea to run it anywhere is you can make explicit technology choices around that, and object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. And for example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAProxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between, we're going to build this in a way that we can run anywhere, and the reality that I keep running into, which is we tried to do that but we implicitly, without realizing it, built in a lot of assumptions that everything would look just like this environment that we started off in?AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally an HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser, a JavaScript engine, anywhere across the world, they don't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so it actually allows you, by API, to build a globally unified data infrastructure, some buckets here, some buckets there.That's actually not the problem. The problem comes when you have multiple clouds. Different teams, like, part of it is M&A, but even if you don't do M&A, different teams: no two data engineers would agree on the same software stack. Then they will all end up with different cloud players and some are still running on old legacy environments.When you combine them, the problem is, like, let's take just the cloud, right? How do I even apply a policy, that access control policy, how do I establish unified identity? Because I want to know this application is the only one who is allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like if that employee, that project, or that admin, if he or she leaves the job, how do I make sure that that's all protected?You want unified identity, you want unified access control policies. Where are the encryption keys stored? And then the load balancer itself: the load balancer is not the problem. But then unless you adopt the S3 API as your standard, the definition of what a bucket is differs from Microsoft to Google to Amazon.Corey: Yeah, the idea of the PUTs and retrieving of actual data is one thing, but then how do you manage the control plane layer of the object store and how do you rationalize that? What are the naming conventions? How do you address it? I even ran into something similar somewhat recently when I was doing an experiment with one of the Amazon Snowball edge devices to move some data into S3 on a lark. And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying.I would give a lot just to be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not. Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in?
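(The workaround Corey describes, passing the endpoint as an environment variable or as a flag on every invocation, can at least be centralized in your own tooling. A minimal sketch: the S3_ENDPOINT_URL variable name is our own invention for this example, boto3's endpoint_url parameter is the real knob, and the URL in the comment is a placeholder for any local S3-compatible endpoint such as a Snowball Edge or a MinIO server.)

```python
# Sketch: honor a custom S3 endpoint once, via an environment variable,
# instead of repeating an endpoint flag on every invocation. The variable
# name S3_ENDPOINT_URL is invented for this example; when it is unset,
# boto3 falls back to plain AWS S3.
import os
import boto3

def s3_client_from_env():
    endpoint = os.environ.get("S3_ENDPOINT_URL")
    return boto3.client("s3", endpoint_url=endpoint)

# Example usage (the URL is a placeholder):
#   export S3_ENDPOINT_URL=http://192.168.1.50:9000
client = s3_client_from_env()
print([b["Name"] for b in client.list_buckets()["Buckets"]])
```

AB's answer picks up from here with how MinIO's own tooling treats the Snowball appliance as just another S3-compatible endpoint.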
AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data, but what I find customers doing is they actually use the tools that we built for MinIO because the Snowball appliance also looks like an S3 API-compatible object store. And in fact, like, I've been told that, like, when you want to ship multiple Snowball appliances, they actually put MinIO on them to make them look like one unit, because MinIO can erasure-code objects across multiple Snowball appliances. And the MC tool, unlike the AWS CLI, which is really meant for developers, like low-level calls, MC gives you unique [scoring 00:21:08] tools, like ls, cp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it.Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data, is a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. That's right, there is no good way to wind up doing that natively.AB: Yeah. In fact, Western Digital and a few other players, too. Western Digital created a Snowball-like appliance and they put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. But Snowball-like functionality is important and more and more customers need it.Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty! They're fed up with relying on two or three different “monitoring tools” that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud.Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented it to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again. 
So, it started to increasingly feel, in a lot of ways, like Cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things.I'm wondering when the generation of prosumer networking equipment, for example, is going to say, “Oh, and send these logs over to what object store?” Because right now, it's still write a file and SFTP it somewhere else, at least the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up instead of where do you save this file to having the cloud abstraction, which hopefully, you'll never have to deal with an S3-style endpoint, but that can underpin an awful lot of things. It feels like it's coming back and that's cloud is the de facto way of thinking about things. Is that what you're seeing? Does that align with your belief on this?AB: I actually, fundamentally believe in the long run, right, applications will go SaaS, right? Like, if you remember the days that you used to install QuickBooks and ACT and stuff, like, on your data center, you used to run your own Exchange servers, like, those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for these SaaS, whether they are cloud or their own colo, I think that in the long run, it will be multi-cloud and colo all combined and all of them will look alike.But what I find from the customer's journey, the Old World and the New World is incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their application. But this time, you have—it as a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud, bad idea, right? It's going to cost you more and I would rather not do it.Even though cloud players are trying to make, like, the file and block, like, file system services [unintelligible 00:24:01] and stuff, they make it available ten times more expensive than object, but it's just to [integrate 00:24:07] some legacy applications, but it's still a bad idea to just move legacy applications there. But what I'm finding is that the cost, if you still run your infrastructure with enterprise IT mindset, you're out of luck. It's going to be super expensive and you're going to be left out modern infrastructure, because of the scale, it has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen.And that's why cloud, in the long run, everyone will look like AWS and we always said that and it's now being becoming true. Like, Kubernetes and MinIO basically is leveling the ground everywhere. It's giving ECS and S3-like infrastructure inside AWS or outside AWS, everywhere. But what I find the challenging part is the cultural mindset. If they still have the old cultural mindset and if they want to adopt cloud, it's not going to work.You have to change the DNA, the culture, the mindset, everything. The best way to do it is go to the cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask economics question, the unit economics. Then you will find the answers yourself.Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective. 
And well, we should go and refactor this because, I don't know, a couple of folks on a podcast said we should isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people have been talking about getting off of mainframe since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way that they do business, they have to rethink a lot of the architectural things that have sprung up around it.I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists.AB: What I am finding is that if you are running it the enterprise IT style, you are the one telling the application developers, here you go, you have this many VMs and then you have, like, a VMware license and, like, Jboss, like WebLogic, and like a SQL Server license, now you go build your application, you won't be able to do it. Because application developers talk about Kafka and Redis and like Kubernetes, they don't speak the same language. And that's when these developers go to the cloud and then finish their application, take it live from zero lines of code before it can procure infrastructure and provision it to these guys. The change that has to happen is how can you give what the developers want now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is if you're running enterprise IT infrastructure, traditional infrastructure, they are ashamed of talking about it.But then you go to the cloud and then at scale, some parts of it, you want to move for—now you really know why you want to move. For economic reasons, like, particularly the data-intensive workloads becomes very expensive. And at that part, they go to a colo, but leave the applications on the cloud. So, it's the multi-cloud model, I think, is inevitable. The expensive pieces that where you can—if you are looking at yourself as hyperscaler and if your data is growing, if your business focus is data-centric business, parts of the data and data analytics, ML workloads will actually go out, if you're looking at unit economics. If all you are focused on productivity, stick to the cloud and you're still better off.Corey: I think that's a divide that gets lost sometimes. When people say, “Oh, we're going to move to the cloud to save money.” It's, “No you're not.” At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go for therefore is for a capability story when it's right for you.That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be its cloud or it's trash. No, I'm a big fan of doing things that are sensible and cloud is not the right answer for every workload under the sun. Conversely, when someone says, “Oh, I'm building a new e-commerce store,” or whatnot, “And I've decided cloud is not for me.” It's, “Ehh, you sure about that?”That sounds like you are smack-dab in the middle of the cloud use case. 
But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be a panacea the way that their sales teams say that they will.AB: Yeah. And I find, like, organizations that have SREs, DevOps, and software engineers running the infrastructure, they actually are ready to go multi-cloud or go to colo because they have the—exactly know. They have the containers and Kubernetes microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture, go to cloud, rewrite your application.Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there's basically no companies out there worth mentioning that are, “Yep, we've decided the cloud is terrible, we're taking everything out and we are going to data centers. The end.” In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out, other times during proof of concepts, and other times, as things have hit a certain point of scale, we're in an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real.AB: Absolutely. Actually, what we are finding with the application side, like, parts of their overall ecosystem, right, within the company, they run on the cloud, but the data side, some of the examples, like, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes and their plan is to go at exascale. And they are actually doing repatriation because for them, their customers, it's consumer-facing and it's extremely price sensitive, but when you're a consumer-facing, every dollar you spend counts. And if you don't do it at scale, it matters a lot, right? It will kill the business.Particularly last two years, the cost part became an important element in their infrastructure, they knew exactly what they want. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network and put it on colo or even lease these boxes, they know what their demand is. Even at ten petabytes, the economics starts impacting. If you're processing it, the data side, we have several customers now moving to colo from cloud and this is the range we are talking about.They don't talk about it publicly because sometimes, like, you don't want to be anti-cloud, but I think for them, they're also not anti-cloud. They don't want to leave the cloud. The completely leaving the cloud, it's a different story. That's not the case. Applications stay there. Data lakes, data infrastructure, object store, particularly if it goes to a colo.Now, your applications from all the clouds can access this centralized—centralized, meaning that one object store you run on colo and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud, some of them, surprisingly, that they have global customer base. And not all of them are cloud. Sometimes like some applications itself, if you ask what type of edge devices they are running, edge data centers, they said, it's a mix of everything. 
What really matters is not the infrastructure. Infrastructure in the end is CPU, network, and drive. It's a commodity. It's really the software stack, you want to make sure that it's containerized and easy to deploy, roll out updates, you have to learn the Facebook-Google style running SaaS business. That change is coming.Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reason more broadly permeating.AB: Right. Like, the competition is always great for customers. They get to benefit from it. So, the decentralization is a path to bringing—like, commoditizing the infrastructure. I think the bigger picture for me, what I'm particularly happy is, for a long time we carried industry baggage in the infrastructure space.If no one wants to change, no one wants to rewrite application. As part of the equation, we carried the, like, POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a Service, NFS as a Service. It's too much of a baggage. All of that is getting thrown out. Like, the cloud players be helped the customers start with a clean slate. I think to me, that's the biggest advantage. And that now we have a clean slate, we can now go on a whole new evolution of the stack, keeping it simpler and everyone can benefit from this change.Corey: Before we wind up calling this an episode, I do have one last question for you. As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess make peace with the fact that the majority of your user base is not paying you? And I guess how do you get people to decide, “You know what? We like the cut of his jib. Let's give him some money.”AB: Mm-hm. Yeah, if I looked at it that way, right, I have both the [unintelligible 00:33:38], right, on the open-source side as well as the business. But I don't see them to be conflicting. If I run as a charity, right, like, I take donation. If you love the product, here is the donation box, then that doesn't work at all, right?I shouldn't take investor money and I shouldn't have a team because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor, why would—if I'm a customer, same software equal in functionality, if its proprietary, I would actually prefer open-source and pay even more.But why are, really, customers paying me now and what's our view on open-source? I'm actually the free software guy. Free software and open-source are actually not exactly equal, right? We are the purest of the open-source community and we have strong views on what open-source means, right. That's why we call it free software. And free here means freedom, right? Free does not mean gratis, that free of cost. It's actually about freedom and I deeply care about it.For me it's a philosophy and it's a way of life. 
That's why I don't believe in open core and other models like that; holding back and giving crippleware is not open-source, right? I give you some freedom but not all, right? Like, it breaks the spirit. So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then add commercial support on top.We built the product, we believed in open-source, we still believe and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications that—like the AGPL license on the derivative works, they have to be compatible with AGPL because we are the creator. If you cannot open-source your application and derivative works, you can buy a commercial license from us. We are the creator, we can give you a dual license. That's how the business model works.That way, the open-source community completely benefits. And it's about the software freedom. There are customers, for them, open-source is a good thing and they want to pay because it's open-source. There are some customers that want to pay because they can't open-source their application and derivative works, so they pay. It's a happy medium; that way I actually find open-source to be incredibly beneficial.Open-source gave us that trust, like, more than adoption rate. It's not like free to download and use. More than that, the customers that matter, the community that matters, because they can see the code and they can see everything we did; it's not because I said so, or marketing and sales said so and you believe whatever they say. You download the product, experience it and fall in love with it, and then when it becomes an important part of your business, that's when they engage with us because they talk about license compatibility and data loss or a data breach, all that becomes important. Open-source isn't—I don't see that to be conflicting for business. It actually is incredibly helpful. And customers see that value in the end.Corey: I really want to thank you for being so generous with your time. If people want to learn more, where should they go?AB: I was on Twitter and now I think I'm spending more time on, maybe, LinkedIn. I think if they—they can send me a request and then we can chat. And I'm always, like, spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io and I'm always interested in talking to our user base.Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.AB: It's wonderful to be here.Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. 
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Lex Neva, Staff Site Reliability Engineer at Honeycomb and Curator of SRE Weekly, joins Corey on Screaming in the Cloud to discuss reliability and the life of a newsletter curator. Lex shares some interesting insights on how he keeps his hobbies and side projects separate, as well as the intrusion that open-source projects can have on your time. Lex and Corey also discuss the phenomenon of newsletter curators being much more demanding of themselves than their audience typically is. Lex also shares his views on how far reliability has come, as well as how far we have to go, and the critical implications reliability has on our day-to-day lives. About LexLex Neva is interested in all things related to running large, massively multiuser online services. He has years of SRE, Systems Engineering, tinkering, and troubleshooting experience and perhaps loves incident response more than he ought to. He's previously worked for Linden Lab, DeviantArt, Heroku, and Fastly, and currently works as an SRE at Honeycomb while also curating the SRE Weekly newsletter on the side.Lex lives in Massachusetts with his family including 3 adorable children, 3 ridiculous cats, and assorted other awesome humans and animals. In his copious spare time he likes to garden, play tournament poker, tinker with machine embroidery, and mess around with Arduinos.Links Referenced: SRE Weekly: https://sreweekly.com/ Honeycomb: https://www.honeycomb.io/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. Tired of observability costs going up every year without getting additional value? Or being locked into a vendor due to proprietary data collection, querying, and visualization? Modern-day, containerized environments require a new kind of observability technology that accounts for the massive increase in scale and attendant cost of data. With Chronosphere, choose where and how your data is routed and stored, query it easily, and get better context and control. 100% open-source compatibility means that no matter what your setup is, they can help. Learn how Chronosphere provides complete and real-time insight into ECS, EKS, and your microservices, wherever they may be at snark.cloud/chronosphere that's snark.cloud/chronosphere.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Once upon a time, I decided to start writing an email newsletter, and well, many things happened afterwards, some of them quite quickly. But before that, I was reading a number of email newsletters in the space. One that I'd been reading for a year at the time, was called SRE Weekly. It still comes out. I still wind up reading it most weeks.And it's written by Lex Neva, who is not only my guest today but also a staff site reliability engineer at Honeycomb. Lex, it is so good to finally talk to you, other than reading emails that we send to the entire world that pass each other like ships in the night.Lex: Yeah. I feel like we should have had some kind of meeting before now. But yeah, it's really good to [laugh] finally meet you.Corey: It was one of the inspirations that I had. 
And to be clear, when I signed up for your newsletter originally—I was there for issue 15, which is many, many years ago—I was also running a small-scale SRE team at the time. It was, I found as useful as a part of doing my job and keeping abreast of what was going on in the ecosystem. And I found myself, once I went independent, wishing that your newsletter and a few others had a whole bunch more AWS content. Well, why doesn't it?And the answer is because you are, you know, a reasonable person who understands that mental health is important and boundaries exist for a reason. No one sensible is going to care that much about one cloud provider all the time [sigh]. If only we were all that wise.Lex: Right? Well, [laugh] well, first of all, I love your newsletter, and also the content that you write that—I mean, I would be nowhere without content to link to. And I'm glad you took on the AWS thing because, much like how I haven't written Security Weekly, I also didn't write any kind of AWS Weekly because there's just too much. So, thanks for falling on that sword.Corey: I fell on another one about two years ago and started the Thursdays, which are Last Week in AWS Security. But I took a different bent on it because there are a whole bunch of security newsletters that litter the landscape and most of them are very good—except for the ones that seem to be entirely too vendor-captured—but the problem is, is that they lacked both a significant cloud focus, as well as an understanding that there's a universe of people out here who care about security—or at least should—but don't have the word security baked into their job title. So, it was very insular, using acronyms they assume that everyone knows, or it's totally vendor-captured and it's trying to the whole fear, uncertainty, and doubt thing, “And that's why you should buy this widget.” “Will it solve problems?” “Well, it'll solve our revenue problems at our company that sells the widgets, but other than that, not really.” And it just became such an almost incestuous ecosystem. I wanted something different.Lex: Yeah. And the snark is also very useful [laugh] in order to show us that you're not in their pocket. So yeah, nice work.Corey: Well, I'll let you in on a secret, now that we are—what, I'm somewhat like 300 and change issues in, which means I've been doing this for far too long, the snark is a byproduct of what I needed to do to write it myself. Because let's face it, this stuff is incredibly boring. I needed to keep myself interested as I started down that path. And how can I continually keep it fresh and funny and interesting, but not go too far? That's a fun game, whereas copying and pasting some announcement was never fun.Lex: Yeah, that's not—I hear you on trying to make it interesting.Corey: One regret that I've had, and I'm curious if you've ever encountered this yourself because most people don't get to see any of this. They see the finished product that lands in their inbox every Monday, and—in my case, Monday; I forget the exact day that yours comes out. I collect them and read through them for them all at once—but I find that I have often had caused a look back and regret the implicit commitment in Last Week in AWS as a name because it would be nice to skip a week here and there, just because either I don't particularly feel like it, or wow, there was not a lot of news worth talking about that came out last week. But it feels like I've forced myself onto a very particular treadmill schedule.Lex: Yeah. 
Yeah, it comes with, like, calling it SRE Weekly. I just followed suit for some of the other weeklies. But yeah, that can be hard. And I do give myself permission to take a week off here and there, but you know, I'll let you in on a secret.What I do is I try to target eight to ten articles a week. And if I have more than that, I save some of them. And then when it comes time to put out an issue, I'll go look at what's in that ready queue and swap some of those in and swap some of the current ones out just so I keep things fresh. And then if I need a week off, I'll just fill it from that queue, you know, if it's got enough in it. So, that lets me take vacations and whatnot. Without that, I think I would have had a lot harder of a time sticking with this, or there just would have been more gaps. So yeah.Corey: You're fortunate in that you have what appears to be a single category of content when you construct your newsletter, whereas I have three that are distinct: AWS releases and announcements and news and things to make fun of for the past week; the things from the larger community folks who do not work there, but are talking about interesting approaches or news that is germane; and then ideally a tip or a tool of the week. And I found, at least lately, that I've been able to build out the tools portion of it significantly far in advance. Because a tool that makes working with AWS easier this week is probably still going to be fairly helpful a month from now.Lex: Yeah, that's fair. Definitely.Corey: But putting some of the news out late has been something of a challenge. I've also learned—by getting it wrong—that I'm holding myself to a tighter expectation of turnaround time than any part of the audience is. The Thursday news is all written the week before, almost a full week beforehand and no one complains about that. I have put out the newsletter a couple of times an hour or two after its usual 7:30 pacific time slot that it goes out in; not a single person has complained. In one case, I moved it by a day to accommodate an announcement but didn't explain why; not a single person emailed in. So, okay. That's good to know.Lex: Yeah, I've definitely gotten to, like, Monday morning, like, a couple of times. Not much, not many times, but a couple of times, I've gotten a Monday morning be like, “Oh, hey. I didn't do that thing yesterday.” And then I just release it in the morning. And I've never had a complaint.I've cancelled last minute because life interfered. The most I've ever had was somebody emailing me and be like, you know, “Hope you feel better soon,” like when I had Covid, and stuff like that. So, [laugh] yeah, sometimes maybe we do hold ourselves to a little bit of a higher standard than is necessary. I mean, there was a point where I got—I had major eye surgery and I had to take a month off of everything and took a month off the newsletter. And yeah, I didn't lose any subscribers. I didn't have any complaints. So people, I think, appreciate it when it's there. And, you know, if it's not there, just wait till it comes out.Corey: I think that there is an additional challenge that I started feeling as soon as I started picking up sponsors for it because it's well, but at this point, I have a contractual obligation to put things out. And again, life happens, but you also don't want to have to reach out on apology tours every third week or whatnot. 
And I think that's in part due to the fact that I have multiple sponsors per issue and that becomes a bit of a juggling dance logistically on this end.Lex: Yeah. When I started, I really didn't think I necessarily wanted to have sponsors because, you know, it's like, I have a job. This is just for fun. It got to the point where it's like, you know, I'll probably stop this if there's not some kind of monetary advantage [laugh]. And having a sponsor has been really helpful.But I have been really careful. Like, I have always had only a single sponsor because I don't want that many people to apologize to. And that meant I took in maybe less money than I then I could have, but that's okay. And I also was very clear, you know, even from the start having a contract that I may miss a week without notice. And yes, they're paying in advance, but it's not for a specific range of time, it's for a specific number of issues, whenever those come out. That definitely helped to reduce the stress a little bit. And I think without that, you know, having that much over my head would make it hard to do this, you know? It has to stay fun, right?Corey: That's part of the things that kept me from, honestly, getting into tech for the first part of my 20s. It was the fear that I would be taking a hobby, something that I love, and turning it into something that I hated.Lex: Yeah, there is that.Corey: It's almost 20 years now and I'm still wondering whether I actually succeeded or not in avoiding hating this.Lex: Well, okay. But I mean, are you, you know, are you depressed [unintelligible 00:09:16] so there's this other thing, there's this thing that people like to say, which is like, “You should only do a job that you really love.” And I used to think that. And I don't actually think that anymore. I think that it is important to have a job that you can do and not hate day-to-day, but there's no shame in not being passionate about your work and I don't think that we should require passion from anyone when we're hiring. And I think to do so is even, like, privilege. So, you know, I think that it's totally fine to just do something because it pays the bills.Corey: Oh, absolutely. I find it annoying as hell when I'm talking to folks who are looking to hire for roles and, “Well, include a link to your GitHub profile,” is a mandatory field. It's, well, great. What about people who work in places where they're not working on open-source projects as a result, and they can't really disclose what they're doing? And the expectation that oh, well outside of work, you should be doing public stuff, too.It's, I used to do a lot of public open-source style work on GitHub, but I got yelled at all the time for random, unrelated reasons and it's, I don't want to put something out there that I have to support and people start to ask me questions about. It feels like impromptu unasked-for code review. No, thanks. So, my GitHub profile looks fairly barren.Lex: You mean like yelling at you, like, “Oh, you're not contributing enough.” Or, you know, “We need this free thing you're doing, like, immediately,” or that kind of thing?Corey: Worse than that. 
The worst example I've ever had for this was when I was giving a talk called “Terrible Ideas in Git,” and because I wanted to give some hilariously contrived demos that took a fair bit of work to set up, I got them ready to go inside of a Docker container because I didn't trust that my laptop would always work, I might have to borrow someone else's, I pushed that image called “Terrible Ideas” up to Docker Hub. And I wound up with people asking questions about it. Like, “Is this vulnerable to Shellshock?” And it's, “You do realize that this is intentionally designed to be awful? It is only for giving a very specific version of a very specific talk. It's in public, just because I didn't bother to make it private. What are you doing? Please tell me you're not running this in production at a bank?” “No comment.” Right. I don't want that responsibility of people yelling at me for things I didn't do on purpose. I want to get yelled at for the things I did intentionally.Lex: Exactly. It's funny that sometimes people expect more out of you when you're giving them something free versus when they're paying you for it. It's an interesting quirk of psychology that I'm sure that professionals could tell me all about. Maybe there's been research on it, I don't know. But yeah, that can be difficult.Corey: Oh, absolutely. I used to work at a web hosting company and the customers spending thousands a month with us were uniformly great. But there was always the lowest tier customer of the cheapest thing that we offered that seemed to expect that that entitled them to 80 hours a month of support for engineering problems and whatnot. And it was not profitable to service some of those folks. I've also found that there's a real transitive barrier that begins as soon as you find a way to charge someone a dollar for something.There's a bit of a litmus test of can you transfer a dollar from your bank account to mine? And suddenly, the entire tenor of the conversations with people who have crossed that boundary changes. I have toyed, on some level, with the idea of launching a version of this newsletter—or wondering if I retcon the whole thing—do I charge people to subscribe to this? And the answer I keep coming away with is not at all, because it started in many respects as marketing for AWS bill consulting and I want the audience, as fast as possible. Artificially limiting its distribution via a pay-for model just seemed a little on the strange side.Lex: Yeah. And then you're beholden to very many people and there's that disproportionality. So, years ago, before I even started my career in, I guess, you know, things that were SRE before SRE was cool, I worked for a living in Second Life. Are you familiar with Second Life?Corey: Oh, yes. I'm very familiar with that. Linden Labs.Lex: Yep. So, I worked for Linden Lab years later, but before I worked for them, I sort of spent a lot of my time living in Second Life. And I had a product that I sold for two or three dollars. And actually, it's still in there; you could still buy it. It's interesting. I don't know if it's because the purchase price was 800 Linden dollars, which equates to, like, $2.16, or something like that, but—Corey: The original cryptocurrency.Lex: Right, exactly. Except there's no crypto involved.Corey: [laugh].Lex: But people seemed to have a disproportionate sense of, like, how much of my time they expected for support. You know, I'm going to support them a little bit. 
You have to recognize at some point, I actually can't come give you a tutorial on using this product because you're one of 500 customers for this month. And you give me two dollars and I don't have ten hours to give you. You know, like, sorry [laugh]. Yeah, so that can be really tough.Corey: And on some level, you need to find a way to either charge more or charge for support on top of it, or ideally—I wish more open-source projects would take this approach—“Huh. We've had 500 people asking us the exact same question. Should we improve our docs? No, of course not. They're the ones who are wrong. It's the children who are getting it wrong.”I don't find that approach [laugh] to be particularly useful, but it bothers me to no end when I keep running into the same problem onboarding with something new and I ask about it, and, “Oh, yeah, everyone runs into that problem. Here's how you get around it.” This would have been useful to mention in the documentation. I try not to ask questions without reading the manual first.Lex: Well, so there's a couple different directions I could go with this. First of all, there's a really interesting thing that happened with the core-js project that I recommend people check out. I think the direction I'll go at the moment—we can bookmark that other one—is that I have an open-source project on the side that I kind of did for my own fun, which is a program for creating designs that can be processed by computer-controlled embroidery machines. So, this is sewing machines that can plot stitches in the x-y plane based on a program that you give it.And there really wasn't much in the way of open-source software available that could help you create these designs and so I just sort of hacked something together and started hacking with Python for my own fun, and then put it out there and open-sourced it. And it's kind of taken off, kind of like gotten a life of its own. But of course, I've got a newsletter, I've got three kids, I've got a family, and a day job, and I definitely hear you on the, like, you know, yeah, we should put this FAQ in the docs, but there can be so little time to even do that. And I'm finding that there's, like—you know, people talk about work-life balance, there's, like, work slash life slash open-source balance that you really—you know, you have to, like, balance all three of them.And a lot of weeks, I don't have any time to spend on the project. But you know what, it still kicks along and people just kind of, they use my terrible little project [laugh] as best they can, even though it has a ton of rough edges. I'm sorry, everyone, I'm so sorry. I know it has a t—the UI is terrible. But yeah, it's interesting how these things sometimes take on a life of their own and you can feel dragged along by your own open-source work, you know?Corey: It always bothers me—I think this might tie back to the core-js issue you talked about a second ago—where there are people who are building and supporting open-source tools or libraries that they originally constructed to scratch an itch and now they are core dependencies of basically half the internet. And these people are still wondering on some level, how do I put food on the table this month? It's wild to me. If there were justice in the world, you'd start to think these people would wind up in never-have-to-work-again-if-they-don't-want-to positions. But in many cases, it's exactly the opposite.Lex: Well, that's the really interesting thing. 
So, first of all, I'm hugely privileged to have any time to get to work on open-source. There's plenty of people that don't, and yeah, so requiring people to have a GitHub link to show their open-source contributions is inherently unfair and biased and discriminatory. That aside, people have asked all along, like, “Lex, this is decent software, you could sell this. You could charge money for this thing and you could probably make a, you know, a decent living at this.”And I categorically refuse to accept money for that project because I don't want to have to support it on a commercial level like that. If I take your money, then you have an expectation that—especially if I charge what one would expect—so this software, part of the reason I decided to write my own is because it starts at two-hundred-some-odd dollars for the competitors that are commercial and goes up into the five, ten-thousand dollars. For a software package. Mine is free. If I started charging money, then yeah, I'm going to have to build a support department and we're going to have a knowledge base, I'm going to have to incorporate. I don't want to do that for something I'm doing for fun, you know? So yeah, I'm going to keep it free and terrible [laugh].Corey: It becomes something you love, turns into something you hate without even noticing that it happens. Or at least something that you start to resent.Lex: Yeah. I don't think I would necessarily hate machine embroidery because I love it. It's an amazingly fun little quirky hobby, but I think it would definitely take away some of the magic for me. Where there's no stress at all, I can spend months noodling on an algorithm getting it right, whereas it'd be, you know, if I start having to have deliverables, it changes it entirely. Yeah.Corey: It's odd, it seems, on some level too, that the open-source world that I got started with has evolved in a whole bunch of different ways. Whereas it used to be write a quick fix for something and it would get merged, in many cases by the time you got back from lunch. And these days, it seems like it takes multiple weeks, especially with a corporate-controlled open-source project, and there's so much back and forth. And even getting the boilerplate, like the CLA—the Contributor License Agreement—aside and winding up getting other people to sign off on it, then there's back and forth, in some cases for weeks about, well, the right kind of test coverage and how to look at this and the right holistic framework. And I appreciate that there is validity and value to these things, but is that where the bulk of the effort should be going when there's a pull request ready to go that solves a breaking customer problem? But the test coverage isn't right so we're going to delay it for two or three releases. It's what are you doing there? Someone lost the plot somewhere. And I'm sure there are reasons that make sense, given the framework people are operating within. I just find it maddening from the side of having to [laugh] deal with this as a human.Lex: Yeah, I hear you. And it sometimes can go even beyond test coverage to something like code style, you know? It's like, “Oh, that's not really in the style of this project,” or, “You know, I would have written it this way.” And one thing I've had to really work on, on this project is to make it as inviting to developers as possible. I have to sometimes look at things and be like, yeah, I might do that a different way. But does that actually matter? 
Like, do I have a reason for that that really matters or is it just my style? And maybe because it's a group project I should just be like, no, that's good as it is.[midroll 00:20:23]Corey: So, you've had an interesting career. And clearly you have opinions about SRE as a result. When I started seeing that you were the author of SRE Weekly, years ago, I just assumed something that I don't believe is true. Is it possible that you have been contributing to the community around SRE, but somehow have never worked at Google?Lex: I have never worked at Google. I have never worked at Netflix. I've never worked at any of those big companies. The biggest company I've worked for is Salesforce. Although I worked for Heroku who had been bought by Salesforce a couple of years prior, and so it was kind of like working for a startup inside a big company. And here's the other thing. I created that newsletter two months after starting my first job where I had a—like, the first job in which I was titled ‘SRE.' So, that's possibly contentious right there.Corey: You know, I hadn't thought of it this way, but you're right. I did almost the exact same thing. I was no expert in AWS when I started these things. It came out of an effort that I needed to do of keeping in touch with everything that came out that had potential economic impact, which it turns out are most things when you understand architecture and cost are the same thing when it comes to cloud. But I was more or less gathering what smart people were saying.And somehow there's been this osmotic effect, where people start to view me as the wise old sage of the mountain when it comes to AWS. And no, no, no, I'm just old and grumpy. That looks alike. Don't mistake it for wisdom. But people will now seek me out to get my opinion on things and I have no idea what the answer looks like for most of the stuff.But that's the old SRE model—or sysadmin model that I've followed, which is when you don't know the answer, well, how do you get to a place where you can find the answer? How do you troubleshoot this? Click the button. It doesn't work? Well, time to start taking the button apart to figure out why.Lex: Yeah, definitely. I hear you on people. So, first of all, thanks to everyone who writes the articles that I include. I would be nothing without—I mean—literally, that I could not have a newsletter without content creators. I also kind of started the newsletter as an exploration of this new career title.I mean, I've been doing things that basically fit along with SRE for a long time, but also, I think my view of SRE might not really be the same as a lot of folks', or, like, the one that Google passed down from the [Google Book Model 00:22:46]. I don't—I'm going to be a little heretical here—I don't necessarily a hundred percent believe in the SLI/SLO/SLA error budget model. I don't think that that necessarily fits everyone, and I'm not sure it even suits the bigger companies as well as they think it does. I think that there's a certain point past which you can't actually predict failure, and just slowing down on your deploys in the hope that it causes there to be fewer incidents—so that you can, you know, go back to passing your error budget, to passing your SLO—I'm not sure that actually makes sense or is realistic and works in the real world.Corey: I've been left with the distinct impression that it's something of a framework for how to think about a lot of those things. 
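For readers who haven't worked with the model Lex is pushing back on, a minimal sketch of the error budget arithmetic behind an SLO may help. The numbers are purely illustrative and don't come from the episode; this is just the availability math, not a claim about whether the model fits any particular team.

```python
# Minimal sketch of SLO / error budget arithmetic (illustrative numbers only).
def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Minutes of 'allowed' unavailability over the window for a given SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Budget left after observed downtime; negative means the SLO is blown."""
    return error_budget_minutes(slo_target, window_days) - downtime_minutes

# A 99.9% SLO over 30 days allows roughly 43 minutes of downtime...
print(round(error_budget_minutes(0.999), 1))    # ~43.2
# ...so a single 30-minute incident consumes most of it.
print(round(budget_remaining(0.999, 30.0), 1))  # ~13.2
```

The "slow down deploys once the budget runs out" policy Lex questions is typically layered on top of exactly this calculation.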
And for folks at a certain point of their development along whatever maturity model or maturity curve you want to talk about, it becomes extraordinarily useful. And at some point, it feels like the path that a given company is on will deviate from that. And, on some level, if you don't wind up addressing it, it turns into what it seems like Agile did, where you wind up with the Cult of Agile around it and the entire purpose of it is to perpetuate the Cult of Agile.And I don't know that I'm necessarily willing to go so far as to say that's where SLOs are headed right now, but I'm starting to get the same sort of feeling around the early days of the formalization of frameworks like that, and the ex cathedra proclamation that this is right for everyone. So, I'm starting to wonder whether there's a reckoning, in that sense, coming down the road. I'm fortunate that I don't run anything that's production-facing, so for me, it's, I don't have to care about these things. Mostly.Lex: Yeah. I mean, we are in… we're in 2023. Things have come so much further than when I was a kid. I have a little computer in my pocket. Yeah, you know, “Hey, math teacher, turns out yeah, we do carry calculators around with us wherever we go.” We've built all these huge, complicated systems online and built our entire society around them.We're still in our infancy. We still don't know what we're doing. We're still feeling out what SRE even is, if it even makes sense, and I think there's—yeah, there's going to be more evolution. I mean, there's been the, like, what is DevOps and people coining the term DevOps and then getting, you know, almost immediately subsumed or turned into whatever other people want. Same thing for observability.I think same thing for SRE. So honestly, I'm feeling it out as I go and I think we all are. And I don't think anyone really knows what we're doing. And I think that the moment we feel like we do is probably where we're in trouble. Because this is all just so new. Look where we were even 40 years, 30, even 20 years ago. We've come really far.Corey: For me, one of the things that concerns slash scares me has been that once someone learns something and it becomes rote, it sort of crystallizes in amber within their worldview, and they don't go back and figure out, “Okay, is this still the right approach?” Or, “Has the thing that I know changed?” And I see this on a constant basis just because I'm working with AWS so often. And there are restrictions and things you cannot do and constraints that the cloud provider imposes on you. Until one day, that thing that was impossible is now possible and supported.But people don't keep up with that so they still operate under the model of what used to be. I still remember a year or so after they raised the global per-resource tag limit to 50, I was seeing references to only ten tags being allowed per resource in the AWS console because not even internal service teams are allowed to talk to each other over there, apparently. And if they can't keep it straight internally, what hope do the rest of us have? It's the same problem of once you get this knowledge solidified, it's hard to keep current and adapt to things that are progressing. Especially in tech where things are advancing so rapidly and so quickly.Lex: Yeah, I gather things are a little feudalistic over there inside AWS, although I've never worked there, so I don't know. But it's also just so big. 
I mean, there's just—like, do you even know all of the—like, I challenge you to go through the list of services. I bet you're going to find one you don't know about. You know, the AWS services. Maybe that's a challenge I would lose, but it's so hard to keep track of all this stuff with how fast it's changing that I don't blame people for not getting that.Corey: I would agree. We've long since passed the point where I can talk incredibly convincingly about AWS services that do not exist and not get called out on it by AWS employees. Because who would just go and make something up like that? That would be psychotic. No one in their right mind would do it.“Hi, I'm Corey, we haven't met yet. But you're going to remember this, whether I want you to or not because I make an impression on people. Oops.”Lex: Yeah. Mr. AWS Snark. You're exactly who I would expect to do that. And then there was Hunter, what's his name? The guy who made the—[singing] these are the many services of AWS—song. That was pretty great, too.Corey: Oh, yeah. Forrest Brazeal. He was great. I loved having him in the AWS community. And then he took a job, head of content over at Google Cloud. It's, well, suddenly, you can't very well make fun of AWS anymore, not without it taking a very different tone. So, I feel like that's our collective loss.Lex: Yeah, definitely. But yeah, I feel like we've done amazing things as a society, but the problem is that we're still, like, at the level of, we don't know how to program the VCR as far as, like, trying to run reliable services. It's really hard to build a complex system that, by its nature of being useful for customers, must increase in complexity. Trying to run that reliably is hugely difficult and trying to do so profitably is almost impossible.And then I look at how hard that is and then I look at people trying to make self-driving cars. And I think that I will never set foot in one of those things until I see us getting good at running reliable services. Because if we can't do this with all of these people involved, how do I expect that a little car is going to be—that they're going to be able to produce a car that can drive and understand the complexities of navigating around and all the hazards that are involved to keep me safe.Corey: It's wild to me. The more I learned about the internet, the more surprised I am that any of it works at all. It's like, “Well, at least you're only using it for ridiculous things like cat pictures, right?” “Oh, no, no, no. We do emergency services and banking and insurance on top of that, too.” “Oh, good. I'm sure that won't end horribly one day.”Lex: Right? Yeah. I mean, you look at, like—you look at how much of a concerted effort towards safety they've had to put in, in the aviation industry to go from where they were in the '70s and '80s to where we are now where it's so incredibly safe. We haven't made that kind of full industry push toward reliability and safety. And it's going to have to happen soon as more and more of the services we're building are, exactly as you say, life-critical.
But the idea that, “Well, our ad network needs to have the same rigor and discipline applied to it as the life support system,” maybe that's the wrong framing.Lex: Or maybe it's not. I keep finding instances of situations—maybe not necessarily ad networks, although I wouldn't put it past them—but situations where a system that we're dealing with becomes life-critical when we had no idea that it possibly could. So, for example, a couple companies back, there was this billing situation where a vendor of ours accidentally billed our customers incorrectly and wiped bank accounts, and real people were unable to make their mortgage payments and unable to, like, their bank accounts were empty, so they couldn't buy food. Like, that's starting to become life-critical and it all came down to a single, like, this could have been any outage at any company. And that's going to happen more and more, I think.Corey: I really want to thank you for taking time to speak with me. If people want to learn more, where's the best place for them to find you?Lex: sreweekly.com. You can subscribe there. Thank you so much for having me on. It has been a real treat.Corey: It really has. You'll have to come back and we'll find other topics to talk about, I'm sure, in the very near future. Thank you so much for your time. I appreciate it.Lex: Thanks.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
This week we discuss digital transformation at Southwest and Delta Airlines, Shopify cancels all meetings, Salesforce's M&A strategy, and A.I. is everywhere. Plus, thoughts on bike lanes… Watch the YouTube Live Recording of Episode 396 (https://youtu.be/tmm8rH9fZEE) Runner-up Titles Work trying to get on my personal calendar Traveling with an infant =BLACKSWAN(A1:G453) Socks in a Costco Can't do the business case on savings until you loose it. Pay transparency for you, not me We don't pay for things on the Internet Semper Nimbus Privatus Rundown Dutch residents are the most physically active on earth, (https://twitter.com/BrentToderian/status/1611901297552396289) Digital Transformation Travel Edition Delta plans to offer free Wi-Fi starting Feb. 1 (https://www.cnbc.com/2023/01/05/delta-plans-to-offer-free-wi-fi-starting-feb-1.html) The Southwest Airlines Meltdown (https://www.nytimes.com/2023/01/10/podcasts/the-daily/the-southwest-airlines-meltdown.html) Southwest's Meltdown Could Cost It Up to $825 Million (https://www.nytimes.com/2023/01/06/business/southwest-airlines-meltdown-costs-reimbursement.html) Southwest pilots union writes scathing letter to airline executives after holiday travel fiasco (https://www.yahoo.com/now/southwest-pilots-union-writes-scathing-011720946.html) Southwest makes frequent flyer miles offer while lots of luggage remains in limbo (https://www.cnn.com/travel/article/southwest-airlines-frequent-flyer-miles-meltdown/index.html) Point of Sale: Scan and Pay (https://twitter.com/pitdesi/status/1602843962602975233?s=20&t=YdGNYzReSf4r1twJ1hRfbA) Work Life Shopify Tells Employees to Just Say No to Meetings (https://www.bloomberg.com/news/articles/2023-01-03/shopify-ceo-tobi-lutke-tells-employees-to-just-say-no-to-meetings) Netflix Revokes Some Staff's Access to Other People's Salary Information (https://apple.news/A--bGmZgJTQCgHQ-9QdWu4w) U.S. Moves to Bar Noncompete Agreements in Labor Contracts (https://www.nytimes.com/2023/01/05/business/economy/ftc-noncompete.html) Gartner HR expert: Quiet hiring will dominate U.S. 
workplaces in 2023 (https://www.cnbc.com/2023/01/04/gartner-hr-expert-quiet-hiring-will-dominate-us-workplaces-in-2023.html) Netflix revokes some staff's access to other people's salary information (https://www.marketwatch.com/story/netflix-revokes-some-staffs-access-to-other-peoples-salary-information-11673384493) SFDC Salesforce: There's no more Slack left to cut (https://www.theregister.com/2023/01/10/salesforce_comment/) Salesforce to Lay Off 10 Percent of Staff and Cut Office Space (https://www.nytimes.com/2023/01/04/business/salesforce-layoffs.html) After layoffs, Salesforce CEO still blasts worker productivity (https://www.sfgate.com/tech/article/salesforce-ceo-blasts-worker-productivity-17708474.php) AI is everywhere Google execs warn company's reputation could suffer if it moves too fast on AI-chat technology (https://www.cnbc.com/2022/12/13/google-execs-warn-of-reputational-risk-with-chatgbt-like-tool.html) Microsoft and OpenAI Working on ChatGPT-Powered Bing in Challenge to Google (https://www.theinformation.com/articles/microsoft-and-openai-working-on-chatgpt-powered-bing-in-challenge-to-google?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top) Microsoft eyes $10 billion bet on ChatGPT (https://www.semafor.com/article/01/09/2023/microsoft-eyes-10-billion-bet-on-chatgpt) Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT (https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/) Relevant to your Interests 2023 Bum Steer of the Year: Austin (https://www.texasmonthly.com/news-politics/2023-bum-steer-of-year-austin/) Twitter's Rivals Try to Capitalize on Musk-Induced Chaos (https://www.nytimes.com/2022/12/07/technology/twitter-rivals-alternative-platforms.html) On Organizational Structures and the Developer Experience (https://redmonk.com/sogrady/2022/12/13/org-structure-devx/) KubeCon + CloudNativeCon North America 2022 Transparency Report | Cloud Native Computing Foundation (https://www.cncf.io/reports/kubecon-cloudnativecon-north-america-2022-transparency-report/) Inside the chaos at Washington's most connected military tech startup (https://www.vox.com/recode/23507236/inside-disruption-rebellion-defense-washington-connected-military-tech-startup) Elon Musk Starts Week As World's Second Richest Person (https://www.forbes.com/sites/mattdurot/2022/12/12/elon-musk-starts-week-as-worlds-second-richest-person/) 10 Tesla Investors Lose $132.5 Billion From Musk's Twitter Fiasco (https://www.investors.com/etfs-and-funds/sectors/tesla-stock-investors-lose-132-5-billion-from-musks-twitter-fiasco/) Rackspace's ransomware messaging dilemma (https://www.axios.com/newsletters/axios-login-83146574-380f-4e37-965d-7fd79bce7278.html?chunk=2&utm_term=emshare#story2) Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023 (https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/) A MultiCloud Rant (https://www.lastweekinaws.com/blog/a_multicloud_rant/) Great visualization of the revenue breakdown of the 4 largest tech companies. 
(https://twitter.com/Carnage4Life/status/1603012861017862144?s=20&t=HC2UuMCHBB408xae6tZpbQ) AG Paxton's Google Suit Makes the Perfect the Enemy of the Good (https://truthonthemarket.com/2022/12/14/ag-paxtons-google-suit-makes-the-perfect-the-enemy-of-the-good/) AWS simplifies Simple Storage Service to prevent data leaks (https://www.theregister.com/2022/12/14/aws_simple_storage_service_simplified/) Creating the ultimate smart map with new map data initiative launched by Linux Foundation (https://venturebeat.com/virtual/creating-the-ultimate-smart-map-with-new-map-data-initiative-launched-by-linux-foundation/) Spotify's grand plan to monetize developers via its open source Backstage project (https://techcrunch.com/2022/12/15/spotifys-plan-to-monetize-its-open-source-backstage-developer-project/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cubGlua2VkaW4uY29tLw&guce_referrer_sig=AQAAAAlyOmdhogtX6nuQkNHQ7mVSyci6aMv7X6QwRTvS9PHGJmjO_wjCqsJXXPKI36A9MkIclSIQoHQ_dz7wJ-WzfaYQT_clMcUijiC28ZQhEau4NOcU-70wy5m0Q9LLmtvWuQbWQQEccEbQH2Lvg4_GqfnQBYNPZWRcgpx7XMLas_2R) VMware offers subs for server consolidation vSphere cut (https://www.theregister.com/2022/12/15/vsphere_plus_standard/) Senior execs to leave VMware before acquisition by Broadcom (https://www.bizjournals.com/sanjose/news/2022/12/13/three-senior-execs-to-leave-vmware.html#:~:text=Mark%20Lohmeyer%2C%20who%20heads%20cloud,Raghuram%20announced%20in%20a%20memo) China Bans Exports of Loongson CPUs to Russia, Other Countries: Report (https://www.tomshardware.com/news/china-bans-exports-of-its-loongson-cpus-to-russia-other-countries) Dropbox buys form management platform FormSwift for $95M in cash (https://techcrunch.com/2022/12/16/dropbox-buys-form-management-platform-formswift-for-95m-in-cash/) Sweep, a no-code config tool for Salesforce software, raises $28M (https://techcrunch.com/2022/12/15/sweep-a-no-code-config-tool-for-salesforce-software-raises-28m/) Twitter Aided the Pentagon in its Covert Online Propaganda Campaign (https://theintercept.com/2022/12/20/twitter-dod-us-military-accounts/) Okta's source code stolen after GitHub repositories hacked (https://www.bleepingcomputer.com/news/security/oktas-source-code-stolen-after-github-repositories-hacked/) Workday appoints VMware veteran as co-CEO (https://www.theregister.com/2022/12/21/workday_co_ceo/) Top Paying Tools (https://softwaredefinedtalk.slack.com/archives/C04EK1VBK/p1671635825838769) Winging It: Inside Amazon's Quest to Seize the Skies (https://www.wired.com/story/amazon-air-quest-to-seize-the-skies/) CIS Benchmark Framework Scanning Tools Comparison (https://www.armosec.io/blog/cis-kubernetes-benchmark-framework-scanning-tools-comparison/) MSG defends using facial recognition to kick lawyer out of Rockettes show (https://arstechnica.com/tech-policy/2022/12/facial-recognition-flags-girl-scout-mom-as-security-risk-at-rockettes-show/) OpenAI releases Point-E, an AI that generates 3D models (https://techcrunch.com/2022/12/20/openai-releases-point-e-an-ai-that-generates-3d-models/) No, You Haven't Won a Yeti Cooler From Dick's Sporting Goods (https://www.wired.com/story/email-scam-dicks-sporting-goods-yeti-cooler/) The Lastpass hack was worse than the company first reported (https://www.engadget.com/the-lastpass-hack-was-worse-than-the-company-first-reported-000501559.html?utm_source=facebook&utm_medium=news_tab) IRS delays tax reporting change for 1099-K on Venmo, Paypal business payments 
(https://www.cnbc.com/2022/12/23/irs-delays-tax-reporting-change-for-1099-k-on-venmo-paypal-payments.html) Cyber attacks set to become ‘uninsurable', says Zurich chief (https://www.ft.com/content/63ea94fa-c6fc-449f-b2b8-ea29cc83637d) Google Employees Brace for a Cost-Cutting Drive as Anxiety Mounts (https://www.nytimes.com/2022/12/28/technology/google-job-cuts.html) IBM beat all its large-cap tech peers in 2022 as investors shunned growth for safety (https://www.cnbc.com/2022/12/27/ibm-stock-outperformed-technology-sector-in-2022.html) Europe Taps Tech's Power-Hungry Data Centers to Heat Homes (https://www.wsj.com/articles/europe-taps-techs-power-hungry-data-centers-to-heat-homes-11672309944?mod=djemalertNEWS) List of defunct social networking services (https://en.wikipedia.org/wiki/List_of_defunct_social_networking_services) 2023 Predictions | No Mercy / No Malice (https://www.profgalloway.com/2023-predictions/) Twitter rival Mastodon rejects funding to preserve nonprofit status (https://arstechnica.com/tech-policy/2022/12/twitter-rival-mastodon-rejects-funding-to-preserve-nonprofit-status/) TSMC Starts Next-Gen Mass Production as World Fights Over Chips (https://www.bloomberg.com/news/articles/2022-12-29/tsmc-mass-produces-next-gen-chips-to-safeguard-global-lead) Microsoft and FTC pre-trial hearing set for January 3rd (https://www.engadget.com/pre-trial-hearing-between-microsoft-and-ftc-set-for-january-3rd-203320387.html) The infrastructure behind ATMs (https://www.bitsaboutmoney.com/archive/the-infrastructure-behind-atms/) Apple is increasing battery replacement service charges for out-of-warranty devices (https://techcrunch.com/2023/01/03/apple-is-increasing-battery-replacement-service-charges-for-out-of-warranty-devices/) Snowflake's business and how the weakening economy is impacting cloud vendors (https://twitter.com/LiebermanAustin/status/1607376944873754626) Shift Happens: A book about keyboards (https://shifthappens.site/) Amazon to cut 18,000 jobs (https://www.axios.com/2023/01/05/amazon-layoffs-18000-jobs) CircleCI security alert: Rotate any secrets stored in CircleCI (https://circleci.com/blog/january-4-2023-security-alert/) Video game workers form Microsoft's first U.S. labor union (https://www.nbcnews.com/tech/tech-news/video-game-workers-form-microsofts-first-us-labor-union-rcna64103) World's Premier Investors Line Up to Partner with Netskope as the SASE Security and Networking Platform of Choice (https://www.prnewswire.com/news-releases/worlds-premier-investors-line-up-to-partner-with-netskope-as-the-sase-security-and-networking-platform-of-choice-301712417.html) omg.lol - A lovable web page and email address, just for you (https://home.omg.lol/) Alphabet led a $100 million funding of Chronosphere, a startup that helps companies monitor and cut cloud bills. (https://twitter.com/theinformation/status/1611165698868367360) Confluent expands Kafka Streams capabilities, acquires Apache Flink vendor (https://venturebeat.com/enterprise-analytics/confluent-acquires-apache-flink-vendor-immerok-to-expand-data-stream-processing/) Excel & Google Sheets AI Formula Generator - Excelformulabot.com (https://excelformulabot.com/) Has the Internet Reached Peak Clickability? 
(https://tedgioia.substack.com/p/has-the-internet-reached-peak-clickability) Adobe's CEO Sizes Up the State of Tech Now (https://www.wsj.com/articles/adobes-ceo-sizes-up-the-state-of-tech-now-11673151167?mod=djemalertNEWS) Researchers Hacked California's Digital License Plates, Gaining Access to GPS Location and User Info (https://jalopnik.com/researchers-hacked-californias-digital-license-plates-1849966295) Microsoft's New AI Can Simulate Anyone's Voice With 3 Seconds of Audio (https://slashdot.org/story/23/01/10/0749241/microsofts-new-ai-can-simulate-anyones-voice-with-3-seconds-of-audio?utm_source=slashdot&utm_medium=twitter) Observability platform Chronosphere raises another $115M at a $1.6B valuation (https://techcrunch.com/2023/01/10/observability-platform-chronosphere-raises-another-115m-at-a-1-6b-valuation/) Why IBM is no longer interested in breaking patent records–and how it plans to measure innovation in the age of open source and quantum computing (https://fortune.com/2023/01/06/ibm-patent-record-how-to-measure-innovation-open-source-quantum-computing-tech/) New research aims to analyze how widespread COBOL is (https://www.theregister.com/2022/12/14/cobol_research/) Companies are still waiting for their cloud ROI (https://www.infoworld.com/article/3675374/companies-are-still-waiting-for-their-cloud-roi.html) What TNS Readers Want in 2023: More DevOps, API Coverage (https://thenewstack.io/what-tns-readers-want-in-2023-more-devops-api-coverage/) Tech Debt Yo-Yo Cycle. (https://twitter.com/wardleymaps/status/1605860426671177728) How a single developer dropped AWS costs by 90%, then disappeared (https://scribe.rip/@maximetopolov/how-a-single-developer-dropped-aws-costs-by-90-then-disappeared-2b46a115103a) A look at the 2022 velocity of CNCF, Linux Foundation, and top 30 open source projects (https://www.cncf.io/blog/2023/01/11/a-look-at-the-2022-velocity-of-cncf-linux-foundation-and-top-30-open-source-projects/) The golden age of the streaming wars has ended (https://www.theverge.com/2022/12/14/23507793/streaming-wars-hbo-max-netflix-ads-residuals-warrior-nun) YouTube exec says NFL Sunday Ticket will have multiscreen functionality (https://awfulannouncing.com/youtube/nfl-sunday-ticket-multiscreen-mosaic-mode.html) Nonsense The $11,500 toilet with Alexa inside can now be put inside your home (https://www.theverge.com/2022/12/19/23510864/kohler-numi-smart-toilet-alexa-ces-2022) Starbucks updating its loyalty program starting in February (https://www.axios.com/2022/12/28/starbucks-rewards-program-changes-coming) The revenue model of a popular YouTube channel about Lego. (https://paper.dropbox.com/doc/SDT-396--BwhY9F5kpz_BI2kkdw63ZpJ~Ag-MVMKwqqBEH5SzYKqYO2Jc) Conferences THAT Conference Texas Speakers and Schedule (https://that.us/events/tx/2023/schedule/), Round Rock, TX Jan 15th-18th Use code SDT for 5% off SpringOne (https://springone.io/), Jan 24–26. Coté speaking at cfgmgmtcamp (https://cfgmgmtcamp.eu/ghent2023/), Feb 6th to 8th, Ghent. 
State of Open Con 2023, (https://stateofopencon.com/sponsors/) London, UK, February 7th-8th 2023 CloudNativeSecurityCon North America (https://events.linuxfoundation.org/cloudnativesecuritycon-north-america/), Seattle, Feb 1 – 2, 2023 Southern California Linux Expo, (https://www.socallinuxexpo.org/scale/20x) Los Angeles, March 9-12, 2023 DevOpsDays Birmingham, AL 2023 (https://devopsdays.org/events/2023-birmingham-al/welcome/), April 20 - 21, 2023 SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Get a SDT Sticker! Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! Follow us on Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), LinkedIn (https://www.linkedin.com/company/software-defined-talk/) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured). Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)! Recommendations Brandon: Industrial Garage Shelves (https://www.homedepot.com/p/Husky-5-Tier-Industrial-Duty-Steel-Freestanding-Garage-Storage-Shelving-Unit-in-Black-90-in-W-x-90-in-H-x-24-in-D-N2W902490W5B/319132842) Matt: Oxide and Friends: Breaking it down with Ian Brown (https://oxide.computer/podcasts/oxide-and-friends/1150480) Wu Tang Saga (https://www.imdb.com/title/tt9113406/) Season 3 coming next month! Coté: Mouth to Mouth (https://www.goodreads.com/en/book/show/58438631-mouth-to-mouth) by Antoine Wilson (https://www.goodreads.com/en/book/show/58438631-mouth-to-mouth). Photo Credits Header (https://unsplash.com/photos/euaDCtB_jyw) CoverArt (https://unsplash.com/photos/9xdho4stJQ8)
About RamDr. Ram Sriharsha held engineering, product management, and VP roles at the likes of Yahoo, Databricks, and Splunk. At Yahoo, he was both a principal software engineer and then research scientist; at Databricks, he was the product and engineering lead for the unified analytics platform for genomics; and, in his three years at Splunk, he played multiple roles including Sr Principal Scientist, VP Engineering and Distinguished Engineer.Links Referenced: Pinecone: https://www.pinecone.io/ XKCD comic: https://www.explainxkcd.com/wiki/index.php/1425:_Tasks TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. Tired of observability costs going up every year without getting additional value? Or being locked into a vendor due to proprietary data collection, querying, and visualization? Modern-day, containerized environments require a new kind of observability technology that accounts for the massive increase in scale and attendant cost of data. With Chronosphere, choose where and how your data is routed and stored, query it easily, and get better context and control. 100% open-source compatibility means that no matter what your setup is, they can help. Learn how Chronosphere provides complete and real-time insight into ECS, EKS, and your microservices, wherever they may be at snark.cloud/chronosphere that's snark.cloud/chronosphere.Corey: This episode is brought to you in part by our friends at Veeam. Do you care about backups? Of course you don't. Nobody cares about backups. Stop lying to yourselves! You care about restores, usually right after you didn't care enough about backups. If you're tired of the vulnerabilities, costs, and slow recoveries when using snapshots to restore your data, assuming you even have them at all living in AWS-land, there is an alternative for you. Check out Veeam, that's V-E-E-A-M for secure, zero-fuss AWS backup that won't leave you high and dry when it's time to restore. Stop taking chances with your data. Talk to Veeam. My thanks to them for sponsoring this ridiculous podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's promoted guest episode is brought to us by our friends at Pinecone and they have given their VP of Engineering and R&D over to suffer my various sling and arrows, Ram Sriharsha. Ram, thank you for joining me.Ram: Corey, great to be here. Thanks for having me.Corey: So, I was immediately intrigued when I wound up seeing your website, pinecone.io because it says right at the top—at least as of this recording—in bold text, “The Vector Database.” And if there's one thing that I love, it is using things that are not designed to be databases as databases, or inappropriately referring to things—be they JSON files or senior engineers—as databases as well. What is a vector database?Ram: That's a great question. And we do use this term correctly, I think. You can think of customers of Pinecone as having all the data management problems that they have with traditional databases; the main difference is twofold. One is there is a new data type, which is vectors. 
Vectors, you can think of them as arrays of floats, floating point numbers, and there is a new pattern of use cases, which is search.And what you're trying to do in vector search is you're looking for the nearest, the closest vectors to a given query. So, these two things fundamentally put a lot of stress on traditional databases. So, it's not like you can take a traditional database and make it into a vector database. That is why we coined this term vector database and we are building a new type of vector database. But fundamentally, it has all the database challenges on a new type of data and a new query pattern.Corey: Can you give me an example of what, I guess, an idealized use case would be of what the data set might look like and what sort of problem you would have that a vector database would solve?Ram: A very great question. So, one interesting thing is there's many, many use cases. I'll just pick the most natural one which is text search. So, if you're familiar with Elastic or any other traditional text search engines, you have pieces of text, you index them, and the indexing that you do is traditionally an inverted index, and then you search over this text. And what this sort of search engine does is it matches for keywords.So, if it finds a keyword match between your query and your corpus, it's going to retrieve the relevant documents. And this is what we call text search, right, or keyword search. You can do something similar with technologies like Pinecone, but what you do here is instead of searching over text, you're searching over vectors. Now, where do these vectors come from? They come from taking deep-learning models, running your text through them, and these generate these things called vector embeddings.And now, you're taking a query as well, running it through deep-learning models, generating these query embeddings, and looking for the closest record embeddings in your corpus that are similar to the query embeddings. This notion of proximity in this space of vectors tells you something about semantic similarity between the query and the text. So suddenly, you're going beyond keyword search into semantic similarity. An example is if you had a whole lot of text data, and maybe you were looking for ‘soda,' and you were doing keyword search. Keyword search will only match on variations of soda. It will never match ‘Coca-Cola' because Coca-Cola and soda have nothing to do with each other.Corey: Or Pepsi, or pop, as they say in the American Midwest.Ram: Exactly.Corey: Yeah.Ram: Exactly. However, semantic search engines can actually match the two because they're matching for intent, right? If they find in this piece of text, enough intent to suggest that soda and Coca-Cola or Pepsi or pop are related to each other, they will actually match those and score them higher. And you're very likely to retrieve those sorts of candidates that traditional search engines simply cannot. So, this is a canonical example, what's called semantic search, and it's known to be done better by these other vector search engines. There are also other examples in say, image search. Just if you're looking for near duplicate images, you can't even do this today without a technology like vector search.Corey: What is the, I guess, translation or conversion process of an existing dataset into something that a vector database could use? Because you mentioned that an array of floats was the natural vector datatype. 
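To make "closeness in the space of vectors" concrete before the conversation moves on to ingestion, here is a toy sketch. The three-dimensional vectors are invented for readability; real embeddings from a deep-learning model have hundreds or thousands of dimensions, and the actual scores depend entirely on the model.

```python
# Toy illustration of semantic proximity via cosine similarity between vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 3-d stand-ins for what an embedding model might produce.
soda      = np.array([0.9, 0.2, 0.1])
coca_cola = np.array([0.8, 0.3, 0.1])   # close to "soda" in embedding space
bicycle   = np.array([0.1, 0.1, 0.9])   # an unrelated concept

print(cosine_similarity(soda, coca_cola))  # high score -> retrieved
print(cosine_similarity(soda, bicycle))    # low score  -> not retrieved
```

A keyword index would never connect "soda" and "Coca-Cola"; proximity between their embeddings is what lets a semantic search engine score that pair highly.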
I don't think I've ever seen even the most arcane markdown implementation that expected people to wind up writing in arrays of floats. What does that look like? How do you wind up, I guess, internalizing or ingesting existing bodies of text for your example use case?Ram: Yeah, this is a very great question. This used to be a very hard problem and what has happened over the last several years in deep-learning literature, as well as in deep-learning as a field itself, is that there have been these large, publicly trained models, examples will be OpenAI, examples will be the models that are available in Hugging Face like Cohere, and a large number of these companies have come forward with very well trained models through which you can pass pieces of text and get these vectors. So, you no longer have to actually train these sort of models, you don't have to really have the expertise to deeply figured out how to take pieces of text and build these embedding models. What you can do is just take a stock model, if you're familiar with OpenAI, you can just go to OpenAIs homepage and pick a model that works for you, Hugging Face models, and so on. There's a lot of literature to help you do this.Sophisticated customers can also do something called fine-tuning, which is built on top of these models to fine-tune for their use cases. The technology is out there already, there's a lot of documentation available. Even Pinecone's website has plenty of documentation to do this. Customers of Pinecone do this [unintelligible 00:07:45], which is they take piece of text, run them through either these pre-trained models or through fine-tuned models, get the series of floats which represent them, vector embeddings, and then send it to us. So, that's the workflow. The workflow is basically a machine-learning pipeline that either takes a pre-trained model, passes them through these pieces of text or images or what have you, or actually has a fine-tuning step in it.Corey: Is that ingest process something that not only benefits from but also requires the use of a GPU or something similar to that to wind up doing the in-depth, very specific type of expensive math for data ingestion?Ram: Yes, very often these run on GPUs. Sometimes, depending on budget, you may have compressed models or smaller models that run on CPUs, but most often they do run on GPUs, most often, we actually find people make just API calls to services that do this for them. So, very often, people are actually not deploying these GPU models themselves, they are maybe making a call to Hugging Face's service, or to OpenAI's service, and so on. And by the way, these companies also democratized this quite a bit. It was much, much harder to do this before they came around.Corey: Oh, yeah. I mean, I'm reminded of the old XKCD comic from years ago, which was, “Okay, I want to give you a picture. And I want you to tell me it was taken within the boundaries of a national park.” Like, “Sure. Easy enough. Geolocation information is attached. It'll take me two hours.” “Cool. And I also want you to tell me if it's a picture of a bird.” “Okay, that'll take five years and a research team.”And sure enough, now we can basically do that. The future is now and it's kind of wild to see that unfolding in a human perceivable timespan on these things. But I guess my question now is, so that is what a vector database does? What does Pinecone specifically do? 
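As one concrete version of the text-to-vectors workflow Ram just described, here is a short sketch. It assumes the open-source sentence-transformers library and the all-MiniLM-L6-v2 checkpoint, neither of which is named in the episode; a hosted API from OpenAI or Cohere would slot into the same step.

```python
# Sketch of the text -> embeddings step; the resulting arrays of floats are
# what gets upserted into, and queried against, a vector database.
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # produces 384-dimensional vectors

corpus = [
    "Coca-Cola launches a new bottle design",
    "City council approves a protected bike lane",
]
corpus_embeddings = model.encode(corpus)           # shape: (2, 384)
query_embedding   = model.encode(["soda brands"])  # shape: (1, 384)

print(corpus_embeddings.shape, query_embedding.shape)
```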
It turns out that as much as I wish it were otherwise, not a lot of companies are founded on, “Well, we have this really neat technology, so we're just going to be here, well, in a foundational sense to wind up ensuring the uptake of that technology.” No, no, there's usually a monetization model in there somewhere. Where does Pinecone start, where does it stop, and how does it differentiate itself from typical vector databases? If such a thing could be said to exist yet.Ram: Such a thing doesn't exist yet. We were the first vector database, so in a sense, building this infrastructure, scaling it, and making it easy for people to operate it in a SaaS fashion is our primary core product offering. On top of that, we very recently started also enabling people who actually have raw text to not just be able to get value from these vector search engines and so on, but also be able to take advantage of what we call traditional keyword search or sparse retrieval and do a combined search better, in Pinecone. So, there's value-add on top of this that we do, but I would say the core of it is building a SaaS managed platform that allows people to actually easily store their data, scale it, query it in a way that's very hands off and doesn't require a lot of tuning or operational burden on their side. This is, like, our core value proposition.Corey: Got it. There's something to be said for making something accessible when previously it had only really been available to people who completed the Hello World tutorial—which generally resembled a doctorate at Berkeley or Waterloo or somewhere else—and turning it into something that's fundamentally, click the button. Where on that, I guess, spectrum of evolution do you find that Pinecone is today?Ram: Yeah. So, you know, prior to Pinecone, we didn't really have this notion of a vector database. For several years, we've had libraries that are really good that you can pre-train on your embeddings, generate this thing called an index, and then you can search over that index. There is still a lot of work to be done even to deploy that and scale it and operate it in production and so on. Even that was not being, kind of, offered as a managed service before.What Pinecone does, which is novel, is you no longer have to have this pre-training be done by somebody, you no longer have to worry about when to retrain your indexes, what to do when you have new data, what to do when there are deletions, updates, and the usual data management operations. You can just think of this as, like, a database that you just throw your data in. It does all the right things for you, you just worry about querying. This has never existed before, right? This is—it's not even like we are trying to make the operational part of something easier. It is that we are offering something that hasn't existed before, at the same time, making it operationally simple.So, we're solving two problems, which is we're building a better database that hasn't existed before. So, if you really had these sorts of data management problems and you wanted to build an index that was fresh that you didn't have to super manually tune for your own use cases, that simply couldn't have been done before. 
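To picture the "throw your data in and just worry about querying" experience Ram describes, here is a deliberately simplified, hypothetical interface. It is not Pinecone's actual SDK; the class and method names are invented, and a real managed index does the indexing, freshness, and scaling work server-side rather than with the in-memory scan used here.

```python
# Hypothetical, in-memory stand-in for a managed vector index (illustrative only).
import numpy as np

class ToyVectorIndex:
    def __init__(self, dimension: int):
        self.dimension = dimension
        self.vectors: dict[str, np.ndarray] = {}  # id -> vector

    def upsert(self, item_id: str, vector: list[float]) -> None:
        """Insert or update a vector; the change is visible to the next query."""
        self.vectors[item_id] = np.asarray(vector, dtype=float)

    def delete(self, item_id: str) -> None:
        self.vectors.pop(item_id, None)

    def query(self, vector: list[float], top_k: int = 3) -> list[str]:
        """Return ids of the top_k nearest stored vectors by cosine similarity."""
        q = np.asarray(vector, dtype=float)
        scored = sorted(
            self.vectors.items(),
            key=lambda kv: -float(np.dot(kv[1], q) /
                                  (np.linalg.norm(kv[1]) * np.linalg.norm(q))),
        )
        return [item_id for item_id, _ in scored[:top_k]]

index = ToyVectorIndex(dimension=3)
index.upsert("doc-1", [0.9, 0.2, 0.1])
index.upsert("doc-2", [0.1, 0.1, 0.9])
print(index.query([0.8, 0.3, 0.1], top_k=1))  # ['doc-1']
```

The point of the managed service, as described in the conversation, is that upserts, deletes, and updates like these stay reflected in query results without the user retraining or retuning anything.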
But at the same time, we are doing all of this in a cloud-native fashion; it's easy for you to just operate and not worry about.Corey: You've said that this hasn't really been done before, but this does sound like it is more than passingly familiar specifically to the idea of nearest neighbor search, which has been around since the '70s in a bunch of different ways. So, how is it different? And let me of course, ask my follow-up to that right now: why is this even an interesting problem to start exploring?Ram: This is a great question. First of all, nearest neighbor search is one of the oldest forms of machine learning. It's been known for decades. There's a lot of literature out there, there are a lot of great libraries as I mentioned in passing before. All of these problems have primarily focused on static corpuses. So basically, you have a set of some amount of data, you want to create an index out of it, and you want to query it.A lot of literature has focused on this problem. Even there, once you go from a small number of dimensions to a large number of dimensions, things become computationally far more challenging. So, traditional nearest neighbor search actually doesn't scale very well. What do I mean by large number of dimensions? Today, deep-learning models that produce image representations typically operate in 2048 dimensions of photos [unintelligible 00:13:38] dimensions. Some of the OpenAI models are even 10,000 dimensional and above. So, these are very, very large dimensions.Most of the literature prior to maybe even less than ten years back has focused on less than ten dimensions. So, it's like a scale apart in dealing with small dimensional data versus large dimensional data. But even as of a couple of years back, there hasn't been enough, if any, focus on what happens when your data rapidly evolves. For example, what happens when people add new data? What happens if people delete some data? What happens if your vectors get updated? These aren't just theoretical problems; they happen all the time. Customers of ours face this all the time.In fact, the classic example is in recommendation systems where user preferences change all the time, right, and you want to adapt to that, which means your user vectors change constantly. When even these sorts of things change constantly, you want your index to reflect it because you want your queries to catch on to the most recent data. [unintelligible 00:14:33] have to reflect the recency of your data. This is a solved problem for traditional databases. Relational databases are great at solving this problem. A lot of work has been done for decades to solve this problem really well.This is a fundamentally hard problem for vector databases and that's one of the core focus areas [unintelligible 00:14:48] painful. Another problem that is hard for these sorts of databases is simple things like filtering. For example, you have a corpus of say product images and you want to only look at images that maybe are for the Fall shopping line, right? Seems like a very natural query. Again, databases have known and solved this problem for many, many years.The moment you do nearest neighbor search with these sorts of constraints, it's a hard problem. So, it's just the fact that nearest neighbor search and lots of research in this area has simply not focused on what happens when those sorts of techniques are combined with data management challenges, filtering, and all the traditional challenges of a database. 
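The filtering problem Ram raises ("only images from the Fall shopping line") is easy to state as a brute-force scan, which is what makes it a useful mental model; the hard part he is pointing at is getting the same behavior out of an approximate index that stays fresh under inserts, deletes, and updates. A minimal sketch of the brute-force version, with made-up data:

```python
# Brute-force filtered nearest-neighbor search (an O(n) scan, illustrative only).
import numpy as np

def filtered_top_k(query, vectors, metadata, k=3, **filters):
    """Indices of the k most similar vectors whose metadata matches the filters."""
    keep = [i for i, meta in enumerate(metadata)
            if all(meta.get(key) == value for key, value in filters.items())]
    if not keep:
        return []
    subset = vectors[keep]
    sims = subset @ query / (np.linalg.norm(subset, axis=1) * np.linalg.norm(query))
    best = np.argsort(-sims)[:k]
    return [keep[i] for i in best]

rng = np.random.default_rng(0)
vectors = rng.normal(size=(1_000, 128))                        # fake image embeddings
metadata = [{"line": "fall" if i % 2 else "spring"} for i in range(1_000)]
query = rng.normal(size=128)

print(filtered_top_k(query, vectors, metadata, k=3, line="fall"))
```

A full scan like this gives exact, always-fresh, filterable answers but doesn't scale; the database problem is keeping those properties while answering in milliseconds over much larger corpora.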
So, when you start doing that you enter a very novel area to begin with.Corey: This episode is sponsored in part by our friends at Redis, the company behind the incredibly popular open-source database. If you're tired of managing open-source Redis on your own, or if you are looking to go beyond just caching and unlocking your data's full potential, these folks have you covered. Redis Enterprise is the go-to managed Redis service that allows you to reimagine how your geo-distributed applications process, deliver and store data. To learn more from the experts in Redis how to be real-time, right now, from anywhere, visit snark.cloud/redis. That's snark dot cloud slash R-E-D-I-S.Corey: So, where's this space going, I guess is sort of the dangerous but inevitable question I have to ask. Because whenever you talk to someone who is involved in a very early stage of what is potentially a transformative idea, it's almost indistinguishable from someone who is whatever the polite term for being wrapped around their own axle is, in a technological sense. It's almost a form of reverse Schneier's Law of anyone can create an encryption algorithm that they themselves cannot break. So, the possibility that this may come back to bite us in the future if it turns out that this is not potentially the revelation that you see it as, where do you see the future of this going?Ram: Really great question. The way I think about it is, and the reason why I keep going back to databases and these sort of ideas is, we have a really great way to deal with structured data and structured queries, right? This is the evolution of the last maybe 40, 50 years is to come up with relational databases, come up with SQL engines, come up with scalable ways of running structured queries on large amounts of data. What I feel like this sort of technology does is it takes it to the next level, which is you can actually ask unstructured questions on unstructured data, right? So, even the couple of examples we just talked about, doing near duplicate detection of images, that's a very unstructured question. What does it even mean to say that two images are nearly duplicate of each other? I couldn't even phrase it as kind of a concrete thing. I certainly cannot write a SQL statement for it, but I cannot even phrase it properly.With these sort of technologies, with the vector embeddings, with deep learning and so on, you can actually mathematically phrase it, right? The mathematical phrasing is very simple once you have the right representation that understands your image as a vector. Two images are nearly duplicate if they are close enough in the space of vectors. Suddenly you've taken a problem that was even hard to express, let alone compute, made it precise to express, precise to compute. This is going to happen not just for images, not just for semantic search, it's going to happen for all sorts of unstructured data, whether it's time series, where it's anomaly detection, whether it's security analytics, and so on.I actually think that fundamentally, a lot of fields are going to get disrupted by this sort of way of thinking about things. We are just scratching the surface here with semantic search, in my opinion.Corey: What is I guess your barometer for success? I mean, if I could take a very cynical point of view on this, it's, “Oh, well, whenever there's a managed vector database offering from AWS.” They'll probably call it Amazon Basics Vector or something like that. 
Well, that is a—it used to be a snarky observation that, “Oh, we're not competing, we're just validating their market.” Lately, with some of their competitive database offerings, there's a lot more truth to that than I suspect AWS would like.Their offerings are nowhere near as robust as what they pretend to be competing against. How far away do you think we are from the larger cloud providers starting to say, “Ah, we got the sense there was money in here, so we're launching an entire service around this?”Ram: Yeah. I mean, this is a—first of all, this is a great question. There's always something that's constantly, things that any innovator or disrupter has to be thinking about, especially these days. I would say that having a multi-year head, start in the use cases, in thinking about how this system should even look, what sort of use cases should it [unintelligible 00:19:34], what the operating points for the [unintelligible 00:19:37] database even look like, and how to build something that's cloud-native and scalable, is very hard to replicate. Meaning if you look at what we have already done and kind of tried to base the architecture of that, you're probably already a couple of years behind us in terms of just where we are at, right, not just in the architecture, but also in the use cases in where this is evolving forward.That said, I think it is, for all of these companies—and I would put—for example, Snowflake is a great example of this, which is Snowflake needn't have existed if Redshift had done a phenomenal job of being cloud-native, right, and kind of done that before Snowflake did it. In hindsight, it seems like it's obvious, but when Snowflake did this, it wasn't obvious that that's where everything was headed. And Snowflake built something that's very technologically innovative, in a sense that it's even now hard to replicate. Plus, it takes a long time to replicate something like that. I think that's where we are at.If Pinecone does its job really well and if we simply execute efficiently, it's very hard to replicate that. So, I'm not super worried about cloud providers, to be honest, in this space, I'm more worried about our execution.Corey: If it helps anything, I'm not very deep into your specific area of the world, obviously, but I am optimistic when I hear people say things like that. Whenever I find folks who are relatively early along in their technological journey being very concerned about oh, the large cloud provider is going to come crashing in, it feels on some level like their perspective is that they have one weird trick, and they were able to crack that, but they have no defensive mode because once someone else figures out the trick, well, okay, now we're done. The idea of sustained and lasting innovation in a space, I think, is the more defensible position to take, with the counterargument, of course, that that's a lot harder to find.Ram: Absolutely. And I think for technologies like this, that's the only solution, which is, if you really want to avoid being disrupted by cloud providers, I think that's the way to go.Corey: I want to talk a little bit about your own background. Before you wound up as the VP of R&D over at Pinecone, you were in a bunch of similar… I guess, similar styled roles—if we'll call it that—at Yahoo, Databricks, and Splunk. I'm curious as to what your experience in those companies wound up impressing on you that made you say, “Ah, that's great and all, but you know what's next? 
That's right, vector databases.” And off you went to Pinecone. What did you see?Ram: So, first of all, in some way or the other, I have been involved in machine learning and systems and the intersection of these two for maybe the last decade-and-a-half. So, it's always been something, like, in between the two and that's been personally exciting to me. So, I'm kind of very excited by trying to think about new types of databases, new types of data platforms that really leverage machine learning and data. This has been personally exciting to me. I obviously learned very different things from different companies.I would say that Yahoo was just the learning in cloud to begin with because prior to joining Yahoo, I wasn't familiar with Silicon Valley cloud companies at that scale and Yahoo is a big company and there's a lot to learn from there. It was also my first introduction to Hadoop, Spark, and even machine learning where I really got into machine learning at scale, in online advertising and areas like that, which was a massive scale. And I got into that in Yahoo, and it was personally exciting to me because there's very few opportunities where you can work on machine learning at that scale, right?Databricks was very exciting to me because it was an earlier-stage company than I had been at before. Extremely well run and I learned a lot from Databricks, just the team, the culture, the focus on innovation, and the focus on product thinking. I joined Databricks as a product manager. I hadn't worn the product manager hat before that, so it was very much a learning experience for me and I think I learned from some of the best in that area. And even at Pinecone, I carry that forward, which is think about how my learnings at Databricks inform how we should be thinking about products at Pinecone, and so on. So, I think I learned—if I had to pick one company I learned a lot from, I would say, it's Databricks. The most [unintelligible 00:23:50].Corey: I would also like to point out, normally when people say, “Oh, the one company I've learned the most from,” and they pick one of them out of their history, it's invariably the most recent one, but you left there in 2018—Ram: Yeah.Corey: —then went to go spend the next three years over at Splunk, where you were a Senior Principal Scientist, a Senior Director and Head of Machine Learning, and then you decided, okay, that's enough hard work. You're going to do something easier and be the VP of Engineering, which is just wild at a company of that scale.Ram: Yeah. At Splunk, I learned a lot about management. I think managing large teams, managing multiple different teams, while working on very different areas is something I learned at Splunk. You know, I was at this point in my career when I was right around trying to start my own company. Basically, I was at a point where I'd taken enough learnings and I really wanted to do something myself.That's when Edo—you know, the CEO of Pinecone—and I started talking. And we had worked together for many years, and we started working together at Yahoo. We kept in touch with each other. And we started talking about the sort of problems that I was excited about working on and then I came to realize what he was working on and what Pinecone was doing. And we thought it was a very good fit for the two of us to work together.So, that is kind of how it happened. It sort of happened by chance, as many things do in Silicon Valley, where a lot of things just happen by network and chance. 
That's what happened in my case. I was just thinking of starting my own company at the time when just a chance encounter with Edo led me to Pinecone.Corey: It feels, from my admittedly uninformed perspective, that a lot of what you're doing right now in the vector database area, it feels on some level, like it follows the trajectory of machine learning, in that for a long time, the only people really excited about it were either sci-fi authors or folks who had trouble explaining it to someone without a degree in higher math. And then it turned into—a couple of big stories from the mid-2010s stick out at me from when people were trying to sell this to me in a variety of different ways. One of them was, “Oh, yeah, if you're a giant credit card processing company and trying to detect fraud with this kind of transaction volume—” it's, yeah, there are maybe three companies in the world that fall into that exact category. The other was WeWork where they did a lot of computer vision work. And they used this to determine that at certain times of day there was congestion in certain parts of the buildings and that this was best addressed by hiring a second barista. Which distilled down to, “Wait a minute, you're telling me that you spent how much money on machine-learning and advanced analyses and data scientists and the rest to figure out that people like to drink coffee in the morning?” Like, that is a little on the ridiculous side.Now, I think that it is past the time for skepticism around machine learning when you can go to a website and type in a description of something and it paints a picture of the thing you just described. Or you can show it a picture and it describes what is in that picture fairly accurately. At this point, the only people who are skeptics, from my position on this, seem to be holding out for some sort of either next-generation miracle or are just being bloody-minded. Do you think that there's a tipping point for vector search where it's going to become blindingly obvious to, if not the mass market, at least a more run-of-the-mill, more prosaic level of engineer who hasn't specialized in this?Ram: Yeah. It's already, frankly, started happening. So, two years back, I wouldn't have suspected this fast of an adoption for this new of a technology from this varied number of use cases. I just wouldn't have suspected it because I, you know, I still thought, it's going to take some time for this field to mature and, kind of, everybody to really start taking advantage of this. This has happened much faster than even I assumed.So, to some extent, it's already happening. A lot of it is because the barrier to entry is quite low right now, right? So, it's very easy and cost-effective for people to create these embeddings. There is a lot of documentation out there, things are getting easier and easier, day by day. Some of it is by Pinecone itself, by a lot of work we do. Some of it is by, like, companies that I mentioned before who are building better and better models, making it easier and easier for people to take these machine-learning models and use them without having to even fine-tune anything.And as technologies like Pinecone really mature and dramatically become cost-effective, the barrier to entry is very low. So, what we tend to see people do, it's not so much about confidence in this new technology; it is connecting something simple that I need this sort of value out of, and finding the least critical path or the simplest way to get going on this sort of technology. 
And as long as we can make that barrier to entry very small and make this cost-effective and easy for people to explore, this is going to start exploding. And that's what we are seeing. And a lot of Pinecone's focus has been on ease of use, on simplicity in the zero-to-one journey, for precisely this reason. Because not only do we strongly believe in the value of this technology, it's becoming more and more obvious to the broader community as well. The remaining work to be done is just the ease of use and making things cost-effective. And cost-effectiveness is also what we focus on a lot. Like, this technology can be even more cost-effective than it is today.Corey: I think that it is one of those never-mistaken ideas to wind up making something more accessible to folks than keeping it in a relatively rarefied environment. We take a look throughout the history of computing in general and cloud in particular, where formerly very hard things have largely been reduced down to click the button. Yes, yes, and then get yelled at because you haven't done infrastructure-as-code, but click the button is still possible. I feel like this is on that trendline based upon what you're saying.Ram: Absolutely. And the more we can do here, both Pinecone and the broader community, I think the better, the faster the adoption of this sort of technology is going to be.Corey: I really want to thank you for spending so much time talking me through what it is you folks are working on. If people want to learn more, where's the best place for them to go to find you?Ram: Pinecone.io. Our website has a ton of information about Pinecone, as well as a lot of standard documentation. We have a free tier as well where you can play around with small data sets, really get a feel for vector search. It's completely free. And you can reach me at Ram at Pinecone. I'm always happy to answer any questions. Once again, thanks so much for having me.Corey: Of course. I will put links to all of that in the show notes. This promoted guest episode is brought to us by our friends at Pinecone. Ram Sriharsha is their VP of Engineering and R&D. And I'm Cloud Economist Corey Quinn. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment that I will never read because the search on your podcast platform is broken because it's not using a vector database.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
About RickRick is the Product Leader of the AWS Optimization team. He previously led the cloud optimization product organization at Turbonomic, and before that was the Microsoft Azure Resource Optimization program owner.Links Referenced: AWS: https://console.aws.amazon.com LinkedIn: https://www.linkedin.com/in/rick-ochs-06469833/ Twitter: https://twitter.com/rickyo1138 TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. Tired of observability costs going up every year without getting additional value? Or being locked in to a vendor due to proprietary data collection, querying and visualization? Modern-day containerized environments require a new kind of observability technology that accounts for the massive increase in scale and attendant cost of data. With Chronosphere, choose where and how your data is routed and stored, query it easily, and get better context and control. 100% open source compatibility means that no matter what your setup is, they can help. Learn how Chronosphere provides complete and real-time insight into ECS, EKS, and your microservices, wherever they may be, at snark.cloud/chronosphere. That's snark.cloud/chronosphere. Corey: This episode is brought to you in part by our friends at Veeam. Do you care about backups? Of course you don't. Nobody cares about backups. Stop lying to yourselves! You care about restores, usually right after you didn't care enough about backups. If you're tired of the vulnerabilities, costs and slow recoveries when using snapshots to restore your data, assuming you even have them at all living in AWS-land, there is an alternative for you. Check out Veeam, that's V-E-E-A-M, for secure, zero-fuss AWS backup that won't leave you high and dry when it's time to restore. Stop taking chances with your data. Talk to Veeam. My thanks to them for sponsoring this ridiculous podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. For those of you who've been listening to this show for a while, the theme has probably emerged, and that is that one of the key values of this show is to give the guest a chance to tell their story. It doesn't beat the guests up about how they approach things, it doesn't call them out for being completely wrong on things because honestly, I'm pretty good at choosing guests, and I don't bring people on that are, you know, walking trash fires. And that is certainly not a concern for this episode.But this might devolve into a screaming loud argument, despite my best effort. Today, I'm joined by Rick Ochs, Principal Product Manager at AWS. Rick, thank you for coming back on the show. The last time we spoke, you were not here; you were at, I believe it was, Turbonomic.Rick: Yeah, that's right. Thanks for having me on the show, Corey. I'm really excited to talk to you about optimization and my current role and what we're doing.Corey: Well, let's start at the beginning. Principal product manager. It sounds like one of those corporate titles that can mean a different thing in every company or every team that you're talking to. What is your area of responsibility? Where do you start and where do you stop?Rick: Awesome. 
So, I am the product manager lead for all of the AWS Optimization team. So, I lead the product team. That includes several other product managers that focus in on Compute Optimizer, Cost Explorer, right-sizing recommendations, as well as Reservation and Savings Plan purchase recommendations.Corey: In other words, you are the person who effectively oversees all of the AWS cost optimization tooling and approaches to same?Rick: Yeah.Corey: Give or take. I mean, you could argue that oh, every team winds up focusing on helping customers save money. I could fight that argument just as effectively. But you effectively start and stop with respect to helping customers save money or understand where the money is going on their AWS bill.Rick: I think that's a fair statement. And I also agree with your comment that I think a lot of service teams do think through those use cases and provide capabilities, you know? There's, like, S3 Storage Lens. You know, there's all sorts of other products that do offer optimization capabilities as well, but as far as, like, the unified purpose of my team, it is unilaterally focused on how we help customers safely reduce their spend and not hurt their business at the same time.Corey: Safely being the key word. For those who are unaware of my day job, I am a partial owner of The Duckbill Group, a consultancy where we fix exactly one problem: the horrifying AWS bill. This is all that I've been doing for the last six years, so I have some opinions on AWS bill reduction as well. So, this is going to be a fun episode for the two of us to wind up, mmm, more or less smacking each other around, but politely because we are both professionals. So, let's start with a very high level. How does AWS think about AWS bills from a customer perspective? You talk about optimizing it, but what does that mean to you?Rick: Yeah. So, I mean, there's a lot of ways to think about it, especially depending on who I'm talking to, you know, where they sit in our organization. I would say I think about optimization in four major themes. The first is how do you scale correctly, whether that's right-sizing or architecting things to scale in and out? The second thing I would say is, how do you do pricing and discounting, whether that's Reservation management, Savings Plan management, coverage, how do you handle the expenditures of prepayments and things like that?Then I would say suspension. What that means is turn the lights off when you leave the room. We have a lot of customers that do this and I think there's a lot of opportunity for more. Turning EC2 instances off when they're not needed if they're non-production workloads or other, sort of, stateful services that charge by the hour, I think there's a lot of opportunity there.And then the last of the four methods is clean up. And I think it's maybe one of the lowest-hanging fruit, but essentially, are you done using this thing? Delete it. And there's a whole opportunity of cleaning up, you know, IP addresses, unattached EBS volumes, sort of, these resources that hang around in AWS accounts that sort of get lost and forgotten as well. So, those are the four kind of major thematic strategies for how to optimize a cloud environment that we think about and spend a lot of time working on.Corey: I feel like there's—or at least the way that I approach these things—that there are a number of different levels you can look at AWS billing constructs on. 
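Rick's "clean up" pillar is the easiest one to see in code. The sketch below, assuming boto3 with read-only EC2 permissions and an example region, lists two of the usual suspects: unattached EBS volumes and unassociated Elastic IPs that keep billing while doing nothing.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

    # Volumes in the "available" state are not attached to any instance.
    unattached = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )["Volumes"]
    for vol in unattached:
        print(f"Unattached volume {vol['VolumeId']}: {vol['Size']} GiB, created {vol['CreateTime']}")

    # Elastic IPs with no association sit idle and still show up on the bill.
    for addr in ec2.describe_addresses()["Addresses"]:
        if "AssociationId" not in addr:
            print(f"Unassociated Elastic IP: {addr.get('PublicIp')}")

Whether anything found this way is actually safe to delete is the judgment call the rest of the conversation keeps circling back to.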
The way that I tend to structure most of my engagements when I'm working with clients is we come in and, step one: cool. Why do you care about the AWS bill? It's a weird question to ask because most of the engineering folks look at me like I've just grown a second head. Like, “So, why do you care about your AWS bill?” Like, “What? Why do you? You run a company doing this?”It's no, no, no, it's not that I'm being rhetorical and I don't—I'm trying to be clever somehow and pretend that I don't understand all the nuances around this, but why does your business care about lowering the AWS bill? Because very often, the answer is they kind of don't. What they care about from a business perspective is being able to accurately attribute costs for the service or good that they provide, being able to predict what that spend is going to be, and also yes, a sense of being good stewards of the money that has been entrusted to them via investors, public markets, or the budget allocation process of their companies and make sure that they're not doing foolish things with it. And that makes an awful lot of sense. It is rare at the corporate level that the stated number one concern is make the bills lower.Because at that point, well, easy enough. Let's just turn off everything you're running in production. You'll save a lot of money in your AWS bill. You won't be in business anymore, but you'll be saving a lot of money on the AWS bill. The answer is always deceptively nuanced and complicated.At least, that's how I see it. Let's also be clear that I talk with a relatively narrow subset of the AWS customer totality. The things that I do are very much intentionally things that do not scale. Definitionally, everything that you do has to scale. How do you wind up approaching this in ways that will work for customers spending billions versus independent learners who are paying for this out of their own personal pocket?Rick: It's not easy [laugh], let me just preface that. The team we have is incredible and we spent so much time thinking about scale and the different personas that engage with our products and how they're—what their experience is when they interact with a bill or AWS platform at large. There's also a couple of different personas here, right? We have a persona that focuses in on that cloud cost, the cloud bill, the finance, whether that's—if an organization has created a FinOps organization, if they have a Cloud Center of Excellence, versus an engineering team that maybe has started to go towards decentralized IT and has some accountability for the spend that they attribute to their AWS bill. And so, these different personas interact with us in really different ways, whether that's Cost Explorer, downloading the CUR, and taking a look at the bill.And one thing that I always kind of imagine is somebody putting a headlamp on and going into the caves in the depths of their AWS bill and kind of like spelunking through their bill sometimes, right? And so, you have these FinOps folks and billing people that are deeply interested in making sure that the spend they do have meets their business goals, meaning this is providing high value to our company, it's providing high value to our customers, and we're spending on the right things, we're spending the right amount on the right things. Versus the engineering organization that's like, “Hey, how do we configure these resources? What types of instances should we be focused on using? 
What services should we be building on top of that maybe are more flexible for our business needs?”And so, there's really, like, two major personas that I spend a lot of time—our organization spends a lot of time wrapping our heads around. Because they're really different, very different approaches to how we think about cost. Because you're right, if you just wanted to lower your AWS bill, it's really easy. Just size everything to a t2.nano and you're done and move on [laugh], right? But you're [crosstalk 00:08:53]—Corey: Aw, t3 or t4.nano, depending upon whether regional availability is going to save you less. I'm still better at this. Let's not kid ourselves. I kid. Mostly.Rick: For sure. So t4.nano, absolutely.Corey: T4g. Remember, now the way forward is everything has an explicit letter designator to define which processor company made the CPU that underpins the instance itself because that's a level of abstraction we certainly wouldn't want the cloud provider to take away from us.Rick: Absolutely. And actually, the performance differences of those different processor models can be pretty incredible [laugh]. So, there's huge decisions behind all of that as well.Corey: Oh, yeah. There's so many factors that factor in all these things. It's gotten to a point of you see this usually with lawyers and very senior engineers, but the answer to almost everything is, “It depends.” There are always going to be edge cases. Easy example: if you check a box and enable an S3 Gateway endpoint inside of a private subnet, suddenly, you're not passing traffic through a 4.5 cent per gigabyte managed NAT Gateway; it's being sent over that endpoint for no additional cost whatsoever.Check the box, save a bunch of money. But there are scenarios where you don't want to do it, so always double-checking and talking to customers about this is critically important. Just because, the first time you make a recommendation that does not work for their constraints, you lose trust. And make a few of those and it looks like you're more or less just making naive recommendations that don't add any value, and they learn to ignore you. So, down the road, when you make a really high-value, great recommendation for them, they stop paying attention.Rick: Absolutely. And we have that really high bar for recommendation accuracy, especially with right sizing, that's such a key one. Although I guess Savings Plan purchase recommendations can be critical as well. If a customer over commits on the amount of Savings Plan purchase they need to make, right, that's a really big problem for them.So, recommendation accuracy must be above reproach. Essentially, if a customer takes a recommendation and it breaks an application, they're probably never going to take another right-sizing recommendation again [laugh]. And so, this bar of trust must be exceptionally high. That's also why, out of the box, the Compute Optimizer recommendations can be a little bit mild, a little tame, because the first order of business is do no harm; focus on the performance requirements of the application first because we have to make sure that the reason you build these workloads in AWS is served.Now ideally, we do that without overspending and without overprovisioning the capacity of these workloads, right? And so, for example, like if we make these right-sizing recommendations from Compute Optimizer, we're taking a look at the utilization of CPU, memory, disk, network, throughput, IOPS, and we're vending these recommendations to customers. 
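Rick's description of what Compute Optimizer looks at maps onto its API fairly directly. Here is a minimal sketch, assuming boto3, an account opted in to Compute Optimizer, and response field names as I understand the GetEC2InstanceRecommendations call; treat the exact keys as something to verify against the current documentation.

    import boto3

    co = boto3.client("compute-optimizer", region_name="us-east-1")  # example region

    recs = co.get_ec2_instance_recommendations()["instanceRecommendations"]
    for rec in recs:
        top = rec["recommendationOptions"][0]  # options come ranked, best first
        print(
            f"{rec['instanceArn']}: {rec['finding']} "
            f"(current {rec['currentInstanceType']}, suggested {top['instanceType']})"
        )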
And when you take that recommendation, you must still have great application performance for your business to be served, right? It's such a crucial part of how we optimize and run long-term. Because optimization is not a one-time Band-Aid; it's an ongoing behavior, so it's really critical that for that accuracy to be exceptionally high so we can build business process on top of it as well.Corey: Let me ask you this. How do you contextualize what the right approach to optimization is? What is your entire—there are certain tools that you have… by ‘you,' I mean, of course, as an organization—have repeatedly gone back to and different approaches that don't seem to deviate all that much from year to year, and customer to customer. How do you think about the general things that apply universally?Rick: So, we know that EC2 is a very popular service for us. We know that sizing EC2 is difficult. We think about that optimization pillar of scaling. It's an obvious area for us to help customers. We run into this sort of industry-wide experience where whenever somebody picks the size of a resource, they're going to pick one generally larger than they need.It's almost like asking a new employee at your company, “Hey, pick your laptop. We have a 16 gig model or a 32 gig model. Which one do you want?” That person [laugh] making the decision on capacity, hardware capacity, they're always going to pick the 32 gig model laptop, right? And so, we have this sort of human nature in IT of, we don't want to get called at two in the morning for performance issues, we don't want our apps to fall over, we want them to run really well, so we're going to size things very conservatively and we're going to oversize things.So, we can help customers by providing those recommendations to say, you can size things up in a different way using math and analytics based on the utilization patterns, and we can provide and pick different instance types. There's hundreds and hundreds of instance types in all of these regions across the globe. How do you know which is the right one for every single resource you have? It's a very, very hard problem to solve and it's not something that is lucrative to solve one by one if you have 100 EC2 instances. Trying to pick the correct size for each and every one can take hours and hours of IT engineering resources to look at utilization graphs, look at all of these types available, look at what is the performance difference between processor models and providers of those processors, is there application compatibility constraints that I have to consider? The complexity is astronomical.And then not only that, as soon as you make that sizing decision, one week later, it's out of date and you need a different size. So, [laugh] you didn't really solve the problem. So, we have to programmatically use data science and math to say, “Based on these utilization values, these are the sizes that would make sense for your business, that would have the lowest cost and the highest performance together at the same time.” And it's super important that we provide this capability from a technology standpoint because it would cost so much money to try to solve that problem that the savings you would achieve might not be meaningful. 
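Going back to Corey's S3 Gateway endpoint example for a moment: a gateway endpoint for S3 carries no data processing charge, and creating one is a single API call. The sketch below assumes boto3 and uses placeholder VPC and route table IDs; whether it is the right change still depends on the constraints Corey mentions.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

    response = ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",             # placeholder VPC ID
        ServiceName="com.amazonaws.us-east-1.s3",  # S3 service name for this region
        RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder route table
    )
    print(response["VpcEndpoint"]["VpcEndpointId"])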
Then at the same time… you know, that's really from an engineering perspective, but when we talk to the FinOps and the finance folks, the conversations are more about Reservations and Savings Plans.How do we correctly apply Savings Plans and Reservations across a high percentage of our portfolio to reduce the costs on those workloads, but not so much that dynamic capacity levels in our organization mean we all of a sudden have a bunch of unused Reservations or Savings Plans? And so, a lot of organizations that engage with us and we have conversations with, we start with the Reservation and Savings Plan conversation because it's much easier to click a few buttons and buy a Savings Plan than to go institute an entire right-sizing campaign across multiple engineering teams. That can be very difficult, a much higher bar. So, some companies are ready to dive into the engineering task of sizing; some are not there yet. And they're a little maybe a little earlier in their FinOps journey, or the building optimization technology stacks, or achieving higher value out of their cloud environments, so starting with kind of the low hanging fruit, it can vary depending on the company, size of company, technical aptitudes, skill sets, all sorts of things like that.And so, those finance-focused teams are definitely spending more time looking at and studying what are the best practices for purchasing Savings Plans, covering my environment, getting the most out of my dollar that way. Then they don't have to engage the engineering teams; they can kind of take a nice chunk off the top of their bill and sort of have something to show for that amount of effort. So, there's a lot of different approaches to start in optimization.Corey: My philosophy runs somewhat counter to this because everything you're saying does work globally, it's safe, it's non-threatening, and then also really, on some level, feels like it is an approach that can be driven forward by finance or business. Whereas my worldview is that cost and architecture in cloud are one and the same. And there are architectural consequences of cost decisions and vice versa that can be adjusted and addressed. Like, one of my favorite party tricks—although I admit, it's a weird party—is I can look at the exploded PDF view of a customer's AWS bill and describe their architecture to them. And people have questioned that a few times, and now I have a testimonial on my client website that mentions, “It was weird how he was able to do this.”Yeah, it's real, I can do it. And it's not a skill, I would recommend cultivating for most people. But it does also mean that I think I'm onto something here, where there's always context that needs to be applied. It feels like there's an entire ecosystem of product companies out there trying to build what amount to a better Cost Explorer that also is not free the way that Cost Explorer is. So, the challenge I see there's they all tend to look more or less the same; there is very little differentiation in that space. And in the fullness of time, Cost Explorer does—ideally—get better. How do you think about it?Rick: Absolutely. If you're looking at ways to understand your bill, there's obviously Cost Explorer, the CUR, that's a very common approach is to take the CUR and put a BI front-end on top of it. That's a common experience. A lot of companies that have chops in that space will do that themselves instead of purchasing a third-party product that does do bill breakdown and dissemination. 
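The "click a few buttons and buy a Savings Plan" starting point Rick describes is also available programmatically through Cost Explorer. A rough sketch, assuming boto3 and Cost Explorer access; the parameter values are examples and the response structure is worth double-checking against the current API reference.

    import boto3

    ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer client

    resp = ce.get_savings_plans_purchase_recommendation(
        SavingsPlansType="COMPUTE_SP",      # example: Compute Savings Plans
        TermInYears="ONE_YEAR",
        PaymentOption="NO_UPFRONT",
        LookbackPeriodInDays="THIRTY_DAYS",
    )
    recommendation = resp.get("SavingsPlansPurchaseRecommendation", {})
    print(recommendation.get("SavingsPlansPurchaseRecommendationSummary", {}))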
There's also the cross-charge show-back organizational breakdown and boundaries, because you have these super large organizations that have fiefdoms.You know, HR IT and sales IT, and [laugh] you know, product IT: you have all these different IT departments that are fiefdoms within your AWS bill and construct, whether they have different AWS accounts or, say, different AWS organizations sometimes, right, it can get extremely complicated. And some organizations require the ability to break down their bill based on those organizational boundaries. Maybe tagging works, maybe it doesn't. Maybe they do that by using a third-party product that lets them set custom scopes on their resources based on organizational boundaries. That's a common approach as well.We do also have our first-party solutions that can do that, like the CUDOS dashboard as well. That's something that's really popular and highly used across our customer base. It allows you to have kind of a dashboard and customizable view of your AWS costs and, kind of, split it up based on tag value, organizational unit, account name, and things like that as well. So, you mentioned that you feel like the architectural and cost problem is the same problem. I really don't disagree with that at all.I think what it comes down to is some organizations are prepared to tackle the architectural elements of cost and some are not. And it really comes down to how does the customer view their bill? Is it somebody in the finance organization looking at the bill? Is it somebody in the engineering organization looking at the bill? Ideally, it would be both.Ideally, you would have some of those skill sets that overlap, or you would have an organization that does focus in on FinOps or cloud operations as it relates to cost. But then at the same time, there are organizations that are like, “Hey, we need to go to cloud. Our CIO told us to go to cloud. We don't want to pay the lease renewal on this building.” There's a lot of reasons why customers move to cloud, a lot of great reasons, right? Three major reasons you move to cloud: agility, [crosstalk 00:20:11]—Corey: And several terrible ones.Rick: Yeah, [laugh] and some not-so-great ones, too. So, there's so many different dynamics that get exposed when customers engage with us that they might or might not be ready to engage on the architectural element of how to build hyperscale systems. So, many of these customers are bringing legacy workloads and applications to the cloud, and something like a re-architecture to use stateless resources or something like Spot, that's just not possible for them. So, how can they take 20% off the top of their bill? Savings Plans or Reservations are kind of that easy, low-hanging fruit answer to just say, “We know these are fairly static environments that don't change a whole lot, that are going to exist for some amount of time.”They're legacy, you know, we can't turn them off. It doesn't make sense to rewrite these applications because they just don't change, they don't have high business value, or something like that. And so, the architecture part of that conversation doesn't always come into play. Should it? Yes.The long-term maturity and approach for cloud optimization does absolutely account for architecture, thinking strategically about how you do scaling, what services you're using, are you going down the Kubernetes path, which I know you're going to laugh about, but you know, how do you take these applications and componentize them? What services are you using to do that? 
How do you get that long-term scale and manageability out of those environments? Like you said at the beginning, the complexity is staggering and there's no one unified answer. That's why there's so many different entrance paths into, “How do I optimize my AWS bill?”There's no one answer, and every customer I talk to has a different comfort level and appetite. And some of them have tried suspension, some of them have gone heavy down Savings Plans, some of them want to dabble in right-sizing. So, every customer is different and we want to provide those capabilities for all of those different customers that have different appetites or comfort levels with each of these approaches.Corey: This episode is sponsored in part by our friends at Redis, the company behind the incredibly popular open source database. If you're tired of managing open source Redis on your own, or if you are looking to go beyond just caching and unlocking your data's full potential, these folks have you covered. Redis Enterprise is the go-to managed Redis service that allows you to reimagine how your geo-distributed applications process, deliver, and store data. To learn more from the experts in Redis how to be real-time, right now, from anywhere, visit redis.com/duckbill. That's R - E - D - I - S dot com slash duckbill.Corey: And I think that's very fair. I think that it is not necessarily a bad thing that you wind up presenting a lot of these options to customers. But there are some rough edges. An example of this is something I encountered myself somewhat recently and put on Twitter—because I have those kinds of problems—where originally, I remember this, that you were able to buy hourly Savings Plans, which again, Savings Plans are great; no knock there. I would wish that they applied to more services rather than, “Oh, SageMaker is going to do its own Savings Pla”—no, stop keeping me from going from something where I have to manage myself on EC2 to something you manage for me and making that cost money. You nailed it with Fargate. You nailed it with Lambda. Please just have one unified Savings Plan thing. But I digress.But you had a limit, once upon a time, of $1,000 per hour. Now, it's $5,000 per hour, which I believe in a three-year all-up-front means you will cheerfully add $130 million purchase to your shopping cart. And I kept adding a bunch of them and then had a little over a billion dollars a single button click away from being charged to my account. Let me begin with what's up with that?Rick: [laugh]. Thank you for the tweet, by the way, Corey.Corey: Always thrilled to ruin your month, Rick. You know that.Rick: Yeah. Fantastic. We took that tweet—you know, it was tongue in cheek, but also it was a serious opportunity for us to ask a question of what does happen? And it's something we did ask internally and have some fun conversations about. I can tell you that if you clicked purchase, it would have been declined [laugh]. So, you would have not been—Corey: Yeah, American Express would have had a problem with that. But the question is, would you have attempted to charge American Express, or would something internally have gone, “This has a few too many commas for us to wind up presenting it to the card issuer with a straight face?”Rick: [laugh]. Right. So, it wouldn't have gone through and I can tell you that, you know, if your account was on a PO-based configuration, you know, it would have gone to the account team. 
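For what it is worth, the arithmetic behind Corey's numbers checks out as a back-of-the-envelope estimate (ignoring leap days and any discounting):

    hourly_commitment = 5_000        # dollars per hour, the per-purchase limit Corey cites
    term_hours = 24 * 365 * 3        # hours in a three-year term

    total_commitment = hourly_commitment * term_hours
    print(f"${total_commitment:,}")      # $131,400,000, roughly the $130 million figure

    print(f"${8 * total_commitment:,}")  # $1,051,200,000: eight of those in the cart clears a billion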
And it would have gone through our standard process for having a conversation with our customer there. That being said, we are—it's an awesome opportunity for us to examine what is that shopping cart experience.We did increase the limit, you're right. And we increased the limit for a lot of reasons that we sat down and worked through, but at the same time, there's always an opportunity for improvement of our product and experience; we want to make sure that it's really easy and lightweight to use our products, especially purchasing Savings Plans. Savings Plans are already kind of fraught with mental concern and the risk of purchasing something so expensive and large that has a big impact on your AWS bill, so we don't really want to add any more friction, necessarily, to the process, but we do want to build awareness and make sure customers understand, “Hey, you're purchasing this. This has a pretty big impact.” And so, we're also looking at other ways we can kind of improve the Savings Plan shopping cart experience to ensure customers don't put themselves in a position where they have to unwind or make phone calls and say, “Oops.” Right? We [laugh] want to avoid those sorts of situations for our customers. So, we are looking at quite a few additional improvements to that experience as well that I'm really excited about that I really can't share here, but stay tuned.Corey: I am looking forward to it. I will say the counterpoint to that is having worked with customers who do make large eight-figure purchases at once, there's a psychology element that plays into it. Everyone is very scared to click the button on the ‘Buy It Now' thing or the ‘Approve It.' So, what I've often found is at that scale, one, you can reduce what you're buying by half of it, and then see how that treats you and then continue to iterate forward rather than doing it all at once, or reach out to your account team and have them orchestrate the buy. In previous engagements, I had a customer do this religiously and at one point, the concierge team bought the wrong thing in the wrong region, and from my perspective, I would much rather have AWS apologize for that and fix it on their end, than us, on the customer side, having to go with, “Oh crap, oh, crap. Please be nice to us.”Not that I doubt you would do it, but that's not the nervous conversation I want to have in quite the same way. It just seems odd to me that someone would want to make that scale of purchase without ever talking to a human. I mean, I get it. I'm as antisocial as they come some days, but for that kind of money, I kind of just want another human being to validate that I'm not making a giant mistake.Rick: We love that. That's such a tremendous opportunity for us to engage and discuss with an organization that's going to make a large commitment, that here's the impact, here's how we can help. How does it align to our strategy? We also do recommend, from a strategic perspective, those more incremental purchases. I think it creates a better experience long-term when you don't have a single Savings Plan that's going to expire on a specific day that all of a sudden increases your entire bill by a significant percentage.So, making staggered monthly purchases makes a lot of sense. And it also works better for incremental growth, right? 
If your organization is growing 5% month-over-month or year-over-year or something like that, you can purchase those incremental Savings Plans that sort of stack up on top of each other and then you don't have that risk of a cliff one day where one super-large SP expires and boom, you have to scramble and repurchase within minutes because every minute that goes by is an additional expense, right? That's not a great experience. And so that's, really, a large part of why those staggered purchase experiences make a lot of sense.That being said, a lot of companies do their math and their finance in different ways. And single large purchases make sense to go through their process and their rigor as well. So, we try to support both types of purchasing patterns.Corey: I think that is an underappreciated aspect of cloud cost savings and cloud cost optimization, where it is much more about humans than it is about math. I see this most notably when I'm helping customers negotiate their AWS contracts with AWS, where they are often perspectives such as, “Well, we feel like we really got screwed over last time, so we want to stick it to them and make them give us a bigger percentage discount on something.” And it's like, look, you can do that, but I would much rather, if it were me, go for something that moves the needle on your actual business and empowers you to move faster, more effectively, and lead to an outcome that is a positive for everyone versus the well, we're just going to be difficult in this one point because they were difficult on something last time. But ego is a thing. Human psychology is never going to have an API for it. And again, customers get to decide their own destiny in some cases.Rick: I completely agree. I've actually experienced that. So, this is the third company I've been working at on cloud optimization. I spent several years at Microsoft running an optimization program. I went to Turbonomic for several years, building out the right-sizing and Savings Plan and Reservation purchase capabilities there, and now here at AWS.And through all of these journeys and experiences working with companies to help optimize their cloud spend, I can tell you that the psychological needle—moving the needle is significantly harder than the technology stack of sizing something correctly or deleting something that's unused. We can solve the technology part. We can build great products that identify opportunities to save money. There's still this psychological component of IT, which for the last several decades has gone through this maturity curve of if it's not broken, don't touch it. Five-nines, six sigma, all of these methods of IT sort of rationalizing do no harm, don't touch anything, everything must be up.And it even kind of goes back several decades. Back when if you rebooted a physical server, the motherboard capacitors would pop, right? So, there's even this anti—or this stigma against even rebooting servers sometimes. The cloud really does away with a lot of that stuff because we have live migration and we have all of these, sort of, stateless designs and capabilities, but we still carry along with us this mentality of don't touch it; it might fall over. And we have to really get past that.And that brings us back to the trust conversation, where we talk about how the recommendations must be incredibly accurate. 
You're risking your job, in some cases; if you are a DevOps engineer and your commitments on your yearly goals are uptime, latency, response time, load time, these sorts of things, these operational metrics, KPIs that you use, you don't want to take a downsized recommendation. It has a severe risk of harming your job and your bonus.Corey: “These instances are idle. Turn them off.” It's like, yeah, these instances are the backup site, or the DR environment, or—Rick: Exactly.Corey: —something that takes very bursty but occasional traffic. And yeah, I know it costs us some money, but here's the revenue figures for having that thing available. Like, “Oh, yeah. Maybe we should shut up and not make dumb recommendations around things,” is the human response, but computers don't have that context.Rick: Absolutely. And so, the accuracy and trust component has to be the highest bar we meet for any optimization activity or behavior. We have to circumvent or supersede the human aversion, the risk aversion, that IT is built on, right?Corey: Oh, absolutely. And let's be clear, we see this all the time where I'm talking to customers and they have been burned before because we tried to save money and then we took a production outage as a side effect of a change that we made, and now we're not allowed to try to save money anymore. And there's a hidden truth in there, which is auto-scaling is something that a lot of customers talk about, but very few have instrumented true auto-scaling because they interpret it as we can scale up to meet demand. Because yeah, if you don't do that you're dropping customers on the floor.Well, what about scaling back down again? And the answer there is like, yeah, that's not really a priority because it's just money. We're not disappointing customers, causing brand reputation damage, and we're still able to take people's money when that happens. It's only money; we can fix it later. Covid shined a real light on a lot of this stuff just because there are customers that we've spoken to whose user traffic dropped off a cliff, infrastructure spend remained constant day over day.And yeah, they believe, genuinely, they were auto-scaling. The most interesting lies are the ones that customers tell themselves, but the bill speaks. So, getting a lot of modernization traction from things like that was really neat to watch. But customers I don't think necessarily intuitively understand most aspects of their bill because it is a multidisciplinary problem. It's engineering, it's finance, it's accounting—which is not the same thing as finance—and you need all three of those constituencies to be able to communicate effectively using a shared and common language. It feels like we're marriage counseling between engineering and finance, most weeks.
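Corey's point about scaling back down has a concrete counterpart: a target-tracking policy on an Auto Scaling group adjusts capacity in both directions unless scale-in is explicitly disabled. A minimal sketch, assuming boto3 and a placeholder group name and target value:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # example region

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="example-asg",   # placeholder group name
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,              # example target utilization
            "DisableScaleIn": False,          # the part that actually scales back down
        },
    )

Having the policy in place is the easy part; as the bill-versus-traffic story above suggests, verifying that capacity actually falls when demand does is the part many teams skip.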
I want to hear your actual answer.Rick: Yeah. So, we talk with a lot of customers that are very interested in optimization. And we're very interested in helping them on the journey towards having an optimal estate. There are so many nuances and barriers, most of them psychological like we already talked about.I think there's this opportunity for us to go do better exposing the potential of what an optimal AWS estate would look like from a dollar and savings perspective. And so, I think it's kind of not well understood. I think it's one of the biggest areas or barriers of companies really attacking the optimization problem with more vigor is if they knew that the potential savings they could achieve out of their AWS environment would really align their spend much more closely with the business value they get, I think everybody would go bonkers. And so, I'm really excited about us making progress on exposing that capability or the total savings potential and amount. It's something we're looking into doing in a much more obvious way.And we're really excited about customers doing that on AWS where they know they can trust AWS to get the best value for their cloud spend, that it's a long-term good bet because their resources that they're using on AWS are all focused on giving business value. And that's the whole key. How can we align the dollars to the business value, right? And I think optimization is that connection between those two concepts.Corey: Companies are generally not going to greenlight a project whose sole job is to save money unless there's something very urgent going on. What will happen is as they iterate forward on the next generation of services or a migration of a service from one thing to another, they will make design decisions that benefit those optimizations. There's low-hanging fruit we can find, usually of the form, “Turn that thing off,” or, “Configure this thing slightly differently,” that doesn't take a lot of engineering effort in place. But, on some level, it is not worth the engineering effort it takes to do an optimization project. We've all met those engineers—speaking is one of them myself—who, left to our own devices, will spend two months just knocking a few hundred bucks a month off of our AWS developer environment.We steal more than office supplies. I'm not entirely sure what the business value of doing that is, in most cases. For me, yes, okay, things that work in small environments work very well in large environments, generally speaking, so I learned how to save 80 cents here and that's a few million bucks a month somewhere else. Most folks don't have that benefit happening, so it's a question of meeting them where they are.Rick: Absolutely. And I think the scale component is huge, which you just touched on. When you're talking about a hundred EC2 instances versus a thousand, optimization becomes kind of a different component of how you manage that AWS environment. And while single-decision recommendations to scale an individual server, the dollar amount might be different, the percentages are just about the same when you look at what is it to be sized correctly, what is it to be configured correctly? And so, it really does come down to priority.And so, it's really important to really support all of those companies of all different sizes and industries because they will have different experiences on AWS. And some will have more sensitivity to cost than others, but all of them want to get great business value out of their AWS spend. 
And so, as long as we're meeting that need and we're supporting our customers to make sure they understand the commitment we have to ensuring that their AWS spend is valuable, it is meaningful, right, they're not spending money on things that are not adding value, that's really important to us.Corey: I do want to have as the last topic of discussion here, how AWS views optimization, where there have been a number of repeated statements where helping customers optimize their cloud spend is extremely important to us. And I'm trying to figure out where that falls on the spectrum from, “It's the thing we say because they make us say it, but no, we're here to milk them like cows,” all the way on over to, “No, no, we passionately believe in this at every level, top to bottom, in every company. We are just bad at it.” So, I'm trying to understand how that winds up being expressed from your lived experience having solved this problem first outside, and then inside.Rick: Yeah. So, it's kind of like part of my personal story. It's the main reason I joined AWS. And, you know, when you go through the interview loops and you talk to the leaders of an organization you're thinking about joining, they always stop at the end of the interview and ask, “Do you have any questions for us?” And I asked that question to pretty much every single person I interviewed with. Like, “What is AWS's appetite for helping customers save money?”Because, like, from a business perspective, it kind of is a little bit wonky, right? But the answers were varied, and all of them were customer-obsessed and passionate. And I got this sense that my personal passion for helping companies have better efficiency of their IT resources was an absolute primary goal of AWS and a big element of Amazon's leadership principle, be customer obsessed. Now, I'm not a spokesperson, so [laugh] we'll see, but we are deeply interested in making sure our customers have a great long-term experience and a high-trust relationship. And so, when I asked these questions in these interviews, the answers were all about, “We have to do the right thing for the customer. It's imperative. It's also in our DNA. It's one of the most important leadership principles we have to be customer-obsessed.”And it is the primary reason why I joined: because of that answer to that question. Because it's so important that we achieve a better efficiency for our IT resources, not just for, like, AWS, but for our planet. If we can reduce consumption patterns and usage across the planet for how we use data centers and all the power that goes into them, we can talk about meaningful reductions of greenhouse gas emissions, the cost and energy needed to run IT business applications, and not only that, but most all new technology that's developed in the world seems to come out of a data center these days, we have a real opportunity to make a material impact to how much resource we use to build and use these things. And I think we owe it to the planet, to humanity, and I think Amazon takes that really seriously. And I'm really excited to be here because of that.Corey: As I recall—and feel free to make sure that this comment never sees the light of day—you asked me before interviewing for the role and then deciding to accept it, what I thought about you working there and whether I would recommend it, whether I wouldn't. And I think my answer was fairly nuanced. 
And you're working there now and we still are on speaking terms, so people can probably guess what my comments took the shape of, generally speaking. So, I'm going to have to ask now; it's been, what, a year since you joined?Rick: Almost. I think it's been about eight months.Corey: Time during a pandemic is always strange. But I have to ask, did I steer you wrong?Rick: No. Definitely not. I'm very happy to be here. The opportunity to help such a broad range of companies get more value out of technology—and it's not just cost, right, like we talked about. It's actually not about the dollar number going down on a bill. It's about getting more value and moving the needle on how we efficiently use technology to solve business needs.And that's been my career goal for a really long time; I've been working on optimization for, like, seven or eight, I don't know, maybe even nine years now. And it's like this strange passion for me, this combination of my dad taught me how to be a really good steward of money and a great budget manager, and then my passion for technology. So, it's this really cool combination of, like, childhood life skills that really came together for me to create a career that I'm really passionate about. And this move to AWS has been such a tremendous way to supercharge my ability to scale my personal mission, and really align it to AWS's broader mission of helping companies achieve more with cloud platforms, right?And so, it's been a really nice eight months. It's been wild. Learning AWS culture has been wild. It's a sharply diverging culture from where I've been in the past, but it's also really cool to experience the leadership principles in action. They're not just things we put on a website; they're actually things people talk about every day [laugh]. And so, that journey has been humbling and a great learning opportunity as well.Corey: If people want to learn more, where's the best place to find you?Rick: Oh, yeah. Contact me on LinkedIn or Twitter. My Twitter account is @rickyo1138. Let me know if you get the 1138 reference. That's a fun one.Corey: THX 1138. Who doesn't?Rick: Yeah, there you go. And it's hidden in almost every single George Lucas movie as well. You can contact me on any of those social media platforms and I'd be happy to engage with anybody that's interested in optimization, cloud technology, billing, anything like that. Or even not [laugh]. Even anything else, either.Corey: Thank you so much for being so generous with your time. I really appreciate it.Rick: My pleasure, Corey. It was wonderful talking to you.Corey: Rick Ochs, Principal Product Manager at AWS. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment, rightly pointing out that while AWS is great and all, Azure is far more cost-effective for your workloads because, given their lack of security, it is trivially easy to just run your workloads in someone else's account.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. 
Stay humble.
About Sam: Sam Nicholls is Veeam's Director of Public Cloud Product Marketing, with 10+ years of sales, alliance management and product marketing experience in IT. Sam has evolved from his on-premises storage days and is now laser-focused on spreading the word about cloud-native backup and recovery, packing in thousands of viewers on his webinars, blogs and webpages. Links Referenced: Veeam AWS Backup: https://www.veeam.com/aws-backup.html Veeam: https://veeam.com Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: This episode is sponsored in part by our friends at Chronosphere. Tired of observability costs going up every year without getting additional value? Or being locked into a vendor due to proprietary data collection, querying and visualization? Modern-day containerized environments require a new kind of observability technology that accounts for the massive increase in scale and attendant cost of data. With Chronosphere, choose where and how your data is routed and stored, query it easily, and get better context and control. 100% open source compatibility means that no matter what your setup is, they can help. Learn how Chronosphere provides complete and real-time insight into ECS, EKS, and your microservices, wherever they may be, at snark.cloud/chronosphere. That's snark.cloud/chronosphere. Corey: This episode is brought to us by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out. Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted guest episode is brought to us by and sponsored by our friends over at Veeam. And as a part of that, they have thrown one of their own to the proverbial lion. My guest today is Sam Nicholls, Director of Public Cloud over at Veeam. Sam, thank you for joining me. Sam: Hey. Thanks for having me, Corey, and thanks for everyone joining and listening in. I do know that I've been thrown into the lion's den, and I am [laugh] hopefully well-prepared to answer anything and everything that Corey throws my way. Fingers crossed. [laugh]. Corey: I don't think there's too much room for criticizing here, to be direct. I mean, Veeam is a company that is solidly and thoroughly built around a problem that absolutely no one cares about. I mean, what could possibly be wrong with that? You do backups, which no one ever cares about. Restores, on the other hand, people care very much about restores. And that's when they learn, "Oh, I really should have cared about backups at any point prior to 20 minutes ago." Sam: Yeah, it's a great point. It's kind of like taxes and insurance.
It's almost like, you know, something that you have to do that you don't necessarily want to do, but when push comes to shove, and something's burning down, a file has been deleted, someone's made their way into your account and, you know, made a right mess within there, that's when you really, kind of, care about what you mentioned, which is the recovery piece, the speed of recovery, the reliability of recovery. Corey: It's been over a decade, and I'm still sore about losing my email archives from 2006 to 2009. There's no way to get it back. I ran my own mail server; it was an iPhone setting that said, "Oh, yeah, automatically delete everything in your trash folder—or archive folder—after 30 days." It was just a weird default setting back in that era. I didn't realize it was doing that. Yeah, painful stuff. And we learned the hard way in some of these cases. Not that I really have much need for email from that era of my life, but every once in a while it still bugs me. Which speaks to the point that the people who are the most fanatical about backing things up are the people who have been burned by not having a backup. And I'm fortunate in that it wasn't someone else's data with which I had been entrusted that really cemented that lesson for me. Sam: Yeah, yeah. It's a good point. I can remember a few years ago, my wife migrated a very aging, polycarbonate white Mac to one of the shiny new aluminum ones and thought everything was good—Corey: As the white polycarbonate Mac becomes yellow, then yeah, all right, you know, it's time to replace it. Yeah. So yeah, so she wiped the drive, and what happened? Sam: That was her moment where she learned the value and importance of backup, and she backs everything up now. I fortunately have never gone through it. But I'm employed by a backup vendor and that's why I care about it. But it's incredibly important to have, of course. Corey: Oh, yes. My spouse has many wonderful qualities, but one that drives me slightly nuts is she's something of a digital packrat where her hard drives on her laptop will periodically fill up. And I used to take the approach of oh, you can be more efficient and do the rest. And I realized no, telling other people they're doing it wrong is generally poor practice, whereas just buying bigger drives is way easier. Let's go ahead and do that. It's a small price to pay for domestic tranquility. And there's a lesson in that. We can map that almost perfectly to the corporate world where you folks tend to operate. You're not doing home backup, last time I checked; you are doing public cloud backup. Actually, I should ask that. Where do you folks start and where do you stop? Sam: Yeah, no, it's a great question. You know, we started over 15 years ago when virtualization, specifically VMware vSphere, was really the up-and-coming thing, and, you know, a lot of folks were there trying to utilize agents to protect their vSphere instances, just like they were doing with physical Windows and Linux boxes. And, you know, it kind of got the job done, but was it the best way of doing it? No. And that's kind of why Veeam was pioneered; it was this agentless backup, image-based backup for vSphere. And, of course, you know, in the last 15 years, we've seen lots of transitions, of course, we're here at Screaming in the Cloud, with you, Corey, so AWS, as well as a number of other public cloud vendors we can help protect, as well as a number of SaaS applications like Microsoft 365, metadata and data within Salesforce.
So, Veeam's really kind of come a long way from just virtual machines to really taking a global look at the entirety of modern environments, and how can we best protect each and every single one of those without trying to take a square peg and fit it in a round hole? Corey: It's a good question and a common one. We wind up with an awful lot of folks who are confused by the proliferation of data. And I'm one of them, let's be very clear here. It comes down to a problem where backups are a multifaceted, deep problem, and I don't think that people necessarily think of it that way. But I take a look at all of the different, even AWS services that I use for my various nonsense, and which ones can be used to store data? Well, all of them. Some of them, you have to hold it in a particularly wrong sort of way, but they all store data. And in various contexts, a lot of that data becomes very important. So, what service am I using, in which account am I using, and in what region am I using it, and you wind up with data sprawl, where it's a tremendous amount of data that you can generally only track down by looking at your bills at the end of the month. Okay, so what am I being charged, and for what service? That seems like a good place to start, but where is it getting backed up? How do you think about that? So, some people, I think, tend to ignore the problem, which we're seeing less and less, but other folks tend to go to the opposite extreme and we're just going to back up absolutely everything, and we're going to keep that data for the rest of our natural lives. It feels to me that there's probably an answer that is more appropriate somewhere nestled between those two extremes. Sam: Yeah, snapshot sprawl is a real thing, and it gets very, very expensive very, very quickly. You know, your snapshots of EC2 instances are stored on those attached EBS volumes. Five cents per gig per month doesn't sound like a lot, but when you're dealing with thousands of snapshots for thousands of machines, it gets out of hand very, very quickly. And you don't know when to delete them. Like you say, folks are just retaining them forever and dealing with this unfortunate bill shock. So, you know, where to start is automating the lifecycle of a snapshot, right, from its creation—how often do we want to be creating them—from the retention—how long do we want to keep these for—and where do we want to keep them because there are other storage services outside of just EBS volumes. And then, of course, the ultimate: deletion. And that's important even from a compliance perspective as well, right? You've got to retain data for a specific number of years, I think healthcare is like seven years, but then you've—Corey: And then not a day more. Sam: Yeah, and then not a day more because that puts you out of compliance, too. So, policy-based automation is your friend and we see a number of folks building these policies out: gold, silver, bronze tiers based on criticality of data, compliance, and really just kind of letting the machine do the rest. And you can focus on not babysitting backup. Corey: What was it that led to the rise of snapshots? Because back in my very early days, there was no such thing. We wound up using a bunch of servers stuffed in a rack somewhere and virtualization was not really in play, so we had file systems on physical disks. And how do you back that up?
Well, you have an agent of some sort that basically looks at all the files and according to some ruleset that it has, it copies them off somewhere else.It was slow, it was fraught, it had a whole bunch of logic that was pushed out to the very edge, and forget about restoring that data in a timely fashion or even validating a lot of those backups worked other than via checksum. And God help you if you had data that was constantly in the state of flux, where anything changing during the backup run would leave your backups in an inconsistent state. That on some level seems to have largely been solved by snapshots. But what's your take on it? You're a lot closer to this part of the world than I am.Sam: Yeah, snapshots, I think folks have turned to snapshots for the speed, the lack of impact that they have on production performance, and again, just the ease of accessibility. We have access to all different kinds of snapshots for EC2, RDS, EFS throughout the entirety of our AWS environment. So, I think the snapshots are kind of like the default go-to for folks. They can help deliver those very, very quick RPOs, especially in, for example, databases, like you were saying, that change very, very quickly and we all of a sudden are stranded with a crash-consistent backup or snapshot versus an application-consistent snapshot. And then they're also very, very quick to recover from.So, snapshots are very, very appealing, but they absolutely do have their limitations. And I think, you know, it's not a one or the other; it's that they've got to go hand-in-hand with something else. And typically, that is an image-based backup that is stored in a separate location to the snapshot because that snapshot is not independent of the disk that it is protecting.Corey: One of the challenges with snapshots is most of them are created in a copy-on-write sense. It takes basically an instant frozen point in time back—once upon a time when we ran MySQL databases on top of the NetApp Filer—which works surprisingly well—we would have a script that would automatically quiesce the database so that it would be in a consistent state, snapshot the file and then un-quiesce it, which took less than a second, start to finish. And that was awesome, but then you had this snapshot type of thing. It wasn't super portable, it needed to reference a previous snapshot in some cases, and AWS takes the same approach where the first snapshot it captures every block, then subsequent snapshots wind up only taking up as much size as there have been changes since the first snapshots. So, large quantities of data that generally don't get access to a whole lot have remarkably small, subsequent snapshot sizes.But that's not at all obvious from the outside, and looking at these things. They're not the most portable thing in the world. But it's definitely the direction that the industry has trended in. So, rather than having a cron job fire off an AWS API call to take snapshots of my volumes as a sort of the baseline approach that we all started with, what is the value proposition that you folks bring? And please don't say it's, “Well, cron jobs are hard and we have a friendlier interface for that.”Sam: [laugh]. 
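For reference, the cron-plus-API-call baseline Corey describes might look roughly like the following. This is a minimal sketch using boto3, not anything from the episode itself: the "backup" tag key, the seven-day retention window, and the region are illustrative assumptions.

```python
# Minimal sketch of the "cron job fires off an AWS API call" baseline.
# Assumes boto3 credentials are already configured; the tag key, retention
# window, and region are placeholders, not recommendations.
import datetime
import boto3

RETENTION_DAYS = 7
ec2 = boto3.client("ec2", region_name="us-east-1")

def snapshot_tagged_volumes():
    """Snapshot every EBS volume tagged backup=true."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:backup", "Values": ["true"]}]
    )["Volumes"]
    for vol in volumes:
        ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"cron backup of {vol['VolumeId']}",
            TagSpecifications=[{
                "ResourceType": "snapshot",
                "Tags": [{"Key": "created-by", "Value": "cron-baseline"}],
            }],
        )

def prune_old_snapshots():
    """Delete cron-created snapshots older than the retention window."""
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=RETENTION_DAYS)
    snapshots = ec2.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "tag:created-by", "Values": ["cron-baseline"]}],
    )["Snapshots"]
    for snap in snapshots:
        if snap["StartTime"] < cutoff:
            ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])

if __name__ == "__main__":
    snapshot_tagged_volumes()
    prune_old_snapshots()
```

It works, but much of what Sam goes on to describe is about what a script like this leaves out: retention tiers per workload, cheaper storage targets, and knowing when it is actually safe to delete.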
I think it's really starting to look at the proliferation of those snapshots, understanding what they're good at, and what they are good for within your environment—as previously mentioned, low RPOs, low RTOs, how quickly can I take a backup, how frequently can I take a backup, and more importantly, how quickly can I restore—but then looking at their limitations. So, I mentioned that they were not independent of that disk, so that certainly does introduce a single point of failure as well as being not so secure. We've kind of touched on the cost component of that as well. So, what Veeam can come in and do is then take an image-based backup of those snapshots, right—so you've got your initial snapshot and then your incremental ones—we'll take the backup from that snapshot, and then we'll start to store that elsewhere.And that is likely going to be in a different account. We can look at the Well-Architected Framework, AWS deeming accounts as a security boundary, so having that cross-account function is critically important so you don't have that single point of failure. Locking down with IAM roles is also incredibly important so we haven't just got a big wide open door between the two. But that data is then stored in a separate account—potentially in a separate region, maybe in the same region—Amazon S3 storage. And S3 has the wonderful benefit of being still relatively performant, so we can have quick recoveries, but it is much, much cheaper. You're dealing with 2.3 cents per gig per month, instead of—Corey: To start, and it goes down from there with sizeable volumes.Sam: Absolutely, yeah. You can go down to S3 Glacier, where you're looking at, I forget how many points and zeros and nines it is, but it's fractions of a cent per gig per month, but it's going to take you a couple of days to recover that da—Corey: Even infrequent access cuts that in half.Sam: Oh yeah.Corey: And let's be clear, these are snapshot backups; you probably should not be accessing them on a consistent, sustained basis.Sam: Well, exactly. And this is where it's kind of almost like having your cake and eating it as well. Compliance or regulatory mandates or corporate mandates are saying you must keep this data for this length of time. Keeping that—you know, let's just say it's three years' worth of snapshots in an EBS volume is going to be incredibly expensive. What's the likelihood of you needing to recover something from two years—actually, even two months ago? It's very, very small.So, the performance part of S3 is, you don't need to take it as much into consideration. Can you recover? Yes. Is it going to take a little bit longer? Absolutely. But it's going to help you meet those retention requirements while keeping your backup bill low, avoiding that bill shock, right, spending tens and tens of thousands every single month on snapshots. This is what I mean by kind of having your cake and eating it.Corey: I somewhat recently have had a client where EBS snapshots are one of the driving costs behind their bill. It is one of their largest single line items. And I want to be very clear here because if one of those people who listen to this and thinking, “Well, hang on. Wait, they're telling stories about us, even though they're not naming us by name?” Yeah, there were three of you in the last quarter.So, at that point, it becomes clear it is not about something that one individual company has done and more about an overall driving trend. 
I am personalizing it a little bit by referring to it as one company when there were three of you. This is a narrative device, not me breaking confidentiality. Disclaimer over. Now, when you talk to people about, "So, tell me why you've got 80 times more snapshots than you do EBS volumes?" The answer is, "Well, we wanted to back things up and we needed to get hourly backups to a point, then daily backups, then monthly, and so on and so forth. And when this was set up, there wasn't a great way to do this natively and we don't always necessarily know what we need versus what we don't. And the cost of us backing this up, well, you can see it on the bill. The cost of us deleting too much and needing it as soon as we do? Well, that cost is almost incalculable. So, this is the safe way to go." And they're not wrong in anything that they're saying. But the world has definitely evolved since then. Sam: Yeah, yeah. It's a really great point. Again, it just folds back into my whole having your cake and eating it conversation. Yes, you need to retain data; it gives you that kind of nice, warm, cozy feeling, it's a nice blanket on a winter's day that that data, irrespective of what happens, you're going to have something to recover from. But the question is does that need to be living on an EBS volume as a snapshot? Why can't it be living on much, much more cost-effective storage that's going to give you the warm and fuzzies, but is going to make your finance team much, much happier [laugh]. Corey: One of the inherent challenges I think people have is that snapshots by themselves are almost worthless, in that I have an EBS snapshot, it is sitting there now, it's costing me an undetermined amount of money because it's not exactly clear on a per snapshot basis exactly how large it is, and okay, great. Well, I'm looking for a file that was not modified since X date, as it was on this time. Well, great, you're going to have to take that snapshot, restore it to a volume and then go exploring by hand. Oh, it was the wrong one. Great. Try it again, with a different one. And after, like, the fifth or sixth in a row, you start doing a binary search approach on this thing. But it's expensive, it's time-consuming, it takes forever, and it's not a fun user experience at all. Part of the problem is it seems that historically, backup systems have no context or no contextual awareness whatsoever around what is actually contained within that backup. Sam: Yeah, yeah. I mean, you kind of highlighted two of the steps. It's more like a ten-step process to do, you know, granular file or folder-level recovery from a snapshot, right? You've got to, like you say, you've got to determine the point in time when that, you know, you knew the last time that it was around, then you're going to have to determine the volume size, the region, the OS, you're going to have to create an EBS volume of the same size, region, from that snapshot, create the EC2 instance with the same OS, connect the two together, boot the EC2 instance, mount the volume, search for the files to restore, download them manually, at which point you have your file back. It's not back in the machine where it was, it's now been downloaded locally to whatever machine you're accessing that from. And then you got to tear it all down. And that is again, like you say, predicated on the fact that you knew exactly that that was the right time. It might not be and then you have to start from scratch from a different point in time.
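To make that ten-step process concrete, here is roughly what the manual path looks like as API calls. It is a sketch only: the snapshot ID, instance ID, availability zone, and device names are placeholders, and the actual mount-and-search step still has to happen by hand on the recovery instance.

```python
# Rough sketch of the manual file-level restore path described above.
# Every identifier here (snapshot, instance, AZ, device) is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

SNAPSHOT_ID = "snap-0123456789abcdef0"      # the point in time you hope is the right one
RECOVERY_INSTANCE = "i-0123456789abcdef0"   # a scratch instance in the same AZ
AVAILABILITY_ZONE = "us-east-1a"

# 1. Create a volume from the snapshot in the same AZ as the recovery instance.
volume_id = ec2.create_volume(
    SnapshotId=SNAPSHOT_ID, AvailabilityZone=AVAILABILITY_ZONE
)["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

# 2. Attach it to the recovery instance.
ec2.attach_volume(VolumeId=volume_id, InstanceId=RECOVERY_INSTANCE, Device="/dev/sdf")
ec2.get_waiter("volume_in_use").wait(VolumeIds=[volume_id])

# 3. On the instance itself (not shown here): mount the device, hunt for the
#    file, and copy it somewhere safe, e.g. over SSH:
#        sudo mount /dev/xvdf1 /mnt/restore && cp /mnt/restore/path/to/file ~/
#    If it turns out to be the wrong snapshot, tear down and start again.

# 4. Tear everything down so it stops costing money.
ec2.detach_volume(VolumeId=volume_id)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.delete_volume(VolumeId=volume_id)
```

The tooling Sam describes next exists largely to collapse those steps behind a search box.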
So, backup tooling from backup vendors that have been doing this for many, many years, knew about this problem long, long ago, and really seek to not only automate the entirety of that process but make the whole e-discovery, the search, the location of those files, much, much easier. I don't necessarily want to do a vendor pitch, but I will say with Veeam, we have explorer-like functionality, whereby it's just a simple web browser. Once that machine is all spun up again, automatic process, you can just search for your individual file, folder, locate it, you can download it locally, you can inject it back into the instance where it was through Amazon Kinesis or AWS Kinesis—I forget the right terminology for it; some of it's AWS, some of it's Amazon. But by-the-by, the whole recovery process, especially from a file or folder level, is much more pain-free, but also much faster. And that's ultimately what people care about: how reliable is my backup? How quickly can I get stuff online? Because the time that I'm down is costing me an indescribable amount of time or money. Corey: This episode is sponsored in part by our friends at Redis, the company behind the incredibly popular open source database. If you're tired of managing open source Redis on your own, or if you are looking to go beyond just caching and unlocking your data's full potential, these folks have you covered. Redis Enterprise is the go-to managed Redis service that allows you to reimagine how your geo-distributed applications process, deliver, and store data. To learn more from the experts in Redis how to be real-time, right now, from anywhere, visit redis.com/duckbill. That's R - E - D - I - S dot com slash duckbill. Corey: Right, the idea of RPO versus RTO: recovery point objective and recovery time objective. With an RPO, it's great, disaster strikes right now, how long is it acceptable to have been since the last time we backed up data to a restorable point? Sometimes it's measured in minutes, sometimes it's measured in fractions of a second. It really depends on what we're talking about. Payments databases, that needs to be—the RPO basically asymptotically approaches zero. The RTO is, okay, how long is acceptable before we have that data restored and are back up and running? And that is almost always a longer time, but not always. And there's a different series of trade-offs that go into that. But both of those also presuppose that you've already dealt with the existential question of is it possible for us to recover this data. And that's where I know that you are obviously—you have a position on this that is informed by where you work, but I don't, and I will call this out as what I see in the industry: AWS Backup is compelling to me except for one fatal flaw that it has, and that is it starts and stops with AWS. I am not a proponent of multi-cloud. Lord knows I've gotten flack for that position a bunch of times, but the one area where it makes absolute sense to me is backups. Have your data in a rehydrate-the-business level state backed up somewhere that is not your primary cloud provider because you're otherwise single point of failure-ing through a company, through the payment instrument you have on file with that company, in the blast radius of someone who can successfully impersonate you to that vendor. There has to be a gap of some sort for the truly business-critical data. Yes, egress to other providers is expensive, but you know what also is expensive? Irrevocably losing the data that powers your business.
Is it likely? No, but I would much rather do it than have to justify why I'm not doing it.Sam: Yeah. Wasn't likely that I was going to win that 2 billion or 2.1 billion on the Powerball, but [laugh] I still play [laugh]. But I understand your standpoint on multi-cloud and I read your newsletters and understand where you're coming from, but I think the reality is that we do live in at least a hybrid cloud world, if not multi-cloud. The number of organizations that are sole-sourced on a single cloud and nothing else is relatively small, single-digit percentage. It's around 80-some percent that are hybrid, and the remainder of them are your favorite: multi-cloud.But again, having something that is one hundred percent sole-source on a single platform or a single vendor does expose you to a certain degree of risk. So, having the ability to do cross-platform backups, recoveries, migrations, for whatever reason, right, because it might not just be a disaster like you'd mentioned, it might also just be… I don't know, the company has been taken over and all of a sudden, the preference is now towards another cloud provider and I want you to refactor and re-architect everything for this other cloud provider. If all that data is locked into one platform, that's going to make your job very, very difficult. So, we mentioned at the beginning of the call, Veeam is capable of protecting a vast number of heterogeneous workloads on different platforms, in different environments, on-premises, in multiple different clouds, but the other key piece is that we always use the same backup file format. And why that's key is because it enables portability.If I have backups of EC2 instances that are stored in S3, I could copy those onto on-premises disk, I could copy those into Azure, I could do the same with my Azure VMs and store those on S3, or again, on-premises disk, and any other endless combination that goes with that. And it's really kind of centered around, like control and ownership of your data. We are not prescriptive by any means. Like, you do what is best for your organization. We just want to provide you with the toolset that enables you to do that without steering you one direction or the other with fee structures, disparate feature sets, whatever it might be.Corey: One of the big challenges that I keep seeing across the board is just a lack of awareness of what the data that matters is, where you see people backing up endless fleets of web server instances that are auto-scaled into existence and then removed, but you can create those things at will; why do you care about the actual data that's on these things? It winds up almost at the library management problem, on some level. And in that scenario, snapshots are almost certainly the wrong answer. One thing that I saw previously that really changed my way of thinking about this was back many years ago when I was working at a startup that had just started using GitHub and they were paying for a third-party service that wound up backing up Git repos. Today, that makes a lot more sense because you have a bunch of other stuff on GitHub that goes well beyond the stuff contained within Git, but at the time, it was silly. It was, why do that? Every Git clone is a full copy of the entire repository history. Just grab it off some developer's laptop somewhere.It's like, “Really? 
You want to bet the company, slash your job, slash everyone else's job on that being feasible and doable or do you want to spend the 39 bucks a month or whatever it was to wind up getting that out the door now so we don't have to think about it, and they validate that it works?” And that was really a shift in my way of thinking because, yeah, backing up things can get expensive when you have multiple copies of the data living in different places, but what's really expensive is not having a company anymore.Sam: Yeah, yeah, absolutely. We can tie it back to my insurance dynamic earlier where, you know, it's something that you know that you have to have, but you don't necessarily want to pay for it. Well, you know, just like with insurances, there's multiple different ways to go about recovering your data and it's only in crunch time, do you really care about what it is that you've been paying for, right, when it comes to backup?Could you get your backup through a git clone? Absolutely. Could you get your data back—how long is that going to take you? How painful is that going to be? What's going to be the impact to the business where you're trying to figure that out versus, like you say, the 39 bucks a month, a year, or whatever it might be to have something purpose-built for that, that is going to make the recovery process as quick and painless as possible and just get things back up online.Corey: I am not a big fan of the fear, uncertainty, and doubt approach, but I do practice what I preach here in that yeah, there is a real fear against data loss. It's not, “People are coming to get you, so you absolutely have to buy whatever it is I'm selling,” but it is something you absolutely have to think about. My core consulting proposition is that I optimize the AWS bill. And sometimes that means spending more. Okay, that one S3 bucket is extremely important to you and you say you can't sustain the loss of it ever so one zone is not an option. Where is it being backed up? Oh, it's not? Yeah, I suggest you spend more money and back that thing up if it's as irreplaceable as you say. It's about doing the right thing.Sam: Yeah, yeah, it's interesting, and it's going to be hard for you to prove the value of doing that when you are driving their bill up when you're trying to bring it down. But again, you have to look at something that's not itemized on that bill, which is going to be the impact of downtime. I'm not going to pretend to try and recall the exact figures because it also varies depending on your business, your industry, the size, but the impact of downtime is massive financially. Tens of thousands of dollars for small organizations per hour, millions and millions of dollars per hour for much larger organizations. The backup component of that is relatively small in comparison, so having something that is purpose-built, and is going to protect your data and help mitigate that impact of downtime.Because that's ultimately what you're trying to protect against. It is the recovery piece that you're buying is the most important piece. And like you, I would say, at least be cognizant of it and evaluate your options and what can you live with and what can you live without.Corey: That's the big burning question that I think a lot of people do not have a good answer to. And when you don't have an answer, you either backup everything or nothing. And I'm not a big fan of doing either of those things blindly.Sam: Yeah, absolutely. 
And I think this is why we see varying backup options as well, you know? You're not going to try and apply the same data protection policies to each and every single workload within your environment because they've all got different types of workload criticality. And like you say, some of them might not even need to be backed up at all, just because they don't have data that needs to be protected. So, you need something that is going to be able to be flexible enough to apply across the entirety of your environment, protect it with the right policy, in terms of how frequently do you protect it, where do you store it, how often, or when are you eventually going to delete that and apply that on a workload by workload basis. And this is where the joy of things like tags come into play as well. Corey: One last thing I want to bring up is that I'm a big fan of watching for companies saying the quiet part out loud. And one area in which they do this—because they're forced to by brevity—is in the title tag of their website. I pull up veeam.com and I hover over the tab in my browser, and it says, "Veeam Software: Modern Data Protection." And I want to call that out because you're not framing it explicitly as backup. So, the last topic I want to get into is the idea of security. Because I think it is not fully appreciated on a lived-experience basis—although people will of course agree to this when they're having ivory tower whiteboard discussions—that every place your data lives is a potential for a security breach to happen. So, you want to have your data living in a bunch of places ideally, for backup and resiliency purposes. But you also want it to be completely unworkable or illegible to anyone who is not authorized to have access to it. How do you balance those trade-offs yourself given that what you're fundamentally saying is, "Trust us with your Holy of Holies when it comes to things that power your entire business?" I mean, I can barely get some companies to agree to show me their AWS bill, let alone this is the data that contains all of this stuff to destroy our company. Sam: Yeah. Yeah, it's a great question. Before I explicitly answer that piece, I will just say that modern data protection does absolutely have a security component to it, and I think that backup absolutely needs to be a—I'm going to say this in air quotes—a "first class citizen" of any security strategy. I think when people think about security, their mind goes to the preventative, like how do we keep these bad people out? This is going to be a bit of the FUD that you love, but ultimately, the bad guys on the outside have an infinite number of attempts to get into your environment and only have to be right once to get in and start wreaking havoc. You on the other hand, as the good guy with your cape and whatnot, you have got to be right each and every single one of those times. And we as humans are fallible, right? None of us are perfect, and it's incredibly difficult to defend against these ever-evolving, more complex attacks. So backup, if someone does get in, having a clean, verifiable, recoverable backup, is really going to be the only thing that is going to save your organization, should that actually happen. And what's key to a secure backup? I would say separation, isolation of backup data from the production data, I would say utilizing things like immutability, so in AWS, we've got Amazon S3 Object Lock, so it's that write once, read many state for whatever retention period that you put on it.
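As a concrete illustration of that write once, read many behavior, a minimal sketch of S3 Object Lock on a backup object might look like this. The bucket name, object key, compliance mode, and one-year retention are illustrative assumptions rather than a recommended or vendor-specific configuration.

```python
# Minimal sketch of S3 Object Lock applied to a backup object.
# Bucket name, key, lock mode, and retention period are all assumptions.
import datetime
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
BUCKET = "example-backup-vault"  # hypothetical bucket name

# Object Lock has to be enabled at bucket creation; it cannot be added later.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Write a backup object that cannot be overwritten or deleted, even with
# valid credentials, until the retain-until date passes.
retain_until = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=365)
with open("backup-archive.tar.gz", "rb") as backup_file:  # placeholder backup artifact
    s3.put_object(
        Bucket=BUCKET,
        Key="backups/backup-archive.tar.gz",
        Body=backup_file,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
        ChecksumAlgorithm="SHA256",          # Object Lock puts require an integrity checksum
        ServerSideEncryption="aws:kms",      # encrypt at rest with a KMS-managed key
    )
```

With a compliance-mode lock in place, even credentials that can read the object cannot shorten the retention or delete that version until the date passes.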
So, the data that they're seeking to encrypt, whether it's in production or in their backup, they cannot encrypt it. And then the other piece that I think is becoming more and more into play, and it's almost table stakes is encryption, right? And we can utilize things like AWS KMS for that encryption.But that's there to help defend against the exfiltration attempts. Because these bad guys are realizing, “Hey, people aren't paying me my ransom because they're just recovering from a clean backup, so now I'm going to take that backup data, I'm going to leak the personally identifiable information, trade secrets, or whatever on the internet, and that's going to put them in breach compliance and give them a hefty fine that way unless they pay me my ransom.” So encryption, so they can't read that data. So, not only can they not change it, but they can't read it is equally important. So, I would say those are the three big things for me on what's needed for backup to make sure it is clean and recoverable.Corey: I think that is one of those areas where people need to put additional levels of thought in. I think that if you have access to the production environment and have full administrative rights throughout it, you should definitionally not—at least with that account and ideally not you at all personally—have access to alter the backups. Full stop. I would say, on some level, there should not be the ability to alter backups for some particular workloads, the idea being that if you get hit with a ransomware infection, it's pretty bad, let's be clear, but if you can get all of your data back, it's more of an annoyance than it is, again, the existential business crisis that becomes something that redefines you as a company if you still are a company.Sam: Yeah. Yeah, I mean, we can turn to a number of organizations. Code Spaces always springs to mind for me, I love Code Spaces. It was kind of one of those precursors to—Corey: It's amazing.Sam: Yeah, but they were running on AWS and they had everything, production and backups, all stored in one account. Got into the account. “We're going to delete your data if you don't pay us this ransom.” They were like, “Well, we're not paying you the ransoms. We got backups.” Well, they deleted those, too. And, you know, unfortunately, Code Spaces isn't around anymore. But it really kind of goes to show just the importance of at least logically separating your data across different accounts and not having that god-like access to absolutely everything.Corey: Yeah, when you talked about Code Spaces, I was in [unintelligible 00:32:29] talking about GitHub Codespaces specifically, where they have their developer workstations in the cloud. They're still very much around, at least last time I saw unless you know something I don't.Sam: Precursor to that. I can send you the link—Corey: Oh oh—Sam: You can share it with the listeners.Corey: Oh, yes, please do. I'd love to see that.Sam: Yeah. Yeah, absolutely.Corey: And it's been a long and strange time in this industry. Speaking of links for the show notes, I appreciate you're spending so much time with me. Where can people go to learn more?Sam: Yeah, absolutely. I think veeam.com is kind of the first place that people gravitate towards. Me personally, I'm kind of like a hands-on learning kind of guy, so we always make free product available.And then you can find that on the AWS Marketplace. Simply search ‘Veeam' through there. A number of free products; we don't put time limits on it, we don't put feature limitations. 
You can backup ten instances, including your VPCs, which we actually didn't talk about today, but I do think is important. But I won't waste any more time on that.Corey: Oh, configuration of these things is critically important. If you don't know how everything was structured and built out, you're basically trying to re-architect from first principles based upon archaeology.Sam: Yeah [laugh], that's a real pain. So, we can help protect those VPCs and we actually don't put any limitations on the number of VPCs that you can protect; it's always free. So, if you're going to use it for anything, use it for that. But hands-on, marketplace, if you want more documentation, want to learn more, want to speak to someone veeam.com is the place to go.Corey: And we will, of course, include that in the show notes. Thank you so much for taking so much time to speak with me today. It's appreciated.Sam: Thank you, Corey, and thanks for all the listeners tuning in today.Corey: Sam Nicholls, Director of Public Cloud at Veeam. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry insulting comment that takes you two hours to type out but then you lose it because you forgot to back it up.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
In this new series, Founder Fables, we talk with Martin Mao, the co-founder and CEO of cloud monitoring firm @Chronosphere. In the series we talk with startup founders whose businesses deserve watching and who have interesting thoughts on building a successful venture during this unique time. Martin started the business at the start of Covid, following engineering roles with @Uber, @AWS, @Microsoft and @Google (a nice pedigree, as one might say). In this episode we dive into his view of big-company experience and the move to a startup, along with the challenges of raising funding and driving a startup in challenging economic times, competing with giants, and leading a company from the engineering side of the house.
Martin Mao is the co-founder and CEO of Chronosphere, the provider of the leading observability platform for cloud-native environments. He previously held roles at leading tech companies including Uber, Microsoft, Google and Amazon. Top 3 Value Bombs: 1. Luck and timing play a significant role in your success. 2. More industries are now adopting the same technology, and Chronosphere is there to provide better solutions. 3. Be ready when an opportunity presents itself. Take a step back, look at more significant trends in your economy and industry, and be prepared to capitalize on these opportunities. Visit the Chronosphere website and take back control of observability. Sponsor: HubSpot: A platform that's easy for your entire team to use! Learn how HubSpot can make it easier for your business to grow better at HubSpot.com!
Bronze President shows both enduring interests and adaptability. Iranian threat actor activity is reported. Cybersecurity and small-to-medium businesses. An initial access broker repurposes Conti's old playbook for use against Ukraine. Johannes Ullrich from SANS on Scanning for VoIP Servers. Our guest is Ian Smith from Chronosphere on observability. And Kyivstar as a case study in telco resiliency. For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/11/173 Selected reading. BRONZE PRESIDENT Targets Government Officials (Secureworks) APT42: Crooked Charms, Cons, and Compromises (Mandiant) Profiling DEV-0270: PHOSPHORUS' ransomware operations (Microsoft) Albania cuts diplomatic ties with Iran over July cyberattack (The Washington Post) Initial access broker repurposing techniques in targeted attacks against Ukraine (Google) Unprecedented Shift: The Trickbot Group is Systematically Attacking Ukraine (IBM SecurityIntelligence) Ransomware gang's Cobalt Strike servers DDoSed with anti-Russia messages (BleepingComputer) Ukraine's largest telecom stands against Russian cyberattacks (POLITICO)