Podcasts about duffie

  • 52 PODCASTS
  • 73 EPISODES
  • 45m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • May 28, 2025 LATEST

POPULARITY (2017–2024)


Best podcasts about duffie

Latest podcast episodes about duffie

ABA Inside Track
Episode 311 - Practicing Within a School Context w/ John Staubitz

May 28, 2025 · 79:57


As we come to the end of another school year in the US, we take a moment to ponder the question: What is it that a BCBA is supposed to do when they work in a school context? And to help us answer that question, we've enlisted the help of the other half of one of our favorite behavior analyst duos, John Staubitz! John walks us through all the stuff about comprehending the school ecology that you didn't learn in grad school (unless you went to teacher grad school), like the laws and regulations that mandate policies and the do's and don'ts of providing services to students. If you haven't set foot in a public school since your old principal handed you a diploma, you're about to take the first step into a larger world. This episode is available for 1.0 LEARNING CEU.

Articles discussed this episode:

Stevenson, B.S. & Correa, V.I. (2019). Applied behavior analysis, students with autism, and the requirement to provide a free appropriate public education. Journal of Disability Policy Studies, 29, 206-215. doi: 10.1177/1044207318799644

Stevenson, B., Bethune, K., & Gardner, R. (2024). Still left behind: How behavior analysts can improve children's access, equity, and inclusion to their entitled education. Behavior Analysis in Practice. doi: 10.1007/s40617-024-00992-4

Copeland, S.R., Duffie, P., & Maez, R. (2025). Preparation of behavior analysts for school-based practice. Behavior Analysis in Practice. doi: 10.1007/s40617-024-01028-7

If you're interested in ordering CEs for listening to this episode, click here to go to the store page. You'll need to enter your name, BCBA #, and the two episode secret code words to complete the purchase. Email us at abainsidetrack@gmail.com for further assistance.

ABA Inside Track
May 2025 Preview

May 7, 2025 · 18:22


While Jackie's away (and stuck in an elevator), Rob and Diana will play…podcast hosting duties for the month. This month, last spring's Book Club choice, "Activity Schedules for Children with Autism," gets released to the free feed (with free CEs for Patreon subscribers!) along with new episodes on preference assessments and practicing within a school context. And speaking of pairs of awesome behavior analysts, special guest John Staubitz gives us the rundown on special education laws and what BCBAs really need to know about the scope of schoolwork. Now's the time on the podcast when we dance!

Articles for May 2025 (UNLOCKED)

Activity Schedules for Children with Autism Book Club

McClannahan, L.E. & Krantz, P.J. (1999). Activity schedules for children with autism: Teaching independent behavior. Woodbine House.

McClannahan, L.E. & Krantz, P.J. (2010). Activity schedules for children with autism: Teaching independent behavior (2nd ed.). Woodbine House.

Rehfeldt, R.A. (2002). A review of McClannahan and Krantz's "Activity schedules for children with autism: Teaching independent behavior": Toward the inclusion and integration of children with disabilities. The Behavior Analyst, 25, 103-108. doi: 10.1007/BF03392048

Pictorial and Video-Based Preference Assessments

Heinicke, M.R., Carr, J.E., Pence, S.T., Zias, D.R., Valentino, A.L., & Falligant, J.M. (2016). Assessing the efficacy of pictorial preference assessments for children with developmental disabilities. Journal of Applied Behavior Analysis, 49, 848-868. doi: 10.1002/jaba.342

Brodhead, M.T., Al-Dubayan, M.N., Mates, M., Abel, E.A., & Brouwers, L. (2016). An evaluation of a brief video-based multiple-stimulus without replacement preference assessment. Behavior Analysis in Practice, 9, 160-164. doi: 10.1007/s40617-015-0081-0

Wolfe, K., Kunnavatana, S.S., & Shoemaker, A.M. (2018). An investigation of a video-based preference assessment of social interactions. Behavior Modification, 42, 729-746. doi: 10.1177/0145445517731062

Practicing Within a School Context w/ John Staubitz

Stevenson, B.S. & Correa, V.I. (2019). Applied behavior analysis, students with autism, and the requirement to provide a free appropriate public education. Journal of Disability Policy Studies, 29, 206-215. doi: 10.1177/1044207318799644

Stevenson, B., Bethune, K., & Gardner, R. (2024). Still left behind: How behavior analysts can improve children's access, equity, and inclusion to their entitled education. Behavior Analysis in Practice. doi: 10.1007/s40617-024-00992-4

Copeland, S.R., Duffie, P., & Maez, R. (2025). Preparation of behavior analysts for school-based practice. Behavior Analysis in Practice. doi: 10.1007/s40617-024-01028-7

Safety With Purpose Podcast
Safeonomics Season 3 | Episode 2 with Chip Duffie

Mar 19, 2025 · 42:52


✅ Join the Safeopedia Community at https://members.safeopedia.com

Navigating EHS in a Shifting Regulatory Landscape with Chip Duffie

In this episode of Safeonomics, we sit down with Chip Duffie to explore how political shifts impact environmental, health, and safety (EHS) regulations, and what that means for safety professionals. With the change of administration, EHS enforcement may face major changes. Will deregulation empower companies to take control of safety, or will it create new risks and uncertainties? Chip shares his insights on what businesses should expect, how safety professionals can stay ahead, and whether less enforcement makes EHS professionals more valuable or more expendable. We tackle big questions, including:

Couch Potato Radio
Bison CB Jailen Duffie

Oct 10, 2024 · 4:21


Derek talks with Duffie about the win over UND and the upcoming game vs. Southern Illinois. See omnystudio.com/listener for privacy information.

Bison Central
NDSU CB Jailen Duffie with Derek Hanson

Oct 10, 2024 · 4:21


Derek talks with redshirt freshman Jailen Duffie about his great performance against UND and previews the match-up on the road vs. Southern Illinois. See omnystudio.com/listener for privacy information.

If/Then: Research findings to help us navigate complex issues in business, leadership, and society
Cashless: Is Digital Currency the Future of Finance? With Darrell Duffie

Apr 17, 2024 · 19:07


Digital currency, whether privately developed or government-issued, seems like an inevitability to Stanford Graduate School of Business finance professor Darrell Duffie. "Virtually all countries are exploring a central bank digital currency for potential use," he says. An expert on banking, financial market infrastructure, and fintech payments, Duffie is interested in how central bank digital currencies (CBDCs) could revolutionize economies around the world. The shift to a digital version of a fiat currency, still backed by a country's central bank, could offer significant benefits compared to the current financial system. These include improved financial inclusion, lower cross-border payment costs, and more timely and secure transaction processing. The key, Duffie says, is striking the right regulatory balance to foster innovation while mitigating risks. As this episode of If/Then explores, if the U.S. wants to future-proof banking, then a digital dollar could be a solution.

Key Takeaways:

The benefits of central bank digital currencies: As digital versions of a country's fiat currency, backed by its central bank, CBDCs could provide advantages over the current financial system. These include improved financial inclusion, lower cross-border payment costs, and more timely and secure transaction processing.

Challenges could be ahead: Duffie sees two major impediments: privacy concerns and the potential impact on the U.S. dollar's global dominance.

The U.S. dollar's reserve currency status is secure for now: China's development of a "digital renminbi" raises questions about the dollar's dominance. Even so, Duffie believes the U.S. currency will maintain its position as the world's reserve currency for decades to come.

Regulation will be crucial: Duffie says the U.S. lags behind other countries in establishing clear rules for cryptocurrencies and digital assets. Finding the right regulatory balance is critical if we're going to foster innovation while mitigating risks.

More Resources:

Darrell Duffie, The Adams Distinguished Professor of Management and Professor of Finance

Capitol Gains: GSB Professors Share Their Expertise in DC and Beyond

If/Then is a podcast from Stanford Graduate School of Business that examines research findings that can help us navigate the complex issues we face in business, leadership, and society. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

VOICE for Mount Pleasant
Why Vote Clay Duffie for Mount Pleasant Waterworks Commissioner on Nov 7?

Oct 30, 2023 · 21:49


Answering the question of why you should vote Clay Duffie for Mount Pleasant Waterworks Commissioner on Nov 7, Clay Duffie speaks with host Brian Cleary about his candidacy for the only open seat on the Mount Pleasant Public Waterworks Commission. Duffie was the General Manager of Mount Pleasant Public Waterworks for 32 years, so he knows about water, the commission, the community, and its customers. Duffie retired from daily operations a couple of years ago but feels his knowledge, experience, and network make this a good opportunity to give back to the community. He says the State Water Plan will shape the state's water resources for the next 50 years; having served as an Advisory Member for the past two-and-a-half years, he'd like to see it through to adoption. A sustainable, long-term source of water is obviously very important, he says, as is keeping that water affordable.

The King-dom
The Voices of Alben Duffie and Michael Swerdlow

Oct 27, 2023 · 29:32


Today, we welcome affordable housing developers Alben Duffie & Michael Swerdlow who are responsible for the Block 55 mixed-use development in Overtown. The property features 578 high-quality apartments for low-income seniors on top of 250,000 square feet of retail space and a parking garage. The entire retail space has been completely leased to some exciting retailers. I'm thrilled to see developments of this nature that continue to revive Overtown as the destination it was once known to be.

Concrete Genius Media
Coach Kyle Duffie Talks BMore/DMV Hoop, Coaching Jarace Walker..More

Aug 14, 2023 · 29:17


Join us as Coach McDuffie discusses the Baltimore hoop scene, coaching a young Jarace Walker, Team Thrill, Team Melo, battles vs. Indiana teams, developing our sons, and much more. Please follow this page and follow me on all socials.

Problem Solved: The IISE Podcast
#IISEAnnual2023 Podcast Break — Holden Duffie, Clemson University

May 23, 2023 · 4:27


SPONSORED BY COLUMBIA UNIVERSITY'S MASTER OF SCIENCE IN CONSTRUCTION ADMINISTRATION. Holden Duffie, Clemson University, spoke with IISE's Frank Reddy at the Hyatt Regency in New Orleans, site of #IISEAnnual2023. Duffie discussed what he's enjoyed so far about the conference as well as his poster presentation. Learn more about our sponsor at: https://sps.columbia.edu/iise

The Business Savvy Singer Podcast
Meet Duffie Adelson

May 22, 2023 · 23:23


Learn more about Duffie by visiting her LinkedIn. Interested in growing your music career this year? Check out our coaching classes on The Private Music Studio. Visit our blog. Follow The Business Savvy Singer private Facebook page.

Follow us on social media:
Instagram: @gretapope
Facebook: The Private Music Studio
Twitter: @gretapope

The Business Savvy Singer Podcast is sponsored by:
Private Music Studio
Greta Pope Entertainment
Eternal Wolf Music
Performance Ear Training
Edward W. Wimp, Esq.

Billfish Radio
The BILLFISHER, Catching 57 White Marlin In a Day Ft. Jon Duffie - State of Sportfishing Podcast EP112

May 2, 2023 · 53:34


In today's podcast, we re-upload a conversation we had last year with Jon Duffie to appreciate his legendary moment of catching 57 white marlin in a day (actually a miscalculation: it was 58 white marlin in a day) and his story of growing up on the docks in Ocean City, Maryland. If you have any requests or awesome guests for the podcast, let us know via: podcast@billfish.site If you like the podcast, please support us by checking out some of our latest products, especially the Performance Pants. Check out our gear at https://billfishgear.com/?utm_source=... Billfish Group specializes in enhancing human outdoor performance through technical products. We create products, in all forms, which enhance the outdoor experience both on and off the water. We felt there was a need for true performance wear; as the elements get harsher, it's up to us to become better. Billfish was created by fishermen, for fishermen, with the goal of building a community of outdoors enthusiasts around the globe. We do this through engaging with our community on social media platforms and IRL events.

Sportstalk with D'Arcy Waldegrave
Matt Duffie: Storm pathways manager on why he's rooting against the Warriors at tomorrow's annual clash

Apr 24, 2023 · 18:15


Tomorrow night, the Warriors play their traditional ANZAC Day fixture against the Melbourne Storm. Sadly, it's almost become a tradition for the Warriors to lose this particular game every year, so fans here will be hoping to see that trend bucked. One Kiwi who will be rooting for a Storm victory is their pathways manager Matt Duffie; he spoke to Jason Pine on Sportstalk about the game and his role. See omnystudio.com/listener for privacy information.

Asleep at Work
Phil Duffie - Miami Dolphins

Feb 22, 2023 · 36:18


Hosted on Acast. See acast.com/privacy for more information.

Afternoons with Staffy
Matt Duffie | Former Kiwi and All Black

Feb 16, 2023 · 7:05


Stephen catches up with former Kiwi and All Black Matt Duffie, who is now the pathways coach for his old NRL club, the Melbourne Storm. The Storm have a pre-season match against the NZ Warriors this Sunday in Christchurch. Learn more about your ad choices. Visit megaphone.fm/adchoices

Finding Your Voice After 40
S1 Ep 15: "Creating Paths to Joy & Success" Interview with Kendall Duffie, Music Producer/Radio Promotion Executive/Vegan Chef

Dec 15, 2022 · 48:24


In this week's Art Voices Matter episode, listeners will learn more about our special guest, Kendall Duffie, who is a music producer, radio promotion executive, and vegan chef. Our discussion explores the many ways you can create joy in your life while still living with purpose after 40. Kendall shares his extraordinary journey in the music industry as well as his recent incredible venture into vegan culinary arts and how that has impacted his sense of self-care. To learn more about Kendall, visit www.deepseavegan.com and follow on IG @deepseavegan and @kloud9. To learn more about Finding Your Voice coaching services, retreats, and memberships, and to connect with Kenya for a free discovery call in order to THRIVE and begin your personal transformation, visit www.findingyourvoiceafter40.com. To listen to future bonus episodes, receive self-care newsletters, and access upcoming events including virtual mini-retreats, join our Patreon membership at any tier level at patreon.com/findingyourvoice. Follow Kenya on social media using @kenyamjmusic for Instagram, Facebook, Twitter, and YouTube. For other services, this week's events and information, including Kenya's music, visit https://linktr.ee/kenyamjmusic *Podcast music written, performed, and co-produced by KENYA; song title: "Starting Over (Reprise)". The full-length version of "Starting Over" is available on all major streaming platforms including Spotify, Apple Music, and Google Play. Finding Your Voice After 40 Spotify Playlist coming soon! --- Send in a voice message: https://anchor.fm/findingyourvoiceafter40/message

What a Lad
Matt Duffie- What a Lad

Nov 6, 2022 · 96:09 · Very Popular


Our first ever dual-code international has jumped on What a Lad in the form of one of the greats, Matt Duffie! Matt started off as a teenager with the Melbourne Storm, playing with the big 4: Greg Inglis, Cam Smith, Cooper Cronk, and Billy Slater! He then had the chance to represent the Kiwis before being hammered by injuries, which eventually saw him switch codes. After a rough start in Super Rugby with the Blues, spending a lot of the season playing club rugby, he worked hard, learnt the game, found his form, and went on to be one of the best outside backs in the country, which saw him named as an All Black. The Matt Duffie story, like so many careers, is full of highs and lows, but it's something he wouldn't change at all, shaping him into the lad he is today! Such a good episode, with heaps of great insights and tips on becoming a professional rugby player or athlete. If you enjoy this episode, please give it a share! If you're interested in more information about O-Studio from the legend Tim Bateman, then click here: O-Studio. If you enjoy your racing, go and give champion trainer Regan Todd at Todds Racing a follow! Or DM me if you're keen to get in on What a Lad (the horse). Finally, if you're keen to get your hands on some Pure Sport products, then click this link here: Pure Sport, and use the discount code whatalad20 for 20% off.

Cloud Native in 15 Minutes
Networking in Kubernetes, and, The SpringOne Missing Track

Oct 18, 2022 · 33:45


Conferences are a great chance, sometimes the only chance, to discuss your peculiar problems with peers and experts. In the case of SpringOne, the topics range from programming and operations to management. It's the hallway track! Ben and Coté discuss two talks where you can maximize the hallway track and get your questions answered. And then, stay for an interview with Duffie Cooley about eBPF, networking in Kubernetes, and more, from Explore US 2022. The two talks from SpringOne: A Study on Soldier-led Software Development to Reduce Security Vulnerabilities and Tanzu Vanguard: Priceless Insights. When you register, use the code COTE200 to get $200 off. Just think of how many hot dogs you could buy with that! Here is the video version of Ben and Coté talking, and the video of the full presentation and interview with Duffie.

Cloud & Culture
Networking in Kubernetes, and, The SpringOne Missing Track

Oct 18, 2022 · 33:45


Conferences are a great chance, sometimes the only chance, to discuss your peculiar problems with peers and experts. In the case of SpringOne, the topics range from programming and operations to management. It's the hallway track! Ben and Coté discuss two talks where you can maximize the hallway track and get your questions answered. And then, stay for an interview with Duffie Cooley about eBPF, networking in Kubernetes, and more, from Explore US 2022. The two talks from SpringOne: A Study on Soldier-led Software Development to Reduce Security Vulnerabilities and Tanzu Vanguard: Priceless Insights. When you register, use the code COTE200 to get $200 off. Just think of how many hot dogs you could buy with that! Here is the video version of Ben and Coté talking, and the video of the full presentation and interview with Duffie.

Let's Get Intimate with Dr. Jeshana Johnson
All About Love and Relationships with Mischa Duffie | Let's Get Intimate with Dr. Jeshana Johnson

Sep 10, 2022 · 92:13


Join Dr. Jeshana Johnson and the host of The Back Story, Mischa Duffie, as we discuss our personal love and relationship stories. Why did we get married? Is it worth it? What do marriage and relationships really teach us? Do we have the tools to really do marriage, or has marriage done us? Is divorce an option? If divorce, then what?

Top News from WTOP
The Md man who won the White Marlin Open & $4.5 million

Aug 19, 2022 · 15:03


There's no question that Jeremy Duffie is having a good week. Last weekend, he won the 49th annual White Marlin Open in Ocean City, Maryland - billed as the world's largest and richest billfish tournament. But this is not something that Duffie and his family took lightly. We talk to Jeremy Duffie about how he wrangled a 77.5 lb. marlin into the boat his brother, Jon, built.

DMV Download from WTOP News
The Md man who won the White Marlin Open & $4.5 million

Aug 18, 2022 · 14:45 · Transcription Available


There's no question that Jeremy Duffie is having a good week. Last weekend, he won the 49th annual White Marlin Open in Ocean City, Maryland - billed as the world's largest and richest billfish tournament. But this is not something that Duffie and his family took lightly. We talk to Jeremy Duffie about how he wrangled a 77.5 lb. marlin into the boat his brother, Jon, built.

The Safety Justice League
What's In Chip's Duffie Bag?

Jul 29, 2022 · 35:31


Murdaugh Murders Podcast
Doubling Down On Duffie Stone - Conduct and Conflicts In The 14th Circuit (S01E39)

Apr 6, 2022 · 47:39 · Very Popular


While there was a lot of news related to the Murdaugh saga in the last week, this episode will focus on 14th Circuit Solicitor Duffie Stone and why his involvement in this case matters. We still have questions that need to be answered and, as residents who live in the 14th Circuit, we are concerned about our solicitor and justice system in the Lowcountry. In this episode, hear special guest Will Folks of FITSNews explain why Stone's involvement was concerning and what it means to the case. Check out the new FITSNews podcast here: https://apple.co/3Kr3dDC Bonus! You'll hear comedian Kathleen Madigan talk about Mandy from her live show in Charleston. Check out her AMAZING pubcast here: https://apple.co/3DHXCq0 Oh! And please vote for Mandy as the Best Local Investigative Reporter in the Charleston City Paper here: https://bit.ly/38l2fuA And a special thank you to our sponsors: Cerebral, Hunt-A-Killer, Priceline, Embark Vet, VOURI, Hello Fresh, Babbel, Article, Daily Harvest, and others. The Murdaugh Murders Podcast is created by Mandy Matney and produced by Luna Shark Productions. Our Executive Editor is Liz Farrell. Advertising is curated by the talented team at AdLarge Media. Find us on social media: https://www.facebook.com/MurdaughPod/ https://www.instagram.com/murdaughmurderspod/ ** Click to ADVERTISE WITH OUR MEDIA PARTNERS AT FITSNEWS.COM ** For current and accurate updates: Fitsnews.com or Twitter.com/mandymatney Support Our Podcast at: https://murdaughmurderspodcast.com/support-the-show Please consider sharing your support by leaving a review on Apple at the following link: https://podcasts.apple.com/us/podcast/murdaugh-murders-podcast/id1573560247 Support the Reporting: https://www.fitsnews.com/fitsnews-subscription-options Learn more about your ad choices. Visit podcastchoices.com/adchoices

Murdaugh Murders Podcast
Doubling Down On Duffie Stone - Conduct and Conflicts In The 14th Circuit (S01E39)

Apr 6, 2022 · 45:39


While there was a lot of news related to the Murdaugh saga in the last week, this episode will focus on 14th Circuit Solicitor Duffie Stone and why his involvement in this case matters. We still have questions that need to be answered and, as residents who live in the 14th Circuit, we are concerned about our solicitor and justice system in the Lowcountry. In this episode, hear special guest explain why Stone's involvement was concerning and what it means to the case. Bonus! You'll hear comedian Kathleen Madigan talk about Mandy from her live show in Charleston. Check out her AMAZING pubcast here: https://apple.co/3DHXCq0 Oh! And please vote for Mandy as the Best Local Investigative Reporter in the Charleston City Paper here: https://bit.ly/38l2fuA And a special thank you to our sponsors: Cerebral, Hunt-A-Killer, Priceline, Embark Vet, VOURI, Hello Fresh, Babbel, Article, Daily Harvest, and others. The Murdaugh Murders Podcast is created by Mandy Matney and produced by Luna Shark Productions. Our Executive Editor is Liz Farrell. Advertising is curated by the talented team at AdLarge Media. Find us on social media: https://www.facebook.com/MurdaughPod/ https://www.instagram.com/murdaughmurderspod/ ** Click to ADVERTISE WITH OUR MEDIA PARTNERS AT FITSNEWS.COM ** For current and accurate updates: Twitter.com/mandymatney Support Our Podcast at: https://murdaughmurderspodcast.com/support-the-show Please consider sharing your support by leaving a review on Apple at the following link: https://podcasts.apple.com/us/podcast/murdaugh-murders-podcast/id1573560247 Learn more about your ad choices. Visit podcastchoices.com/adchoices

SpearFactor Spearfishing Podcast
Spearfactor #040: Charles (CJ) Duffie, Neritic Diving Owner & Spearfishing Guide

Feb 1, 2022 · 76:45


In this episode, I speak with Floridian diver and guide CJ Duffie. CJ is not just a spearfisherman and guide; he's also the owner of Neritic spearfishing equipment. We talked extensively about pole spears and the ins and outs of the spearfishing guide industry. Specifically, CJ and I talk about being in the freediving industry and guiding divers around the world. Thank you, CJ, for being on the show. You can find more about Neritic Diving at: https://neriticdiving.com Contact CJ at: @cjduffie22 Learn more about your ad choices. Visit megaphone.fm/adchoices

Billfish Radio
EP38: Catching 57 White Marlin In a Day FT. Jon Duffie

Jan 11, 2022 · 49:35


Jon Duffie on his legendary moment of catching 57 white marlin in a day (actually a miscalculation: it was 58 white marlin in a day) and growing up on the docks in Ocean City, Maryland.

Chicks & Balls The Podcast
HALF-TIME HUDDLE: "What [my injuries] taught me was a lot more about myself and life than if I'd had a clean bill of health" ft. Matt Duffie

Nov 14, 2021 · 42:21


Hello and welcome back to Chicks & Balls the Podcast: a sports podcast by women, about more than women's sport! What better way to start your Monday than with a special conversation like this Half-Time Huddle episode. These chats are deeper-dive conversations than what we get in our regular eps, and they feature some of our favourite athletes and sports insiders. If you are unfamiliar with today's guest, let us give you some context. Georgia first met Matt when he signed with the Melbourne Storm back in 2009, where he stayed until 2015; since then, he has gone on to play rugby union in New Zealand, including representing the All Blacks, and has now moved his family to Japan, where he plays for the Honda Heat. We chat with Matt about getting through the array of injury setbacks he's had in his career, the switch from league to union, and what it feels like to wear an All Blacks jersey, all while he's killing time before his team has clean-up duty, a regular punishment for being late to training in Japanese rugby! We hope you love this chat as much as we loved having it! Massive thanks to Matt for jumping on! Follow us on IG: @chicksandballspod Follow our hosts on IG: @marlee.silva @keelyysilva @g.moore The best way to support us & the show? Subscribe + give us a 5-star rating! Share with your mates! Need to contact us further? Email us: chicksandballspod@gmail.com Produced and edited by Blake Mannes from Diamantina Media on the unceded lands of the Gadigal people. We pay our respects to the Traditional Owners of each of the countries we record from and where all our listeners tune in from today and every day. Always was, always will be Aboriginal land. See omnystudio.com/listener for privacy information.

Chicks & Balls The Podcast
HALF-TIME HUDDLE: "What [my injuries] taught me was a lot more about myself and life than if I'd had a clean bill of health" ft. Matt Duffie

Nov 14, 2021 · 42:21


Hello and welcome back to Chicks & Balls the Podcast: a sports podcast by women, about more than women's sport! What better way to start your Monday than with a special conversation like this Half-Time Huddle episode. These chats are deeper-dive conversations than what we get in our regular eps, and they feature some of our favourite athletes and sports insiders. If you are unfamiliar with today's guest, let us give you some context. Georgia first met Matt when he signed with the Melbourne Storm back in 2009, where he stayed until 2015; since then, he has gone on to play rugby union in New Zealand, including representing the All Blacks, and has now moved his family to Japan, where he plays for the Honda Heat. We chat with Matt about getting through the array of injury setbacks he's had in his career, the switch from league to union, and what it feels like to wear an All Blacks jersey, all while he's killing time before his team has clean-up duty, a regular punishment for being late to training in Japanese rugby! We hope you love this chat as much as we loved having it! Massive thanks to Matt for jumping on! Follow us on IG: @chicksandballspod Follow our hosts on IG: @marlee.silva @keelyysilva @g.moore The best way to support us & the show? Subscribe + give us a 5-star rating! Share with your mates! Need to contact us further? Email us: chicksandballspod@gmail.com Produced and edited by Blake Mannes from Diamantina Media on the unceded lands of the Gadigal people. We pay our respects to the Traditional Owners of each of the countries we record from and where all our listeners tune in from today and every day. Always was, always will be Aboriginal land. See omnystudio.com/listener for privacy information.

Epigenetics Podcast
DNA Methylation and Mammalian Development (Déborah Bourc'his)

May 12, 2021 · 35:54


In this episode of the Epigenetics Podcast, we caught up with Déborah Bourc'his from L'Institut Curie in Paris to talk about her work on the role of DNA methylation in mammalian development. During her postdoc years, Déborah Bourc'his was able to characterize DNMT3L, a protein with unknown function at that time. It turned out that this protein is the cofactor responsible for stimulating DNA methylation activity in both the male and the female germline. Later on she discovered a novel DNA methylation enzyme called DNMT3C, which was unknown because it was not properly annotated, showed no sign of expression, and was only expressed in male fetal germ cells. Furthermore, this enzyme only evolved in rodents, as a defense against young transposons. In this episode we discuss the story behind how Déborah Bourc'his was able to discover and characterize the DNA methylation enzymes DNMT3L and DNMT3C and their role in mammalian development.

References

R. Duffie, S. Ajjan, … D. Bourc'his (2014) The Gpr1/Zdbf2 locus provides new paradigms for transient and dynamic genomic imprinting in mammals (Genes & Development) DOI: 10.1101/gad.232058.113

Natasha Zamudio, Joan Barau, … Déborah Bourc'his (2015) DNA methylation restrains transposons from adopting a chromatin signature permissive for meiotic recombination (Genes & Development) DOI: 10.1101/gad.257840.114

Marius Walter, Aurélie Teissandier, … Déborah Bourc'his (2016) An epigenetic switch ensures transposon repression upon dynamic loss of DNA methylation in embryonic stem cells (eLife) DOI: 10.7554/eLife.11418

Joan Barau, Aurélie Teissandier, … Déborah Bourc'his (2016) The DNA methyltransferase DNMT3C protects male germ cells from transposon activity (Science) DOI: 10.1126/science.aah5143

Maxim V. C. Greenberg, Juliane Glaser, … Déborah Bourc'his (2017) Transient transcription in the early embryo sets an epigenetic state that programs postnatal growth (Nature Genetics) DOI: 10.1038/ng.3718

Roberta Ragazzini, Raquel Pérez-Palacios, … Raphaël Margueron (2019) EZHIP constrains Polycomb Repressive Complex 2 activity in germ cells (Nature Communications) DOI: 10.1038/s41467-019-11800-x

Dura, M., Teissandier, A., Armand, M., Barau, J., Bonneville, L., Weber, M., Baudrin, L. G., Lameiras, S., & Bourc'his, D. (2021). DNMT3A-dependent DNA methylation is required for spermatogonial stem cells to commit to spermatogenesis [Preprint]. Developmental Biology. DOI: 10.1101/2021.04.19.440465

Related Episodes

Effects of DNA Methylation on Diabetes (Charlotte Ling)
Epigenetic Reprogramming During Mammalian Development (Wolf Reik)
Effects of DNA Methylation on Chromatin Structure and Transcription (Dirk Schübeler)
CpG Islands, DNA Methylation, and Disease (Sir Adrian Bird)

Contact

Active Motif on Twitter
Epigenetics Podcast on Twitter
Active Motif on LinkedIn
Active Motif on Facebook
Email: podcast@activemotif.com

The Food Blogger Pro Podcast
296: Web Stories - An Informal, Intentional Way to Engage Your Audience and Reach New People with Kingston Duffie

The Food Blogger Pro Podcast

Play Episode Listen Later Mar 16, 2021 57:33


Repurposing your content for Google Web Stories, the elements of a successful Web Story, and how Google Web Stories translate to traffic for your site with Kingston Duffie. ----- Welcome to episode 296 of The Food Blogger Pro Podcast! This week on the podcast, Bjork interviews Kingston Duffie from Slickstream about Google Web Stories.

Web Stories

Have you heard of Google Web Stories? Are you creating them for your content right now? Do you know how they work? Slickstream's Kingston Duffie is here on the podcast to talk about how creators can utilize them today! He'll chat about his top tips for creating engaging Web Stories and getting the most value out of the Web Stories you create for your blog, as well as the ins and outs of how these Web Stories work within search results.

In this episode, you'll learn:
- How Slickstream works
- His approach for building businesses
- What he thinks about the future of the web
- The history of Stories on the web
- How to repurpose your social media Stories for Web Stories
- The elements of a successful Web Story
- Why creators publish Web Stories
- What Google Discover is and how it works
- How Google Web Story views translate to website traffic
- How UTM tags work

Resources:
- 231: A Better Experience – Building Engagement, Not Just Traffic with Kingston Duffie
- Slickstream
- Pinch of Yum
- NerdPress
- Web Stories, not Web Teasers
- Web Stories Q&A
- Google App on the Google Play Store and the Apple App Store
- How Web Stories Appear Across Google
- Google Search Console
- Web Stories plugin for WordPress
- Slickstream's Engagement Suite

If you have any comments, questions, or suggestions for interviews, be sure to email them to podcast@foodbloggerpro.com. Learn more about joining the Food Blogger Pro community at foodbloggerpro.com/membership

Untold: Behind the Scenes in the UK Food & Drink Industry
S1 Ep7: Hugh Duffie Co-Founder of Sandows: Closing Down, Managing Investors and Moving On.

Untold: Behind the Scenes in the UK Food & Drink Industry

Play Episode Listen Later Mar 12, 2021 41:50


In this episode we catch up with Hugh Duffie, co-founder of Sandows, UK pioneer of cold brew coffee. Hugh speaks candidly about the experience of closing the business down in 2020.  A barista by training, Hugh co-founded Sandows in 2014. Since deciding to close Sandows in 2020, Hugh has been working as a consultant focusing on strategy, positioning, E-Commerce and digital marketing for fledgling DTC brands. 

LaSalle Street
Treasury Market Structure

LaSalle Street

Play Episode Listen Later Jan 14, 2021 49:17


In this episode, Darrell Duffie discusses stresses in the Treasury market with Lou Crandall, Ken Garbade, and Barbara Novick. This market has a huge impact across the financial system, from determining governments' borrowing costs to serving as a key benchmark to helping keep credit flowing to people who need it.

Four Quarters Podcast
Acadia Axemen Men's Basketball Head Coach Kevin Duffie - The Four Quarters Podcast

Four Quarters Podcast

Play Episode Listen Later Nov 19, 2020 117:34


Acadia Axemen Men's Basketball Head Coach Kevin Duffie joins host Tyler Bennett on Episode 37 of the Four Quarters Podcast, powered by Four Quarters Media! Off the top, the pair discuss the impact that the COVID-19 pandemic has had on the program and Acadia University as a whole, and how the town of Wolfville, NS has adapted to the pandemic. Then, Duffie takes us down memory lane, looking back on his career with the Axemen program, which now spans a total of 14 seasons. Duffie talks about the 2008 CIS silver medal that he won as a player, and how he ended up a member of the coaching staff the following year. As he discusses his coaching career, Duffie talks about how he has adapted and changed his approach over the years, and about finding the right balance in the player-coach relationship. To close, Duffie shares some thoughts on how we can better grow the level of coverage surrounding the game in Canada. (MUSIC: bensound.com)

U.G.L.Y
Mister Duffie@ U.G.L.Y

U.G.L.Y

Play Episode Listen Later Nov 10, 2020 1:59


Artist Mister Duffie with his hit "best offer". Check him out on YouTube and other social media sites. --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/ugly7/support

Spit Talk
Run’n it wit Duff

Spit Talk

Play Episode Listen Later Oct 12, 2020 27:56


Just a quick run’n wit ya girl Duff on some real raw true uncut shit... I also have a Duffie original song on this episode, right after the ad at the end of the episode.

PaperPlayer biorxiv neuroscience
Fruitless decommissions regulatory elements to implement cell-type-specific neuronal masculinization

PaperPlayer biorxiv neuroscience

Play Episode Listen Later Sep 4, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.04.281345v1?rss=1 Authors: Brovkina, M. V., Duffie, R., Burtis, A. E., Clowney, E. J. Abstract: In the fruit fly Drosophila melanogaster, male-specific splicing and translation of the Fruitless transcription factor (FruM) alters the presence, anatomy, and/or connectivity of >60 types of central brain neurons that interconnect to generate male-typical behaviors. While the indispensable function of FruM in sex-specific behavior has been understood for decades, the molecular mechanisms underlying its activity remain unknown. Here, we take a genome-wide, brain-wide approach to identifying regulatory elements whose activity depends on the presence of FruM. We identify 436 high-confidence genomic regions differentially accessible in male fruitless neurons, validate candidate regions as bona-fide, differentially regulated enhancers, and describe the particular cell types in which these enhancers are active. We find that individual enhancers are not activated universally but are dedicated to specific fru+ cell types. Aside from fru itself, genes are not dedicated to or common across the fru circuit; rather, FruM appears to masculinize each cell type differently, by tweaking expression of the same effector genes used in other circuits. Finally, we find FruM motifs enriched among regulatory elements that are open in the female but closed in the male. Together, these results suggest that FruM acts cell-type-specifically to decommission regulatory elements in male fruitless neurons. Copy rights belong to original authors. Visit the link for more info

Rumors of Grace with Bob Hutchins
Episode 56- Kendall Duffie

Rumors of Grace with Bob Hutchins

Play Episode Listen Later May 29, 2020 54:00


Kendall Duffie is a seasoned music producer, radio promotions executive, and the VP of D3 Entertainment Group, a Strategic Marketing, Management, Radio Promotions and Music Production company. This is a sobering discussion about racism in America.

Birdman Bedroom Podcast
Birdman BEDROOM podcast SPECIAL GUEST ARDELL MC DUFFIE (CEO of Snaps Photography) 5/19/20

Birdman Bedroom Podcast

Play Episode Listen Later May 19, 2020 45:45


Today's show: Ardell tells us what makes a great photographer. He also lets the listeners know how important it is to communicate clearly with a photographer before you book them, to make sure you know the delivery time of your pictures, and all the other details. You can find Ardell on Facebook & Instagram (IG: snaps 215).

The Podlets - A Cloud Native Podcast
Kubernetes Sucks for Developers, Right? No. (Ep 21)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later May 11, 2020 47:56


We are joined by Ellen Körbes for this episode, where we focus on Kubernetes and its tooling. Ellen has a position at Tilt, where they work in developer relations. Before Tilt, they were doing closely related work at Garden, a similar company! Both companies are directly concerned with working with Kubernetes, and Ellen is here to talk to us about why Kubernetes does not have to be the difficult thing it is made out to be. According to them, this mostly comes down to tooling. Ellen believes that with the right set of tools at your disposal, it is not actually necessary to completely understand all of Kubernetes or even be familiar with a lot of its functions. You do not have to start from the bottom every time you start a new project, and developers who are new to Kubernetes need not become experts in it in order to take advantage of its benefits. The major goal for Ellen and Tilt is to get developers' code up, running, and live in as quick a time as possible. When the system stands in the way, this process can take much longer, whereas with Tilt, Ellen believes the process should be around two seconds! Ellen comments on who should be using Kubernetes and who it would most benefit. We also discuss where Kubernetes should be run, either locally or externally, for best results, and Tilt's part in the process of unit testing and feedback. We finish off peering into the future of Kubernetes, so make sure to join us for this highly informative and empowering chat!

Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io
https://www.notion.so/thepodlets/The-Podlets-Guest-Central-9cec18726e924863b559ef278cf695c9
Guest: Ellen Körbes https://twitter.com/ellenkorbes
Hosts: Carlisia Campos, Bryan Liles, Olive Power

Key Points From This Episode:
- Ellen's work at Tilt and the jumping-off point for today's discussion.
- The projects and companies that Ellen and Tilt work with, that they are allowed to mention!
- Who Ellen is referring to when they say 'developers' in this context.
- Tilt's goal of getting all developers' code up and running in the two-second range.
- Who should be using Kubernetes? Is it necessary in development if it is used in production?
- Operating and deploying Kubernetes: who is it that does this?
- Where developers seem to be running Kubernetes; considerations around space and speed.
- Possible security concerns using Tilt; avoiding damage through Kubernetes options.
- Allowing greater possibilities for developers through useful shortcuts.
- VS Code extensions and IDE integrations that are possible with Kubernetes at present.
- Where to start with Kubernetes and getting a handle on tooling like Tilt.
- Using unit testing for feedback and Tilt's part in this process.
- The future of Kubernetes tooling and looking across possible developments in the space.

Quotes:
- "You're not meant to edit Kubernetes YAML by hand." — @ellenkorbes [0:07:43]
- "I think from the point of view of a developer, you should try and stay away from Kubernetes for as long as you can." — @ellenkorbes [0:11:50]
- "I've heard from many companies that the main reason they decided to use Kubernetes in development is that they wanted to mimic production as closely as possible." — @ellenkorbes [0:13:21]

Links Mentioned in Today's Episode:
- Ellen Körbes — http://ellenkorbes.com/
- Ellen Körbes on Twitter — https://twitter.com/ellenkorbes?lang=en
- Tilt — https://tilt.dev/
- Garden — https://garden.io/
- Cluster API — https://cluster-api.sigs.k8s.io/
- Lyft — https://www.lyft.com/
- KubeCon — https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/
- Unu Motors — https://unumotors.com/en
- Mindspace — https://www.mindspace.me/
- Docker — https://www.docker.com/
- Netflix — https://www.netflix.com/
- GCP — https://cloud.google.com/
- Azure — https://azure.microsoft.com/en-us/
- AWS — https://aws.amazon.com/
- ksonnet — https://ksonnet.io/
- Ruby on Rails — https://rubyonrails.org/
- Lambda –
https://aws.amazon.com/lambda/ DynamoDB — https://aws.amazon.com/dynamodb/ Telepresence — https://www.telepresence.io/ Skaffold Google — https://cloud.google.com/blog/products/application-development/kubernetes-development-simplified-skaffold-is-now-ga Python — https://www.python.org/ REPL — https://repl.it/ Spring — https://spring.io/community Go — https://golang.org/ Helm — https://helm.sh/ Pulumi — https://www.pulumi.com/ Starlark — https://github.com/bazelbuild/starlark Transcript: EPISODE 22 [ANNOUNCER] Welcome to The Podlets Podcast, a weekly show that explores cloud native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision-maker, this podcast is for you. [EPISODE] [0:00:41.8] CC: Hi, everybody. This is The Podlets. We are back this week with a special guest, Ellen Körbes. Ellen will introduce themselves in a little bit. Also on the show, it’s myself, Carlisia Campos, Michael Gasch and Duffie Cooley. [0:00:57.9] DC: Hey, everybody. [0:00:59.2] CC: Today’s topic is Kubernetes Sucks for Developers, right? No. Ellen is going to introduce themselves now and tell us all about what that even means. [0:01:11.7] EK: Hi. I’m L. I do developer relations at Tilt. Tilt is a company whose main focus is development experience when it comes to Kubernetes and multi-service development. Before Tilt, I used to work at Garden. They basically do the same thing, it's just a different approach. That is basically the topic that we're going to discuss, the fact that Kubernetes does not have to suck for developers. You just need to – you need some hacks and fixes and tools and then things get better. [0:01:46.4] DC: Yeah, I’m looking forward to this one. 
I've actually seen Tilt being used in some pretty high-profile open source projects. I've seen it being used in Cluster API and some of the work we've seen there and some of the other ones. What are some of the larger projects that you are aware of that are using it today? [0:02:02.6] EK: Oh, boy. That's complicated, because every company has a different policy as to whether I can name them publicly or not. Let's go around that question a little bit. You might notice that Lyft has a talk at KubeCon, where they're going to talk about Tilt. I can't tell you right now that they use Tilt, but there's that. Hopefully, I found a legal loophole here. I think they're the biggest name that you can find right now. Cluster API is of course huge and Cluster API is fun, because the way they're doing things is very different. We're used to seeing mostly companies that do apps in some way or another, like websites, phone apps, etc. Then Cluster API is completely insane. It's something else totally. There's tons of other companies. I'm not sure which ones that are large I can name specifically. There are smaller companies. Unu Motors, they do electric motorcycles. It's a company here in Berlin. They have 25 developers. They’re using Tilt. We have very tiny companies, like Mindspace, their studio in Tucson, Arizona. They also use Tilt and it's a three-person team. We have the whole spectrum, from very, very tiny companies that are using Docker for Mac and pretty happy with it, all the way up to huge companies with their own fleet of development clusters and all of that and they're using Tilt as well. [0:03:38.2] DC: That field is awesome. [0:03:39.3] MG: Quick question, Ellen. The title says ‘developers’. Developers is a pretty broad name. I have people saying that okay, Kubernetes is too raw. It's more like a Linux kernel that we want this past experience. Our business developers, our application developers are developing in there. 
How would you describe developers interfacing with Kubernetes using the tools that you just mentioned? Is it the traditional enterprise developer, or more Kubernetes developers developing on Kubernetes? [0:04:10.4] EK: No. I specifically mean not Kubernetes developers. You have people working on Kubernetes. For example, the Cluster API folks, they're doing stuff that is Kubernetes specific. That is not my focus. The focus is you're a back-end developer, you're a front-end developer, you're the person configuring, I don't know, the databases or whatever. Basically, you work at a company, you have your own business logic, you have your own product, your own app, your own internal stuff, all of that, but you're not a Kubernetes developer. It just so happens that if the stuff you are working on is going to be pointing at Kubernetes, it's going to target Kubernetes, then one, you're the target developer for me, for my work. Two, usually you're going to have a hard time doing your job. We can talk a bit about why. One issue is development clusters. If you're using Kubernetes in prod, rule of thumb, you should be using Kubernetes in dev, because you don't want completely separate environments where things work in your environment as a developer and then you push them and they break. You don't want that. You need some development cluster. The type of cluster that that's going to be is going to vary according to the level of complexity that you want and that you can deal with. Like I said, some people are pretty happy with Docker for Mac. I hear all the time these complaints that, “Oh, you're running Kubernetes on your machine. It's going to catch fire.” Okay, there's some truth to that, but also it depends on what you're doing. No one tries to run Netflix, let's say the whole Netflix, on their laptop, because we all know that's not reasonable. People try to do similar things on their minikube, or Docker for Mac. Then it doesn't work and they say, “Oh, Kubernetes on the laptop doesn't work.” No. Yeah, it does. Just not for you. That's a complaint I particularly dislike, because it comes from a – it's a blanket statement that has no – let's say, no facts behind it. Yeah, if you're a small company, Docker for Mac is going to work fine for you. Let's say you have a beefy laptop with 30 gigs of RAM, you can put a lot of software in 30 gigs. You can put a lot of microservices in 30 gigs. That's going to work up to a point and then it's going to break. When it breaks, you're going to need to move to a cloud, you're going to need to do remote development and then you're going to go to GCP, or Azure, or Amazon. You're going to set up a cluster there. Some people use the managed Kubernetes options. Some people just spin up a bunch of machines and wire up Kubernetes by themselves. That's going to depend on basically how much you have in terms of resources and in terms of needs. Usually, keeping up a remote cluster that works is going to demand more infrastructure work. You're going to need people who know how to do that, to keep an eye on that. There's also the billing aspect, which is you can run Docker for Mac all day and you're not going to pay extra. If you leave a bunch of stuff running on Google, you're going to have a bill at the end of the month that you need to pay attention to. That is one thing for people to think about. Another aspect that I see very often that people don't know what to do with is config files. You scroll Twitter, you scroll Kubernetes Twitter for five minutes and there's a joke about YAML. We all hate editing YAML. Again, the same way people make jokes about Kubernetes setting your laptop on fire, I would argue that you're not meant to edit Kubernetes YAML by hand. The tooling for that is arguably not as mature as the tooling when it comes to Kubernetes clusters to run on your laptop. You have stuff like YAML templates, you have ksonnet.
I think there's one called Kustomize, but I haven't used it myself. What I see in every company, from the two-person team to the 600-person team, is no one writes Kubernetes YAML by hand. Everyone uses a template solution, a templating solution of some sort. That is the first thing that I always tell people when they start making jokes about YAML: if you're editing YAML by hand, you're doing it wrong. You shouldn't do that in the first place. It's something that you set up once at some point and you look at it whenever you need to. On your day-to-day when you're writing your code, you should not touch those files, not by hand. [0:08:40.6] CC: We're five minutes in and you threw so much at us. We need to start breaking some of this stuff down. [0:08:45.9] EK: Okay. Let me throw you one last thing then, because that is what I do personally. One more thing that we can discuss is the development feedback loop. You're writing your code, you're working on your application, you make a change to your code. How much work is it for you to see that new line of code that you just wrote live and running? For most people, it's a very long journey. I asked that on Twitter; a lot of people said it was over half an hour. A very tiny amount of people said it was between five minutes and half an hour, and only a very tiny fraction of people said it was two seconds or less. The goal of my job, of my work, the goal of Tilt, the tool, which is made by the company I work for, also called Tilt, is to get everyone in that two-second range. I've done that on stage in talks, where we take an application and we go from, “Okay, every time you make a change, you need to build a new Docker image. You need to push it to a registry. You need to update your cluster, blah, blah, blah, and that's going to take minutes, whole minutes.” We take that from all that long and we dial it down to a couple seconds.
You make a change, or save your file, snap your fingers and poof, it's up and running, the new version of your app. It's basically real-time, perceptually real-time, just like back when everyone was doing Ruby on Rails and you would just save your file and see if it worked basically instantly. That is the part of this discussion that personally I focus more on.
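Ellen's earlier point that Kubernetes YAML should be generated from a template layer rather than hand-edited can be sketched with a minimal Kustomize file. This is an illustrative example, not from the episode; the file paths, image name, and namespace are hypothetical.

```yaml
# kustomization.yaml -- a minimal sketch of the templating approach
# discussed above. Paths, the image name, and the namespace are
# hypothetical placeholders.
resources:
  - base/deployment.yaml   # the base YAML, written once and rarely touched
  - base/service.yaml

# Per-environment tweaks live here instead of in hand-edits to the base files.
images:
  - name: example.com/myapp
    newTag: dev-latest

# For example, one namespace per developer on a shared dev cluster.
namespace: dev-alice
```

Rendering this with `kubectl apply -k .` (or `kustomize build`) produces the final manifests, so the base YAML never needs day-to-day editing.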
I don't know, like a DevOps issue, operations issue. Ideally, you put off moving your application to Kubernetes as long as possible. This is an opinion. We can argue about this forever. Just because it introduces a lot of complexity and if you don't need that complexity, you should probably stay away from it. To get to the other half of the question, which is if you're using Kubernetes in production, should you use Kubernetes in development? Now here, I'm going to say yes a 100% of the time. Blanket statement of course, we can argue about minutiae, but I think so. Because if you don't, you end up having separate environments. Let's say you're using Docker Compose, because you don't like Kubernetes. You’re using Kubernetes in production, so in development you are going to need containers of some sort. Let's say you're using Docker Compose. Now you're maintaining two different environments. You update something here, you have to update it there. One day, it's going to be Friday, you're going to be tired, you're going to update something here, you're going to forget to update something there, or you're going to update something there and it's going to be slightly different. Or maybe you're doing something that has no equivalent between what you're using locally and what you're using in production. Then basically, you're in trouble. I've heard from many companies that the main reason they decided to use Kubernetes in development is that they wanted to mimic production as closely as possible. One argument we can have here is that – oh, but if you're using Kubernetes in development, that's going to add a lot of overhead and you're not going to be able to do your job right. I agree that that was true for a while, but right now we have enough tooling that you can basically make Kubernetes disappear and you just focus on being a developer, writing your code, doing all of that stuff. Kubernetes is sitting there in the background. 
You don't have to think about it and you can just go on about your business with the advantage that now, your development environment and your production environment are going to very closely mimic each other, so you're not going to have issues with those potential disparities. [0:14:10.0] CC: All right. Another thing too is that I think we're making an assumption that the developers we are talking about are the developers that are also responsible for deployment. Sometimes that's the case, sometimes that's not the case and I'm going to shut up now. It would be interesting to talk about that too between all of us, is that what we see? Is that the case that now developers are responsible? It's like, developers DevOps is just so ubiquitous that we don't even consider differentiating between developers and ops people? All right? [0:14:45.2] DC: I think I have a different spin on that. I think that it's not necessarily that developers are the ones operating the infrastructure. The problem is that if your infrastructure is operated by a platform that may require some integration at the application layer to really hit its stride, then the question becomes how do you as a developer become more familiar? What is the user experience as of, or what I should say, what's the developer experience around that integration? What can you do to improve that, so that the developer can understand better, or play with how service discovery works, or understand better, or play with how the different services in their application will be able to interact without having to redefine that in different environments? Which is I think what Ellen point was. [0:15:33.0] EK: Yeah. At the most basic level, you have issues as such as you made a change to a service here, let's say on your local Docker Compose. Now you need to update your Kubernetes manifest on your cluster for things to make sense. Let's say, I don't know, you change the name of a service, something as simple as that. 
Even those kinds of things that sound silly to even describe, when you're doing that every day, one day you're going to forget it, things are going to explode, you're not going to know why, you're going to lose hours trying to figure out where things went wrong. [0:16:08.7] MG: Also the same with [inaudible] maybe. Even if you use Kubernetes locally, you might run a later version of Kubernetes, maybe use kind for local development, but then your cluster, your remote cluster, is three or four versions behind. It shouldn't be, because of the version support policy, but it might happen, right? Then APIs might be deprecated, or you're using a different API. I totally agree with you, Ellen, that your development environment should reflect production as closely as possible. Even there, you have to make sure that prod, like your APIs match, API types match and all the stuff, right, because they could also break. [0:16:42.4] EK: You were definitely right that bugs are not going away anytime soon. [0:16:47.1] MG: Yeah. I think this discussion also reminds me of the discussion that the folks in the cloud world have with AWS Lambda, for example, because it's similar. Even though there are tools to simulate, or mimic these platforms, like serverless platforms, locally, the general recommendation there is to embrace the cloud and develop natively in the cloud, because that's something you cannot resemble locally. You cannot run DynamoDB locally. You could mimic it. You could mimic Lambda runtimes locally. Essentially, it's going to be different. That's also a common complaint in the world of AWS and cloud development, which is it's really not that easy to develop locally, where you're supposed to develop on the platform that the code is being shipped and run on, because you cannot run the cloud locally. It sounds crazy, but it is. I think the same is true with Kubernetes, even though we have the tools. I don't think that every developer runs Kubernetes locally.
Most of them maybe don't even have Docker locally, so they use some Spring tools and then they have some pipeline, and eventually it gets shipped as a container running in Kubernetes. That's what I wanted to throw in here, as more of a question of experience, especially for you, Ellen, with these customers that you work with: what are the different profiles that you see from a maturity perspective? These customers, large enterprises, might be different from the smaller ones that you mentioned. How do you see them having different requirements? As Carlisia also said, do they do ops, or DevOps, or is it strictly separated there, especially in large enterprises? [0:18:21.9] EK: What I see the most, let's get the last part first. [0:18:24.6] MG: Yeah, it was a lot of questions. Sorry for that. [0:18:27.7] EK: Yeah. When it comes to who operates Kubernetes, who deploys Kubernetes, definitely most developers push their code to Kubernetes themselves. Of course, this involves CI and testing and PRs and all of that, so it's not like you can just go crazy and break everything. When it comes to operating the production cluster, then that's separate. Usually, you have someone writing code and someone else operating clusters and infrastructure. Sometimes it's the same person, but they're clearly separate roles, even if it's the same person doing it. Usually, you go from your IDE to PR and that goes straight into production once the whole process is done. Now we were talking about workflows and Lambda and all of that. I don't see a good solution for Lambda, a good development experience for Lambda, just yet. It feels a bit like it needs some refinement still. When it comes to Kubernetes, you asked do most developers run Kubernetes locally? Do they not? I don't know about the numbers, the absolute numbers. Is it most doing this, or most doing that? I'm not sure. I only know the companies I'm in touch with.
Definitely not all developers run Kubernetes on their laptops, because it's a problem of scale. Right now, we are basically stuck with 30 gigs of RAM on our laptops. If your app is bigger than that, tough luck, you're not going to run it on the laptop. What most developers do is they still maintain a local development environment, where they can do development without going to CI. I think that is the main question. They maintain agility in their development process. What we usually see when you don't have Kubernetes on your laptop is that you're using remote Kubernetes, a remote development cluster in some cloud provider. What most people do, and this is not the companies I talk to, this is basically everyone else, is they make their development environment work the same way as their production environment. You make a change to your code, you have to push a PR, it has to get tested by CI, it has to get approved. Then it ends up in the Kubernetes cluster. Your feedback loop as a developer is insanely slow, because there's so much red tape between you changing a line of code and you getting a new process running in your cluster.

Now, when you use tools, I call the category MDX. I basically coined that category name myself. MDX is multi-service development experience tooling. When you use MDX tools, and that's not just Tilt; it's Tilt, it's Garden, where I used to work, people use Telepresence like that. There is Skaffold from Google and so on. There's a bunch of tools. When you use a tool like that, you can have your feedback loop down to a second, like I said before. I think that is the major improvement developers can make if they're using Kubernetes remotely, and even if they're using Kubernetes locally. I would guess most people do not run Kubernetes locally. They use it remotely.
We have users who don't even have Docker on their local machines, because if you have the right tooling, you can change the files on your machine, and you have tooling running that detects those file changes and syncs them to your cluster. The cluster then rebuilds images, or restarts containers, or syncs live code that's already running. Then you can see those changes reflected in your development cluster right away, even though you don't even have Docker on your machine. There's all of those possibilities.

[0:22:28.4] MG: Do you see security issues with that approach? Not knowing the architecture of Tilt, even though it's just development clusters, there might be stuff that could break, or that you could break by bypassing the red tape, as you said?

[0:22:42.3] EK: Usually, we assign one user per namespace. Usually, every developer has a namespace. Kubernetes itself has enough options that if that's a concern to you, you can make it secure. Most people don't worry about it that much, because they're development clusters. They're not accessible to the public. Usually, you can only access them through a VPN or something of that sort. We haven't heard about security issues so far. I'm sure they're going to pop up at some point. I'm not sure how severe they're going to be, or how hard they're going to be to fix. I am assuming, because none of this stuff is meant to be accessible to the wider Internet, that it's not going to be a hard problem to tackle.

[0:23:26.7] DC: I would like to back up for a second, because I feel we're pretty far down the road on what the value of this particular pattern is without really explaining what it is. I want to back this up for just a minute and talk about some of the things that tooling like this is trying to solve, in a use case model, right? Back in the day when I was learning Python, I remember really struggling with the idea of being able to debug Python live.
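The sync-without-local-Docker workflow described above can be sketched in a Tiltfile, Tilt's config file written in Starlark (Python syntax). This is a minimal, hypothetical example; the image name, paths, and manifest file are made up, and the sync and run rules would need to match a real project's layout:

```python
# Tiltfile (Starlark, Python-like syntax); all names below are illustrative.

# Build the image; live_update syncs changed source files straight into
# the running container instead of rebuilding the whole image each time.
docker_build(
    'example.com/my-app',           # hypothetical image reference
    '.',
    live_update=[
        sync('./src', '/app/src'),  # copy changed files into the pod
        run('pip install -r requirements.txt',
            trigger=['requirements.txt']),  # reinstall deps only when needed
    ],
)

# Deploy the app's manifests to the (possibly remote) development cluster.
k8s_yaml('k8s/dev.yaml')
k8s_resource('my-app', port_forwards=8000)
```

With a setup like this, saving a file in the editor is enough for Tilt to push the change into the development cluster, which is the sub-second feedback loop discussed in the episode.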
I came across IPython, which is a REPL, and that was hugely eye-opening, because it gave me the ability to interact with my code live. It also opened me up to the idea that it was an improvement over things like having to commit a new log line against a particular function, push that new function up to a place where it would actually get some use, and then go look at that log line and see what's coming out of it, or whether I actually have enough logs to even put together what went wrong. That whole set of use cases, I think, is somewhat addressed by tooling like this. I do think we should talk about how we got here, how tooling like this addresses some of those things, and which use cases specifically it is looking to address.

I guess where I'm going with this is, to your point, tooling like Tilt, for example, gives you the ability, as far as I understand it, to inject into a running application a new instance that you would have local development control over. If you make a change to that code, then the instance running inside of your target environment is represented by that new code change very quickly, right? Basically, solving the problem of making sure that you have some very quick feedback loop. I mean, functionally, that's the killer feature here. I think it's really interesting to see tooling like that start to develop, right? Another example of that tooling would be the REPL thing, wherein instead of writing your code, compiling your code and seeing the output, you could actually be running as a thread inside of the code, and you can dump a data structure, modify that data structure, and see if your function actually does the right thing, without having to go back and write that code while imagining all those data structures in your head. Basically, tooling like this, I think, is pretty killer.

[0:25:56.8] EK: Yeah.
I think one area where that is still partially untapped right now, where this tooling could go, and I'm pushing it, but it's a process, it's not something we can do overnight, is to have very high-level patterns, let's say, codified. For example, everyone's copying Dockerfiles and Kubernetes manifests and Terraform config files, which I forgot what they're called. Everyone's copying that stuff from the Internet, from other websites. That's cool. Oh, you need a container that does such-and-such, sets up this environment and provides these tools? Just download this image and everything is all set up for you.

One area where I see things going is for us to have that same portability, but for development environments. For example, I did this whole talk about how to take your Go application from, I don't know, a 30-second feedback loop, where you're rebuilding an image every time you make a code change and all of that, down to one second. There's a lot of hacks in there that span all kinds of stuff, like should you use Go vendor, or should you keep your dependencies cached inside a Docker layer? Those kinds of things. Then I went down a bunch of those things and eventually came up with a workflow that was basically the best I could find in terms of development experience. What is the snappiest workflow? Or for example, you could ask: what is a workflow that makes it really easy to debug my Go app? You would use applications like Squash, a debugger that you can connect to a process running in a container. Those kinds of things. If we can prepackage those and offer them to users, and not just for Go and not just for debugging, but for all kinds of development workflows, I think that would be really great. We can offer those types of experiences to people who don't necessarily have the inclination to develop those workflows themselves.

[0:28:06.8] DC: Yeah, I agree. I mean, it is interesting.
I've had a few conversations lately about the fact that the abstraction layer of coding, in the way that we think about it, really hasn't changed over time, right? It's the same thing. That's actually a really relevant point. It's also interesting to think about, with these sorts of frameworks and this tooling, what else we can enable the developer to have a quick feedback loop on, right? To your point, we talked about how, with these different environments, your development environment and your production environment, the general consensus is they should be as close as you can reasonably get them, so that the behavior in one somewhat mimics the behavior in the other. At least that's the story we tell ourselves. Given that, it would also be interesting if the developer was getting feedback on how the security posture of that particular cluster might affect the work that they're doing. You do actually have to define network policy. Maybe you don't necessarily have to think about it if we can provide tooling that abstracts that away, but at least you should be aware that it's happening, so that you understand, if it's not working correctly, this is where you might see the sharp edges pop up, you know what I mean? That sort of thing.

[0:29:26.0] EK: Yeah. At the last KubeCon, where was it? In San Diego. There was this running joke. I was running around with the security crowd and there was this joke about "KubeCon applies security.yaml." It was in a mocking tone. I'm not disparaging their joke. It was a good joke. Then I was thinking, "What if we can make this real?" I mean, maybe it is real. I don't know. I don't do security myself.
What if we can apply a comprehensive enough set of security measures, security monitoring, security scanning, all of that stuff? We prepackage it, we allow users to access all of that with one command, or even less than that. Maybe you pre-configure it as a team lead and then everyone else on your team can just use it without even knowing that it's there. Then it just lets you know, "Oh, hey. This thing you just did, this is a potential security issue that you should know about." Yeah, I think coming up with these developer shortcuts is my hobby.

[0:30:38.4] MG: That's cool. What you just mentioned, Ellen and Duffie, reminds me of the Spring community, the Spring framework, where a lot of the boilerplate, the security stuff, connections, integrations, etc., is abstracted away, and you just annotate your code a bit and then some framework, in this case obviously the Spring framework, handles it. In your case, Ellen, what you were hinting at is maybe a build environment that gives me these integration hooks where I just annotate. Or even those annotations could be enforced. Standards could be enforced if I don't annotate at all, right? I could maybe override them. Then this build environment would just pick it up, because it scans the code, right? It has source code access, so it could just scan it, hook into it, and then apply security policies, lock it down, see ports being used. Maybe just open them up to the application, and the other ones automatically get blocked, etc., etc. It just came to my mind. I have not done any research there, or whether there's already some work or activity in that space.

[0:31:42.2] EK: Yeah. Because I won't shut up about this stuff, because I just love it: we are doing a thing at Tilt that we're calling extensions. Very creative name, I suppose. It's in a very early stage right now. It's basically Go imports, but for Tiltfiles. It's still at a very early stage.
We still have some road ahead of us. For example, let's say this one user did some very special integration of Helm and Tilt, so you don't have to use Helm by hand anymore. You can just make all of your Helm stuff happen automatically when you're using Tilt. Usually, you would have to copy, I don't know, a 100 lines of code from your Tilt config file and copy that around for other developers to be able to use it. Now we have this thing, basically Go imports, where you can just say load an extension and give it a name; it fetches it from a repository and it's running. I think that is basically an early stage of what you just described with Spring, but more geared towards, let's say, infra and Kubernetes. How do you tie that infra and Kubernetes stuff with higher-level functionality that you might want to use?

[0:33:07.5] MG: Cool. I have another one. Speaking of which, are there any other integrations for IDEs with Tilt? I know that VS Code, for example, has Kubernetes integrations, and there's the fabric8 Maven plugin, which handles some stuff under the covers.

[0:33:24.3] EK: Usually, Tilt watches your code files and it doesn't care which IDE you use. It has its own dashboard, which is just a page that you open in your browser. I just heard this week that someone mentioned on Slack that they wrote an extension for Tilt. I'm not sure if it was for VS Code or one of the other VS Code-like .NET editors. I don't remember what they're called, but they have a family of those. I just heard that someone wrote one of those and they shared the repo. We have someone looking into that. I haven't seen it myself. The idea came up when I was working at Garden, which is in the same area as Tilt. I think it's pertinent. We also had the idea of a VS Code extension. I think the question is, what do you put in the extension? What do you make the VS Code extension do? Because both Tilt and Garden.
They have their own web dashboards that show users what should be shown, in the manner that we think it should be shown. If you're going to create a VS Code extension, you either replicate that completely, and you basically take the stuff that was in the browser and put it in the IDE, and I don't particularly see much benefit in that. If enough people ask, maybe we'll do it, but it's not something that I find particularly useful. Either you do that and you replicate the functionality, or you come up with new functionality. In both cases, I just don't see a very strong case for what different, IDE-specific functionality you should want.

[0:35:09.0] MG: Yes. The reason why I was asking is that we see all these Pulumi and AWS CDKs coming up, where you basically use a programming language to write your application and application infrastructure code in your IDE, and then all the templating, the YAML stuff, etc., gets generated under the covers. Just this week, AWS announced the CDK for Kubernetes. I was thinking, with this happening, some of these providers abstract the scaffolding as well, including the build. You don't even have to build, because it's abstracted away under the covers. I was seeing this trend. Then obviously, we still have Helm and the templating and Kustomize, and then you still have the manual work, as I mentioned in the beginning. I do like IDE integration, because that's where I spend most of my time. Whenever I have to leave the IDE, it's a context switch that I have to go through, even if it's just for opening another file that I need to edit somewhere. That's why I think having IDE integration is useful for developers, because that's where they spend most of their time. As you said, there might be reasons to not do it in an IDE, because it might just be replicating functionality that's not useful there.

[0:36:29.8] EK: Yeah.
In the case of Tilt, all the config is written in Starlark, which is a language that's basically Python. If your IDE can syntax-highlight Python, it can syntax-highlight the Tilt config files. About Pulumi and that stuff, I'm not that familiar. It's stuff that I know how it works, but I haven't used it myself. I'm not familiar with the browser and IDE integration side of it. The thing about tools like Tilt is that usually, if you set it up right, you can just write your code all day and you don't have to look at the tool. You just switch from your IDE to, let's say, your browser where your app is running, so you get feedback, and that kind of thing. Once you configure it, you don't really spend much time looking at it. You're going to look at it when there are errors. You try to refresh your application and it fails, and you need to find that error. By the time that happened, you already lost focus from your code anyway. Whether you're going to look for your error in a terminal, or on the Tilt dashboard, that's not much of an issue.

[0:37:37.7] MG: That's right. That's right. I agree.

[0:37:39.8] CC: All this talk about tooling and IDEs is making me think to ask you, Ellen: let's say I'm a developer and my company decides that we're going to use Kubernetes. What we are advocating here with this episode is to think about, well, if you're going to be deploying to Kubernetes in production, you should consider running Kubernetes as a local development environment. Now, for those developers who haven't even worked with Kubernetes, where do you suggest they jump in? Because it's too many things. I mean, Kubernetes already is so big, and there are so many toolings around how to operate Kubernetes itself. For a developer who thinks, "Okay, I like this idea of having my own local Kubernetes environment, or a development environment that may also be in the cloud," should they start with tooling like Tilt, or something similar?
Would that make it easier for them to wrap their heads around Kubernetes and what Kubernetes does? Or should they first get a handle on Kubernetes and then look at a tool like this?

[0:38:56.2] EK: Okay. There are a few sides to this question. If you have a very large team, ideally you should get one or a few people to actually really learn Kubernetes, and then make it so that everyone else doesn't have to. Something we have seen is a very large company that is going to do Kubernetes in development. They set up a developer experience team, and then, for example, they have their own wrapper around kubectl, and they basically automate a bunch of stuff so that everyone on the team doesn't have to take a Certified Kubernetes Application Developer certificate. For people who don't know that certificate, it's basically: how much kubectl can you do off the top of your head? That is what that certificate is about, because kubectl is an insanely huge and powerful tool. On the one hand, you should do that. If you have a big team, take a few people, learn all that you can about Kubernetes, and write some wrappers so that people don't have to do kubectl something-something by hand. Just make very easy functions: let's say you give your wrapper a context and a name, and that's going to switch you to a namespace, let's say, where some version of your app is running, that kind of thing. Now, about the tooling. You're going to need someone who has some experience with Kubernetes to set up your development environment in the first place, but once that is set up, if you have the right tooling, you don't really have to know everything that Kubernetes does. You should have at least a conceptual overview.
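The kind of team wrapper described here, a simple command that switches a developer to the namespace where their version of the app runs, could be sketched in Python. The kubectl command shape is real (`kubectl config set-context --current --namespace ...`), but the wrapper name and structure are hypothetical:

```python
import subprocess


def ns_switch_command(namespace: str) -> list[str]:
    """Build the kubectl invocation that points the current context
    at the given namespace (e.g. a per-developer dev namespace)."""
    return [
        "kubectl", "config", "set-context", "--current",
        "--namespace", namespace,
    ]


def switch(namespace: str) -> None:
    # Shelling out keeps the wrapper thin; a real team wrapper would
    # likely add validation, sane defaults, and logging here.
    subprocess.run(ns_switch_command(namespace), check=True)
```

A developer would then run something like `switch("alice-dev")` instead of remembering the full kubectl incantation, which is exactly the kind of abstraction a developer experience team might ship.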
I can tell you for sure that there are hundreds of developers out there writing code that is going to be deployed to Kubernetes, writing code that, whenever they make a change, goes to a Kubernetes development cluster, and they don't have the first – well, I'm not going to say the first clue, but they are not experienced Kubernetes users. That's because of all the tooling that you can put around it.

[0:41:10.5] CC: Yeah, that makes sense.

[0:41:12.2] EK: Yeah. You can abstract a bunch of stuff away with basically good sense, so that you know the common operations that need to be done for your team, and then you just abstract them away, so that people don't have to become kubectl experts. On the other side, you can also abstract a bunch of stuff away with tooling. Basically, as long as your developers have a basic grasp of containers and the basics of Kubernetes, they don't need to know how to operate it with any depth.

[0:41:44.0] MG: Hey, Ellen, in the beginning you said that it's all about this feedback loop and iterating fast. Part of a feedback loop for a developer is unit testing, integration testing, or all sorts of testing. How do you see that changing, or benefiting from tools like Tilt, especially when it comes to integration testing? Unit tests usually run locally, but integration tests don't.

[0:42:05.8] EK: One thing that people can do when they're using Tilt is, once you have Tilt running, you basically have all of your application running. You can just set up one-off tasks with Tilt. You could basically set up a script that does a bunch of stuff, which would basically be what your test does. If it returns zero, it succeeded. If it doesn't, it failed. You can set something up like that. It's not something that we have right now in a prepackaged form that you can use right away. You would basically just say, "Hey, Tilt, run this thing for me," and then you would see if it worked or not. I have to make a plug for the competition right now.
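The generic "Hey, Tilt, run this thing for me" hook described above is typically wired up with Tilt's `local_resource`, which runs an arbitrary command and reports success or failure by exit code. Again a Starlark sketch; the test command, paths, and resource names are illustrative:

```python
# Tiltfile fragment (Starlark); the test command and paths are made up.

# Run the integration tests as a one-off task; a non-zero exit code
# shows up as a failed resource in the Tilt dashboard.
local_resource(
    'integration-tests',
    cmd='go test ./test/integration/...',
    deps=['./test/integration'],   # re-run when test files change
    resource_deps=['my-app'],      # wait until the app resource is up
)
```

This is the unwrapped, generic approach contrasted with Garden's dedicated test primitive below: Tilt only sees a command and its exit code, with no test-specific framework around it.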
Garden has more of that part of things set up. They have tests as a separate primitive, right next to building and deploying, which is what you usually see; they also have testing. It does basically what I just said about Tilt, but they have a special little framework around it. With Garden, you would say, "Oh, here's a test. Here's how you run the test. Here's what the test depends on," etc. Then it runs it and tells you if it failed or not. With Tilt, it would be a more generic approach, where you would just say, "Hey, Tilt, run this and tell me if it fails or not," but without the little wrapping around it that's specific to testing. When it comes to pushing to production, let's say you did a bunch of stuff locally, you're happy with it, and now it's time to push to production. Then there's all that headache with CI and waiting for tests to run and flaky tests and all of that. That is a big open question that everyone's unhappy about and no one really knows which way to run.

[0:43:57.5] DC: That's awesome. Where do you see this space going in the future? I mean, as you look at the tooling that's out there, maybe not specific to the particular Tilt service or capability, but where do you see other people exploring that space? We were talking about AWS dropping a CDK, and there are different people trying to solve the YAML problem, but more from the developer user experience tooling side, where do you see that space going?

[0:44:23.9] EK: For me, it's all about higher-level abstractions and well-defined best practices. Right now, everyone is fumbling around in the dark, not knowing what to do, trying to figure out what works and what doesn't. The main thing that I see changing is that, given enough time, best practices are going to emerge and it's going to be clear for everyone: if you're doing this thing, you should use this workflow.
If you're doing that thing, you should use that workflow. Basically, it's what happened when IDEs emerged and became a thing; that is the best-practices side.

[0:44:57.1] DC: That's a great example.

[0:44:58.4] EK: Yeah. What I see in terms of things being offered tomorrow is prepackaged, higher-level abstractions. I don't think every developer should know how to deal with Kubernetes at a deeper level, the same way I don't know how to build the Linux kernel, even though I use Linux every day. I think things should be wrapped up in a way that lets developers focus on what matters to them, which is, right now, basically writing code. Developers should be able to get to the office in the morning, open up their computers, start writing code, or doing whatever else they want to do, and not worry about Kubernetes, not worry about Lambda, not worry about how this is getting built, how this is getting deployed, how this is getting tested, or what the underlying mechanism is. I'd love for higher-level patterns like those to emerge and be very easy to use for everyone.

[0:45:53.3] CC: Yeah, it's going to be very interesting. I think best practices are such an interesting thing to think about, because somebody could sit down and write, "Oh, these are the best practices we should be following in this space." In my opinion, it's really going to come out of what worked historically, when we have enough data to look at over the years. As far as tooling goes, I think it's going to be a survival of the fittest. Whatever tool has been used the most, that's what's going to be the best-practice way to do things. Yeah, we know today there are so many tools, but I think we're probably going to get to a point where we know what to use for what in the future. With that, we have to wrap up, because we are at the top of the hour. It was so great to have Ellen, or L, as they I think prefer to be called, on the show, Elle. Thank you so much. I mean, L.
See, I can't even follow my own. You're very active on Twitter. We're going to have all the information for how to reach you on the show notes. We're going to have a transcript. As always people, subscribe, follow us on Twitter, so you can be up to date with what we are doing and suggest episodes too on our github repo. With that, thank you everybody. Thank you L.

[0:47:23.1] DC: Thank you, everybody.

[0:47:23.3] CC: Thank you, Michael and –

[0:47:24.3] MG: Thank you.

[0:47:24.8] CC: – thank you, Duffie.

[0:47:26.2] EK: Thank you. It was great.

[0:47:26.8] MG: Until next time.

[0:47:27.0] CC: Until next week.

[0:47:27.7] MG: Bye-bye.

[0:47:28.5] EK: Bye.

[0:47:28.6] CC: It really was.

[END OF EPISODE]

[0:47:31.0] ANNOUNCER: Thank you for listening to the Podlets Cloud Native Podcast. Find us on Twitter @thepodlets and on the thepodlets.io website. That is ThePodlets, altogether, where you will find transcripts and show notes. We'll be back next week. Stay tuned by subscribing.

[END]

See omnystudio.com/listener for privacy information.

Fuelled by Fire
Episode 4 | Matt Duffie

Fuelled by Fire

Play Episode Listen Later Apr 24, 2020 86:30


Dual international, current Auckland Blues rugby player and former NRL player and club favourite with the Melbourne Storm. Had a great time chatting to Duff about his own journey through sport, Storm and the All Blacks. I could've sat down with him for hours. A true gentleman and great athlete with a wealth of experience in all areas. This was one of my favourites that I know you guys will enjoy. See acast.com/privacy for privacy and opt-out information.

Kube Cuddle
Duffie Cooley

Kube Cuddle

Play Episode Listen Later Mar 18, 2020 61:27


In this episode Rich speaks with Duffie Cooley, from VMware and TGIK. Topics include: How Duffie got into distributed systems, infrastructure APIs as a force multiplier, Chaos Engineering and SRE, TGIK, kind, Cluster API, and multitenancy.

The Podlets - A Cloud Native Podcast
Application Transformation with Chris Umbel and Shaun Anderson (Ep 19)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Mar 2, 2020 45:43


Today on the show we are very lucky to be joined by Chris Umbel and Shaun Anderson from Pivotal to talk about app transformation and modernization! Our guests help companies to update their systems and move into more up-to-date setups through the Swift methodology, and our conversation focuses on this journey from legacy code to a more manageable solution. We lay the groundwork for the conversation, defining a few of the key terms and concerns that arise for typical clients, and then Shaun and Chris share a bit about their approach to moving things forward. From there, we move into the Swift methodology and how it plays out on a project, before considering the benefits of further modernization that can occur after the initial project. Chris and Shaun share their thoughts on measuring success, the advantages of their system, and how to avoid rolling back towards legacy code. For all this and more, join us on The Podlets Podcast, today!

Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues

Hosts: Carlisia Campos, Josh Rosso, Duffie Cooley, Olive Power

Key Points From This Episode:

A quick introduction to our two guests and their roles at Pivotal.
Differentiating between application modernization and application transformation.
Defining legacy and the important characteristics of technical debt and pain.
The two-pronged approach at Pivotal; focusing on apps and the platform.
The process of helping companies through app transformation and what it looks like.
Overlap between the Java and .NET worlds; lessons to be applied to both.
Breaking down the Swift methodology and how it is being used in app transformation.
Incremental releases and slow modernization to avoid roll back to legacy systems.
The advantages that the Swift methodology offers a new team.
Possibilities of further modernization and transformation after a successful implementation.
Measuring success in modernization projects in an organization using initial objectives.

Quotes:

"App transformation, to me, is the bucket of things that you need to do to move your product down the line." — Shaun Anderson [0:04:54]

"The pioneering teams set a lot of the guidelines for how the following teams can be doing their modernization work and it just keeps rolling down the track that way." — Shaun Anderson [0:17:26]

"Swift is a series of exercises that we use to go from a business problem into what we call a notional architecture for an application." — Chris Umbel [0:24:16]

"I think what's interesting about a lot of large organizations is that they've been so used to doing big bang releases in general. This goes from software to even process changes in their organizations." — Chris Umbel [0:30:58]

Links Mentioned in Today's Episode:

Chris Umbel — https://github.com/chrisumbel
Shaun Anderson — https://www.crunchbase.com/person/shaun-anderson
Pivotal — https://pivotal.io/
VMware — https://www.vmware.com/
Michael Feathers — https://michaelfeathers.silvrback.com/
Steeltoe — https://steeltoe.io/
Alberto Brandolini — https://leanpub.com/u/ziobrando
Swiftbird — https://www.swiftbird.us/
EventStorming — https://www.eventstorming.com/book/
Stephen Hawking — http://www.hawking.org.uk/
Istio — https://istio.io/
Stateful and Stateless Workload Episode — https://thepodlets.io/episodes/009-stateful-and-stateless/
Pivotal Presentation on Application Transformation: https://content.pivotal.io/slides/application-transformation-workshop

Transcript:

EPISODE 19

[INTRODUCTION]

[0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn't reinvent the wheel.
If you're an engineer, operator or technically minded decision maker, this podcast is for you.

[EPISODE]

[0:00:41.0] CC: Hi, everybody. Welcome back to The Podlets. Today, we have an exciting show. It's myself, Carlisia Campos. We have our usual guest hosts, Duffie Cooley, Olive Power and Josh Rosso. We also have two special guests, Chris Umbel. Did I say that right, Chris?

[0:01:03.3] CU: Close enough.

[0:01:03.9] CC: I should have checked before.

[0:01:05.7] CU: Umbel is good.

[0:01:07.1] CC: Umbel. Yeah. I'm not even a native English speaker, so you have to bear with me. Shaun Anderson. Hi.

[0:01:15.6] SA: You said my name perfectly. Thank you.

[0:01:18.5] CC: Yours is more standard American. Let's see, the topic of today is application modernization. Oh, I just found a word I cannot pronounce. That's on my non-pronounceable words list. Also known as application transformation; I think those two terms are used interchangeably? The experts in the house should say something.

[0:01:43.8] CU: Yeah. I don't know that I would necessarily say that they're interchangeable. They're used interchangeably, I think, by the general population though.

[0:01:53.0] CC: Okay. We're going to definitely dig into that, how it does not make sense to use them interchangeably, because just by the meaning, I would think so, but I'm also not in that world day-to-day as Shaun and Chris are. By the way, please give us a brief introduction, the two of you. Why don't you go first, Chris?

[0:02:14.1] CU: Sure. I am Chris Umbel. I believe it was probably actually pronounced Umbel in Germany, but I go with Umbel. My title this week is the – I think – .NET App Transformation Journey Lead. Even though I focus on .NET modernization, it doesn't end there. I touch a little bit of everything with Pivotal.

[0:02:34.2] SA: I'm Shaun Anderson and I share the same title of the week as Chris, except for where you say .NET, I would say Java.
In general, we play the same role and have slightly different focuses, but there's a lot of overlap.

[0:02:48.5] CU: We get along, despite the .NET and Java thing.

[0:02:50.9] SA: Usually.

[0:02:51.8] CC: You both are coming from Pivotal, yeah? As most people should know, but I'm sure now everybody knows, Pivotal was just recently, as of this date, which is, what are we? End of January. This episode is going to take a while to release, but Pivotal was just acquired by VMware. Here we are.

[0:03:10.2] SA: It's good to be here.

[0:03:11.4] CC: All right. Somebody, one of you, maybe let's say Chris, because you've brought this up: how does application modernization differ from application transformation? Because I think we need to lay the ground and lay the definitions before we can go off and talk about things and sound like experts, and make sure that everybody can follow us.

[0:03:33.9] CU: Sure. I think you might even get different definitions, even from within our own practice. I'll at least lay it out as I see it. I think it's probably consistent with how Shaun's going to see it as well, but it's what we tell customers anyway. At the end of the day, app transformation is the larger [inaudible] bucket. That's going to include, say, just the re-hosting of applications, taking applications from point A to some new point B, without necessarily improving the state of the application itself. We'd say that that's not necessarily an exercise in paying down technical debt; it's just making some change to an application or its environment.

Then on the modernization side, that's when things start to get potentially a little more architectural. That's when the focus becomes paying down technical debt and really improving the application itself, usually from an architectural point of view, and things start to look maybe a little bit more like rewrites at that point.
[0:04:31.8] DC: Would you say that the transformation is more in line with re-platforming, for those that might think about it that way?

[0:04:36.8] CU: We'd say that app transformation might include re-platforming and also the modernization. What do you think of that, Shaun?

[0:04:43.0] SA: I would say transformation is not just the re-platforming, re-hosting and modernization, but also the practice to figure out which should happen as well. There's a little bit more meta in there. Typically, app transformation to me is the bucket of things that you need to do to move your product down the line.

[0:05:04.2] CC: Very cool. I have two questions before we start really digging into the show, still to lay the ground for everyone. My next question will be: are we talking about modernizing and transforming apps so they go to the cloud? Or is there a certain cut-off where we start thinking, “Oh, we need to – things get done differently for them to be called cloud native.” Is there a differentiation, or is one the same as the other, like the process will be the same either way?

[0:05:38.6] CU: Yeah, there's definitely a distinction. The re-platforming bucket, that re-hosting bucket of things, is about your target state. At least for us coming out of Pivotal, we had definitely a product focus, where we're probably only going to be doing work if it intersects with our product, right? We're going to be doing re-platforming targeted, say, typically at a cloud environment, usually Cloud Foundry or something to that effect. Then modernization, while we're usually doing that with customers who have been running our platform, there's nothing to say that you necessarily need a cloud, or any cloud, to do modernization. We tend to, based on who we work for, but you could say that those disciplines and practices really are agnostic to where things run.

[0:06:26.7] CC: Sorry, I was muted. I wanted to ask Shaun if you wanted to add to that. Do you have the same view?

[0:06:33.1] SA: Yeah.
I have the same view. I think part of what makes our process unique that way is we're not necessarily trying to target a platform for deployment when we're going through the modernization part anyway. We're really looking at how we can design this application to be the best application it can be. It just so happens that that tends to be more 12-factor compliant, which is very cloud compatible, but it's not that we start by trying to aim for a particular platform.

[0:07:02.8] CC: All right. If everybody allows me, after this next question, I'll let other hosts speak too. Sorry for monopolizing, but I'm so excited about this topic. Again, in the spirit of understanding what we're talking about, what do you define as legacy? Because that's what we're talking about, right? We're definitely talking about a move up, move forwards. We're not talking about regression and we're not talking about scaling down. We're talking about moving up to a modern technology stack. That means, that implies, we're talking about something that's legacy. What is legacy? Is it contextual? Do we have a hard definition? Is there a best practice to follow? Is there something public people can look at? Okay, if my app or system fits this recipe, then it's considered legacy; like a diagnosis that has a consensus.

[0:07:58.0] CU: I can certainly tell you how you can't necessarily define legacy. One of the ways is by the year that it was written. You can certainly say that there are shops who are writing legacy code today. They're still writing legacy code. As soon as they're done with a project, it's instantly legacy. There are people that are trying to define it, like the Michael Feathers definition, which is, I think, any application that doesn't have tests. I don't know that that fits what our practice necessarily sees legacy as.
Basically, anything that's accrued a significant amount of technical debt, regardless of when the application was written or conceived, fits into that legacy bucket. Really, our work isn't necessarily as concerned about whether something's legacy or not as much as: is there pain that we can solve with our practice? Like I said, we've modernized things that were, for all intents and purposes, quite modern in terms of the year they were written.

[0:08:53.3] SA: Yeah. I would double down on the pain. Legacy to us often is something that was written as a prototype a year ago. Now it's ready to prove itself. It's going to be scaled up, but it wasn't built with scale in mind, or something like that. Even though it may be the latest technology, it just wasn't built for the load, for example. Sometimes legacy can be – the pain is we have applications on a mainframe and we can't find COBOL developers, and we're leasing a giant mainframe and it's costing a lot of money, right? There are different flavors of pain.

It also could be something as simple as a data center move. Something like that, where we've got all of our applications running on iron and we need to go to a virtual data center somewhere, whether it's cloud or on-prem. Each one of those to us is legacy. It's all about the pain.

[0:09:47.4] CU: I think, as miserable as that might sound, that's really where it starts: listening to that pain and hearing directly from customers what that pain is. It sounds terrible when you think about it, that you're always in search of pain, but that is indeed what we do, and we try to alleviate that in some way. That pain is what dictates the solution that you come up with, because there are certain kinds of pain that aren't going to be solved with, say, a modernization approach, or even a more platform-centered approach. You have to listen and make sure that you're applying the right medicine to the right pain.
[0:10:24.7] OP: It seems like an interesting thing, bringing together what you said, Chris, and what you said earlier, Shaun. Shaun, you had mentioned the target platform doesn't necessarily matter, at least upfront. Then Chris, you had implied bringing the right thing in to solve the pain, or to help remedy the pain to some degree. I think what's interesting, maybe, about the perspectives for those on this call is that a lot of times our entry points are a lot more focused on infrastructure and platform teams, where they have these objectives to solve, like cost and ability to scale and so on and so forth. It seems like your entry point, at least historically, is maybe a little bit more focused on finding pain points on more of the app side of the house. I'm wondering if that's a fair assessment, or if you could speak to how you find opportunities and what you're really targeting.

[0:11:10.6] SA: I would say that's a fair assessment from the perspective of our services team. We're mainly app-focused, but it's almost a two-pronged approach, where there's platform pain and application pain. What we've seen is that often, solving one without the other is not a great solution, right? I think that's where it's challenging, because there's so much to know, right? It's hard to find one team or one person who can point out the pain on both sides. It just depends on, often, how the customer approaches us.

If they are saying something like, “We're a credit card company and we're getting our butts kicked by this other company, because they can do biometrics and we can't yet, because of the limitations of our application,” then we would approach it from the app-first perspective. If it's another pain point, where our day-two operations are really suffering, we can't scale, where we have issues that the platform is really good at solving, then we may start there. It always tends to merge together in the end.
[0:12:16.4] CU: You might be surprised how much variety there is in terms of the drivers for people coming to us. There are a lot of cases where the work came to us by way of the platform work that we've done. It started with our sister team who focuses on the platform side of things. They solve the infrastructure problems ahead of us and then we close things out on the application side. If our account teams and our organization are really listening to each individual customer, you'll find that the pain is drastically different, right?

There are some cases where the driver is cost, and that's an easy one to understand. There are also drivers that are usually like a date, such as: this data center goes dark on this date and I have to do something about it. If I'm not out of that data center, then my apps no longer run. The solution to that is very different than the solution you would have to, “Look, my application is difficult for me to maintain. It takes me forever to ship features. Help me with that.” There are two very different solutions to those problems, each of which are things that come our way. It's just that the former probably comes in by way of our platform team.

[0:13:31.1] DC: Yeah, that's an interesting space to operate in, in application transformation and stuff. I've seen entities within some of the larger companies that represent this field as well. Sometimes that's called production engineering, or there are a few other examples of this that I'm aware of. I'm curious how you see that happening within larger companies. Do you find that there is a particular size entity that is actually striving to do this work with the tools that they have internally, or do you find that typically, most companies just need something like an application transformation practice to come in and help them figure this part out?

[0:14:09.9] SA: We've seen a wide variety, I think.
One of them is maybe a company really has a commitment to get to the cloud, and they get a platform and then they start putting some simple apps up, just to learn how to do it. Then they get stuck with, “Okay. Now how do we, with trust, get some workloads that are running our business on it?” They will often bring us in at that point, because they haven't done it before. Experimenting with something that valuable to them usually means that they slow down.

There are other times where we've come in to modernize applications, whether it's a particular business unit, for example, that may have been trying to get off the mainframe for the last two years. They're smart people, but they get stuck again, because they haven't figured out how to do it. What often happens, and Chris can talk about some examples of this, is once we help them figure out how to modernize, or the recipes to follow to start getting their systems systematically onto the platform and modernized, they tend to form a competency area around it, where they'll start to staff it with the people who are really interested, and they take over where we started from.

[0:15:27.9] CU: There might be a little bit of bias to that response, in that typically, in order to even get in the door with us, you're probably a Fortune 100, or at least a 500, or government, or something to that effect. We're going to be seeing people that, one, have a mainframe to begin with; two, would have the capacity to fund, say, a dedicated transformation team, or to build a unit around that. You could say that the smaller an organization gets, maybe the easier it is to just have the entire organization write software the modern way to begin with. At least on the large side, we do tend to see people try to build, they'll use different names for it, a dedicated center of excellence or practice around modernization.
Our hope is to help them build that and hopefully put them in a position that that can eventually disappear, because eventually, you should no longer need that as a separate discipline.

[0:16:26.0] JR: I think that's an interesting point. For me, I'd argue that you do need it going forward, because of the cognitive overhead between understanding how your application is going to thrive on today's complex infrastructure models and understanding how to write code that works. I think that one person having all of that in their head all the time is a little too much, a little too far to go sometimes.

[0:16:52.0] CU: That's probably true. When you consider the size of the portfolios and the size of the backlog for modernization that people have, I mean, people are going to be busy on that for a very long time anyway. Even if it is finite, it still has a very long lifespan at a minimum.

[0:17:10.7] SA: At a certain point, it becomes like painting the Golden Gate Bridge. As soon as you finish, you have to start again, because of technology changes, or business needs, and that sort of thing. It's probably a very dynamic organization, but there's a lot of overlap. The pioneering teams set a lot of the guidelines for how the following teams can be doing their modernization work, and it just keeps rolling down the track that way. It may be that people are busy modernizing applications off of WebLogic, or WebSphere, and it takes two years or more to get that completed for this enterprise. It was 20, 50 different projects. To them, it was brand-new each time, which is cool, actually, to come into that.

[0:17:56.3] JR: I'm curious, and I'd definitely love to hear from Olive. I have one more question before I pass it on, and I think we'd love to hear your thoughts on all of this.
The question I have is: when you're going through your day-to-day, working on .NET and Java applications and helping people figure out how to go about modernizing them, what we've talked about so far represents some of the deeper architectural issues and stuff. You've already mentioned 12-factor apps and being able to move, or thinking about the way that you frame the application as far as the inputs it takes to configure, or the lifecycle of those things. Are there some other common patterns that you see across the two practices, Java and .NET, that you think are just concrete examples of stuff that people should take away from this episode, so they could look at their app and try to get ahead of the game a little bit?

[0:18:46.3] SA: I would say a big part of the commonality that Chris and I both work on a lot is we have a methodology called the Swift methodology that we use to help discover how the applications really want to behave, and define a notional architecture that is, again, agnostic of the implementation details. We'll often come in with the same process, and I don't need to be a .NET expert in a .NET shop to figure out how the system really wants to be designed, how you want to break things into microservices; the implementation becomes where those details are. Chris and I both collaborate on a lot of that work. It makes you feel a little bit better about the output when you know that the technology isn't as important. You get to actually pick which technology fits the solution best, as opposed to starting with the technology and letting a solution form around it, if that makes sense.

[0:19:42.4] CU: Yeah. I'd say the interesting thing is just how difficult it is, while we're going through the Swift process with customers, to get them to not get terribly attached to the nouns of the technology and the solution.
They've usually gone in where it's not just a matter of the language, but they have something picked in their head already for data storage, for messaging, etc., and they're deeply attached to some of these decisions, deeply and emotionally attached to them. Fundamentally, when we're designing a notional architecture as we call it, really you should be making decisions on what nouns you're going to pick based on that architecture to use the tools that fit that. That's generally a bit of a process the customers have to go through. It's difficult for them to do that, because the more technical their stakeholders tend to be, often the more attached they are to the individual technology choices and breaking that is the principal role for us. [0:20:37.4] OP: Is there any help, or any investment, or any coordination with those vendors, or the purveyors of the technologies that perhaps legacy applications are, or indeed the platforms they're running on, is there any help on that side from those vendors to help with application transformation, or making those applications better? Or do organizations have to rely on a completely independent, so the team like you guys to come in and help them with that? Do you understand my point? Is there any internal – like you mentioned WebLogic, WebSphere, do the purveyors of those platforms try and drive the transformation from within there? Or is it organizations who are running those apps have to rely on independent companies like you, or like us to help them with that? [0:21:26.2] SA: I think some of it depends on what the goal of the modernization is. If it's something like, we no longer want to pay Oracle licensing fees, then of course, obviously they – WebLogic teams aren't going to be happy to help. That's not always the case. Sometimes it's a case where we may have a lot of WebLogic. It's working fine, but we just don't like where it's deployed and we'd like to containerize it, move it to Kubernetes or something like that. 
In that case, they're more willing to help. At least in my experience, I've found that the technology vendors are rightfully focused just on upgrading things from their perspective, and they want to own the world, right? WebLogic will say, “Hey, we can do everything. We have clustering. We have messaging. We've got good access to data stores.” It's hard to find a technology vendor that has that broader vision, or the discipline to not try to fit their solutions into the problem when maybe they're not the best fit.

[0:22:30.8] CU: It's a broad generalization, but specifically on the Java side, it seems that at least with app server vendors, the status quo is usually serving them quite well. Quite often, we have a bit of an adversarial relationship with them on occasion. I can certainly say that within the .NET space, we've worked relatively collaboratively with Microsoft on things like Steeltoe, which is, I wouldn't say a Spring Boot analog, but at least a microservice library that helps people achieve 12-factor cloud nativeness. That's something where I guess Microsoft represents both the legacy side, but also the future side, and we're part of a solution together there.

[0:23:19.4] SA: Actually, that's a good point, because the other way that we're seeing vendors be involved is in creating operators on the Kubernetes side, or Cloud Foundry tiles; something that makes it easy for their system to still be used in the new world. That's definitely helpful as well.

[0:23:38.1] CC: Yeah, that's interesting.

[0:23:39.7] JR: Recently, people on my team and I went through a training from both Shaun and Chris, interestingly enough in Colorado, about this thing called the Swift methodology. I know it's a really important methodology to how you approach some of these application transformation engagements. Could you two give us a high-level overview of what that methodology is?
[0:24:02.3] SA: I want to hear Chris go through it, since I always answer that question first.

[0:24:09.0] CU: Sure. I figured since you were the inventor, you might want to go with it, Shaun, but I'll give it a stab anyway. Swift is a series of exercises that we use to go from a business problem into what we call a notional architecture for an application. The one thing that you'll hear Shaun say all the time, which I think is pretty apt, is that we're trying to understand how the application wants to behave. This is a very analog process, especially at the beginning. It's one where we get people who can speak about the business problem behind an application and the business processes behind an application. We get them into a room, a relatively large room, typically with a bunch of wall space, and we go through a series of exercises with them where we tease that business process apart.

We start with a relatively lightweight version of Alberto Brandolini's EventStorming method, where we map out with the subject matter experts what all of the business events that occur in a system are. That is a non-technical exercise; a completely non-technical exercise. As a matter of fact, all of this uses sticky notes and arts and crafts. After we've gone through that process, we transition into a Boris diagram, an exercise of Shaun's design where we take the domains, or at least the service candidates, that we've extrapolated from that event storming and start to draw out a notional architecture; an 80% idea of what we think the architecture is going to look like. We're going to do this for thin slices of that business problem. At that point, it starts to become something that a software developer might be interested in. We have an exercise called Snappy that generally occurs concurrently, which translates that message-flow Boris diagram into something that's at least a little bit closer to what a developer could act upon.
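To make that hand-off concrete, here is a toy sketch in code of the kind of output these exercises produce. The event names and capability groupings below are invented for illustration; the method itself is deliberately analog and produces sticky notes, not code. Grouping the discovered business events by the capability they belong to yields a first cut at service candidates:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy illustration of turning event-storming output into service candidates.
// Event names and capability groupings are invented, not taken from the method.
public class EventGrouping {
    // Each discovered business event is tagged with the capability it belongs to.
    public record DomainEvent(String name, String capability) {}

    /** Group events by capability; each group is a first-cut service candidate. */
    public static Map<String, List<String>> serviceCandidates(List<DomainEvent> events) {
        Map<String, List<String>> candidates = new TreeMap<>();
        for (DomainEvent e : events) {
            candidates.computeIfAbsent(e.capability(), k -> new ArrayList<>()).add(e.name());
        }
        return candidates;
    }
}
```

In a real session, the grouping is argued out on the wall by the subject matter experts; the point of the sketch is only that the resulting artifact is a capability-to-events mapping that a developer can act on.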
Again, these are sticky note and analog exercises that generally go on for about a week or so, things that we do interactively with customers to try, in a purely non-technical way, at least at first, to understand that problem and tell you an architecture that you can then act on. We try to position this as: customer, you already have all of the answers here. What we're going to do as facilitators is try to pull those out of your head. You just don't know how to get to the truth, but you already know that truth, and we're going to design this architecture together. How did I do, Shaun?

[0:26:44.7] SA: I couldn't have said it better myself. I would say one of the interesting things about this process is the reason why it was developed the way it was: in the world of technology, and especially engineers, I've definitely seen that you have two modes of thought when you come from the business world to the technical world. Often, engineers will approach a problem in a very different, very focused, blindered way compared to business folks. Ultimately, what we try to think of is that the purpose of the software is to enable the business to run well. In order to do that, you really need to understand, at least at a high level, what the heck is the business doing?

Surprisingly and almost consistently, the engineering team doing the work is separated from the business team enough that it's like playing the telephone game, right? Where the business folks say, “Well, I told them to do this.” The technical team is like, “Oh, awesome. Well then, we're going to use all this amazing technology and build something that really doesn't support you.” This process really brings everybody together to discover how the system really wants to behave. Also as a side effect, you get everybody agreeing that yes, that is the way it's supposed to be. It's exciting to see teams come together that had actually never even worked together.
You see the light bulbs go on and say, “Oh, that's why you do that.” The end result is that in a week, we can go from nobody really knowing each other, or quite understanding the system as a whole, to having a backlog of work that we can prioritize based on the learnings that we have, and feeling pretty comfortable that the end result is going to be pretty close to how we want to get there. Then the biggest challenge is defining how we get from point A to point B. Part of that layering of the Swift method is knowing when to ask those questions.

[0:28:43.0] JR: A micro follow-up and then I'll keep my mouth shut for a little bit. Is there a place that people could go online to read about this methodology, or just get some ideas of what you just described?

[0:28:52.7] SA: Yeah. You can go to swiftbird.us. That has a high-level, more public-facing overview of how the methodology works. There are also internal resources that are constantly being developed as well. That's where I would start.

[0:29:10.9] CC: That sounds really neat. As always, we are going to have links in the show notes for all of this. I checked out the website for the EventStorming book. There is a resources page there that has a list of a bunch of presentations. Sounds very interesting. I wanted to ask Chris and Shaun, have you ever seen, or heard of, a case where a company went through the transformation, or modernization process and then rolled back to their legacy system for any reason?

[0:29:49.2] SA: That's actually a really good question. It implies that often, the way people think about modernization would be more of a big bang approach, right? Where at a certain point in time, we switch to the new system. If it doesn't work, then we roll back. Part of what we try to do is have incremental releases, where we're actually putting small slices into production, so you're not rolling back a whole system from modern back to legacy.
It's more that you have a week's worth of work going into production for one of the thin slices, like Chris mentioned. If that doesn't work, or there's something unexpected about it, then you're rolling back just a small chunk. You're not really jumping off a cliff for modernization. You're really taking baby steps. If it's two steps forward and one step back, you're still making a lot of really good progress. You're also gaining confidence as you go, so that in the end, in two years, you're going to have a completely shiny new modern system and you're comfortable with it, because you're getting there an inch at a time, as opposed to taking a big leap.

[0:30:58.8] CU: I think what's interesting about a lot of large organizations is that they've been so used to doing big bang releases in general. This goes from software to even process changes in their organizations. They've become so used to that that it often doesn't even cross their mind that it's possible to do something incrementally. We really do oftentimes have to spend time getting buy-in from them on that approach.

You'd be surprised that even in industries that you'd think would be fantastic at managing risk, when you look at how they actually deal with the deployment and rolling out of software, they're oftentimes taking approaches that maximize their risk. There's no better way to make something risky than doing a big bang. Yeah, as Shaun mentioned, the specifics of Swift are to find a way to understand where, and get a roadmap for how, to carve out incremental slices, so that you can strangle a large monolithic system slowly over time. That's something that's pretty powerful. Once someone gets bought in on that, they absolutely see the value, because they're minimizing risk. They're making small changes. They're easy to roll back one at a time.
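The slice-at-a-time release-and-rollback style described here is commonly implemented with a strangler-fig facade in front of the legacy system. A minimal sketch, with invented route prefixes and backend URLs (not from the episode): requests for slices that have been modernized are routed to the new service, while everything else continues to hit the legacy monolith, so each slice can ship, and be rolled back, independently.

```java
import java.util.Set;

// Illustrative strangler-fig routing facade. Route prefixes and URLs are
// invented for the sketch; a real deployment would do this in a gateway
// or load balancer rather than hand-rolled code.
public class StranglerRouter {
    private final Set<String> modernizedPrefixes;
    private final String modernBaseUrl;
    private final String legacyBaseUrl;

    public StranglerRouter(Set<String> modernizedPrefixes, String modernBaseUrl, String legacyBaseUrl) {
        this.modernizedPrefixes = modernizedPrefixes;
        this.modernBaseUrl = modernBaseUrl;
        this.legacyBaseUrl = legacyBaseUrl;
    }

    /** Returns the backend URL this request path should be proxied to. */
    public String route(String path) {
        for (String prefix : modernizedPrefixes) {
            if (path.startsWith(prefix)) {
                return modernBaseUrl + path; // this slice has been carved out
            }
        }
        return legacyBaseUrl + path; // everything not yet modernized stays on the monolith
    }
}
```

Rolling a slice back is then just removing its prefix from the modernized set; no big bang cutover is involved.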
You might see people who would stop somewhere along the way, and we wouldn't necessarily say that that's a problem, right? Just like not every app needs to be modernized, maybe there are portions of systems that could stay where they are. Is that a bad thing? I wouldn't necessarily say that it is. Maybe that's the best way for that organization.

[0:32:35.9] DC: We've bumped into this idea now a couple of different times and I think that both Chris and Shaun have brought this up. It's a little prelude to a show that we are planning on doing. One of the operable quotes from that show is, “The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.” It's a quote by Stephen Hawking. It speaks exactly to that, right? When you come to a problem with a solution in your mind, it is frequently difficult to understand the problem on its merits, right? It's really interesting seeing that crop up again in this show.

[0:33:08.6] CU: I think oftentimes the advantage of a very discovery-oriented method such as Swift is that it allows you to start from scratch with a problem set, with people maybe that you aren't familiar with, and you don't have some of that baggage and can ask the dumb questions to get to some of the real answers. Another phrase that I know Shaun likes to use is that our role as facilitators of this method is to ask dumb questions. I mean, you just can't put enough value on that, right? The only way that you're going to break that established thinking is by asking questions at the root.

[0:33:43.7] OP: One question, actually there was something recently that happened in the Kubernetes community which I thought was pretty interesting and I'd like to get your thoughts on it, which is that Istio, a project that operates as a service mesh, I'm sure you all are familiar with it, has recently decided to unmodernize itself in a way. It was originally developed as a set of microservices.
They have had no end of difficulty in optimizing the different interactions between those services and the nodes. Then recently, they decided this might be a good example of when to monolith, versus when to microservice. I'm curious what your thoughts are on that, or if you have familiarity with it.

[0:34:23.0] CU: I'm not going to necessarily speak too much to this. Time will tell whether the monolithing that they're doing at the moment is appropriate or not. Quite often, the starting point for us isn't necessarily a monolith. What it is is a proposed architecture coming from a customer that they're proud of: this is my microservice design. You'll see a simple system with maybe hundreds of nano-services. The surprise that they have is that the recommendation from us, coming out of our Swift sessions, is that actually, you're overthinking this. We're going to take that idea that you have anyway and maybe shrink it down to, say, tens of services, or just a handful of services.

I think one of the mistakes that people make within enterprises, or with microservices at the moment, is to say, “Well, that's not a microservice. It's too big.” Well, how big or how small makes a microservice, right? Oftentimes we are, at least conceptually, taking and combining services based on the customer's architecture. Very common.

[0:35:28.3] SA: Monoliths aren't necessarily bad. I mean, people use the term almost as a pejorative, “Oh, you have a monolith.” In our world it's like, well, monoliths are bad when they're bad. If they're not bad, then that's great. The corollary to that is micro-servicing for the sake of micro-servicing isn't necessarily a good thing either. When we go through the Boris exercise, really what we're doing is showing how domains, or capabilities, relate to each other. That happens to map really well, in our opinion, to first-cut microservices, right?
You may have an order service, or a customer service that manages some of that. Just because we map capabilities and how they relate to each other, it doesn't mean the implementation can't even be a single monolith, but componentized inside, right? That's part of what we try really hard to do: avoid the religion of monolith versus microservices, or even having to spend a lot of time trying to define what a microservice is to you. It's really more of, well, a system wants to behave this way. Now, surprise, you just did domain-driven design and mapped out some good 12-factor compliant microservices, should you choose to build it that way, but there are other constraints that always apply at that point. [0:36:47.1] OP: Is there more traction in organizations implementing this methodology on a net new business, rather than on currently running businesses or applications? Are there situations that you have seen where a new project, or new functionality within a business, starts to drive and implement this methodology, and then it creeps through to the other lines of business within the organization, because that first one was successful? [0:37:14.8] CU: I'd say that based on the nature of who our customers are as an app transformation practice, and what their problems are, we're generally used to having a starting point of a process, or software that exists already. There's nothing at all to mandate that it has to be that way. As a matter of fact, with folks from our labs organization, we've used these methods in what you could probably call greener fields. At the end of the day, when you have a process, or even a candidate process, something that doesn't exist yet, as long as you can get those ideas onto sticky notes and onto a wall, this is a very valid way of turning ideas into an architecture and an architecture into software.
[0:37:59.4] SA: We've seen that happen in practice a couple of times, where maybe a piece of the methodology was used, like EventStorming, just to get a feel for how the business wants to behave. Then to rapidly try something out in maybe more of an evolutionary architecture approach, an MVP approach: let's just build something from a user perspective just to solve this problem and then try it out. If it starts to catch hold, then iterate back and drill into it a little bit more and say, "All right. Now we know this is going to work." We're modernizing something that may be two weeks old just because, hooray, we proved it's valuable. We didn't necessarily have to spend as much upfront time on designing that as we would on a system that's already proven itself to be of business value. [0:38:49.2] OP: This might be a bit of a broad question, but what defines success for projects like this? I mean, we mentioned earlier about cost, and maybe some of the drivers are to move off certain mainframes and things like that. If you're undergoing an application transformation, it seems to me like it's an ongoing thing. How do enterprises try to evaluate that return on investment? How does it relate to success criteria? I mean, faster release times, etc., potentially might be one, but how is that typically evaluated, with somebody internally saying, "Look, we are running a successful project"? [0:39:24.4] SA: I think part of what we try to do upfront is identify what the objectives are for a particular engagement. Often, those objectives start out with one thing, right? It's too costly to keep paying IBM or Oracle for WebLogic, or WebSphere. As we go through and talk through what types of things we can solve, those objectives get added to, right? It may be that the first thing, our primary objective, is we need to start moving workloads off of the mainframe, or workloads off of WebLogic, or WebSphere, or something like that.
There are other objectives that are part of this too, which can include things as interesting as developer happiness, right? They have a large team of 150 developers that are really just getting sick of doing the same old thing and want new technology. That's actually a success criterion, maybe down the road a little bit, but it's more of a nice-to-have. It's a long-winded way of saying, when we start and incept these projects, we usually start out with: let's talk through what our objectives are and how we measure success, those key results for those objectives. As we're iterating through, we keep measuring ourselves against those. Sometimes the objectives change over time, which is fine, because you learn more as you're going through it. Part of that incremental, iterative process is measuring yourself along the way, as opposed to waiting until the end. [0:40:52.0] CC: Yeah, makes sense. I guess these projects, as you say, are continuous and constantly self-adjusting and self-analyzing to re-evaluate the success criteria as they go along. Yeah, so that's interesting. [0:41:05.1] SA: One other interesting note, though: personally, how we like to measure ourselves is when we see one project moving along and the customers start to form other projects that are similar, then we know, "Okay, great. It's taking hold." Now other teams are starting to do the same thing. We've become the cool kids and people want to be like us. The only reason that happens is when you're able to show success, right? Then other teams want to be able to replicate that. [0:41:32.9] CU: The customer's OKRs, oftentimes they can be a little bit easier to understand. Sometimes they're not. Typically, they involve time or money, where I'm trying to take release times from X to Y, or decrease my spend from X to Y. The way that I think we measure ourselves as a team is around how clean we leave the campsite when we're done.
We want the customers to be able to run with this and to continue to do this work and to be experts. As much as we'd love to take money from someone forever, we have a lot of people to help, right? Our goal is to help build that practice and center of excellence and expertise within an organization, so that as their goals or ideas change, they have a team to help them with that, and we can ride off into the sunset and go help other customers. [0:42:21.1] CC: We are coming up to the end of the episode, unfortunately, because this has been such a great conversation. It turned out to be more of an interview style, which was great. It was great getting the chance to pick your brains, Chris and Shaun. Going along with the interview format, I'd like to ask you: is there any question that wasn't asked, but you wish had been? The intent here is to illuminate what this process is, for us and for people who are listening, especially people who might be in pain but might be thinking this is just normal. [0:42:58.4] CU: That's an interesting one. I guess to some degree, that pain is unfortunately normal. That's just unfortunate. Our role is to help solve that. I think complacency is the absolute worst thing in an organization. If there is pain, rather than saying that the solution won't work here, let's start to talk about solutions to that. We've seen customers of all shapes and sizes. No matter how large, or cumbersome they might be, we've seen a lot of big organizations make great progress. If your organization's in pain, you can use them as an example. There is light at the end of the tunnel. [0:43:34.3] SA: It's usually not a train. [0:43:35.8] CU: Right. Usually not. [0:43:39.2] SA: Other than that, I think you asked all the questions that we always try to convey to customers: how we do things, what is modernization. There's probably a little bit about re-platforming, doing the bare minimum to get something onto the cloud.
We didn't talk a lot about that, but it's a little bit less meta, anyway. It's more technical and more recipe-driven as you discover what the workload looks like. It's more about, is it something we can easily do a CF push on, or just create a container and move it up to the cloud with minimal changes? Conceptually, there's not a lot of complexity. Implementation-wise, there are still a lot of challenges there too. They're not as fun to talk about, for me anyway. [0:44:27.7] CC: Maybe that's a good excuse to have some of our colleagues back on here with you. [0:44:30.7] SA: Absolutely. [0:44:32.0] DC: Yeah, in a previous episode we talked about persistence and state, those sorts of things, and how they relate to your applications when you're thinking about re-platforming, and even just where you're planning on putting those applications. For us, that question comes up quite a lot. That's almost step zero, trying to figure out the state model and those sorts of things. [0:44:48.3] CC: That episode was named States in Stateless Apps, I think. We are at the end, unfortunately. It was so great having you both here. Thank you Duffie, Shaun, Chris and, I'm going by the order I'm seeing people on my video, Josh and Olive. Until next time. Make sure please to let us know your feedback. Subscribe. Give us a thumbs up. Give us a like. You know the drill. Thank you so much. Glad to be here. Bye, everybody. [0:45:16.0] JR: Bye all. [0:45:16.5] CU: Bye. [END OF EPISODE] [0:45:17.8] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END]

The Podlets - A Cloud Native Podcast

There are two words that get the blame more often than not when a problem cannot be root-caused: the network! Today, along with special guest Scott Lowe, we try to dig into what the network actually means. We discover, through our discussion, that the network is, in fact, a distributed system. This means that each component of the network has a degree of independence, and the complexity of them makes it difficult to understand the true state of the network. We also look at some of the fascinating parallels between networks and other systems, such as the configuration patterns for distributed systems. A large portion of the show deals with infrastructure and networks, but we also look at how developers understand networks. In a changing space, despite self-service becoming more common, there is still generally a poor understanding of networks from the developers' vantage point. We also cover other network-related topics, such as the future of the network engineer's role, transferability of their skills and other similarities between network problem-solving and development problem-solving. Tune in today!
Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io
https://github.com/vmware-tanzu/thepodlets/issues

Hosts:
Duffie Cooley
Nicholas Lane
Josh Rosso

Key Points From This Episode:

• The network is often confused with the server or other elements when there is a problem.
• People forget that the network is a distributed system, which has independent routers.
• The distributed pieces that make up a network could be standalone computers.
• The parallels between routing protocols and configuration patterns for distributed systems.
• There is not a model for eventually consistent networks, particularly if they are old.
• Most routing patterns have a time-sensitive mechanism where traffic can be re-dispersed.
• Understanding that a network is a distributed system gives insights into other ones, like Kubernetes.
• Even from a developer's perspective, there is a limited understanding of the network.
• There are many overlaps between developer and infrastructural thinking about systems.
• How can network engineers apply their skills across different systems?
• As the future changes, understanding the systems and theories is crucial for network engineers.
• There is a chasm between networking and development.
• The same 'primitive' tools are still being used for software application layers.
• An explanation of CSMA/CD, collisions and their applicability.
• Examples of cloud native applications where the network does not work at all.
• How Spanning Tree works and the problems that it solves.
• The relationship between software-defined networking and the adoption of cloud native technologies.
• Software-defined networking increases the ability to self-service.
• With self-service on-prem solutions, there is still not a great deal of self-service.
Quotes:

“In reality, what we have are 10 or hundreds of devices with the state of the network as a system, distributed in little bitty pieces across all of these devices.” — @scott_lowe [0:03:11]

“If you understand how a network is a distributed system and how these theories apply to a network, then you can extrapolate those concepts and apply them to something like Kubernetes or other distributed systems.” — @scott_lowe [0:14:05]

“A lot of these software defined networking concepts are still seeing use in the modern clouds these days.” — @scott_lowe [0:44:38]

“The problems that we are trying to solve in networking are not different than the problems that you are trying to solve in applications.” — @mauilion [0:51:55]

Links Mentioned in Today’s Episode:

Scott Lowe on LinkedIn — https://www.linkedin.com/in/scottslowe/
Scott Lowe’s blog — https://blog.scottlowe.org/
Kafka — https://kafka.apache.org/
Redis — https://redis.io/
Raft — https://raft.github.io/
Packet Pushers — https://packetpushers.net/
AWS — https://aws.amazon.com/
Azure — https://azure.microsoft.com/en-us/
Martin Casado — http://yuba.stanford.edu/~casado/

Transcript: EPISODE 15 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.4] DC: Good afternoon everybody. In this episode, we’re going to talk about the network. My name is Duffie Cooley and I’ll be the lead of this episode and with me, I have Nick. [0:00:49.0] NL: Hey, what’s up everyone. [0:00:51.5] DC: And Josh. [0:00:52.5] JS: Hi. [0:00:53.6] DC: And Mr. Scott Lowe joining us as a guest speaker. [0:00:56.2] SL: Hey everyone.
[0:00:57.6] DC: Welcome, Scott. [0:00:58.6] SL: Thank you. [0:01:00.5] DC: In this discussion, like we always do, we're going to try and stay away from particular products or solutions that are related to the problem. The goal of it is to really kind of dig into what the network means when we refer to it as it relates to cloud native applications, or just application design in general. One of the things that I've noticed over time, and I'm curious what you all think, is that people are kind of of the mind that if they can't root-cause a particular issue that they run into, they're like, "That was the network." Have you all seen that kind of stuff out there? [0:01:31.4] NL: Yes, absolutely. In my previous life, before being a Kubernetes architect, I actually used my network engineering degree to be a network administrator for the Boeing Company. Time and time again, someone would come to me and say, "This isn't working. The network is down." And I'm like, "Is the network down or is the server down?" Because those are different things. Turns out it was usually the server. [0:01:58.5] SL: I used to tell my kids, they would come to me and they would say, the Internet is down, and I would say, "Well, you know, I don't think the entire Internet is down. I think it's just our connection to the Internet." [0:02:10.1] DC: Exactly. [0:02:11.7] JS: Dad, the entire global economy is just taking a total hit. [0:02:15.8] SL: Exactly, right. [0:02:17.2] DC: I frequently tell people that the first distributed system that I ever had a real understanding of was the network, you know? It's interesting because it relies on the premises that I think a good distributed system should, in that there is some autonomy to each of the systems, right?
They are dependent on each other and even intercommunicate with each other, but fundamentally, when you look at routers and things like that, they are autonomous in their own way. There's work that they do exclusive to the work that others do and exclusive to their dependencies, which I think is very interesting. [0:02:50.6] SL: I think the fact that the network is a distributed system, and I'm glad you said that, Duffie, is what most people overlook when they start sort of blaming the network, right? Let's face it, in the diagrams, right, the network's always just this blob, right? Here's the network, right? It's this thing, this one singular thing. When in reality, what we have are like 10 or hundreds of devices with the state of the network as a system, distributed in little bitty pieces across all of these devices. In no way, aside from logging in to each one of these devices, are we able to assemble what the overall state is, right? Even routing protocols, I mean, their entire purpose is to assemble some sort of common understanding of what the state of the network is. Melding together not just IP addresses, which are this abstract concept, but physical addresses and physical connections, and trying to reason and make decisions about how we send traffic across. It's far more complex than a lot of people understand. I think that's why it's just like, the network is down, right? When in reality, it's probably something else entirely. [0:03:58.1] DC: Yeah, absolutely. Another good point to bring up is that each of these distributed pieces of this distributed system is in itself basically just a computer. A lot of times, I've talked to people and they were like, "Well, the router is something special." And I'm like, "Not really. Technically, a Linux box could just be a router if you have enough ports that you plug into it.
Or it could be a switch if you needed it to be, just plug in ports." [0:04:24.4] NL: Another interesting parallel there is when we talk about routing protocols, which are a way to allow configuration changes to particular components within that distributed system to be known about by other components within that distributed system. I think there's an interesting parallel here between the way that works and the way that the configuration patterns that we have for distributed systems work, right? If you wanted to make a configuration-only change to a set of applications that make up some distributed system, you might go about leveraging Ansible or one of the many other configuration models for this. I think it's interesting because it represents sort of an evolution of that same idea, in that you're making it so that each of the components is responsible for informing the other components of the change, rather than taking the outside approach of: my job is to actually push a change that should be known about by all of these components, down to them. Really, it's an interesting parallel. What do you all think of that? [0:05:22.2] SL: I don't know, I'm not sure. I'd have to process that for a bit. But I mean, are you saying the interesting thought here is that in contrast to typical systems management, where we push configuration out to something using a tool like Ansible or whatever, these things are talking amongst themselves to determine state? [0:05:41.4] DC: Yeah, there are patterns for this inside of distributed systems today. Things like Kafka, and gossip protocols, stuff like this, actually allow all of the components of a particular distributed system to understand the common state, or things that would be shared across them, and if you think about them, they're not all that different from a routing protocol, right?
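The contrast being raised here, an operator pushing configuration down to every component versus components informing their peers of a change, can be sketched in a few lines. This is a toy illustration only: the node layout, field names and the "mtu" setting are invented for the example, and no real routing protocol or configuration tool works exactly this way.

```python
# Two toy models of distributing a configuration change:
# a central push (the Ansible-style approach) versus one
# component learning of a change and flooding it to its peers
# (the routing-protocol-style approach).

def central_push(nodes, key, value):
    """An operator-driven push: one outside actor updates every component."""
    for node in nodes:
        node["config"][key] = value

def peer_inform(nodes, origin, key, value):
    """One component learns of a change and floods it to its neighbors."""
    nodes[origin]["config"][key] = value
    pending = [origin]
    seen = {origin}
    while pending:
        current = pending.pop()
        for neighbor in nodes[current]["neighbors"]:
            if neighbor not in seen:
                nodes[neighbor]["config"][key] = value
                seen.add(neighbor)
                pending.append(neighbor)

# A small line topology: n0 - n1 - n2.
nodes = {
    "n0": {"config": {}, "neighbors": ["n1"]},
    "n1": {"config": {}, "neighbors": ["n0", "n2"]},
    "n2": {"config": {}, "neighbors": ["n1"]},
}
peer_inform(nodes, "n0", "mtu", 9000)
print(all(n["config"]["mtu"] == 9000 for n in nodes.values()))
```

Either function leaves every node holding the same configuration; the difference is where knowledge of the change originates and how it spreads, which is the parallel being drawn in the conversation.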
Like the goal being that you give the systems the ability to inform the other systems in some distributed system of the changes that they may have to react to. Another good example of this, which I think is interesting, is what they call a feature behind a flag, right? You might have some distributed configuration model, like a Redis cache or database somewhere, where you've held the running configuration of this distributed system. And when you want to turn on this particular feature flag, you want all of the components that are associated with that feature flag to enable that new capability. Some of the patterns for that are pretty darn close to the way that routing protocol models work. [0:06:44.6] SL: Yeah, I see what you're saying. Actually, that makes a lot of sense. I mean, if we think about things like gossip protocols, or even consensus protocols like Raft, right? They are similar to routing protocols in that they are responsible for distributing state and then coming to an agreement on what that state is across the entire system. And we even apply terms like convergence to both environments. We talk about how long it takes a routing protocol to converge, and we might also talk about how long it takes an etcd cluster to converge after changing the number of members in the cluster, or something of that nature. The point at which everybody in that distributed system, whether it be the network, etcd or some other system, comes to the same understanding of what that shared state is. [0:07:33.1] DC: Yeah, I think that's a perfect breakdown, honestly. Pretty much every routing technology that's out there.
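The feature-flag pattern just described can be reduced to a small sketch. A plain dict stands in for the shared store (the Redis cache or database mentioned), and the flag name and function are invented for illustration:

```python
# A plain dict stands in for the shared configuration store (e.g. a
# Redis cache); the flag name "new-checkout-flow" is illustrative.
shared_config = {"new-checkout-flow": False}

def handle_request(user):
    # Every component consults the shared running configuration,
    # rather than being individually reconfigured and redeployed.
    if shared_config.get("new-checkout-flow", False):
        return f"new checkout for {user}"
    return f"legacy checkout for {user}"

before = handle_request("alice")
shared_config["new-checkout-flow"] = True  # one write flips every consumer
after = handle_request("alice")
print(before, "/", after)
```

Every component that reads the flag sees the change on its next lookup, the same "one change, many informed parties" shape that routing updates have.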
You know, if you take a computer out of the network, it takes a while, but eventually everyone will reconcile the fact that, "Yeah, that node is gone now." [0:07:47.5] NL: I think one thing that's interesting, and I don't know how much of a parallel there is in this one, but as we consider these systems, with modern systems that we're building at scale, frequently we can make use of things like eventual consistency, in which it's not required per se for a transaction to be persisted immediately across all of the components that it would affect. Just that they eventually converge, right? Whereas with the network, not so much, right? The network needs to be right, now and every time, and there's not really a model for eventually consistent networks, right? [0:08:19.9] SL: I don't know. I would contend that there is a model for eventually consistent networks, right? Certainly not in, you know, most organizations' relatively simple local area networks, right? But even if we were to take it and look at something like a Clos fabric, right, where we have top-of-rack switches, and this is getting too deep for the non-networking folks that we know, right? Where you take top-of-rack switches that are talking layer two to the servers below them, or the endpoints below them. And they're talking layer three across multiple links up to the top, right? To the spine switches. So you have leaf switches talking up to spine switches, and they're going to have multiple uplinks. If one of those uplinks goes down, it doesn't really matter if the rest of that fabric knows that that link is down, because we have equal-cost multipathing going across those links, right? In a situation like that, that fabric is eventually consistent, in that it's okay if, you know, link number one from leaf A up to spine A is down and the rest of the system doesn't know about that yet.
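The equal-cost multipath behavior Scott describes can be made concrete with a toy hash-based path picker. The uplink names and the flow tuple are invented, and real ECMP lives in switch hardware, but the shape is the same: hash a flow onto one of the currently live links, and a shrunken link list simply redistributes flows locally without fabric-wide convergence.

```python
import hashlib

def pick_uplink(flow, uplinks):
    """Hash a flow's 5-tuple onto one of the currently-live uplinks.

    A toy model of ECMP: the hash keeps all packets of one flow on
    one path, and shrinking the uplink list redistributes flows.
    """
    key = "|".join(map(str, flow)).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return uplinks[digest % len(uplinks)]

flow = ("10.0.0.5", 49152, "10.0.1.9", 443, "tcp")
uplinks = ["spine-a", "spine-b", "spine-c", "spine-d"]

chosen = pick_uplink(flow, uplinks)
# If the chosen uplink fails, the leaf simply hashes over the
# survivors; the rest of the fabric need not know yet.
survivors = [u for u in uplinks if u != chosen]
rerouted = pick_uplink(flow, survivors)
print(chosen, "->", rerouted)
```

The leaf switch makes this decision locally, which is exactly why the fabric can tolerate the rest of the system not yet knowing a link is down.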
But, on the other hand, if you are looking at network designs where convergence is being handled on active-standby links, or something of that nature, or there aren't enough paths to get from point A to point B until convergence happens, then yes, you're right. I think it kind of comes down to network design and the underlying architecture, and there are so many factors that affect that and so many designs over the years that it's hard to say. I would agree that from the perspective of, if you have an older network and it's been around for some period of time, you probably have one that is not going to be tolerant of a link being down, like it will cause problems. [0:09:58.4] NL: That adds another really great parallel in software development, I think. Another great example of that, right? If we consider for a minute the circuit-breaking pattern, or even, you know, most load balancer patterns, right? In which you have some way of understanding a list of healthy endpoints behind the load balancer, and are able to react when certain endpoints are no longer available. I don't consider that a pattern that I would relate specifically to eventual consistency. I feel like that still has to be immediate, right? We have to be able to not send the new transaction to the dead thing. That has to stop immediately, right? In most routing patterns that are described by multipath, there is a very time-sensitive mechanism that allows for the re-dispersal of that traffic across known paths that are still good. And the amazing amount of work that protocol architects and network engineers go through to understand just exactly how the behavior of those systems will work, such that we don't see traffic black-hole in the network for a period of time, right? Making sure we don't send traffic into a black hole while things converge really has a lot going for it. [0:11:07.0] SL: Yeah, I would agree.
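The circuit-breaker behavior described above, stop sending traffic to a dead endpoint immediately rather than eventually, can be sketched as a minimal breaker. This is a toy under stated assumptions: real implementations add half-open probing and reset timeouts, and the class and threshold here are invented for illustration.

```python
class CircuitBreaker:
    """Toy circuit breaker: trip after N consecutive failures."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.threshold

    def call(self, fn):
        if self.open:
            # Fail fast: refuse traffic rather than black-holing it.
            raise RuntimeError("circuit open: not sending traffic")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # any success resets the count
        return result

breaker = CircuitBreaker(threshold=2)

def flaky():
    raise ConnectionError("endpoint down")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

print("open?", breaker.open)
```

Once the threshold is hit, further calls are rejected immediately, which is the "right now" behavior the conversation contrasts with eventual consistency.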
I think the interesting thing about discussing eventual consistency with regards to networking is that even if we take a relatively simple model like the DOD model, where we only have four layers to contend with, right? We don't have to go all the way to the seven-layer OSI model. But even if we take a simple model like the DOD four-layer model, we could be talking about the rapid response of a device connected at layer two, but the less-than-rapid response of something operating at layer three or layer four, right? In the case of a network where we have these discrete layers that are intentionally loosely coupled, which is another topic we could talk about from a distribution perspective, right? We have these layers that are intentionally loosely coupled, and we might even see consistency, and the application of the CAP theorem, behave differently at different layers of that model. [0:12:04.4] DC: That's right. I think it's fascinating how much parallel there is here. As you get into deep architectures around software, you're thinking of these things as they relate to these distributed systems, especially as you're moving toward more cloud native systems, in which you start employing things like control theory and thinking about the behaviours of those systems in aggregate. You know, some component of my application, can I scale this particular component horizontally or can I not, how am I handling state? So many of those things have parallels to the network that I feel like it kind of highlights what I'm sure everybody has heard a million times, you know, that there's nothing new under the sun. There are a million things that we could learn from things that we've done in the past. [0:12:47.0] NL: Yeah, totally agree.
I recently have been getting more and more development practice, and something that I do sometimes is draw out how all of my functions and my methods interact with each other across an existing code base, and lo and behold, when I draw everything out, it sure does look a lot like a network diagram. All these things have to flow together in a very specific way to get the kind of returns that you're looking for. It looks exactly the same. It's kind of, you know, how an atom kind of looks like a galaxy in a diagram? All these things are extrapolated across – [0:13:23.4] SL: Yeah, totally. [0:13:24.3] NL: Different models. Or an atom looks like a solar system, which looks like a galaxy. [0:13:28.8] SL: Nicholas, you said you were a network administrator at Boeing? [0:13:30.9] NL: I was, I was a network engineer at Boeing. [0:13:34.0] SL: You know, as you were sitting there talking, Duffie, I thought back to you, Nick. I have a personal passion for helping people continue to grow and evolve in their career and not be stuck. I talk to a lot of networking folks, probably dating back to my involvement in the NSX team, right? I hear folks saying, "I'm just a network engineer, there's so much for me to learn. If I have to go learn Kubernetes, I wouldn't even know where to start." This discussion to me underscores the fact that if you understand how a network is a distributed system and how these theories apply to a network, then you can extrapolate those concepts and apply them to something like Kubernetes or other distributed systems, right? Immediately begin to understand, okay, well, you know, this is how these pieces talk to each other, this is how they come to consensus, this is where the state is stored, this is how they understand and exchange data. I've got this.
[0:14:33.9] NL: If you want to go down that path, the control plane of your cluster is just like your central routing backbone, and then the kubelets themselves are just your edge switches going to each of your individual smaller networks, and then the pods themselves are the nodes inside of the network, right? You can easily see it: look at that, holy crap, it looks exactly the same. [0:14:54.5] SL: Yeah, that's a good point. [0:14:55.1] DC: I mean, another interesting part is when you think about how we characterize systems, like where we learn that, where that skillset comes from. You raise a very good point. I think it's a maybe slightly easier thing to learn inside of networking, how to characterize that particular distributed system, because of the way the components themselves are laid out in such a common way. Whereas when we start looking at different applications, we find a myriad of different patterns, with particular components that may behave slightly differently depending, right? There are different patterns within software almost on a per-application basis, whereas with networks, they're pretty consistently applied, right? Every once in a while, there will be kind of a new pattern that emerges that changes the behavior a little bit, right? Or changes the behavior a lot, but at the same time, consistently across all of those things that we call data center networks, or what have you. To learn to troubleshoot, though, I think the key part of this is to be able to spend the time and the effort to actually understand that system, and whether you light that fire with networking, or whether you light that fire with just understanding how to operationalize applications, or even just developing and architecting them, all of those things come into play, I think. [0:16:08.2] NL: I agree.
I'm actually kind of curious. The three of us have been talking quite a bit about networking from the perspective that we have, which is more infrastructure focused. But Josh, you have more of a developer-focused background. What's your interaction with and understanding of the network and how it plays? [0:16:24.1] JS: Yeah, I've always been a consumer of the network. It's something that sits behind an API and some library, right? I call out to something that makes a TCP connection or an HTTP interaction and then things just happen. I think what's really interesting, hearing you all talk, and especially the point about network engineers getting into the distributed system space, is that as we started to put infrastructure behind APIs and made it more and more accessible to people like myself, app developers and programmers, we started – by we, you know, I'm obviously generalizing here – but we started owning more and more of the infrastructure. When I go into teams that are doing big Kubernetes deployments, it's pretty rare that it's the conventional infrastructure and networking teams that are standing up distributed systems, Kubernetes or not, right? A lot of times, it's a bunch of app developers who have maybe what we call dev-ops, whatever that means, but they have an application development background. They understand how they interact with APIs, how to write code that respects or interacts with their infrastructure, and they're standing up these systems. I think one of the gaps that really creates is that a lot of people, including myself just hearing you all talk, don't understand networking at that level. When stuff falls over and it's either truly the network, or it's getting blamed on the network, it's oftentimes just because we truly don't understand a lot of these things, right?
Encapsulation, meshes, whatever it might be, we just don’t understand these concepts at a deep level, and I think if we had a lot more people with network engineering backgrounds shifting into the distributed system space, it would alleviate a bit of that, right? Bringing more understanding into the space that we work in nowadays. [0:18:05.4] DC: I wonder if maybe it also would be a benefit to have more cross discussions like this one between developers and infrastructure kind of focused people, because as we’re crossing boundaries, we see that the same things that we’re doing on the infrastructure side, you’re also doing on the developer side. Like the CAP theorem, as Scott mentioned, which is the idea that you can have two out of three of consistency, availability and partition tolerance. That also applies to networking in a lot of ways. You can only have a network that is either like consistent or available but it can’t handle partitioning. It can be consistent and handle partitioning but it’s not always going to be available, that sort of thing. These things that apply from the software perspective also apply to us but we think about them as being so completely different. [0:18:52.5] JS: Yeah, I totally agree. I really think like on the app side, a couple of years ago, you know, I really just didn’t care about anything outside of the JVM, like my stuff ran on the JVM, and if it got out to the network layer of the host, I just didn’t care, didn’t need to know about that at all. But ever since cloud computing and distributed systems and everything became more prevalent, the overlap has become extremely obvious, right? In all these different concepts, and it’s been really interesting to try to ramp up on that. [0:19:19.3] NL: Yeah, I think you know Scott and I both do this. I think, as I imagine, actually, this is true of all four of us to be honest.
But I think that it’s really interesting when you are out there talking to people who do feel like they’re stuck in some particular role, like they’re specialists in some particular area, and we end up having the same discussion with them over and over again. You know, like, “Look, that may pay the bills right now but it’s not going to pay the bills in the future.” And so, you know, the question becomes, how can you, as a network engineer, take your skills forward and not feel as though you’re just going to have to learn everything all over again. I think that one of the things that network engineers are pretty decent at is characterizing those systems and being able to troubleshoot them and being able to do it right now and being able to firefight, and those capabilities and those skills are incredibly valuable in software development and in operationalizing applications and in SRE models. I mean, all of those skills transfer, you know? If you’re out there and you’re listening and you feel like you will always be a network engineer, consider that you could actually take those skills forward into some other role if you chose to. [0:20:25.1] JS: Yeah, totally agree. I mean, look at me, the lofty career that I’ve come to.
[0:20:31.4] SL: You know, I would also say that the fascinating thing to me, and one of the reasons I launched – I don’t say this to try and plug it, but just as a way of talking about the reason I launched my own podcast, which is now part of Packet Pushers – was exploring this very space, and that is, we’ve got folks like Josh who come from the application development space and are now being, you know, in a way, forced to own and understand more infrastructure, and we’ve got the infrastructure folks who now, in a way, whether it be through the rise of cloud computing and abstractions away from visible items, are being forced kind of up the stack, and so they’re coming together, and this idea of what does the future of the folks that are kind of like in our space look like? How much longer does a network engineer really need to be deeply versed in all the different layers? Because everything’s been abstracted away by some other type of thing, whether it’s VPCs or Azure VNets or whatever the case is, right? I mean, you’ve got companies bringing the VPC model to on-premises networks, right? As APIs become more prevalent, as everything gets sort of abstracted away, what does the future look like, what are the most important skills? And it seems to me that it’s these concepts that we’re talking about, right? This idea of distributed systems and how distributed systems behave and how the components react to one another, and understanding things like the CAP theorem, that are going to be most applicable, rather than the details of troubleshooting BGP or understanding AWS VPCs or whatever the case may be. [0:22:08.5] NL: I think there is always going to be a place for the people who know how things are running under the hood from like a physical layer perspective, that sort of thing. There’s always going to be the need for the graybeards, right? Even in software development, we still have the people who are slinging kernel code in C.
And you know, they’re the best, we salute you, but that is not something that I’m interested in, for sure. We always need someone there to pick up the pieces as it were. I think that, yeah, just being like, I’m a Cisco guy, I’m a Juniper guy, you know? I know how to console or SSH into the switch and execute these commands, and suddenly I’ve got this port, you know, trunked to this VLAN... crap, I was like, Nick, remember your training, you know? Knowing how to issue those commands, I think that isn’t necessarily going away but it will be less in demand in the future. [0:22:08.5] SL: I’m curious to hear Josh’s perspective, as someone having to own more and more of the infrastructure underneath, what seems to be the right path forward for those folks? [0:23:08.7] JS: Yeah, I mean, unfortunately, I feel like a lot of times it just ends up being trial by fire, and it probably shouldn’t be that. But the amount of times that I have seen a deployment of some technology fall over because we overlapped the CIDR range or something like that is crazy. Because we just didn’t think about it or really understand it that well. You know, like using one protocol you just described, BGP. I never dreamt of what BGP was until I started using distributed systems, right? Started using BGP as a way to communicate routes, and the amount of times that I’ve messed up that connection because I don’t have a background in how to set that up appropriately, it’s been rough. I guess my perspective is that the technology has gotten better overall, and I’m mostly, obviously, in the Kubernetes space, speaking to the technologies around a lot of the container networking solutions, but I’m sure this is true overall. It seems like a lot of the sharp edges have been buffed out quite a bit and I have less of an opportunity to do things terribly wrong.
I’ve also noticed, for what it’s worth, a lot of folks that have my kind of background are going out to the AWSs, the Azures of the world. They’re using all these abstracted networking technologies that allow them to do really cool stuff without really having to understand how it works, and they’re oftentimes going back to their networking team on prem when they have on-prem requirements and being like, it should be this easy, or X, Y and Z, and they’re almost pushing the networking team to modernize and make things simpler, based on experiences they’re having with these cloud providers. [0:24:44.2] DC: Yeah, what do you mean I can’t create a load balancer that crosses between these two disparate data centers as easily as just issuing a single command? Doesn’t this just exist from a networking standpoint? Even just the idea that you can issue an API command and get a load balancer, just that idea alone – the thousands of times I have heard that request in my career. [0:25:08.8] JS: And like the actual work under the hood to get that to work properly, it’s a lot, there’s a lot of stuff going on. [0:25:16.5] SL: Absolutely, yeah. [0:25:17.5] DC: Especially when you’re into the plumbing, you know? If you’re going to create a load balancer with an API, well then, what API does the load balancer use to understand where to send that traffic when it’s being balanced? How do you handle discovery, how do you – obviously, yeah, there’s no shortage on the amount of work there. [0:25:36.0] JS: Yeah. [0:25:36.3] DC: That’s a really good point. I mean, I think sometimes it’s easy for me to think about some of these API driven networking models and the costs that come with them, the hidden costs that come with them. An example of this is, if you’re in AWS and you have connectivity between two availability zones – actually, it could be any cloud, it doesn’t have to be AWS, right?
If you have connectivity between two different availability zones and you’re relying on that to be reliable and consistent and definitely not to experience partitions, what tools do you have at your disposal, what guarantees do you have that that network is even operating in a way that is responsive, right? And in a way, this is kind of taking us towards the observability conversation that I think we’ve talked a little bit about in the past. Because I think it highlights the same set of problems again, right? You have to be able to provide the consumers of any service, whether that service is plumbing, whether it’s networking, whether it’s your application that you’ve developed that represents a set of microservices – you have to provide everybody a way, or you know, you have to provide the people who are going to answer the phone at two in the morning, or even the robots that are going to answer the phone at two in the morning, some mechanism by which to observe those systems as they are in use. [0:26:51.7] JS: I’m not convinced that very many of the cloud providers do that terribly well today, you know? I feel like I’ve been burned in the past without actually having an understanding of the state that we’re in, and so it is interesting – maybe the software development teams can actually start pushing that down toward the networking vendors out there in the world. [0:27:09.9] NL: Yeah, that would be great. I mean, I have been recently using a managed Kubernetes service. I have been kicking the tires on it a little bit. And yeah, there have been a couple of times where I have just been bitten by networking issues. I am not going to get into what I have seen in a container network interface or any of the technologies around that. We are going to talk about that another time. But the CNI that I am using in this managed service was just so wonky and weird. And it was failing from a network standpoint.
The actual network was failing in a sense because the IP addresses for the nodes themselves or the pods weren’t being released properly because of a bug. And so, the rules associated with my account could not remove IP addresses from a node in the network because it wasn’t allowed to, and so I ran out of IP addresses in my very small CIDR there. [0:28:02.1] SL: And this could happen in a database, right? This could happen in a cache of information. Pretty much the same pattern that you are describing is absolutely relevant in both of these fields, right? And that is a fascinating thing about this, that you know we talk about the network generally in these nebulous terms, like it is a black box: I don’t want to know anything about it, I don’t want to learn about it, I don’t want to understand it. I just want to be able to consume it via an API and I want to have the expectation that everything will work the way it is supposed to. I think it is fascinating that on the other side of that API are people maybe just like you who are doing their level best to provide, to chase the CAP theorem to its happy end and figure out how to actually give you what you need out of that service, you know? So, empathy I think is important. [0:28:50.4] NL: Absolutely. To bring that to an interesting thought that I just had, where on both sides of this chasm or whatever it is between networking and development, the same principles exist like we have been saying, but just to elaborate on it a little bit more, it’s like on one side you have, I need to make sure that these etcd nodes communicate with each other and that the data is consistent across the other ones. So, we use a protocol called Raft, right?
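The exhaustion NL describes is easy to see with a quick calculation; this is an illustrative sketch (the /28 subnet size and the number of reserved addresses are assumptions for the example, not details from the episode):

```python
import ipaddress

def usable_pod_ips(cidr: str, reserved: int = 2) -> int:
    """Count addresses available for pods in a subnet.

    `reserved` models addresses held back (network/broadcast addresses,
    gateways, and so on); real clouds often reserve more per subnet.
    """
    net = ipaddress.ip_network(cidr)
    return max(net.num_addresses - reserved, 0)

# A /28 has only 16 addresses total; if a CNI assigns one IP per pod
# and a bug stops leaked IPs from being released, it exhausts quickly.
print(usable_pod_ips("10.0.0.0/28"))  # 14
print(usable_pod_ips("10.0.0.0/24"))  # 254
```

With one IP per pod, a handful of leaked allocations is enough to stall scheduling in a subnet this small.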
And so that’s eventually consistent, and then that information is sent onto a network, which is probably using OSPF, which is the “open shortest path first” routing protocol, to become eventually consistent on the data getting from one point to the other by opening the shortest path possible. And so these two things are very similar. They are both these communication protocols, which is, I mean, what a protocol is, right? A convention for communication. But there are just so many different layers, obviously, of the OSI model, and people don’t put them together, but they really are the same, and we keep coming back to that, where it is all the same thing but we think about it so differently. And I am actually really appreciating this conversation because now I am having a galaxy brain moment like, boo. [0:30:01.1] SL: Another really interesting one, another galaxy brain moment I think is interesting, is if you think about – so let us break them down, like TCP and UDP. These are interesting patterns that actually do totally relate, again, just in software patterns, right? In TCP the guarantee is that for every datagram, if you didn’t get the entire datagram, you will understand that you are missing data and you will request a new version of that same packet. And so, you can provide consistency in the form of retries or repeats if things don’t work, right? Not dissimilar from the ability to understand, whether you chunk some data across the network or, like, in a particular database, if you make a query for a bunch of information, you have to have some way of understanding that you got the most recent version of it, right? etcd supports this by using the revision, by understanding what revision you received last and whether that is the most recent one. And other software patterns kind of follow the same model and I think that is also kind of interesting.
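OSPF's shortest-path-first computation is Dijkstra's algorithm run over link-state data. Here is a minimal sketch of that computation; the four-router topology and link costs are made up for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from `source`, as an OSPF SPF run computes
    them over the link-state database. graph: {node: {neighbor: cost}}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Hypothetical four-router topology with OSPF-style link costs.
topology = {
    "r1": {"r2": 10, "r3": 1},
    "r2": {"r1": 10, "r4": 1},
    "r3": {"r1": 1, "r4": 1},
    "r4": {"r2": 1, "r3": 1},
}
# r4 is reached via r3 at cost 2, not over the expensive direct r1-r2 link.
print(dijkstra(topology, "r1"))
```

Every router runs the same computation over the same flooded link-state data, which is how the protocol converges on consistent forwarding decisions.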
Like we are still using the same primitive tools to solve the same problems, whether we are doing it at the software application layer or whether we are doing it down in the plumbing of the network; these tools are still very similar. Another example is UDP, where there are basically no repeats. You either got the packet or you didn’t, which sounds a lot like an event stream to me in some ways, right? Like, it is very interesting: I put it on the line, you didn’t get it? It is okay, I will put another one on the line here in a minute, you can react to that one, right? It is an interesting overlap. [0:31:30.6] NL: Yeah, totally. [0:31:32.9] JS: Yeah, the comparison to event streams or message queues, right? There is an interesting one that I hadn’t considered before, but yeah, there are certainly parallels between saying, “Okay, I am going to put this on the message queue,” and waiting for the acknowledgement that somebody has taken it and taken ownership of it, as opposed to an event stream where it is like, this happened. I emit this event. If you get it and you do something with it, great. If you don’t get it then you don’t do something with it, great, because another event is going to come along soon. So, there you go. [0:32:02.1] DC: Yep, I am going to go down a weird topic associated with what we are just talking about. But I am going to get a little bit more into the weeds of networking, and this is actually directed at us in a way. So, talking about the kind of parallels between networking and development: in networking there is something called CSMA/CD, which is “carrier sense multi…” oh, I can’t remember what the A stands for, and the CD. [0:32:29.2] SL: Access.
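The two delivery styles being compared here, acknowledge-and-retry (TCP, message queues) versus fire-and-forget (UDP, event streams), can be sketched roughly like this; the `flaky` delivery function is a stand-in for a lossy network, not anything from the episode:

```python
def send_with_retries(deliver, message, max_attempts=5):
    """TCP / message-queue style: keep retransmitting until the
    receiver acknowledges, giving at-least-once delivery."""
    for attempt in range(1, max_attempts + 1):
        if deliver(message):           # True models an ACK coming back
            return attempt
    raise TimeoutError("no ACK after %d attempts" % max_attempts)

def send_fire_and_forget(deliver, message):
    """UDP / event-stream style: emit once and move on; a newer
    event will supersede this one soon anyway."""
    deliver(message)                   # no ACK is awaited

# A deterministic "lossy network" that drops the first two datagrams.
calls = {"n": 0}
def flaky(message):
    calls["n"] += 1
    return calls["n"] >= 3

attempts = send_with_retries(flaky, "datagram-1")
print(attempts)  # 3: two drops, then an ACK
send_fire_and_forget(flaky, "event-1")  # returns immediately, no retry
```

The retry loop is where the consistency guarantee comes from; the fire-and-forget path trades that guarantee away for lower overhead, exactly the UDP/event-stream bargain.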
[0:32:29.8] DC: Multiple access, and then CD is collision detection. So basically what that means is whenever you send out a packet on the network, the network device itself is listening on the network for any collisions, and if it detects a collision it will refuse to send a packet for a certain period of time and then it will do a retry, to make sure that these packets are getting sent as efficiently as possible. There is an alternative to that called CSMA/CA, which was used by Mac before they switched over to using a Unix based operating system and then putting a fancy UI in front of it. With collision avoidance it would listen and try and – I can’t remember exactly, it would time it differently so that it would totally just avoid any chance that there could be a collision. It would make sure that no packets were being sent right then and then send its packet. And so I was wondering if something like that exists in the realm of the communication path between applications. [0:33:22.5] JS: Is a collision two of the same packets being sent, or what exactly is that? [0:33:26.9] DC: With any packets, so basically any data going back and forth. [0:33:29.7] JS: What makes it a collision? [0:33:32.0] SL: It is the idea that you can only transmit one message at a time, because if they both populate the same media, both of them are trash. [0:33:39.2] JS: And how do you qualify that? Do you receive an ack from the system, or? [0:33:42.8] NL: No, there is just nothing returned essentially, so it is literally like the electrical signals going down the wire. They physically collide with each other and then the signal breaks. [0:33:56.9] JS: Oh, I see. Yeah, I am not sure. I think there are some parallels to that maybe with queuing technologies and things like that, but I can’t think of anything on the direct app dev side. [0:34:08.6] DC: Okay, anyway, sorry for that tangent. I just wanted to go down that little rabbit hole a little bit.
It was like, while we are talking about networking, I was like, “Oh yeah, I wanted to see how deep we can make this parallel go,” so that was the direction I went. [0:34:20.5] SL: That CSMA/CD piece is like seriously old school, right? Because it only applied to half duplex Ethernet, and as soon as we went to full duplex Ethernet it didn’t matter anymore. [0:34:33.7] DC: That is true. I totally forgot about that. [0:34:33.8] JS: It applied to the satellite stuff as well. [0:34:35.9] DC: Yeah, I totally forgot about that. Yeah, and with full duplex, we totally just spaced on that. This is – damn Scott, way to make me feel old. [0:34:45.9] SL: Well, I mean, satellite stuff too, right? I mean, it is actually any shared media where, if the signals go and overlap there, you are not going to be able to make it work, right? And so, I mean, it is an interesting parallel. I am struggling to think of an example of this as well. My brain is going towards circuit breaking but I don’t think that is quite the same thing. It is sort of the same thing in that, in a circuit breaking pattern, the application that is making the request has the ability, obviously, because it is the thing making the request, to understand that the target it is trying to connect to is not working correctly. And so, it is able to make an almost instantaneous decision, or at least a very timely decision, about what to do when it detects that state. And so that’s a little similar, in that from the requester side you can do things if you see things going awry. And in reality, in the circuit breaking pattern we are making the assumption that only the application making the request will ever get that information fast enough to react to it.
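A rough sketch of the circuit breaking pattern SL describes: after enough consecutive failures the requester stops calling the target and fails fast, then probes again after a cooldown. The threshold and timing values are illustrative assumptions, not anything prescribed in the episode:

```python
import time

class CircuitBreaker:
    """Minimal sketch: after `threshold` consecutive failures the
    circuit opens and calls fail fast until `reset_after` elapses."""

    def __init__(self, threshold=3, reset_after=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None      # half-open: allow one probe call
            self.failures = 0
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()
            raise
        self.failures = 0              # any success closes the circuit
        return result

br = CircuitBreaker(threshold=2, reset_after=60.0)

def flaky_backend():
    raise ConnectionError("backend down")

for _ in range(2):                     # two failures trip the breaker
    try:
        br.call(flaky_backend)
    except ConnectionError:
        pass

try:
    br.call(flaky_backend)             # now fails fast, backend not called
except RuntimeError as e:
    print(e)
```

The key property is the one SL points at: only the requester sees the failures fast enough to act, so the requester is where the decision lives.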
[0:35:51.8] JS: Yeah, where my head was kind of going with it, but I think it is pretty off, is like on a low level piece of code, maybe something you write in C where you implement your own queue, and then multiple threads are firing off at the same time and there is no locking system or mechanism if two threads contend to put something in the same memory space that that queue represents. That is really going down the rabbit hole. I can’t even speak to what degree that is possible in modern programming, but that is where my head was. [0:36:20.3] NL: Yeah, that is a good point. [0:36:21.4] SL: Yeah, I think that is actually a pretty good analogy, because the key commonality here is some sort of shared access, right? Multiple threads accessing the same stack or memory buffer. The other thing that came to mind for me was some sort of session multiplexing, right? Where you are running multiple application layer sessions inside a single network connection and those sessions are getting commingled in some fashion, whether through identifiers or sequence numbers or something else of that nature, and therefore garbling the ultimate communication that is trying to be sent. [0:36:59.2] DC: Yeah, locks are exactly the right direction, I think. [0:37:03.6] NL: That is a very good point. [0:37:05.2] DC: Yeah, I think that makes perfect sense. Good, all right. Yes, we nailed it. [0:37:09.7] SL: Good job. [0:37:10.8] DC: Can anybody here think of a software pattern that maybe doesn’t come across that way? When you are thinking about some of the patterns that you see today in cloud native applications, is there a counter example, something that the network does not do at all? [0:37:24.1] NL: That is interesting. I am trying to think – event streams? No, that is just straight up packets.
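The lock analogy can be made concrete: a mutex arbitrates access to a shared buffer the way a shared medium has to be arbitrated, so concurrent writers never collide. A minimal sketch, with made-up thread and item counts:

```python
import threading

class LockedQueue:
    """Shared buffer guarded by a mutex: the software analogue of
    arbitrating access to a shared medium so writes never collide."""

    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def put(self, item):
        with self._lock:               # only one writer "on the wire"
            self._items.append(item)

    def drain(self):
        with self._lock:
            items, self._items = self._items, []
            return items

q = LockedQueue()
threads = [
    threading.Thread(target=lambda i=i: [q.put(i) for _ in range(1000)])
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = len(q.drain())
print(total)  # 4000: no writes lost to contention
```

Without the lock, the C-style scenario JS describes (two threads writing the same slot) is exactly the "collision" case: both writes land on the same memory and at least one is trash.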
[0:37:30.7] JS: I feel like we should open up one of those old school Java books of like 9,000 design patterns you need to know and we should go one by one and be like, “What about this,” you know? There is probably something, I can’t think of it off the top of my head. [0:37:43.6] DC: Yeah, me neither. I was trying to think of it. I mean, I can think of a myriad of things that do cross over, even the idea of only locally relevant state, right? That is like a CAM table on a switch that is only locally relevant because once you get outside of that switching domain it doesn’t matter anymore, and there are a ton of those things that totally do relate, you know? But I am really struggling to come up with one that doesn’t. One thing that is actually interesting – I was going to bring up – we mentioned the CAP theorem, and it is an interesting one, that you can only pick two of three of consistency, availability and partition tolerance. And I think, you know, when I think about the way that networks solve or try to address this problem, they do it in some pretty interesting ways. Consider Spanning Tree, right? The idea that there can really only be one path through a series of broadcast domains. Because if we have multiple paths then obviously we are going to get duplication, and things are going to get bad because you are going to have packets addressed the same way crossing multiple paths, and you are going to have all kinds of bad behaviors, switching loops and broadcast storms and all kinds of stuff like that. And so Spanning Tree came along, and Spanning Tree was invented by an amazing engineer, Radia Perlman, who created it to basically ensure that there was only one path through a set of broadcast domains.
And in a way, this solved that CAP theorem problem, because you are getting to the point where you said, since I understand that for availability purposes I only need one path through the whole thing, then to ensure consistency I am going to turn off the other paths, and to allow for partition tolerance I am going to enable the system to learn when one of those paths is no longer viable so that it can re-enable one of the other paths. Now the challenge, of course, is there is a transition period in which we lose traffic because we haven’t been able to open one of those other paths fast enough, right? And so, it is interesting to think about how the network is trying to solve the same set of problems described by the CAP theorem that we see people trying to solve with software routinely. [0:39:44.9] SL: No man, I totally agree. In a case like Spanning Tree, you are sacrificing availability, essentially, for consistency and partition tolerance; when the network achieves consistency then availability will be restored, and there are other ways of doing that. So as we move into systems like, I mentioned Clos fabrics earlier, you know, a Clos fabric is a different way of establishing a solution to that, and that is saying, at layer 3, I will have multiple connections. I will weight those connections using the higher-level protocol and I will sacrifice consistency in terms of how the routes are exchanged to get across that fabric, in exchange for availability and partition tolerance. So, it is a different way of solving the same problem and using a different set of tools to do that, right? [0:40:34.7] DC: I personally find it funny that in the CAP theorem at no point do we mention complexity, right? We are just trying to get all three and we don’t care if it’s complex. But at the same time, as a consumer of all of these systems, you care a lot about the complexity. I hear it all the time.
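The Spanning Tree behavior described here, keeping one loop-free set of links active and re-enabling a blocked link when an active one fails, can be sketched as a graph computation; the three-switch triangle below is a made-up example, and real STP adds root-bridge election and port costs on top of this idea:

```python
from collections import deque

def spanning_tree(links, root):
    """BFS spanning tree: returns the subset of links left forwarding.
    Everything else is 'blocked', as STP does to prevent switching loops."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, active, todo = {root}, set(), deque([root])
    while todo:
        node = todo.popleft()
        for nbr in adj.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                active.add((node, nbr))
                todo.append(nbr)
    return active

# A triangle of switches has a loop; the tree keeps two of three links.
links = [("s1", "s2"), ("s2", "s3"), ("s1", "s3")]
tree = spanning_tree(links, "s1")
print(tree)

# "Partition tolerance": when an active link fails, recompute, and a
# previously blocked link comes back into service.
tree_after_failure = spanning_tree(
    [l for l in links if l != ("s1", "s2")], "s1"
)
print(tree_after_failure)
```

The transition period DC mentions corresponds to the gap between the link failing and the recomputation finishing, during which traffic is lost.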
Whether that complexity is in the way that the API itself works, or whether, even in this episode, we are talking about, like, I maybe don’t want to learn how to make the network work. I am busy trying to figure out how to make my application work, right? Like, cognitive load is a thing. I can only really focus on so many things at a time, so where am I going to spend my time? Am I going to spend it learning how to do plumbing or am I going to spend it actually trying to write the application that solves my business problem, right? It is an interesting thing. [0:41:17.7] NL: So, with the rise of software defined networking, how did that play into the adoption of cloud native technologies? [0:41:27.9] DC: I think it is actually one of the more interesting overlaps in the space, because, to Josh’s point again, this is where – I mean, I worked for a company called [inaudible 0:41:37], in which we were virtualizing the network, and this is fascinating because effectively we were looking at this as a software service that we had to bring up and build reliably and consistently and scalably. We wanted to create this all while we were solving problems, but we needed to do it within an API. We couldn’t make the assumption that the way networks were being defined today, like going to each component and configuring them or using protocols, was actually going to work in this new model of software defined networking. And so, we had an incredible amount of engineers who were really focused, from a computer science perspective, on how to effectively reinvent the network as a software solution. And I do think that there is a huge amount of crossover here. This is actually where I think the waters meet between the way the developers think about the problems and the way that network engineers think about the problem, but it has been a rough road, I will say.
I will say that SDN, I think, has definitely thrown a lot of network engineers back on their heels, because they’re like, “Wait, wait, but that is not a network,” you know? Because I can’t actually look at it and characterize it in the way that I am accustomed to looking at and characterizing the other networks that I play with. And then from the software side, you’re like, “Well, maybe that is okay,” right? Maybe that is enough. It is really interesting. [0:42:57.5] SL: You know, I don’t know enough about the details of how AWS or Azure or Google are actually doing their networking – and I don’t even know, and maybe you guys all do know – but I don’t even know that, aside from a few tidbits here and there, AWS is going to even divulge the details of how things work under the covers for VPCs, right? But I can’t imagine that any modern cloud networking solution, whether it be VPCs or VNets or whatever, doesn’t have a significant software defined aspect to it. You know, we don’t need to get into the definitions of what SDN is or isn’t. That was a big discussion Duffie and I had six years ago, right? But there has to be some part of it that is taking and using the concepts that are common in SDN, right? And applying that. Just the same way as the cloud vendors are using the concepts from compute virtualization to enable what they are doing. I mean, the reality is that the work that was done by the Cambridge folks on Xen was a massive enabler for AWS, right? The work done on KVM was also a massive enabler for lots of people. I think GCP is KVM based, and vSphere, what VMware did, as well. I mean, all of this stuff was a massive enabler for what we do with compute virtualization in the cloud. I have to think that, even if it wasn’t necessarily directly stemming out of Martin Casado’s OpenFlow work at Stanford, right?
That a lot of these software defined networking concepts are still seeing use in the modern clouds these days, and that is what enables us to do things like issue an API call and have an isolated network space with its own address space and its own routing, instantiated in some way and managed. [0:44:56.4] JS: Yeah, and on that latter point, you know, as a consumer of this new software defined nature of networking, it is amazing the amount of, I don’t know, I am going to use like a blanket marketing term here, but agility that it has added, right? Because it has turned all of these constructs that I used to file a ticket and follow up with people for into self-service things. When I need to poke holes in the network – hopefully the rights are locked down so I just can’t open it all up – assuming I know what I am doing and the rights are correct, it is totally self-service for me. I go into AWS, I change the security group rule and boom, the ports have changed, and it never looked like that prior to this full takeover of what I believe is SDN almost end to end, in the case of AWS and so on. So, not only has it made people like myself have to understand more about networking, but it has allowed us to self-service a lot of the things that I would imagine most network engineers were probably tired of doing anyway, right? How many times do you want to go to that firewall and open up that port? Are you really that excited about that? I would imagine not. [0:45:57.1] NL: Well, I can only speak from experience, and I think a lot of network engineers kind of get into that field because they really love control. And so, they want to know what these ports are that are opening, and it is scary to be like, this person has opened up these ports – “Wait, what?” – like without them even totally knowing. I mean, I was generalizing, I was more so speaking to myself as being self-deprecating. It doesn’t apply to you, listener.
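The self-service security group change JS describes is a single API call with the AWS SDK. This is a hedged sketch: the security group ID and CIDR are hypothetical placeholders, and the call at the bottom is left commented out because it needs real credentials:

```python
def ingress_rule(port: int, cidr: str, proto: str = "tcp") -> dict:
    """Build the IpPermissions entry for one inbound rule."""
    return {
        "IpProtocol": proto,
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": cidr}],
    }

def open_port(group_id: str, port: int, cidr: str) -> None:
    """Self-service 'open a firewall port': one API call, no ticket."""
    import boto3  # AWS SDK; needs credentials with EC2 permissions
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[ingress_rule(port, cidr)],
    )

# Hypothetical security group ID; uncomment with real credentials.
# open_port("sg-0123456789abcdef0", 443, "10.0.0.0/16")
print(ingress_rule(443, "10.0.0.0/16"))
```

The same change that used to be a ticket and a follow-up is now one authorized API call, which is exactly the rights-must-be-locked-down tradeoff JS flags.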
[0:46:22.9] JS: I mean, it is a really interesting point though. I mean, do you think it moves networking people or network engineers a little bit more into the realm of observability, and knowing when to trigger when something has gone wrong? Does it make them more reactive in their role, I guess? Or maybe self-service is not as common as I think it is. It is just, from my point of view, it seems like with SDNs and the ability to modify the network, more power has been put into the developers’ hands, is how I look at it, you know? [0:46:50.7] DC: I definitely agree with that. It is interesting, like if we go back a few years – there was a time when, all of us in the room here, I think, are employed by VMware. So, there was a time where VMware’s thing, the real value or one of the key values that VMware brought to the table, was the idea that a developer could come and say, “Give me 10 servers,” and you could just call an API and quickly provision those 10 servers on behalf of that developer and hand them right back. You wouldn’t have to go out and get 10 new machines and put them into a rack, power them and provision them and go through that whole process; you could actually just stamp those things out, right? And that is absolutely parallel to the network piece as well. I mean, if there is nothing else that SDN did bring to the fore, it is that, right? That you can get that same capability of just stamping out virtual machines, but with networks. The API is important in almost everything we do. Whether it is a service that you were developing, whether it is a network itself, whether it is the firewall, we need to do these things programmatically. [0:47:53.7] SL: I agree with you, Duffie. Although I would contend that the one area – and I will call it on-premises SDN, shall we say, right? Which is the people putting in SDN solutions – I’d say the one area, at least in my observation, where they haven’t done well is that self-service model.
Like in the cloud, self-service is paramount, to Josh’s point. They can go out there, they can create their own VPCs, create their own subnets, create their own NAT gateways, internet gateways, their own security groups, load balancers, blah-blah, all of that, right? But it still seems to me that even though we are probably 90, 95% of the way there, maybe farther, in terms of on-premises SDN solutions, right, you still typically don’t see self-service being pushed out in the same way you would in the public cloud, right? That is almost the final piece that is needed to bring that cloud experience to the on-premises environment. [0:48:52.6] DC: That is an interesting point. I think from an infrastructure as a service perspective, it falls into that realm. It is a problem to solve in that space, right? So when you look at things like OpenStack and things like AWS and things like GKE – or not GKE but GCE – and areas like that, it is a requirement that if you are going to provide infrastructure as a service that you provide some capability around networking, but at the same time, if we look at some of the platforms that are used for things like cloud native applications, things like Kubernetes, what is fascinating about that is that we have agreed on at least some – we agreed on an abstraction of networking that is maybe, I don’t know, maybe a little more precooked, you know what I mean? The assumption within like most of the platforms as a service that I have seen is that when I deploy a container or I deploy a pod or I deploy some function as a service or any of these things, the networking is going to be handled for me. I shouldn’t have to think about whether it is being routed to the Internet or not or routed back and forth between these domains. 
I should, if anything, only have to actually give you intent, be able to describe to you the intent of what could be connected to this and what ports I am actually going to be exposing, and the platform actually hides all of the complexity of that network away from me, which is an interesting balance to strike. [0:50:16.3] SL: So, this is one of my favorite things, one of my favorite distinctions to make, right? And that is these are the two worlds that we have been talking about, applications and infrastructure, and the perfect example of these different perspectives. And you even said it as you talked there, Duffie, like from an IaaS perspective it is considered a given that you have to be able to say I want a network, right? But when you come at this from the application perspective, you don’t care about a network. You just want network connectivity, right? And so, when you look at the abstractions that IaaS vendors and solutions or products have created, they are IaaS centric, but when you look at the abstractions that have been created in the cloud native space, like within Kubernetes, they are application centric, right? And so, we are talking about infrastructure artifacts versus application artifacts and they end up meeting, but they are coming at this from two very different perspectives. [0:51:18.5] DC: Yeah. [0:51:19.4] NL: Yeah, I agree. [0:51:21.2] DC: All right, well that was a great discussion. I imagine that we will probably get into – at least I have a couple of different networking discussions that I want to dig into, and in this conversation I hope that we’ve helped draw some parallels back and forth between the ways – I mean, there is both some empathy to spend here, right? I mean the people who are providing the service of networking to you in your cloud environments and your data centers are solving almost exactly the same sorts of availability problems and capabilities that you are trying to solve with your own software. 
And I think that in itself is a really interesting takeaway. Another one is that, again, there is nothing new under the sun. The problems that we are trying to solve in networking are not different than the problems that you are trying to solve in applications. We have far fewer tools, and network engineers are generally focused on specific changes that happen in the industry rather than looking at a breadth of industries. Like, I mean, as Josh pointed out, you could break open a Java book and see 8,000 patterns for how to do Java, and this is true of every programming language that I am aware of. I mean if you look at Go you will see a bunch of different patterns there, and we have talked about different patterns for developing cloud native aware applications as well, right? I mean there are so many options in software versus what we can do and what is available to us within networks. And so I think I am rambling a little bit, but I think that is the takeaway from this session: there is a lot of overlap and there is a lot of really great stuff out there. So, this is Duffie, thank you for tuning in and I look forward to the next episode. [0:52:49.9] NL: Yep and I think we can all agree that Token Ring should have won. [0:52:53.4] DC: Thank you Josh and thank you Scott. [0:52:55.8] JS: Thanks. [0:52:57.0] SL: Thanks guys, this was a blast. [END OF EPISODE] [0:52:59.4] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END] See omnystudio.com/listener for privacy information.

The Podlets - A Cloud Native Podcast
Learning Distributed Systems (Ep 12)


Play Episode Listen Later Jan 13, 2020 47:09


In this episode of The Podlets Podcast, we welcome Michael Gasch from VMware to join our discussion on the necessity (or not) of formal education in working in the realm of distributed systems. There is a common belief that studying computer science is a must if you want to enter this field, but today we talk about the various ways in which individuals can teach themselves everything they need to know. What we establish, however, is that you need a good dose of curiosity and craziness to find your feet in this world, and we discuss the many different pathways you can take to fully equip yourself. Long gone are the days when you needed a degree from a prestigious school: we give you our hit-list of top resources that will go a long way in helping you succeed in this industry. Whether you are someone who prefers learning by reading, attending Meetups or listening to podcasts, this episode will provide you with lots of new perspectives on learning about distributed systems. Follow us: https://twitter.com/thepodlets Website: https://thepodlets.io Feedback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues Hosts: Carlisia Campos Duffie Cooley Michael Gasch Key Points From This Episode: • Introducing our new host, Michael Gasch, and a brief overview of his role at VMware. • Duffie and Carlisia’s educational backgrounds and the value of hands-on work experience. • How they first got introduced to distributed systems and the confusion around what it involves. • Why distributed systems are about more than simply streamlining communication and making things work. • The importance and benefit of educating oneself on the fundamentals of this topic. • Our top recommended resources for learning about distributed systems and their concepts. • The practical downside of not having a formal education in software development. • The different ways in which people learn, index and approach problem-solving. 
• Ensuring that you balance reading with implementation and practical experience. • Why it’s important to expose yourself to discussions on the topic you want to learn about. • The value of getting different perspectives around ideas that you think you understand. • How systems thinking is applicable to things outside of computer science. • The various factors that influence how we build systems. Quotes: “When people are interacting with distributed systems today, or if I were to ask like 50 people what a distributed system is, I would probably get 50 different answers.” — @mauilion [0:14:43] “Try to expose yourself to the words, because our brains are amazing. Once you get exposure, it’s like your brain works in the background. All of a sudden, you go, ‘Oh, yeah! I know this word.’” — @carlisia [0:14:43] “If you’re just curious a little bit and maybe a little bit crazy, you can totally get down the rabbit hole in distributed systems and get totally excited about it. There’s no need for having formal education and the degree to enter this world.” — @embano1 [0:44:08] Learning resources suggested by the hosts: Book, Designing Data-Intensive Applications, M. Kleppmann Book, Distributed Systems, M. van Steen and A.S. Tanenbaum (free with registration) Book, Thesis on Raft (Consensus: Bridging Theory and Practice), D. Ongaro (free PDF) Book, Enterprise Integration Patterns, B. Woolf, G. Hohpe Book, Designing Distributed Systems, B. 
Burns (free with registration) Video, Distributed Systems Video, Architecting Distributed Cloud Applications Video, Distributed Algorithms Video, Operating System - IIT Lectures Video, Intro to Database Systems (Fall 2018) Video, Advanced Database Systems (Spring 2018) Paper, Time, Clocks, and the Ordering of Events in a Distributed System Post, Notes on Distributed Systems for Young Bloods Post, Distributed Systems for Fun and Profit Post, On Time Post, Distributed Systems @The Morning Paper Post, Distributed Systems @Brave New Geek Post, Aphyr’s Class materials for a distributed systems lecture series Post, The Log - What every software engineer should know about real-time data’s unifying abstraction Post, Github - awesome-distributed-systems Post, Your Coffee Shop Doesn’t Use Two-Phase Commit Podcast, Distributed Systems Engineering with Apache Kafka ft. Jason Gustafson Podcast, The Systems Bible - The Beginner’s Guide to Systems Large and Small - John Gall Podcast, Systems Programming - Designing and Developing Distributed Applications - Richard Anthony Podcast, Distributed Systems - Design Concepts - Sunil Kumar Links Mentioned in Today’s Episode: The Podlets on Twitter — https://twitter.com/thepodlets Michael Gasch on LinkedIn — https://de.linkedin.com/in/michael-gasch-10603298 Michael Gasch on Twitter — https://twitter.com/embano1 Carlisia Campos on LinkedIn — https://www.linkedin.com/in/carlisia Duffie Cooley on LinkedIn — https://www.linkedin.com/in/mauilion VMware — https://www.vmware.com/ Kubernetes — https://kubernetes.io/ Linux — https://www.linux.org Brian Grant on LinkedIn — https://www.linkedin.com/in/bgrant0607 Kafka — https://kafka.apache.org/ Lamport Article — https://lamport.azurewebsites.net/pubs/time-clocks.pdf Designing Data-Intensive Applications — https://www.amazon.com/Designing-Data-Intensive-Applications-Reliable-Maintainable-ebook/dp/B06XPJML5D Designing Distributed Systems — 
https://www.amazon.com/Designing-Distributed-Systems-Patterns-Paradigms/dp/1491983647 Papers We Love Meetup — https://www.meetup.com/papers-we-love/ The Systems Bible — https://www.amazon.com/Systems-Bible-Beginners-Guide-Large/dp/0961825170 Enterprise Integration Patterns — https://www.amazon.com/Enterprise-Integration-Patterns-Designing-Deploying/dp/0321200683 Transcript: EPISODE 12 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [00:00:41] CC: Hi, everybody. Welcome back. This is Episode 12, and we are going to talk about distributed systems without a degree or even with a degree, because who knows how much we learn in university. I am Carlisia Campos, one of your hosts. Today, I also have Duffie Cooley. Say hi, Duffie. [00:01:02] DC: Hey, everybody. [00:01:03] CC: And a new host for you, and this is such a treat. Michael Gasch, please tell us a little bit of your background. [00:01:11] MG: Hey! Hey, everyone! Thanks, Carlisia. Yes. So I’m new to the show. I just want to keep it brief because I think over the show we’ll discuss our backgrounds a little bit further. So right now, I’m with VMware. So I’ve been with VMware almost for five years. Currently, I'm in the office of the CTO. I’m a platform architect in the office of the CTO and I mainly use Kubernetes on a daily basis from an engineering perspective. So we build a lot of prototypes based on customer input or ideas that we have, and we work with different engineering teams. 
Kubernetes has become kind of my bread and butter, but lately more from a consumer perspective, like developing with Kubernetes or against Kubernetes, instead of the former way of mostly being around implementing and architecting Kubernetes. [00:01:55] CC: Nice. Very impressive. Duffie? [00:01:58] MG: Thank you. [00:01:59] DC: Yeah. [00:02:00] CC: Let’s give the audience a little bit of your backgrounds. We’ve done this before but just to frame the episodes, so people will know how we come at distributed systems. [00:02:13] DC: Sure. In my experience, I spent – I don’t have a formal education history. I spent most of my time kind of like in a high school time. Then from there, basically worked into different systems administration, network administration, network architect, and up into virtualization and now containerization. So I’ve got a pretty hands-on kind of bootstrap experience around managing infrastructure, both at small-scale, inside of offices, and all the way up to very large scale, working for some of the larger companies here in the Silicon Valley. [00:02:46] CC: All right. My turn I guess. So I do have a computer science degree but I don’t feel that I really went deep at all in distributed systems. My degree is also from a long time ago. So mainly, what I do know now is almost entirely from hands-on work experience. Even so, I think I'm very much lacking and I’m very interested in this episode, because we are going to go through some great resources that I am also going to check out later. So let’s get this party started. [00:03:22] DC: Awesome. So you want to just talk about kind of the general ideas behind distributed systems and like how you became introduced to them or like where you started in that journey? [00:03:32] CC: Yeah. Let’s do that. [00:03:35] DC: My first experience with the idea of distributed systems was in using them before I knew that they were distributed systems, right? 
One of the very first distributed systems, as I look back on it, that I ever actually spent any real time with was DNS, which I consider to be something of a distributed system. If you think about it, they have name servers, they have a bunch of caching servers. They solve many of the same sorts of problems. In a previous episode, we talked about networking, just the general idea of networking and architecting large-scale networks. It’s also in a way very – it has a lot of analogues to distributed systems. For me, I think working with and helping solve the problems that are associated with them over time gave me a good foundational understanding for when we were doing distributed systems as a thing later on in my career. [00:04:25] CC: You said something that caught my interest, and it’s very interesting, because obviously for people who have been writing algorithms, writing papers about distributed systems, they’re going to go yawning right now, because I’m going to say the obvious. As you start your journey programming, you read job requirements. You read: must or should know distributed systems. Then I go, “What is a distributed system? What do they really mean?” Because, yes, we understand apps talk to apps and then there is an API, but there’s always for me at least a question at the back of my head. Is that all there is to it? It sounds like it should be a lot more involved and complex and complicated than just having an app talk to another app. In fact, it is because there are so many concepts and problems involved in distributed systems, right? From timing, clock, and sequence, and networking, and failures, how do you recover. There is a whole world in how do you log this properly, how do you monitor. There’s a whole world that revolves around this concept of systems residing in different places and [inaudible 00:05:34] each other. 
I think this is sort of like there’s an analog to this in containers, oddly enough. When people say, “I want a container within one of these orchestration systems,” they think that that's just a thing that you can ask for. That you get a container and inside of that is going to be your file system and it’s going to do all those things. In a way, I feel like that same confusion is definitely related to distributed systems. When people are interacting with distributed systems today, or if I were to ask like 50 people what a distributed system is, I would probably get 50 different answers. I think that you got a pretty concise definition there in that it is a set of systems that intercommunicate to perform some function. That's like its baseline. I feel like that's a pretty reasonable definition of what distributed systems are, and then we can figure out from there like what functions are they trying to achieve and what are some of the problems that we’re trying to solve with them. [00:06:29] CC: Yeah. That’s what it’s all about in my head is solving the problems because at the beginning, I was thinking, “Well, it must be just about communicating and making things work.” It’s the opposite of that. It’s like that’s a given. When a job says you need to understand about distributed systems, what they are really saying is you need to know how to deal with failures, not just to make it work. Making it work is sort of the easy part, but the whole world of where the failures can happen, how do you handle it – that, to me, is where needing to know distributed systems comes in handy. It comes in a couple different things: the top layer, or 5%, is knowing how to make things work, and 95% is knowing how to handle things when they don’t work, because it’s inevitable. [00:07:19] DC: Yeah, I agree. What do you think, Michael? How would you describe the context around distributed systems? What was the first one that you worked with? [00:07:27] MG: Exactly. 
It’s kind of similar to your background, Duffie, which is no formal degree or education on computer science right after high school, and jumping into kind of my first job, working with computers, computer administration. I must say that from the age of I think seven or so, I was interested in computers and all that stuff, but more from a hardware perspective, less from a software development perspective. So my take always was on disassembling the pieces and building my own computers rather than writing programs. In the early days, that just was me. So I completely almost missed the whole education and principles and fundamentals of how you would write a program for a single computer and then obviously also for how to write programs that run across a network of computers. So over time, as I progressed in my career, especially kind of in the first job, which was like seven years of different Linux systems, Linux administration, I kind of – like you, Duffie, I dealt with distributed systems without necessarily knowing that I'm dealing with distributed systems. I knew that it was mostly storage systems, Linux file servers, but distributed file servers. Samba, if some of you recall that project. So I knew that things could fail. I knew a node could fail, for example, or I knew it could not be writable, and so a client must be stuck, but I didn't necessarily tie that directly to the fundamentals of how distributed systems work or don’t work. Over time, and this is really why I appreciate the Kubernetes project and community, I got more questions, especially when this whole container movement came up. I got so many questions around how does that thing work. How does scheduling work? Because scheduling kind of was close to my interest in the hardware design and low-level details. But I was looking at Kubernetes like, “Okay. There is the scheduler.” In the beginning, the documentation was pretty scarce around the implementation and all the controllers for what’s going on. 
So I had to – I listened to a lot of podcasts and Brian Grant’s great talks and different shows that he gave from the Kubernetes space and other people there as well. In the end, I had more questions than answers. So I had to dig deeper. Eventually, that led me to a path of wanting to understand more formal theory behind distributed systems by reading the papers, reading books, taking some online classes just to get a basic understanding of those issues. So I got interested in resource scheduling in distributed systems and consensus. So those were two areas that kind of caught my eye, like, “What is it? How do machines agree in a distributed system if so many things can go wrong?” Maybe we can explore this later on. So I’m going to park this for a bit. But back to your question, which was kind of a long-winded answer or a road to answering your question, Duffie. For me, a distributed system is like this kind of coherent network of computer machines that from the outside, to an end-user or to another client, looks like one gigantic big machine that is [inaudible 00:10:31] to run as fast. That is performing, also efficient. It constitutes a lot of characteristics and properties that we want from our systems that a single machine usually can’t handle. But it looks like it's a big single machine to a client. [00:10:46] DC: I think that – I mean, it is interesting like, I don’t want to get into – I guess this is probably not just a distributed systems talk. But obviously, one of the questions that falls out for me when I hear that answer is then what is the difference between a microservice architecture and distributed systems, because I think it's – I mean, to your point, the way that a lot of people learn to develop software, it’s like we’re going to develop a monolithic application just by nature. We’re going to solve a software problem using code. 
Then later on, when we decide to actually scale this thing or understand how to better operate it under a significant load, then we started thinking about, “Okay. Well, how do we have to architect this differently in such a way that it can support that load?” That’s where I feel like the beams cut across, right? We’re suddenly in a world where you’re not only just talking about microservices. You’re also talking about distributed systems because you’re going to start thinking about how to understand transactionality throughout that system, how to understand all of those consensus things that you're referring to. How do they affect it when I add mister network in there? That’s cool. [00:11:55] MG: Just one comment on this, Duffie, which took me a very long time to realize, which is coming – From my definition of what a distributed system is like this group of machines that they perform work in a certain sense or maybe even more abstracted like at a bunch of computers network together. What I kind of missed most of the time, and this goes back to the DNS example that you gave in the beginning, was the client or the clients are also part of this distributed system, because they might have caches, especially in DNS. So you always deal with this kind of state that is distributed everywhere. Maybe you don't even know where it kind of is distributed, and the client kind of works with a local stale data. So that is also part of a distributed system, and something I want to give credit to the Kafka community and some of the engineers on Kafka, because there was a great talk lately that I heard. It’s like, “Right. The client is also part of your distributed system, even though usually we think it's just the server. That those many server machines, all those microservices.” At least I missed that a long time. [00:12:58] DC: You should put a link to that talk in our [inaudible 00:13:00]. That would be awesome. It sounds great. So what do you think, Carlisia? 
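Michael's point that clients holding cached, possibly stale state are part of the distributed system can be made concrete with a toy model of a DNS-style cache. This is a sketch, not a real resolver: the name, address, and TTL are invented, and a fake clock stands in for wall time so the staleness window is visible.

```python
import time


class CachingResolver:
    """Toy model of a DNS cache: answers are reused until their TTL
    expires, so a client can keep serving a stale answer while the
    authoritative record changes elsewhere."""

    def __init__(self, authoritative, ttl=30.0, clock=time.monotonic):
        self.authoritative = authoritative  # name -> address, "upstream"
        self.ttl = ttl
        self.clock = clock
        self.cache = {}  # name -> (address, expires_at)

    def resolve(self, name):
        now = self.clock()
        hit = self.cache.get(name)
        if hit and hit[1] > now:
            return hit[0]  # cache hit: possibly stale
        addr = self.authoritative[name]
        self.cache[name] = (addr, now + self.ttl)
        return addr


# A controllable fake clock makes the staleness window easy to see.
t = [0.0]
upstream = {"example.test": "10.0.0.1"}
r = CachingResolver(upstream, ttl=30.0, clock=lambda: t[0])

r.resolve("example.test")                # cached now
upstream["example.test"] = "10.0.0.2"    # record changes upstream
stale = r.resolve("example.test")        # still the old answer
t[0] = 31.0                              # TTL expires
fresh = r.resolve("example.test")        # now the new answer
```

The client is "correct" by its own local rules the whole time, which is exactly why it has to be counted as part of the distributed system.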
[00:13:08] CC: Well, one thing that I wanted to mention is that Michael was saying how he’s been self-teaching distributed systems, and I think if we want to be competent in the area, we have to do that. I’m saying this to myself even. It’s very refreshing when you read a book or you read a paper and you really understand the fundamentals of an aspect of distributed systems. A lot of things fall into place in your hands. I’m saying this because even prioritizing reading about and learning about the fundamentals is really hard for me, because you have your life. You have things to do. You have the minutiae of things to get done. But so many times, I struggle. On the rare occasions where I go, “Okay, let me just learn this stuff,” trial and error, it makes such a difference. Then once you learn, it stays with you forever. So it’s really good. It’s so refreshing to read a paper and understand things at a different level, and that is what this episode is. I don’t know if this is the time to jump into, “So here are our recommendations.” I don't know how deep, Michael, you’re going to go. You have a ton of things listed. Everything we mention on the show is going to be on our website, in the show notes. So nobody needs to be necessarily taking notes. Another thing I wanted to say is it would be lovely if people would get back to us once you've listened to this. Let us know if you want to add anything to this list. It would be awesome. We can even add it to this list later and give a shout out to you. So it’d be great. 
Honestly, even though not the first in the list, the first thing that I read, so maybe from kind of my history of how I approach things, was searching for how do computers work and what are some of the issues and how do computers and machines agree. Obviously, the classic paper that I read was the Lamport paper on “Time, Clocks, and the Ordering of Events in a Distributed System”. I want to be honest. First time I read it, I didn’t really get the full essence of the paper, because of the proof in there. The mathematical proof for me didn't click immediately, and there were so many things and concepts and physics and time that were thrown at me where I was looking for answers and I had more questions than answers. But this is not on Leslie. This is more like, at the time I just wasn't prepared for how deep the rabbit hole goes. So I thought, if someone asked me – if I only have time to read one book out of this huge list that I have there and all the other resources, which one would it be? Which one would I recommend? I would recommend Designing Data-Intensive Apps by Martin Kleppmann, whose blog posts and some partial releases I’ve been following before he fully released that book, which took him more than four years to release. It’s kind of almost the Bible, the state-of-the-art Bible, when it comes to all concepts in distributed systems. Obviously, consensus, network failures, and all that stuff, but then also leading into modern data streaming, data platform architectures inspired by, for example, LinkedIn and other communities. So that would be the book that I would recommend to someone who only has time to read one book. [00:16:52] DC: That’s a neat approach. I like the idea of like if you had one thing, if you have one way to help somebody ramp on distributed systems and stuff, what would it be? For me, it’s actually I don't think I would recommend a book, oddly enough. 
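The core rule of the Lamport paper mentioned above fits in a few lines: every process keeps a counter, bumps it on each local event, and on receiving a message jumps past the sender's timestamp, so causally related events always get increasing timestamps. A minimal sketch in Python:

```python
class LamportClock:
    """Logical clock from Lamport's "Time, Clocks, and the Ordering
    of Events": order events without any shared physical clock."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # A local event increments the clock.
        self.time += 1
        return self.time

    def send(self):
        # Timestamp to attach to an outgoing message.
        return self.tick()

    def receive(self, msg_time):
        # On receipt, jump strictly past the sender's timestamp.
        self.time = max(self.time, msg_time) + 1
        return self.time


# Two nodes exchange one message: the receiver's clock ends up
# strictly ahead of the send timestamp, so "send happened before
# receive" is reflected in the timestamps.
a, b = LamportClock(), LamportClock()
a.tick()      # a's local event: a.time == 1
t = a.send()  # a.time == 2, message stamped 2
b.receive(t)  # b.time == max(0, 2) + 1 == 3
```

The counterintuitive part of the paper is the converse: a smaller timestamp does not imply the event happened first, which is where the deeper rabbit hole starts.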
I feel like I would actually – I’d probably drive them toward the Kind project, like the Kind [inaudible 00:17:09] project, and say, “This is a distributed system all by itself.” Start tearing it apart into pieces and seeing how they work and breaking them and then exploring and kind of just playing with the parts. You can do a lot of really interesting things. There is actually another book in your list, written by Brendan Burns, Designing Distributed Systems I think it’s called. That book, I think he actually uses Kubernetes as a model for how to go about achieving these things, which I think is incredibly valuable, because it really gets into some of the more stable distributed systems patterns that are around. I feel like that's a great entry point. So if I had one thing, if I had to pick one way to help somebody or to push somebody in the direction of trying to learn distributed systems, I would say identify those distributed systems that maybe you’re already aware of and really explore how they work and what the problems with them are and how they went about solving those problems. Really dig into the idea of it. It’s something you could put your hands on and play with. I mean, Kubernetes is a great example of this, and this is actually why I referred to it. [00:18:19] CC: The way that works for me when I’m learning something like that is to really think about where the boundaries are, where the limitations are, where the tradeoffs are. If you can take a smaller system, maybe something like the Kind project, and identify what those things are. If you can’t, then ask around. Ask someone. Google it. I don’t know. Maybe it will be a good episode topic for us to do that – to map things out, so maybe we can understand better and help people understand things better. So mainly, yeah, try to identify what the distributed system pieces are. But for people who don’t even know what they could be, it’s harder to identify. 
I don’t know what a good strategy for that would be, because you can read about distributed systems and then you can go and look at a project. How do you map the concept you're learning to what you’re seeing in the code base? For me, that’s the hardest thing. [00:19:26] MG: Exactly. Something where I had a related experience was like when I went into software development, without having formal education on algorithms and data structures, sometimes in your head, you have the problem statement and you're like, “Okay. I would do it like that.” But you don't know the word that describes, for example, a heap structure or a queue, because no one ever told you that is a heap, that is a queue, and/or that is a stack. So, for me, reading the book was a bit easier. Even though I had done distributed systems, if you will, administration for many years, many years ago, I didn't realize that it was a distributed system because I never had this definition or I never had those failure scenarios in mind, and I never had a word for consensus. So how would I search for something like how do machines agree? I mean, if you put that into Google, then likely you will get a lot of stuff. But if you put in consensus algorithm, likely you get a good hit on what the answer should be. [00:20:29] CC: It is really problematic when we don't know the names of things because – What you said is so right, because we are probably doing a lot of distributed systems without even knowing that that’s what it is. Then we go into the job interview, and people are, “Oh! Have you done a distributed system?” No. You have, but you just don’t know how to name things. But that’s one – [00:20:51] DC: Yeah, exactly. [00:20:52] CC: Yeah. Right? That’s one issue. Another issue, which is a bigger issue though, is at least that’s how it is for me. I don’t want to speak for anybody else but for me definitely. 
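For the "how do machines agree" question Michael raises, the seed of every consensus algorithm is the majority quorum: any two majorities of the same cluster must overlap in at least one node, so two conflicting decisions can never both succeed. The sketch below is only that core idea, not Raft or Paxos, which add leaders, terms, and replicated logs on top; the node functions stand in for acknowledgments over a network.

```python
def commit(proposal, nodes, quorum=None):
    """Accept a proposal only if a majority of nodes acknowledge it.
    `nodes` is a list of callables returning True (ack) or False."""
    if quorum is None:
        quorum = len(nodes) // 2 + 1  # strict majority
    acks = sum(1 for node in nodes if node(proposal))
    return acks >= quorum


def healthy(value):
    return True   # node reachable, acknowledges the proposal


def partitioned(value):
    return False  # node unreachable, no acknowledgment


# Five nodes, two cut off by a partition: 3 of 5 acks is a majority,
# so the proposal commits.
ok = commit("x=1", [healthy, healthy, healthy, partitioned, partitioned])

# With four nodes cut off, 1 of 5 acks is not a majority: no commit.
failed = commit("x=2", [healthy, partitioned, partitioned,
                        partitioned, partitioned])
```

Because a commit needs 3 of 5, a minority partition can never commit a conflicting value on its own, which is the overlap guarantee the real algorithms build on.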
If I can’t name things and I face a problem and solve it, every time I face that problem it’s a one-off thing, because I can’t map it to a higher concept. So every time I face that problem, it’s like, “Oh!” It’s not like, “Oh, yeah! This is this kind of problem; I have a pattern. I’m going to use that on this problem.” So that’s what I’m saying. Once you learn the concept, you need to be able to name it. Then you can map that concept to problems you have. All of a sudden, you have like three things [inaudible 00:21:35] use to solve this problem, because as you work with computers, coding, you see the same thing over and over again. But when you don’t understand the fundamentals, things are just a bunch of different one-offs. It’s like when you have an argument with your spouse or girlfriend or boyfriend. Sometimes, you argue 10 times in a month and you think, “Oh! I had 10 arguments.” But if you stop and think about it, no. You had one argument 10 times. It’s very different having 10 problems versus having 1 problem 10 times, if that makes sense.

[00:22:12] MG: It does.

[00:22:11] DC: I think it does, right?

[00:22:12] MG: I just want to agree.

[00:22:16] DC: I think it does make sense. I think it’s interesting. You’ve highlighted kind of an interesting pattern around the way that people learn, which I think is really interesting. Some people are able to read about patterns or software patterns or algorithms or architectures and have that suddenly become an index in their heads. They can then later on correlate what they’ve read with the experience they’re having around the things they’re working on. For others, it needs to be hands-on. They need to actually be able to explore that idea, understand and manipulate it, and be able to describe how it works or functions in person, in reality. They need to have that hands-on, “I need to touch it to understand it,” kind of experience.
Those people also, as they go through those experiences, start building this index of patterns or algorithms in their head. They have this thing they can correlate to, right? Like, “Oh! This is a time problem,” or, “This is a consensus problem,” or what have you, right?

[00:23:19] CC: Exactly.

[00:23:19] DC: You may not know the word for that thing, but you’re still going to develop a pattern in your mind, the ability to correlate this particular problem with some pattern that you’ve seen before. What’s interesting is I feel like people have taken different approaches to building that index, right? For me, it’s been troubleshooting. Somebody gives me a hard problem, and I dig into it and figure out what the problem is, regardless of whether it’s to do with distributed systems or cooking. It could be anything, but I always want to get right in there, figure out what that problem is, and start building a map in my mind of all of the players that are involved. For others, I feel like with an educational background, you sometimes end up coming to this with a set of patterns already instilled that you understand, and you’re just trying to apply those patterns to the experience you’re having instead. It’s like horse before the cart or cart before the horse. It’s very interesting when you think about it.

[00:24:21] CC: Yes.

[00:24:22] MG: The recommendation I want to give to people who are like me, who like reading, is that I went overboard a bit in the beginning, because I was so fascinated by all the stuff, and I went down the rabbit hole deeper, deeper, deeper. Reading and reading and reading. At some point, I even ended up on weird YouTube channels that talk about, “Is time real, and where does time emerge from?” It became philosophical, the path I went down.
Now, the thing is – and this is why I like Duffie’s approach of breaking things – trying to break things and understanding how they work and how they can fail means you immediately practice. You’re hands-on. So that would be my advice to people who are more like me, fascinated by reading and all the theory: your brain and your mind are not really capable of absorbing all that stuff and then remembering it without practicing. Practicing can be breaking things or installing things or administrating things or even writing software. But for me, that was also a late realization, that I should have started doing things earlier rather than spending all that time reading.

[00:25:32] CC: By doing, you mean hands-on?

[00:25:35] MG: Yeah.

[00:25:35] CC: Anything specific that you would have started with?

[00:25:38] MG: Yes. On Kubernetes – So going back those 15 years to my early days of Linux and Samba, which is a project that, at the time, I think was written in C or C++. The problem was I wasn’t able to read the code. So the only thing I had back then was some mailing lists and asking questions – and not even knowing which questions to ask because of the lack of words of understanding. Now, fast-forward to Kubernetes’ time, which got me deeper into distributed systems, I still couldn’t read the code because I didn’t know [inaudible 00:26:10]. But I forced myself to read the code, which helped a little bit in understanding what was going on, because the documentation back then was lacking. These days, it’s easier, because you can just install [inaudible 00:26:20] way easier today. The hands-on piece, I mean.

[00:26:23] CC: You said something interesting, Michael, and I have given this advice before because I use this practice all the time. It’s so important to have a vocabulary. Like you just said, you didn’t know what to ask because you didn’t know the words. I practice this all the time.
To people who are in this position with distributed systems, or whatever it is, or something more specific you’re trying to learn: try to expose yourself to the words, because our brains are amazing. Once you get exposure, your brain works in the background. All of a sudden, you go, “Oh, yeah! I know this word.” So podcasts are great for me. If I don’t know something, I will look for a podcast on the subject and start listening to it, and the words get repeated, just contextually. I don’t have to go and get a degree or anything. Just by listening to the words being spoken in context, I absorb the meaning. So podcasting is great, or YouTube, or anything you can listen to. And reading too, of course. The best thing is talking to people. But, again, sometimes it’s not trivial to put yourself in positions where people are discussing these things.

[00:27:38] DC: There are actually a number of Meetups here in the Bay Area, and that whole Meetup thing is sort of nationwide across the entire US, and around the world it seems like now lately. There are Meetups in different subject areas. There’s one here in the Bay Area called Papers We Love, where they actually explore interesting technical papers, which are obviously a great place to learn the words for things, right? This is actually where those words are being defined. When you get into the consensus stuff, they really get into it – one example is Raft. There are many papers on Raft and many papers on multiple things that get into consensus. So definitely, whether you explore a Meetup on a distributed system, or on a particular application, or on a particular theme like Kubernetes, those things are great places just to get more exposure to what people are thinking about in these problems.

[00:28:31] CC: That is such a great tip.

[00:28:34] MG: Yeah.
Podcasts are twice as good for people who are non-native English speakers. The thing is that the word you’re looking for might be totally different from the English word. For example, consensus in German has a totally different meaning. So if I looked that up in German, I would likely find nothing, or nothing really related at all. So you have to go through translation and then find the stuff. So what you said, Duffie, with PWL, Papers We Love, or podcasts – those words are often in English in those podcasts, words like consensus or sharding or partitioning. Those are the words that you can at least look up, like, what does this mean? That’s what I did as well thus far.

[00:29:16] CC: Yes. I also want to give a plus one for Papers We Love. They are everywhere, and they also have an online version of the Papers We Love Meetup, and a lot of the local ones film their meetups. So you can go through the history and see if they talked about any paper you’re interested in. I’m sure multiple locations talk about the same paper, so you can get different takes too. It’s really, really cool. Sometimes, it’s completely obscure, like, “I didn’t get a word of what they were saying. Not one. What am I doing here?” But sometimes, they talk about things where you at least know what the thing is, and you get like 10% of it. But some papers you don’t. People who deal with papers day in and day out, it’s very much – I don’t know.

[00:30:07] DC: It’s super easy when going through a paper like that to have the imposter syndrome wash over you, right, because you’re like –

[00:30:13] CC: Yes. Thank you. That’s what I wanted to say.

[00:30:15] DC: I feel like I’ve been in this for 20 years. I probably know a few things, right? But reading this consensus paper I’m going, “Can I buy a vowel? What is happening?”

[00:30:24] CC: Yeah. Can I buy a vowel? That’s awesome, Duffie.
[00:30:28] DC: But the other piece I want to call out, to your point, which I think is important, is that some people don’t want to go out and be there in person. They don’t feel comfortable or safe exploring those things in person. So there are tons of resources, like you just pointed out, such as the online version of Papers We Love. You can also sign into Slack and just interact with people via text messaging, right? There are a lot of really great resources out there for people of all types, whatever amount of time you have.

[00:30:53] CC: Papers We Love is like going to language class. If you go and take a class in Italian, on your first day, even though it’s going to be super basic, you’re going to be like, “What?” You go back in your third week and you start, “Oh! I’m getting this.” Then in a month, three months, “Oh! I’m starting to be competent.” So you go once, and you’re going to feel lost and experience imposter syndrome. But you keep going, because that is a format. First, you absorb what the format is, and that helps you understand the content. Once your mind absorbs the format, you’re like, “Okay. Now I know how to navigate this. I know what’s coming next.” So you don’t have to focus on that. You start focusing on the content. Then little by little, you become more proficient in understanding. Very soon, you’re going to be willing to write a paper. I’m not there yet.

[00:31:51] DC: That’s awesome.

[00:31:52] CC: At least that’s how I think it goes. I don’t know.

[00:31:54] MG: I agree.

[00:31:55] DC: It’s also changed over time. It’s fascinating. If you read papers from 20 years ago and papers written more recently, it’s interesting. The papers have changed their language when considering competition. When you’re introducing a new idea with a paper, frequently you are introducing it into a market full of competition.
You’re being very careful about the language, almost in a way that complicates the idea rather than making it clear, which is challenging. There are definitely some papers that I’ve read where I was like, “Why are you using so many words to describe this simple idea?” It makes no sense, but yeah.

[00:32:37] CC: I don’t want to make this episode all about Papers We Love. It was so good that you mentioned it, Duffie. It’s really good to be in a room, or watching something online, where you see people asking questions, and people go, “Oh! Why is this thing like this? Why is X like this,” or, “Why is Y doing like this?” Then you go, “Oh! I didn’t even think that X was important. I didn’t even know that Y was important.” So you start picking up what the important things are, and that’s what makes it click – hooking into the important concepts, because people who know more than you are pointing them out and asking questions. So you start learning what the main things to pay attention to are, which is different from reading the paper by yourself, where it’s just a ton of content that you need to sort through.

[00:33:34] DC: Yeah. I frequently self-describe as a perspective junkie, because I feel like for any of us to really learn more about a subject we feel we understand, we need the perspective of others to really engage, to expand our understanding of that thing. I feel like I know how to make a peanut butter and jelly sandwich. I’ve done it a million times. It’s a solid thing. But then I watch my kid do it and I’m like, “I hadn’t thought of that problem.” [inaudible 00:33:59], right? This is a great example of that. Communities like Papers We Love are a great opportunity to understand the perspective of others around these hard ideas. When we’re trying to understand complex things like distributed systems, this is where it’s at. This is actually how we go about achieving this.
There is a lot you can do on your own, but there is always going to be more that you can do together, right? You can always do more. You can always understand an idea faster. You can understand the complexity of a system and how to break it down by exploring it with other people. That’s, I feel like –

[00:34:40] CC: That is so well said, so well said, and it’s the reason for this show to exist, right? We come on the show and give our perspectives, and people get to learn from people with different backgrounds, what their takes are on distributed systems and cloud native. So this was such a major plug for the show. Keep coming back. You’re going to learn a ton. Also, it was funny that it was the second time you made a cooking reference, Duffie, which brings me to something I want to make sure I say on this episode. I added a few things for reference, three books. But the one that I definitely would recommend starting with is The Systems Bible by John Gall. This book is so cool, because it helps you see everything through systems. Everything is a system. A conversation can be a system. An interaction between two people can be a system. I’m not saying this book says that. It’s just my translation. Cooking is a system. There is a process. There is a sequence. It’s really, really cool, and it really helps to have things framed in this way and then go out and read the other books on systems. I think it helps a lot. This is definitely what I am starting with and what I would recommend people start with, The Systems Bible. Did you two know this book?

[00:36:15] MG: I did not. I don’t.

[00:36:17] DC: I’m not aware of it either, but I really appreciate the idea. I do think that that’s true. If you develop a skill for understanding systems as they are, what you’re frequently developing is the ability to recognize patterns, right?

[00:36:32] CC: Exactly.
[00:36:32] DC: You can recognize those patterns in anything.

[00:36:37] MG: Yeah. That’s a good segue for something that just came to my mind. Recently, I gave a talk on event-driven architectures. For someone who’s not a software developer or architect, it can be really hard to grasp all those concepts of asynchrony and eventual consistency and idempotency. There are so many words, like, “What is this all – It sounds weird, way too complex.” But I was reading a book some years ago by Gregor Hohpe. He’s the guy behind Enterprise Integration Patterns. That’s also a book that I have on my list here. He said, “Your barista doesn’t use two-phase commit.” He was basically making this analogy: he was in a coffee shop, just looking at the process of how the barista makes the coffee, how you pay for it, and all the things that can go wrong while your coffee is brewed and served to you. So he was drawing this relation between the real world, life, and human society, and computer systems. That’s when it clicked for me: so many problems we solve every day – for example, agreeing on a time when we should meet for dinner, or cooking – are consensus problems, and we solve them. We even solve them in the case of failure. I might not be able to call Duffie, because he is not available right now. So somehow, we figure it out. I always thought those problems just existed in computer science and distributed systems. But I realized that’s actually just a subset of the real world as is. Look at those problems through the lens of your daily life, and there are so many things that are related to computer systems.

[00:38:13] CC: Michael, I missed it. Was it an article you read?

[00:38:16] MG: Yes. I need to put that in there as well. Yeah. It’s a plug.

[00:38:19] CC: Please put that in there. Absolutely. I am far from being any kind of expert in distributed systems, but I have noticed.
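[An editorial aside: the two-phase commit that Gregor Hohpe’s barista famously does not use can be sketched in a few lines. This is a hypothetical toy, not a real transaction manager – the participant names and the callable-based voting are invented for illustration; real 2PC also handles timeouts, crashes, and recovery logs.]

```python
# Toy two-phase commit: a coordinator asks every participant to vote
# (the "prepare" phase), and only if all vote yes does it tell them
# to commit; otherwise the whole transaction aborts.
def two_phase_commit(participants):
    """participants: callables returning True (vote yes) or False (vote no)."""
    votes = [vote() for vote in participants]  # Phase 1: collect votes
    return "commit" if all(votes) else "abort"  # Phase 2: unanimous or nothing

# Hypothetical coffee-shop participants:
payment_ok = lambda: True
milk_in_stock = lambda: False

print(two_phase_commit([payment_ok, milk_in_stock]))  # abort: one participant voted no
print(two_phase_commit([payment_ok]))                 # commit: everyone agreed
```

The barista, by contrast, just starts your drink before payment settles and throws it away on failure – optimistic, compensating behavior rather than blocking agreement, which is exactly Hohpe’s point.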
I have caught myself using systems thinking even for complicated conversations. Even in my personal life, I started approaching things in a systems-oriented way. Just a high-level example: when I am working with systems, I can approach from the beginning or the end. It’s like a puzzle, putting the puzzle together, right? Sometimes, it starts from the middle. Sometimes, it starts from the edges. When I’m having conversations where I need to be very strategic, where I have one shot – let’s say maybe I’m in a school meeting and I have to reach a consensus or have a solution or a plan of action – I have to ask the right questions. Historically, I would do things linearly: “Let’s go from the beginning and work through to the end.” Now, I don’t do that anymore. Not necessarily. Sometimes, I’m like, “Let me maybe ask the last question I would ask and see where it leads, and just approach things a different way.” I don’t know if this is making sense.

[00:39:31] MG: It does. It does.

[00:39:32] CC: But my thinking has changed. The way I see the possibilities is not a linear thing anymore. I see how you can truly switch things around. I use this in programming a lot, and also in writing. When you’re a beginner writer, you start at the top and work down to the conclusion. Sometimes, I start in the middle and go up, right? You can start anywhere. It’s beautiful, or it just gives you so many more options. Or maybe I’m just crazy. Don’t listen to me.

[00:40:03] DC: I don’t think you’re crazy. I was going to say, one of the funny things about Michael’s point and your point both is that, in a way, they both refer to Conway’s law, the idea that people will build systems in the way that they communicate. So it totally brings it back to that same point, right? We by nature will build systems that we can understand, because that is the constraint in which we have to work, right? So it’s very interesting.
[00:40:29] CC: Yeah. But it’s an interesting thing, because we are [inaudible 00:40:32] by the way we are forced to work. For example, I work with constraints, and what I’m saying is that that has been influencing my way of thinking. So, yes, I build systems in the way I think, but also, because of the constraints that I’m dealing with – the tradeoffs I need to make – that also turns around and influences the way I think, the way I see the world and the rest of the systems. Of course, as I change my thinking, you can theorize that you go back and apply that – apply things that you learn outside of your work back to your work. It’s a beautiful back-and-forth, I think.

[00:41:17] MG: I had the same experience when I had to design my first API and think, “Okay. What would the consumer contract be, and what would a consumer expect me to deliver in response, and so on?” I was forcing myself to be explicit in communicating, not throwing everything back at the client and confusing them, but being very explicit and precise. In everyday communication, too, when you talk to people, being explicit and precise really helps to avoid a lot of problems and trouble, be it in a partnership, amongst friends, or at work. This is what I took from computer science back into my real world: perceiving things from a different perspective and being more precise and explicit in how I respond or communicate.

[00:42:07] CC: My take on what you just said, Michael, is that we design systems thinking, how is this going to fail? We know this is going to fail. We’re going to design for that. We’re going to implement for that. In real life, for example, if I need to get an agreement from someone, I try to understand the person’s thinking. And just so you know, I just had this huge thing this week, so this is on my mind. I’m not constantly thinking about this; I’m not crazy like that.
Just a little bit crazy. It’s like, “How does this person think? What do they need to know? How far can I push?” Right? We need to make a decision quickly, so the approach is everything, and sometimes you only get one shot, so yeah. I mean, correct me if I’m wrong. That’s how I heard or interpreted what you just said.

[00:42:52] MG: Yeah, absolutely. Spot on. Spot on. So I’m not crazy as well.

[00:42:55] CC: Basically, I think we ended up turning this episode into a little bit of, “Here are great references,” and also a huge endorsement for really going deep into distributed systems, because it’s going to be good for your jobs. It’s going to be good for your life. It’s going to be good for your health. We are crazy.

[00:43:17] DC: I’m definitely crazy. You guys might be. I’m not. All right. So we started this episode with the idea of coming to learning distributed systems perhaps without a degree or without a formal education in it. We talked about a range of different ideas on that subject, like the different approaches that each of us took and how each of us sees the problem. Is there any important point that either of you wants to throw back into the mix here or bring up in relation to that?

[00:43:48] MG: Well, what I take from this episode, being my first episode and getting to know your backgrounds, Duffie and Carlisia, is that for whoever is going to listen to this episode, whatever background you have, even if you’re not in computer systems or the industry at all, I think we three have all proved that whatever background you have, if you’re just a little bit curious and maybe a little bit crazy, you can totally go down the rabbit hole in distributed systems and get totally excited about it. There’s no need for a formal education or a degree to enter this world. It might help, but it’s not the high bar that I perceived it to be 10 years ago, for example.

[00:44:28] CC: Yeah. That’s a good point.
My takeaway is that it always puzzled me how some people are so good and experienced, such experts in distributed systems. I always looked at myself like, “How am I lacking?” It’s like, “What memo did I miss? What class did I miss? What project did I not work on to get the experience?” What I’m seeing is you just need to put yourself in that place. You need to do the work. But the good news is achieving competency in distributed systems is doable.

[00:45:02] DC: My takeaway is, as we discussed before, I think that there is no one thing that comprises a distributed system. It is a number of things, right? Basically a number of behaviors or patterns that we see that comprise what a distributed system is. So when I hear people say, “I’m not an expert in distributed systems,” I think, “Well, perhaps you are and you don’t know it already.” Maybe there’s some particular set of patterns with which you are incredibly familiar. Like, you understand DNS better than the other 20 people in the room. That exposes you to a set of patterns that certainly gives you the capability of saying that you are an expert in that particular set of patterns. So, to both of your points, you can enter this stage where you want to learn about distributed systems from pretty much any direction. You can come at it from a CS background. You can come at it with no computer experience whatsoever, and it will obviously take a bit more work. But this is really just about developing an understanding of how these things communicate and the patterns with which they accomplish that communication. I think that’s the important part.

[00:46:19] CC: All right, everybody. Thank you, Michael Gasch, for being with us now. I hope to –

[00:46:25] MG: Thank you.

[00:46:25] CC: To see you in more episodes [inaudible 00:46:27]. Thank you, Duffie.

[00:46:30] DC: My pleasure.

[00:46:31] CC: Again, I’m Carlisia Campos. With us were Duffie Cooley and Michael Gasch.
This was episode 12, and I hope to see you next time. Bye.

[00:46:41] DC: Bye.

[00:46:41] MG: Goodbye.

[END OF EPISODE]

[00:46:43] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you’ll find transcripts and show notes. We’ll be back next week. Stay tuned by subscribing.

[END]

The Podlets - A Cloud Native Podcast
The Dichotomy of Security (Ep 10)


Play Episode Listen Later Dec 30, 2019 44:20


Security is inherently dichotomous because it involves hardening an application to protect it from external threats, while at the same time ensuring agility and the ability to iterate as fast as possible. This in-built tension is the major focal point of today’s show, where we talk about all things security. From our discussion, we discover that there are several reasons for this tension. The overarching problem with security is that the starting point is often rules and parameters, rather than understanding what the system is used for. This results in security being heavily constraining. For this to change, a culture shift is necessary, where security people and developers come around the same table and define what optimizing means to each of them. This, however, is much easier said than done, as security is usually only brought in at the later stages of development. We also discuss why the problem of security needs to be reframed, the importance of defining what normal functionality is, and issues around response and detection, along with many other security insights. The intersection of cloud native and security is an interesting one, so tune in today!

Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io
https://github.com/vmware-tanzu/thepodlets/issues

Hosts:
Carlisia Campos
Duffie Cooley
Bryan Liles
Nicholas Lane

Key Points From This Episode:
- Often application and program security constrain optimum functionality.
- Generally, when security is talked about, it relates to the symptoms, not the root problem.
- Developers have not adapted internal interfaces to security.
- Look at what a framework or tool might be used for and then make constraints from there.
- The three frameworks people point to when talking about security: FISMA, NIST, and CIS.
- Trying to abide by all of the parameters is impossible.
- It is important to define what normal access is to understand what constraints look like.
- Why it is useful to use auditing logs in pre-production.
- There needs to be a discussion between developers and security people.
- How security with Kubernetes and other cloud native programs works.
- There has been some growth in securing secrets in Kubernetes over the past year.
- Blast radius – why understanding the extent of a security malfunction’s effect is important.
- Chaos engineering is a useful framework for understanding vulnerability.
- Reaching across the table – why open conversations are the best solution to the dichotomy.
- Security and developers need to have the same goals and jargon from the outset.
- The current model only brings security in at the end stages of development.
- There needs to be a place to learn what normal functionality looks like outside of production.
- How Google manages to run everything in production.
- It is difficult to come up with security solutions for differing contexts.
- Why people want service meshes.

Quotes:

“You’re not able to actually make use of the platform as it was designed to be made use of, when those constraints are too tight.” — @mauilion [0:02:21]

“The reason that people are scared of security is because security is opaque and security is opaque because a lot of people like to keep it opaque but it doesn’t have to be that way.” — @bryanl [0:04:15]

“Defining what that normal access looks like is critical to us to our ability to constrain it.” — @mauilion [0:08:21]

“Understanding all the avenues that you could be impacted is a daunting task.” — @apinick [0:18:44]

“There has to be a place where you can go play and learn what normal is and then you can move into a world in which you can actually enforce what that normal looks like with reasonable constraints.” — @mauilion [0:33:04]

“You don’t learn to ride a motorcycle on the street.
You’d learn to ride a motorcycle on the dirt.” — @apinick [0:33:57]

Links Mentioned in Today’s Episode:

AWS — https://aws.amazon.com/
Kubernetes — https://kubernetes.io/
IAM — https://aws.amazon.com/iam/
Securing a Cluster — https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/
TGI Kubernetes 065 — https://www.youtube.com/watch?v=0uy2V2kYl4U&list=PL7bmigfV0EqQzxcNpmcdTJ9eFRPBe-iZa&index=33&t=0s
TGI Kubernetes 066 — https://www.youtube.com/watch?v=C-vRlW7VYio&list=PL7bmigfV0EqQzxcNpmcdTJ9eFRPBe-iZa&index=32&t=0s
Bitnami — https://bitnami.com/
Target — https://www.target.com/
Netflix — https://www.netflix.com/
HashiCorp — https://www.hashicorp.com/
Aqua Sec — https://www.aquasec.com/
CyberArk — https://www.cyberark.com/
Jeff Bezos — https://www.forbes.com/profile/jeff-bezos/#4c3104291b23
Istio — https://istio.io/
Linkerd — https://linkerd.io/

Transcript:

EPISODE 10

[INTRODUCTION]

[0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores cloud native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you.

[EPISODE]

[0:00:41.2] NL: Hello and welcome back to The Kubelets Podcast. My name is Nicholas Lane, and this time we’re going to be talking about the dichotomy of security. And to talk about such an interesting topic, joining me are Duffie Cooley.

[0:00:54.3] DC: Hey, everybody.

[0:00:55.6] NL: Bryan Liles.

[0:00:57.0] BM: Hello.

[0:00:57.5] NL: And Carlisia Campos.

[0:00:59.4] CC: Glad to be here.

[0:01:00.8] NL: So, how’s it going everybody?

[0:01:01.8] DC: Great.

[0:01:03.2] NL: Yeah, this I think is an interesting topic. Duffie, you introduced us to this topic.
And basically, what I understand of what you wanted to talk about – we’re calling it the dichotomy of security because it’s the relationship between security, hardening your application to protect it from attack and influence from outside actors, and agility, being able to create something that’s useful and the ability to iterate as fast as possible.

[0:01:30.2] DC: Exactly. I mean, the idea for this came from putting together a talk for the security conference coming up here in a couple of weeks. I was noticing that obviously, if you look at the job of somebody who is trying to provide some security for applications on their particular platform, whether that be AWS or GCE or OpenStack or Kubernetes or any of these things, it’s frequently in their domain to define constraints for all of the applications that would be deployed there, right? Such that you can provide rational defaults for things, right? Maybe you want to make sure that things can’t do a particular action because you don’t want to allow that for any application within your platform, or you want to provide some constraint around quota, or all of these things. Some of those constraints make total sense, and some of them I think actually do impact your ability to design the systems or to consume that platform directly, right? You’re not able to actually make use of the platform as it was designed to be made use of when those constraints are too tight.

[0:02:27.1] DC: Yeah. I totally agree. There’s kind of a joke that we have in certain tech fields, which is that the primary responsibility of security is to halt productivity. It isn’t actually true, right? But there are tradeoffs, right? If security is too tight, you can’t move forward. An example that comes to mind is being so tight on your firewall rules that you can’t actually use anything of value. That’s a quick example of security gone haywire. That’s too controlling, I think.

[0:02:58.2] BM: Actually.
This is an interesting topic just in general, but I think that before we fall prey to what everyone does when they talk about security, let’s take a step back and understand why things are the way they are. Because all we’re talking about are the symptoms of what’s going on, and I’ll give you one quick example of why I say this. Things are the way they are because we haven’t made them any better.

In developer land, whenever we consume external resources, what we were supposed to do, and what we should be doing but what we don’t do, is we should create our internal interfaces, only program to those interfaces, and then let that interface adapt or talk to the external service. And in the security world, we should be doing the same thing, and we don’t do this. My canonical example for this is IAM on AWS. It’s hard to create a secure IAM configuration, and it’s even harder to keep it over time, and it’s even harder to do it whenever you have 150, 100, 5,000 people dealing with this.

What companies do is they actually create interfaces where they can describe the part of IAM they want to use, and then they translate that over. The reason I bring this up is because the reason that people are scared of security is because security is opaque, and security is opaque because a lot of people like to keep it opaque. But it doesn’t have to be that way.

[0:04:24.3] NL: That’s a good point. That’s a reasonable design, and wherever I see that adopted it actually is very helpful, right? Because you highlight a critical point in that these constraints have to be understood by the people who are constrained by them, right? Otherwise it will just continue to kind of drive that wedge between the people who are responsible for defining them and the people who are being affected by them, right? That transparency, I think, is definitely key.

[0:04:48.0] BM: Right. This is our cloud native discussion. Any idea of where we should start thinking about this in cloud native land?
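One way to picture Bryan’s “internal interface” idea is a thin translation layer: developers declare access in application terms and never touch raw IAM. The sketch below is only an illustration under assumed names — `BucketAccess` and `render_iam_policy` are made up, not part of any real library — though the emitted actions are real S3 IAM actions:

```python
import json

class BucketAccess:
    """Hypothetical internal interface: declares read/write access to one S3 bucket."""

    def __init__(self, bucket: str, read: bool = False, write: bool = False):
        self.bucket = bucket
        self.read = read
        self.write = write

    def to_iam_statements(self) -> list:
        # Translate the app-level declaration into concrete IAM statements.
        statements = []
        if self.read:
            statements.append({
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{self.bucket}",
                             f"arn:aws:s3:::{self.bucket}/*"],
            })
        if self.write:
            statements.append({
                "Effect": "Allow",
                "Action": ["s3:PutObject"],
                "Resource": [f"arn:aws:s3:::{self.bucket}/*"],
            })
        return statements


def render_iam_policy(requests: list) -> str:
    """Collapse the internal declarations into one IAM policy document."""
    statements = []
    for req in requests:
        statements.extend(req.to_iam_statements())
    return json.dumps({"Version": "2012-10-17", "Statement": statements}, indent=2)


if __name__ == "__main__":
    # The app declares only what it needs; the security team owns the translation.
    print(render_iam_policy([BucketAccess("reports", read=True)]))
```

The point of the design is the one Bryan makes: the opaque part (IAM grammar) lives in one reviewed place, and everyone else programs against a small, transparent vocabulary.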
[0:04:56.0] DC: For my part, I think it’s important to understand, if you can, what the consumer of a particular framework or tool might need, right? And then just take it from there and figure out what rational constraints are. Rather than the opposite, which is frequently where people go and evaluate a set of rules as defined by some third-party company. Like, you look at CIS benchmarks and you look at a lot of this other tooling. I feel like a lot of people look at those as, these are the hard rules, we must comply with all of these things. Legally, in some cases, that’s the case. But frequently, I think they’re just kind of casting about for some semblance of a way to start defining constraint, and they go too far. They’re no longer taking into account what the consumers of that particular platform might need, right?

Kubernetes is a great example of this. If you look at the CIS spec for Kubernetes, or if you look at a lot of the talks that I’ve seen kind of around how to secure Kubernetes, we define best practices for security and a lot of them are incredibly restrictive, right? I think the problem there is that restriction comes at a cost of agility. You’re no longer able to use Kubernetes as a platform for developing microservices because you’ve provided so much constraint that it breaks the model, you know?

[0:06:12.4] NL: Okay. Let’s break this down again. I can think of, off the top of my head, three types of things people point to when I’m thinking about security. And spoiler alert, I am going to use some acronyms, but don’t worry about what the acronyms are, just understand they are security things. The first one I’ll bring up is FISMA, and then I’ll think about NIST, and the next one is CIS, like you brought up.
Really, the reason they’re so prevalent is because depending on where you are, whether you’re in a highly regulated place like a bank, or you’re working for the government, or you have some kind of audit concern, say HIPAA or something like that, these are the words that the auditors will use with you. There is good in those. People don’t like the CIS benchmarks because sometimes we don’t understand why they’re there, but for someone who is starting from nothing, those are actually great; there’s at least a great set of suggestions. The problem is you have to understand that they’re only suggestions, and they are trying to get you to a better place than you might need.

But the other side of this is that we should never start with NIST or CIS or FISMA. What we really should do is have our CISO, our Chief Security Officer, the person in charge of security, or even just the people who are in charge of making sure our stack is secure, be defining this. They should be taking what they know, whether it’s from the standards, and they should be building up this security posture, this security document, these rules that are built to protect whatever we’re trying to do. And then the developers and whoever else can operate within that, rather than taking everything literally.

[0:07:46.4] DC: Yeah, agreed. Another thing I’ve spent some time talking to people about, like when they start rationalizing how to implement these things, or even just think about the attack surface, or develop a threat model, or any of those things, right? One of the things that I think is important is the ability to define kind of what normal looks like, right? What normal access between applications or normal access of resources looks like. And to your point earlier, maybe providing some abstraction in front of a secure resource, such that you can actually just share that same abstraction across all the things that might try to consume that external resource, is a great example of the thing.
Defining what that normal access looks like is critical to our ability to constrain it, right? I think that frequently people don’t start there; they start with the other side. They’re saying, here are all the constraints, you need to tell me which ones are too tight. You need to tell me which ones to loosen up so that you can do your job. You need to tell me which application needs access to whichever application so that I can open the firewall for you. I’m like, we need to turn that on its head. We need the environments that are perhaps less secure so that we can actually define what normal looks like, and then take that definition and move it into a more secured state, perhaps by defining these across different environments, right?

[0:08:58.1] BM: A good example of that would be in larger organizations. Not every part of the organization does this, but there are environments running your application where there are really no rules applied. What we do with that is we turn on auditing in those environments. So, you have two applications, or a single application that talks to something, and you let that application run, and then after the application runs, you go take a look at the audit logs, and you determine at that point what a good profile of this application is.

Whenever it’s in production, you set up the security parameters, whether it be identity access or network, based on what you saw in auditing in your preproduction environment. That’s all it could run, because we tested it fully in our preproduction environment; it should not do any more than that. And that’s actually something – I’ve seen tools that will do it for AWS IAM, and I’m sure you can do it for anything else that creates an audit log. That’s a good way to get started.

[0:09:54.5] NL: It sounds like what we’re coming to is that the breakdown of security, or the way that security has impacted agility, is when people don’t take a rational look at their own use case.
Instead, they rely too much on the guidance of other people, essentially. Instead of using things like the CIS benchmarks or NIST or FISMA – that’s one where I knew the other two and I’m like, I don’t know this other one. If they follow them less as guidelines and more as hard-set rules, that’s when we get impacts on agility. Instead of like, “Hey, this is what my application needs,” like you’re saying, let’s go from there, what does this one look like, like Duffie was saying. I’m kind of curious, let’s flip that on its head a little bit. Are there examples of times when agility impacts security?

[0:10:39.7] BM: You want to move fast, and moving fast is counter to being secure?

[0:10:44.5] NL: Yes.

[0:10:46.0] DC: Yeah, literally every single time we run software. What it comes down to is developers are going to want to develop, and then security people are going to want to secure. And generally, I’m looking at it from a developer who has written security software that a lot of people have used; you guys know that. Really, there needs to be a conversation. It’s the same thing as we had this DevOps conversation for years, and then over the last couple of years, this whole DevSecOps conversation has been happening. We need to have this conversation because, from a security person’s point of view, you know, no access is great access. No data, because you can’t get owned if you don’t have any data going across the wire. You know what? Can’t get into that server if there are no ports opened. But practically, that doesn’t work, and what we find is that there is actually a failing on both sides to understand what the other person was optimizing for.

[0:11:41.2] BM: That’s actually where a lot of this comes from. I will offer up that the only default secure posture is no access to anything, and you should be working from that direction to where you want to be, rather than working from, what should we close down?
You should close down everything and then work with an allow list, rather than a block list.

[0:12:00.9] NL: Yeah, I agree with that model, but I think that there’s an important step that has to happen before that, and that’s, you know, the tooling or the wherewithal to define what the application looks like when it’s in a normal state or the running state. And if we can accomplish that, then I feel like we’re in a better position to define what that allow list looks like. And I think that one of the other challenges there, of course – let’s back up for a second. I have actually worked on a platform that supported many services, hundreds of services, right? Clearly, if I needed to define what normal looked like for a hundred services, or a thousand services, or 2,000 services, that’s going to be difficult in the way that people approach the problem, right? How do you define it for each individual service? I need to have some declaration of intent. I need the developer to engage here and tell me what they’re expecting, to set some assumptions about the application, like what it’s going to connect to, what those dependencies are, that sort of stuff.

And I also need tooling to verify that. I need to be able to kind of build up the whole thing so that I have some way of automatically, you know, maybe with oversight, defining what that security context looks like for this particular service on this particular platform. Trying to do it holistically is actually, I think, where we get into trouble, right? Obviously, we can’t scale the number of people that it takes to actually understand all of these individual services. We need to actually scale this stuff as a software problem instead.

[0:13:22.4] CC: With the cloud native architecture and infrastructure, I wonder if it makes it more restrictive, because let’s say these are running on Kubernetes, everything is running on Kubernetes. Things are more connected because it’s Kubernetes, right?
It’s this one huge thing that you’re running on, and Kubernetes makes it easier to have access to different nodes, and when the nodes talk to each other, of course, you have to define this connection. Still, it’s supposed to make it easy. I wonder if security, from the perspective of somebody needing to put a restriction in place, for example, makes it harder, or if it makes it easier to just delegate: you have this entire area here for you, and because your app is constrained to this space, or namespace, or this part, this node, then you can have as much access as you need. Is there any difference? Do you know what I mean? Does it make sense what I said?

[0:14:23.9] BM: It’s actually exactly the same thing as we had before. We need to make sure that applications have access to what they need and don’t have access to what they don’t need. Now, Kubernetes does make it easier, because you can have network policies and you can apply those, and they’re easier to manage than whatever network management tooling you have. Kubernetes also has pod security policies, which again actually consolidate this knowledge around: my pod should be able to do this, it should not be able to run as root, it should be able to do this and not be able to do that. It’s still the same practice, Carlisia, but the way that we can control it is now with a standard set of tools.

We still have not cracked the whole nut, because the whole thing of turning auditing on to understand, and then having great tools that can read audit logs from Kubernetes, just still isn’t there. Just to add one more last thing: before we were at VMware, when we were Heptio, we had a coworker who wrote basically dynamic audit, and that was probably one of the first steps that we would need to be able to employ this at scale. We are early, early, super early in our journey in getting this right. We just don’t have all the necessary tools yet. That’s why it’s hard, and that’s why people don’t do it.
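The network policies Bryan mentions are the standard tool here. As a rough sketch, a deny-by-default posture in a namespace, plus one declared allowance, might look like the following two manifests; the namespace and label names are hypothetical:

```yaml
# Deny all ingress and egress for every pod in the namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments        # hypothetical namespace
spec:
  podSelector: {}            # an empty selector matches every pod
  policyTypes:
    - Ingress
    - Egress
---
# Then declare the one thing this app is allowed to do: HTTPS egress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-https-egress
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: checkout          # hypothetical app label
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: TCP
          port: 443
```

One practical wrinkle: with egress denied by default, most workloads also need an explicit allowance for DNS (UDP port 53) to resolve names at all, which is exactly the kind of “what does normal look like” discovery the group is describing.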
[0:15:39.6] NL: I do think it is nice to have those primitives available to people who are making use of that platform, though, right? Because again, it kind of opens up that conversation, right? Around transparency. The goal being, if you understood the tools that were defining that constraint, perhaps you’d have access to view what the constraints are and understand if they’re actually rational or not for your applications.

When you’re trying to resolve, like, I have deployed my application in dev and it’s the wild west, there are no constraints anywhere, I can do anything within dev, right? When I’m trying to actually promote my application to staging, it gives you some platform around which you can actually say, “If you want to get to staging, I do have to enforce these things.” And I have a way, again, all still part of that same API. I still have that same user experience that I had when deploying or designing the application and getting it deployed. I can still look at it again and understand what constraints are being applied and make sure that they’re reasonable for my application. Does my application run? Does it have access to the network resources that it needs? If not, can I see where the gaps are, you know?

[0:16:38.6] DC: For anyone listening to this: Kubernetes doesn’t have all the documentation we need, and no one has actually written this book yet. But on Kubernetes.io, there are a couple of documents about security, and if we have shownotes, I will make sure those get included in our shownotes, because I think there are things that you should at least understand. You should at least understand what’s in a pod security policy. You should at least understand what’s in a network security policy. You should understand how roles and role bindings work. You should understand what you’re going to do for certificate management. How do you manage this certificate authority in Kubernetes? How do you actually work these things out?
This is where you should start before you do anything else really fancy. At least understand your landscape.

[0:17:22.7] CC: Jeffrey did a TGIK talk on secrets. I think, was that a series? There were a couple of them, Duffie?

[0:17:29.7] DC: Yeah, there were. I need to get back and do a little more, but yeah.

[0:17:33.4] BM: We should then add those to our shownotes too. Hopefully they actually exist, or I’m willing to see to it that they come into existence.

[0:17:40.3] CC: We are going to have shownotes, yes.

[0:17:44.0] NL: That is an interesting point, bringing up secrets and secret management, and also, like, secrets at rest. There are some tools that exist that we can use now in a cloud native world, at least in the container world. Things like Vault exist, and now with kubeadm you can rotate certificates, which is really nice. We are getting to a place where we have more tooling available, and I’m really happy about it. Because I remember using Kubernetes a year ago, and everyone was like, “Well, how do you secure a secret in Kubernetes?” And I’m like, “Well, it’s just base64 encoded. That’s not at all secure.”

[0:18:15.5] BM: I would give credit, Bitnami has been doing Sealed Secrets, and that’s been out for quite a while. But the problem is, how are you supposed to know about that, and how are you supposed to know if it’s a good standard? And then also, how are you supposed to benchmark against that? How do you know if your secrets are okay? We haven’t talked about the other side, which is response, or detection of issues. We’re just talking about starting out: what do you do?

[0:18:42.3] DC: That’s right.

[0:18:42.6] NL: It is tricky. As we’re saying, understanding all the avenues by which you could be impacted is kind of a daunting task. Let’s talk about the Target breach that occurred a few years ago.
If anybody doesn’t remember this, basically, Target had a huge credit card breach from their database. Basically, what happened is that their – if I recall properly – their OIDC token had a – not expired, but the audience for it was so broad that someone had hacked into one computer, essentially, like a register or something, and they were able to get the OIDC token from the local machine. The authentication audience for that token was so broad that they were able to access the database that had all of the credit card information in it.

This is one of those things that you don’t think about when you’re setting up security, when you’re just maybe getting started or something like that. What are the avenues of attack, right? You’d say, “OIDC is just a pure authentication mechanism, why would we need to concern ourselves with this?” And then, not understanding, kind of like what we were talking about earlier with the networking, what is the blast radius of something like this? So, I feel like this is a good example of how sometimes security can be really hard, and getting started can be really daunting.

[0:19:54.6] DC: Yeah, I agree. To Bryan’s point, it’s like, how do you test against this? How do you know that what you’ve defined is enough, right? We can define all of these constraints and we can even think that they’re pretty reasonable or rational, and the application may come up and operate, but how do you know? How can you verify that what you’ve done is enough?

And then also, remember, OIDC has its own foundations and flaws. You realize that it’s a very strong door, but it’s only a strong door. It doesn’t do things like stop you from walking around the wall that it’s protecting, or climbing over that wall. There’s a bit of trust, and when you get into things like the Target breach, you really have to understand the blast radius for anything that you’re going to do.
A good example would be if you’re using shared-key kind of things, or like public key. You have certificate authorities and you’re generating certificates. You should probably have multiple certificate authorities, and you can have basically a hierarchy of these. So, you could have the root one controlled by just a few people in security, and then each department has their own certificate authority. And then you should also have things like revocation. You should be able to say, “Hey, all of this is bad and it should all go away,” and you probably should have a certificate revocation list, which a lot of us don’t have, believe it or not, internally. Where if I actually kill one of our own certificates – a certificate was generated and I put it in my revocation list – it should not be served, and the clients that are accepting certificates from our services should see that. If we’re using client-side certificates, we should reject these instantly.

Really, what we need to do is stop looking at security as this one big thing, and we need to figure out what our blast radiuses are. A firecracker blowing up in my hand is going to hurt me, but Nick, it’s not going to hurt you, you know? If someone drops a huge nuclear bomb on the United States, or the west coast of the United States – I’m talking about myself right now – you’ve got to think about it like that. What’s the worst that can happen if this thing gets busted, or gets shared, or someone finds something that should not happen?

For every piece of data that you have that you consider secure or sensitive, you should be able to figure out what that means, and that is how you should structure your security posture, to me. That is why you’ll notice that a lot of companies, some of them, do run open within a contained zone. Within this contained zone you can talk to whomever you want. We don’t actually have to be secure here, because if we lose one, we’ve lost them all, so who cares?
So, we need to think about that, and how do we do that in Kubernetes? Well, we use things like namespaces first of all, and then we use things like network policies, and then we use things like pod security policies. We can lock access down to just namespaces if need be; you can only talk to pods in your namespace. And I am not telling you how to do this, but you need to figure it out, talking with your developers, talking to the security people. And if you are in security, you need to talk to your product management staff and your software engineering staff to figure out really how does this need to work?

So, you realize that security is fun, and we have all sorts of neat tools depending on what side you’re on. You know, if you are on the red team, you’re hacking in; if you’re on the blue team, you are saving things. We need to have these conversations, and tooling comes from these conversations, but we need to have these conversations first.

[0:23:11.0] DC: I feel like a little bit of a broken record on this one, but I am going to go back to chaos engineering again, because I feel like it is critical to stuff like this. It enables a culture in which you can explore the behavior of the applications themselves, but why not also use this model to explore different ways of accessing that information? Or coming up with theories about the way the system might be vulnerable based on a particular attack or a type of attack, right?

I think that this is actually one of the movements within our space that provides perhaps the most hope in this particular scenario, because a reasonable chaos engineering practice within an organization enables that ability to explore all of the things. You don’t have to be red team or blue team. You can just be somebody who understands this application well, and the question for the day is, “How can we attack this application?” Let’s come up with theories about the way that perhaps this application could be attacked.
Think about the problem differently. Instead of thinking about it as an access problem, think about it as the way that you extend trust to the other components within your particular distributed system: do they have access that they don’t need? Come up with a theory around being able to use some proxy component of another system to attack yet a third system. You know, start playing with those ideas and prove them out within your application. A culture that embraces that, I think, is going to be by far a more secure culture, because it lets developers and engineers explore these systems in ways that we don’t generally explore them.

[0:24:36.0] BM: Right. But also, if I could operate on myself, I would never need a doctor. And the reason I bring that up is because we use terms like chaos engineering – and this is no disrespect to you, Duffie – so don’t take this as a panacea, or this idea that it will make everything better, full stop. That is fine, it will make us better, but the little secret behind chaos engineering is that it is hard. It is hard to build these experiments, first of all. It is hard to collect results from these experiments. And then it is hard to extrapolate what you got out of the experiments to apply to whatever you are working on, to repeat it. What I would like to see is people in our space talking about how we can apply such techniques, whether it is giving us more words or giving us more software that we can employ. Because, I hate to say it, it is pretty chaotic in chaos engineering right now for Kubernetes.

Because look at the people out there who have done it well. You look at what Netflix has done pioneering this, and then you listen to what a company such as Gremlin is talking about, and it is all fine and dandy.
You need to realize that it is another piece of complexity that you have to own, and just like any other thing in the security world, you need to rationalize how much time you are going to spend on it; that is the bottom line. Because if I have a “Hello, World!” app, I don’t really care about network access to that. Unless it is a “Hello, World!” app running on the same subnet as something handling PCI data – then, you know, it is a different conversation.

[0:26:05.5] DC: Yeah, I agree, and I am certainly not trying to position it as a panacea, but what I am trying to describe is that having a culture that embraces that sort of thinking is going to enable us to be in a better position to secure these applications, or to handle a breach, or to deal with very hard to understand or resolve problems at scale, you know? Whether that is a number of connections per second, or whether that is a number of applications that we have horizontally scaled.

You know, being able to embrace that sort of a culture where we ask why, where we say, “Well, what if…”, actually embracing the idea of that curiosity that got you into this field. You know what I mean? So frequently our cultures are the opposite of that, right? It becomes a race to the finish, and in that race to the finish, lots of pieces fall off that we are not even aware of, you know? That is what I am highlighting here when I talk about it.

[0:26:56.5] NL: And so, it seems maybe the best solution to the dichotomy between security and agility is really just open conversation, in a way. People actually reaching across the aisle to talk to each other. So, if you are embracing this culture, as you are saying, Duffie, the security team should be having constant communication with the application team, instead of the team doing something wrong and the security team coming down and smacking their hand.
And being like, “Oh, you can’t do it this way because of our draconian rules,” right? These people are working together and almost playing together a little bit inside of their own environment to create a better environment. And I am sorry, I didn’t mean to cut you off there, Bryan.

[0:27:34.9] BM: Oh man, I thought it was fleeting, like all my thoughts. But more about what you are saying is, you know, it is not just more conversations, because we can still have conversations where I am talking about CIDRs and subnets and attack vectors and buffer overflows and things like that, but my developer is talking, “Well, I just need to be able to serve this data so accounting can do this.” That’s what happens a lot in security conversations. You have two groups of individuals who have wholly different goals, and part of that conversation needs to be aligning our jargon and then aligning on those goals.

But what happens with pretty much everything in the development world is we always bring our networking, our security, and our operations people in right at the end, right when we are ready to ship: “Hey, make this thing work.” And really, that is where a lot of our problems come from. Now, if security either could be or wanted to be involved at the beginning of a software project, when we are actually talking about what we are trying to do – we are trying to open up this service to talk to this, share this kind of data – security can be in there early saying, “Oh, you know, we are using this resource in our cloud provider, it doesn’t really matter what cloud provider, and we need to protect this. This data is sitting here at rest.” If we had those conversations earlier, it would be easier to engineer solutions that can hopefully be reused, so we don’t have to have that conversation in the future.

[0:29:02.5] CC: But then it goes back to the issue of agility, right?
Like Duffie was saying, how you can have, I guess, a development cluster which has much less restrictive rules, and then you move to a production environment – or maybe a staging environment, let’s say – where the proper restrictions are. And then you find out, “Oh whoops, there are a bunch of restrictions I didn’t deal with. I did move a lot faster because I didn’t have them, but now I have to deal with them.”

[0:29:29.5] DC: Yeah, I do think it is important to have a promotion model in which you are able to move toward a more secure deployment, right? Because, I guess, a parallel to this is, I have heard it said that you should develop your monolith first, and then, when you actually have the working prototype of what you’re trying to create, consider carefully whether it is time to break this thing up into a set of distinct services, right? And consider carefully also what the value of that might be.

I think that the reason that’s said is because it is easier. It is going to be a lower cognitive load with everything all right there in the same codebase. You understand how all of these pieces interconnect, and you can quickly develop or prototype what you are working on. Whereas if you are trying to develop these things as individual microservices first, it is harder to figure out where the line is, like where to divide all of the business logic.

I think this is also important when you are thinking about the security aspects of this, right? Being able to do a thing in which you are not constrained, defining all of these services in your application and the model for how they communicate without constraint, is important. And once you have that, once you actually understand what normal looks like from that set of applications, then enforce it, right?
If you are able to declare that intent, you are going to say, these are the ports these things listen on, these are the things that they are going to access, this is the way that they are going to go about accessing them. You know, if you can declare that intent, then that is a reasonable body of knowledge with which the security people can come along and say, “Okay, well, you have told us. You informed us. You have worked with us to tell us what your intent is. We are going to enforce that intent and see what falls out, and we can iterate there.”

[0:31:01.9] CC: Yeah, everything you said makes sense to me. Starting with: build the monolith first. I mean, when you start out, why would you abstract things that you don’t really – I mean, you might think you know, but you only really know in practice what you are going to need to abstract. So, don’t abstract things too early. I am a big fan of that idea. So yeah, start with the monolith, and then you figure out how to break it down based on what you need.

With security, I would imagine the same idea resonates with me. Don’t secure things that you don’t know just yet need securing, except the deal-breaker things. There are some things we know we need to secure from the beginning, like some types of production data we know we don’t want being accessed.

[0:31:51.9] BM: Right. But I will still reiterate that it is always deny by default, just remember that. Security is actually the opposite way. We want to make sure that we have the least amount of access, and even if it is harder for us, you always want to start with “allow TCP communication on port 443,” or UDP as well. That is what I would allow, rather than saying what to shut off. I would rather have it the way where we only allow that, and that also goes with the declarative nature of cloud native things that we like anyway.
We just say what we want and everything else doesn’t exist. [0:32:27.6] DC: I do want to clarify though, because I think you and I are the representatives of the dichotomy right at this moment, right? I feel like what you are saying is the constraint should be the normal: being able to drop all traffic, allow nothing, is normal, and then you have to declare intent to open anything up. And what I am saying is, frequently developers don’t know what normal looks like yet. They need to be able to explore what normal looks like by developing these patterns and then enforce them, right, which is turning the model on its head. And this is actually, I think, the kernel I am trying to get to in this conversation: there has to be a place where you can go play and learn what normal is, and then you can move into a world in which you can actually enforce what that normal looks like with reasonable constraint. But until you know what that is, until you have that opportunity to learn it, all we are doing here is restricting your ability to learn. We are adding friction to the process. [0:33:25.1] BM: Right, well, what I am trying to layer on top of this is that yes, I agree, but I also understand what a breach can do and what bad security can do. So I will say, “Yeah, go learn. Go play all you want – but not on software that will ever make it to production. Go learn these practices, but you are going to have to do it outside.” You are going to have a sandbox, and that sandbox is going to be disconnected from the world – I mean, from our networks – and you are going to have to learn, but you are not going to practice here. This is not where you learn how to do this. [0:33:56.8] NL: Exactly right, yeah. You don’t learn to ride a motorcycle on the street, you know? You learn to ride a motorcycle on the dirt, and then you can take those skills to the street later, you know? 
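The deny-by-default posture Bryan describes can be sketched as a tiny policy check: nothing passes unless an explicit allow rule matches. This is only an illustrative sketch – the rule set and port numbers are made up, not taken from any real cluster:

```python
# Deny by default: traffic is refused unless an explicit allow rule matches.
ALLOW_RULES = [
    {"proto": "tcp", "port": 443},  # the single allowance discussed above
]

def is_allowed(proto: str, port: int) -> bool:
    """Return True only when some allow rule matches; the default is deny."""
    return any(r["proto"] == proto and r["port"] == port for r in ALLOW_RULES)

print(is_allowed("tcp", 443))  # True: explicitly allowed
print(is_allowed("tcp", 80))   # False: never listed, so denied by default
```

In Kubernetes terms this is roughly what a default-deny NetworkPolicy plus one ingress rule expresses: the allows are declared, and everything else simply doesn’t exist.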
But yeah, I think we are in agreement: production is a place where we do have to enforce all of those things, and having some promotion model – in which you can come from a place where you learned it, to a place where you are beginning to enforce it, to a place where it is enforced – I think is also important. And I frequently describe this as development, staging and production, right? Staging is where you are going to hit the edges, because this is where you’re actually defining that constraint, and it has to be right before it can be promoted to production, right? And I feel like that middle ground is also important. [0:34:33.6] BM: And remember that production is any environment production can reach, and any environment that can reach production is production. That includes when we take data backup dumps from production, clean them up, and use them as data in our staging environment. If production can directly reach staging or vice versa, it is all production. That is your attack vector. That is also how someone is going to get in and steal your production data. [0:34:59.1] NL: That is absolutely right. Google actually makes an interesting – not a caveat to that, but a side point to that – where, if I understand the way that Google runs, they run everything in production, right? Like dev, staging and production are all the same environment. I am positing this more as a question, because I don’t know if any of us have the answer, but I wonder how they secure their infrastructure, their environment, well enough to allow people to play, to learn these things, and also to deploy production-level code all in the same area? That seems really interesting to me, and if I understood that, I probably would be making a lot more money. [0:35:32.6] BM: Well, it is simple really. There are huge people processes at Google that act as gatekeepers for a lot of this stuff. Now, I have never worked at Google. 
I have no inside knowledge of Google, nor have I talked to anyone who has given me this insight – this is all speculation, disclaimer over. But you can actually run a big cluster if you can prove that you have network and memory and CPU isolation between containers, which they can in certain cases, with certain things that can do this. What you can do is use your people processes and your approvals to make sure that software gets to where it needs to be. So, you can still play on the same clusters, but we have great handles on networking – you can’t talk to these networks, or you can’t use this much network bandwidth. We have great handles on CPU – this CPU handles PCI data, and we will not allow a workload on it unless it is PCI. Once you have that in place, you do have a lot more flexibility. But to do that, you will have to have some pretty complex approval structures, and then software to back that up, so the burden of it is not on the normal developer – and that is actually what Google has done. They have so many tools, and so many processes, where if you use this tool it actually does the process for you. You don’t have to think about it. And that is what we want for our developers. We want them to be able to use our networking libraries, or, whenever they are building their containers or their Kubernetes manifests, use our tools, and we will make sure – based on either inspection or just explicit settings – that we build something that is as secure as we can given the inputs. And what I am saying is hard – capital-H hard – and I am really just pitching where we want to be, and a lot of us are not there. You know, most people are not there. [0:37:21.9] NL: Yeah, it would be nice if, like we said earlier, we had more tooling around security and the processes and all of these things. One thing I feel people seem to balk at is developing it for their own use case, right? 
It seems like people want an overarching tool to solve all the use cases in the world. And I think, with the rise of cloud native applications and things like container orchestration, I would like to see people developing more for themselves, around their own processes, around Kubernetes and things like that. I want to see more perspective into how people are solving their security problems, instead of just relying on, let’s say, HashiCorp or Aqua Sec to provide all the answers. I want to see more answers about what people are doing. [0:38:06.5] BM: Oh, it is because tools like Vault are hard to write, and hard to maintain, and hard to keep correct. Think about the other large competitors to Vault that are out there, tools like CyberArk: I have a secret and I want to make sure only certain people can get it. That is a very difficult tool to build, but the HashiCorp advantage here is that they have made tools that speak to people who write software, or people who understand ops, not just as a checkbox. If you are using Vault, it is not hard to get a secret out if you have the right credentials. With other tools it is super hard to get the secret out even if you have the right credentials, because they have a weird API, or they just make it very hard for you, or they expect you to go click on some GUI somewhere. And that is what we need to do. We need to have better programming interfaces and better operator interfaces – which extends to better interfaces for security people – for using these tools. You know, I don’t know how well this works in practice. 
But the Jeff Bezos idea of how teams at AWS, or Amazon, perform – you know, teams communicating through APIs – I am not saying that you shouldn’t talk, but we should definitely make sure that the APIs between teams, between the teams who own security stuff and the teams who are writing developer stuff, let us talk with the same level of fidelity that we can have in an in-person conversation. We should be able to do that through our software as well, whether that be asking for ports, or asking for resources, or just talking about the problem that we have. That is my thought-leadering answer to this. This is “Bryan wants to be a VP of something one day,” and that is the answer I am giving. I’m going to be the CIO – that is my CIO answer. [0:39:43.8] DC: I like it. So cool. [0:39:45.5] BM: Is there anything else on this subject that we wanted to hit? [0:39:48.5] NL: No, I think we have actually touched on pretty much everything. We got a lot out of this, and I am always impressed with the direction that we go. I did not expect us to go down this route, and I was very pleased with the discussion we have had so far. [0:39:59.6] DC: Me too. I think if we were going to explore anything else, it would be more of that state we were talking about, where we need more feedback loops. We need developers to talk to security people. We need security people to talk to developers. We need to have some way of actually pushing that feedback loop, much like some of the other cultural changes we have seen in our industry are trying to allow for better feedback loops in other spaces. 
And you’ve brought up DevSecOps, which is another move to try and open up that feedback loop. But the problem, I think, is still going to be that even if we improve that feedback loop, we are at an age where – especially if you end up in some of the larger organizations – there are too many applications to solve this problem for, and I don’t know yet how to address this problem in that context, right? If you are a 20-person, 30-person security team and your responsibility is to secure a platform that is running a number of Kubernetes clusters, a number of vSphere clusters, a number of cloud provider implementations – whether that would be AWS or GCP – that is a set of problems that is very difficult. I am not sure that improving the feedback loop really solves it. I know that it helps, but I definitely, you know, I have empathy for those folks, for sure. [0:41:13.0] CC: Security is not my forte at all, because whenever I am developing, I have a narrow need. You know, I have to access a cluster. I have to access a machine, or I have to be able to access the database. And it is usually a no-brainer. But I get a lot of the issues that were brought up. As a builder of software, I have empathy for people who use software, who consume software – mine and others’ – and how can they have any visibility as far as security goes? For example, in the world of cloud native, let’s say you are using Kubernetes, I sort of start thinking, “Well, shouldn’t there be a scanner that just lets me declare?” – I think I am starting a new episode right now – shouldn’t there be a scanner that lets me declare, for example, that this node can only access this set of nodes, like a graph? You just declare it, and then you run it periodically, and you make sure it holds. Of course, this goes all the way down: part of an app can only access part of the database. It can get very granular, but maybe at a very high level – I mean, how hard can this be? 
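A toy version of the declarative scanner Carlisia is imagining could hold the declared access graph and periodically diff it against observed traffic. All the component names below are made up for illustration:

```python
# Declared intent: which components may talk to which.
DECLARED = {
    "frontend": {"api"},
    "api": {"db"},
    "db": set(),
}

def violations(observed_edges):
    """Diff observed (src, dst) edges against the declared graph."""
    return [(src, dst)
            for src, dst in observed_edges
            if dst not in DECLARED.get(src, set())]

# Run periodically: the frontend talking straight to the db is flagged.
print(violations([("frontend", "api"), ("frontend", "db")]))  # [('frontend', 'db')]
```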
For example, this pod can only access those pods, but this pod cannot access this namespace – and you just keep checking: what if the namespaces change, or the permissions change? Or, for example, you would allow that only these users can do a backup, because they are the same users who will have access to the restore, so they have access to all the data, you know what I mean? Just keep checking that what is in place only changes when you want it to. [0:42:48.9] BM: So, I mean, I know we are at the end of this call and I don’t want to start a whole new conversation, but this is actually why there are applications out there like Istio and Linkerd. This is why people want service meshes: they can turn off all network access and then just use the service mesh to do the communication, and they can make sure that it is encrypted and authenticated on both sides. That is why this space operates the way it does. [0:43:15.1] CC: We’re definitely going to have an episode – or multiple – on service mesh, but we are at the top of the hour. Nick, do your thing. [0:43:23.8] NL: All right, well, thank you so much for joining us on another interesting discussion of The Podlets Podcast. I am Nicholas Lane. Duffie, any final thoughts? [0:43:32.9] DC: There is a whole lot to discuss. I really enjoyed our conversations today. Thank you, everybody. [0:43:36.5] NL: And Bryan? [0:43:37.4] BM: Oh, it was good being here. Now it is lunch time. [0:43:41.1] NL: And Carlisia. [0:43:42.9] CC: I love learning from you all, thank you. Glad to be here. [0:43:46.2] NL: Totally agree. Thank you again for joining us and we’ll see you next time. Bye. [0:43:51.0] CC: Bye. [0:43:52.1] DC: Bye. [0:43:52.6] BM: Bye. [END OF EPISODE] [0:43:54.7] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. 
[END]

The Podlets - A Cloud Native Podcast
Stateful and Stateless Workloads (Ep 9)


Dec 23, 2019 • 43:34


This week on The Podlets Cloud Native Podcast we have Josh, Carlisia, Duffie, and Nick on the show, and we are also happy to be joined by a newcomer, Bryan Liles, who is a senior staff engineer at VMware! The purpose of today’s show is coming to a deeper understanding of the meaning of ‘stateful’ versus ‘stateless’ apps, and how they relate to the cloud native environment. We cover some definitions of ‘state’ initially and then move to consider how ideas of data persistence and coordination across apps complicate or elucidate understandings of ‘stateful’ and ‘stateless’. We then think about the challenging practice of running databases within Kubernetes clusters, which effectively results in an ephemeral system becoming stateful. You’ll then hear some clarifications of the meaning of operators and controllers, the role they play in mediating and regulating states, and also how important they are in a rapidly evolving but skills-scarce environment. Another important theme in this conversation is the CAP theorem, or the impossibility of having consistency, availability and partition tolerance all at once, and the way different databases allow for different combinations of two out of the three. We then move on to chat about the fundamental connection between workloads and state, and end off with a quick consideration of how ideas of stateful and stateless play out in the context of networks. Today’s show is a real deep dive offering perspectives from some of the most knowledgeable in the cloud native space, so make sure to tune in! 
Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues

Hosts:
• Carlisia Campos
• Duffie Cooley
• Bryan Liles
• Josh Rosso
• Nicholas Lane

Key Points From This Episode:
• What ‘stateful’ means in comparison to ‘stateless’.
• Understanding ‘state’ as a term referring to data which must persist.
• Examples of stateful apps such as databases or apps that revolve around databases.
• The idea that ‘persistence’ is debatable, which then problematizes the definition of ‘state’.
• Considerations of the push for cloud native to run stateless apps.
• How inter-app coordination relates to definitions of stateful and stateless applications.
• Considering stateful data as data outside of a stateless cloud native environment.
• Why it is challenging to run databases in Kubernetes clusters.
• The role of operators in running stateful databases in clusters.
• Understanding CRDs and controllers, and how they relate to operators.
• Controllers mediate between actual and desired states.
• Operators are codified system administrators.
• The importance of operators as app numbers grow in a skills-scarce environment.
• Mechanisms around stateful apps are important because they ensure data integrity.
• The CAP theorem: the impossibility of consistency, availability, and partition tolerance.
• Why different databases allow for different iterations of the CAP theorem.
• When partition tolerance can and can’t get sacrificed.
• Recommendations on when to run stateful or stateless apps through Kubernetes.
• The importance of considering models when thinking about how to run a stateful app.
• Varying definitions of workloads.
• Pods can run multiple workloads.
• Workloads create states, so you can’t have one without the other.
• The term ‘workloads’ can refer to multiple processes running at once.
• Why the ephemerality of Kubernetes systems makes it hard to run stateful applications. 
• Ideas of stateful and stateless concerning networks.
• The shift from server to browser in hosting stateful sessions.

Quotes:
“When I started envisioning this world of stateless apps, to me it was like, ‘Why do we even call them apps? Why don’t we just call them a process?’” — @carlisia [0:02:53]
“‘State’ really is just that data which must persist.” — @joshrosso [0:04:26]
“From the best that I can surmise, the operator pattern is the combination of a CRD plus a controller that will operate on events from the Kubernetes API based on that CRD’s configuration.” — @bryanl [0:17:00]
“Once again, don’t let developers name them anything.” — @bryanl [0:17:35]
“Data integrity is so important” — @apinick [0:22:31]
“You have to really be careful about the different models that you’re evaluating when trying to think about how to manage a stateful application like a database.” — @mauilion [0:31:34]

Links Mentioned in Today’s Episode:
• KubeCon+CloudNativeCon — https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019/
• Google Spanner — https://cloud.google.com/spanner/
• CockroachDB — https://www.cockroachlabs.com/
• CoreOS — https://coreos.com/
• Red Hat — https://www.redhat.com/en
• Metacontroller — https://metacontroller.app/
• Brandon Philips — https://www.redhat.com/en/blog/authors/brandon-phillips
• MySQL — https://www.mysql.com/

Transcript:

EPISODE 009

[INTRODUCTION]

[0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you.

[INTERVIEW]

[00:00:41] JR: All right! Hello, everybody, and welcome to episode 9 of The Podlets Podcast. 
Today we are going to be discussing the concept of stateful and stateless and what that means in this crazy cloud native landscape that we all work in. I am Josh Rosso. Joining me today is Carlisia. [00:00:59] CC: Hi, everybody. [00:01:01] JR: We also have Duffie. [00:01:03] D: Hey, everybody. [00:01:04] JR: Nicholas. [00:01:05] NL: Yo! [00:01:07] JR: And a newcomer to the podcast, we also have Bryan. Bryan, you want to give us a little intro about yourself? [00:01:12] BL: Hi! I’m Bryan. I work at VMware. I do lots of community stuff, including chairing KubeCon+CloudNativeCon. [00:01:22] JR: Awesome! Cool. All right. We’ve got a pretty good cast this week. So let’s dive right into it. I think one of the first things we’ve been talking a bit about is the concept of what makes an application stateful, and of course, in reverse, what makes an application stateless? Maybe we could try to start by discerning those two – maybe starting with stateless, if that makes sense? Does someone want to take that on? [00:01:45] CC: Well, I’m going to jump right in. I have always been a developer, as opposed to some of you, or all of you, who have system admin backgrounds. The first time that I heard the term stateless app, I was like, “What?” That wasn’t recent, okay? It was a long time ago, but it was a knot in my head. Why would you have a stateless app? If you have an app, you’re going to need state. I couldn’t imagine what that was. But of course it makes a lot of sense now. That was also when we were more in the monolithic world. [00:02:18] BM: Actually, that’s a good point. Before you go into that, it’s a great point. Whenever we start developing apps, we think of an application. An application does everything. It takes input and it does stuff and it gives output. But now in this new world where we have lots of apps, big apps, small apps, we start finding that there are apps that only talk and coordinate with other apps. They don’t do anything else. 
They don’t save any data. They don’t do anything. That’s where we get into this thing called stateless apps – apps that don’t have any type of data that they store locally. [00:02:53] CC: Yeah. That’s what I envision in my head. You said it brilliantly, Bryan. It’s almost like a process. When I started envisioning this world of stateless apps, to me it was like, “Why do we even call them apps? Why don’t we just call them a process?” They’re just shifting data back and forth, but they’re not – to me, at the beginning, apps were always stateful. App and state went together. [00:03:17] D: I think, frequently, people think of applications that have only locally relevant stuff – that is actually not going to persist to disk, but maybe held in memory, or maybe only relevant to the type of connection that’s coming through that application – also as stateless, which is interesting, because there’s still some state there, but the premise is that you could lose that state and not lose the functionality of that code. [00:03:42] NL: Something that we might want to dive into really quickly when talking about stateless and stateful apps: what do we mean by the word state? When I first learned about these things, that was what always screwed me up. I’m like, “What do you mean state? Like Washington? Yeah. We got it over here.” [00:03:57] JR: Oh! State. That’s that word. State is one of those words that we use to sound smarter than we actually are 95% of the time, and that’s a number I just made up. When people are talking about state, they mean databases. Yeah. But there are other types of state as well. If you maintain a local cache that needs to be persistent, or if you have local files that you’re dealing with, like you’re opening files – that’s still state. State really is just data that must persist. [00:04:32] D: I agree with that definition. 
I think that state, whether persisted to memory or persisted to disk or persisted to some external system, is still what we refer to as state. [00:04:41] JR: All right. Makes sense, and sounds about like what I got from it as well. [00:04:45] CC: All right. So now we have this world where we talk about stateless apps and stateful apps. Are there even stateful apps? Do we call a database an app? If we have a distributed system where we have one stateless app over here, another stateless app over there, and then we have the database that’s connected to the two of them, are we calling the database a stateful app, or is that the whole thing? What do we call this? [00:05:15] NL: Yeah. The database is very much an app with state. I’m very much – [00:05:19] D: That’s a close definition. Yeah. [00:05:21] NL: Yeah. Literally, it’s the epitome of a stateful app. But then you also have these apps that talk to databases as well, and they might have local data – data where they start a transaction and then complete it, or they have a long distributed-type transaction. Any apps that revolve around a database, if they store local data, whether it’s within a transaction or something else, are still stateful apps. [00:05:46] D: Yup. I think an app where you can input data, or modify state that has to be persisted in some way, is a stateful app, even though I do think it’s confusing because of what – as I said before, I think that there are a bunch of applications that we think of differently. Not everybody considers Spark jobs to be stateful. Spark jobs, for example, are something that would bring data in, mutate that data in some way, produce some output and go away. The definition there is that Spark would generally push the resulting data into some other external system. 
It’s interesting, because in that model, Spark is not considered to be a stateful app: the Spark job could fail, go away, get recreated, pick up the pieces where it left off, or just redo that work until all of the work is done. In many cases, people consider that to be a stateless application. That, I think, is the crux – in my opinion, the crux of the confusion around what a stateful and stateless application is. I think it’s more about what you mean by persistence and how that is actually realized in your application. If you’re pushing your state to an external database, is your application still stateful? [00:06:58] NL: I think it’s a good question. Or, if you are gathering data from an external source and mutating it in some way, but you don’t need data to be present when you start up, is that a stateful app or a stateless app? Even though you are taking in data, modifying it and checking it, sending it out to some other mechanism or serving it in your own way – if that app gets killed and it comes back and it’s able to recover, is it stateful or stateless? That’s a bit of a gray area, I think. [00:07:26] JR: Yeah. I feel like for a lot of the customers I work with, if the application can get killed, even if it has some type of local state, they still refer to it as stateless, usually, when we talk about it, because they think, “I can kind of restart this application and I’m not too worried about losing whatever it may have had.” Let’s say cache, for simplicity, right? I think that kind of leads us into an interesting question. We’ve talked a lot on this podcast about cloud native infrastructure and cloud native applications, and it seems like since the inception of cloud native, there’s always been this push that a stateless app is the best candidate to run, or the easiest candidate to run. I’m just curious if we could dive into that for a moment. 
Why, in the cloud native infrastructure area, has there always been this push for running stateless applications? Why is it simpler? Those kinds of things. [00:08:15] BL: Before we dive into that, we have to realize – and this is just a problem of our whole ecosystem, this whole cloud native – we’re very hand-wavy in our descriptions of things. There are a lot of ambiguous descriptions, and state is one of those. Just keep in mind that when we’re talking today, we’re really just talking about these things that store data, and that’s the state. Keep that in mind as you’re listening to this. But when it comes to distributed systems in general, the easiest system is a system that doesn’t need coordination with any other system. If it happens to die, that’s okay – we can just restart it. People like to start there. It’s the easiest thing to start with. [00:08:58] NL: Yeah, that was basically what I was going to say. If your application needs to tie into other applications, it becomes significantly more complicated to implement, at least the first time, in your system. These small applications that don’t care about anybody else – they just take in data, or not, and do whatever – those are super easy to start with, because they’re just like, “Here. Start this up. Whatever happens, happens.” [00:09:21] CC: That could be a good boundary to define – I don’t want to jump back too far, but to define what a stateless app is when it is part of a system: ask what it depends on to come back up. Does it depend on something else that has state? [00:09:39] BL: I’ll give you a good example of a stateless app that we use every day, every single one of us – well, none of us on this call right now – when you search Google. 
You go to google.com, you go to the search bar and you type in a search. What’s happening is there is a service at the beginning that collects that search, and it federates the search over many different – probably clusters of – computers, so they can actually do the search concurrently. That app that coordinates all that work is most likely a stateless app. All it does is split the search up and allow more CPUs to do the work. If that app goes away, it’s probably not a problem – you probably have 10 more of them. That’s what I consider stateless. It doesn’t really own any of the data. It’s the coordinator. [00:10:25] CC: Yeah. If it goes down, it comes back up. It doesn’t need to reset itself to the state where it was before. It can truly be considered stateless because it can just say, “Okay. I reset. I’m starting from the beginning, from this clear state.” [00:10:43] BL: Yes. That’s a good summary of that. [00:10:45] CC: Because another way to think about stateless – what makes an app a stateful app? Does it have to be combined, or deployed and shipped together, with the part that maintains the state? That’s a more clear-cut definition. Then that app is definitely a stateful app. [00:11:05] D: What we frequently talk about in the cloud native space is that you know you have a stateless app if you can just create 20 of them and not have to worry about the coordination of them. They are all workers. They are all going to take input. You could spread the load across those 20 in an identical way and not worry about which one you landed on. That’s a stateless application. A stateful application is a very different thing. You have to have some coordination. You have to say how many databases you can have on a backend. Because you’re persisting data there, you have to be really careful that you only write to the master database, or to the writing database, and you can read off any other members of that database cluster – that sort of stuff. 
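Duffie’s 20-identical-workers test can be shown in a few lines: a stateless handler gives the same answer no matter which replica you hit, while a naive stateful one diverges because each replica carries local data. This is a contrived sketch, not production code:

```python
def stateless_handler(request: str) -> str:
    # No local data: the output depends only on the input,
    # so any of 20 replicas is interchangeable.
    return request.upper()

class StatefulHandler:
    def __init__(self):
        self.count = 0  # local state, lost if this replica dies

    def handle(self, request: str) -> str:
        self.count += 1
        return f"{request}:{self.count}"

# Any stateless replica gives the same answer.
assert all(stateless_handler("hi") == "HI" for _ in range(20))

# With state, the answer depends on which replica you landed on.
r0, r1 = StatefulHandler(), StatefulHandler()
r0.handle("hi")
print(r0.handle("hi"))  # hi:2 -- this replica remembers the first call
print(r1.handle("hi"))  # hi:1 -- that one never saw it
```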
[00:11:44] CC: It might seem that we are going very deep into this differentiation between stateful and stateless, but this is so important, because clusters are usually designed to be ephemeral. Ephemeral means, obviously, that the nodes die down and are brought back up, and you should worry as little as possible about the state of things. Then, going back to what Josh was saying, when we are in this cloud native world, usually we are talking about stateless apps, stateless workloads – and we’re going to talk about what workload means later. But if that’s the case, where are the stateful apps? It’s like we have this vision that the stateful apps live outside the cloud native world. How does that work? But it’s supposed to work. [00:12:36] BL: Yup. This is the question that keeps a lot of people employed: making sure my state is available when I need it. You know what? I’m not even going to use that word state. Making sure my data is available wherever I need it and when I need it. I don’t want to go too deep right now, but this is actually a huge problem in the Kubernetes community in general, and we see it because there’s been lots of advice given: “Don’t run things like databases in your clusters.” This is why we see people taking the ideas of Google Spanner – like CockroachDB – and actually going through a lot of work to make sure that you can run databases in Kubernetes clusters. The interesting piece about this is that we’re actually to the point where we can run these types of workloads in our clusters, but with a caveat, big star at the end: it’s very difficult and you have to know what you’re doing. [00:13:34] JR: Yeah. I want to dovetail on that, Bryan, because it’s something that we see all the time. I feel like when we first started setting up – let’s call them clusters, but in our case it was Kubernetes, right – we always saw that data layer being delegated to, if you’re in Amazon, some service that they hosted, and so on. 
But now I think more and more of the customers that at least I’m seeing. I’m sure Nicholas and Duffie too, they’re interested in doing exactly what you just described. Cockroach is an example I literally just worked with recently, and it’s just interesting how much more thoughtful they have to be about their cluster operations. Going back to what you said Carlisia, it’s not as easy as just like trashing a cluster and instantiating a new one anymore, like they’re used to. They need to be more thoughtful about keeping that data integrity intact through things like upgrades and disaster recover. [00:14:18] D: Another interesting point kind to your point, Brian, is that like, frequently, people are starting to have conversations and concerns around data gravity, which means that I have a whole bunch of data that I need to work with, like to a Spark job, which I mentioned earlier. I need to basically put my compute where that data is. The way that I store that data inside the cluster and use Kubernetes to manage it or whether I just have to make sure that I have some way of bringing up compute workloads close to that data. It’s actually kind of introducing a whole new layer to this whole thing. [00:14:48] BL: Yeah! Whole new layer of work and a whole new layer of complexity, because that’s actually – The crux of all this is like where we slide the complexity too, but this is interesting, and I don’t want to go too far to this one definitely. This is why we’re seeing more people creating operators around managing data. I’ve seen operators who are bringing databases up inside of Kubernetes. I’ve seen operators that actually can bring up resources outside of Kubernetes using the Kubernetes API. The interesting thing about this is that I looked at both solutions and I said, “I still don’t know what the answer is,” and that’s great. That means that we have a lot to learn about the problem, and at least we have some paths for it. 
[00:15:29] NL: Actually, that kind of reminds me of the first time I ever heard the word stateful or stateless – I’m an infrastructure guy. Was around the discussion of operators, which there’s only a couple of years ago when operators were first introduced at CoreOS and some people were like, “Oh! Well, this is how you now operate a stateful mechanism inside of Kubernetes. This is the way forward that we want to propose.” I was just like, “Cool! What is that? What’s state? What do you mean stateful and stateless?” I had no idea. Josh, you were there. You’re like, “Your frontend doesn’t care about state and your backend does.” I’m like, “Does it? I don’t know. I’m not a developer.” [00:16:10] JR: Let’s talk about exactly that, because I think these patterns we’re starting to see are coming out of the needs that we’re all talking about, right? We’ve seen at least in the Kubernetes community a lot of push for these different constructs, like something called a stateful [inaudible 00:16:21], which isn’t that important right now, but then also like an operator. Maybe we can start by defining what is an operator? What is that pattern and why does it relate to stateful apps? [00:16:31] CC: I think that would be great. I am not clear what an operator is. I know there’s going to be a controller involved. I know it’s not a CRD. I am not clear on that at all, because I only work with CRDs and we don’t define – like the project I worked on with Velero, we don’t categorize it as an operator. I guess an operator uses specific framework that exists out there. Is it a Kubernetes library? I have no idea. [00:16:56] BL: We did it to ourselves again. We’re all doing these to ourselves. From the best that I can surmise, the operator pattern is the combination of a CRD plus a controller that will operate on events from the Kubernetes API based on that CRD’s configuration. That’s what an operator is. [00:17:17] NL: That’s exactly right. 
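Brian's definition — an operator is a CRD plus a controller that acts on Kubernetes API events for that custom resource — can be shown as a toy, with no real Kubernetes client involved. The resource type, event shape, and actions below are all invented for illustration:

```go
package main

import "fmt"

// BackupSpec plays the role of a CRD's spec: desired state declared
// by a user as a custom resource.
type BackupSpec struct {
	Schedule string
	Target   string
}

// event mimics a watch notification from the API server.
type event struct {
	kind string // "ADDED", "MODIFIED", "DELETED"
	obj  BackupSpec
}

// handle is the controller half of the operator: it reacts to events
// for its custom resource. A real controller would create Jobs or
// snapshots here instead of returning strings.
func handle(e event) string {
	switch e.kind {
	case "ADDED":
		return "schedule backup of " + e.obj.Target + " at " + e.obj.Schedule
	case "DELETED":
		return "stop backups of " + e.obj.Target
	default:
		return "resync " + e.obj.Target
	}
}

func main() {
	e := event{kind: "ADDED", obj: BackupSpec{Schedule: "0 2 * * *", Target: "etcd"}}
	fmt.Println(handle(e))
}
```

The CRD supplies the vocabulary (the spec), and the controller supplies the behavior — neither half alone is an operator, which is the coupling Brian is pointing at.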
[00:17:18] BL: To conflate this, Red Hat created the operator SDK, and then you have [inaudible 00:17:23] and you have a Metacontroller, which can help you build operators. Then we actually sometimes conflate and call CRDs operators, and that’s pretty confusing for everyone. Once again, don’t let developers name anything. [00:17:41] CC: Wait. So let’s back up a little. Okay. There is an actual library that’s called an operator. [00:17:46] BL: Yes. There’s an operator SDK. [00:17:47] CC: Referred to as an operator. I heard that. Okay. Great. But let me back up a little because – [00:17:49] D: The word operator can [00:17:50] CC: Because if you are developing an app for Kubernetes, if you’re extending Kubernetes, you are – Okay, you might not use CRDs, but if you are using CRDs, you need a controller, right? Because how will you do actions? Then every app that has a CRD – because the alternative to having CRDs is just using the API directly without creating CRDs to reflect to resources. If you’re creating CRDs to reflect to resources, you need controllers. All of those apps, they have CRDs, are operators. [00:18:24] D: Yip [inaudible 00:18:25] is an operator. [00:18:26] CC: [inaudible 00:18:26] not an operator. How can you extend Kubernetes and not be qualified [inaudible 00:18:31] operator? [00:18:32] BL: Well, there’s a way. There is a way. You can actually just create a CRD and use a CRD for data storage, you know, store states, and you can actually query the Kubernetes API for that information. You don’t need a controller, but we couple them with controllers a lot to perform action based on that state we’ve saved to etcd. [00:18:50] CC: Duffie. [00:18:51] D: I want to back up just for a moment and talk about the controller pattern and what it is and then go from there to operators, because I think it makes it easier to get it in your head. 
A control pattern is effectively a way to understand desired state and real state and provide some logic or business code that will allow you to converge those two states, your actual state and your desired state. This is a pattern that we see used in almost everything within a distributed system. It’s like within Kubernetes, within most of the kind of more interesting systems that are out there. This control pattern describes a pretty good way of actually managing application flow across distributed systems. Now, operators, when they were initially introduced, we were talking about that this is a slightly different thing. Operators, when we introduced the idea, came more from like the operational burden of these stateful applications, things like databases and those sorts of stuff. With the database, etcd for example, you have a whole bunch of operational and runtime concerns around managing the lifecycle of that system. How do I add a new member to the cluster? What do I do when a member dies? How do I take action? Right now, that’s somebody like myself waking up at 2 in the morning and working through a run book to basically make sure that that service remains operational through the night. But the idea of an operator was to take that control pattern that we described earlier and make it wake up at 2 in the morning to fix this stuff. We’re going to actually codify the operational knowledge of managing the burden of these stateful applications so that we don’t have to wake up at 2 in the morning and do it anymore. Nobody wants to do that. [00:20:32] BL: Yeah. That makes sense. Remember back at KubeCon years ago, I know it was the one in Seattle where Brandon Philips was on stage talking about operators. He basically was saying if we think about SysOp, system operators, it was a way to basically automate or capture the knowledge of our system administrators in scripts or in a process or in code a la operators.
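The converge-two-states idea Duffie describes reduces to a very small loop. This is a toy reconcile function, not the real controller-runtime API: it compares a desired replica count to the actual count and takes one step toward the goal on each pass, which is the shape a Kubernetes controller's sync loop has.

```go
package main

import "fmt"

// Cluster is a stand-in for real infrastructure state.
type Cluster struct{ replicas int }

// reconcile nudges actual state one step toward desired state. A real
// controller does the same thing, except each step is an API call
// (create a pod, delete a pod) rather than a counter change.
func reconcile(desired int, c *Cluster) {
	switch {
	case c.replicas < desired:
		c.replicas++ // e.g. create a pod
	case c.replicas > desired:
		c.replicas-- // e.g. delete a pod
	}
}

func main() {
	c := &Cluster{replicas: 1}
	for c.replicas != 3 {
		reconcile(3, c)
	}
	fmt.Println(c.replicas) // converged on desired state
}
```

An operator is this same loop with domain knowledge baked into the steps — "a member died, rejoin it to the database cluster" instead of "replica count is low, add one" — which is the 2 a.m. run book codified.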
[00:20:57] D: The last part that I’ll add to this thing, which I think is actually what really describes the value of this idea to me is that there are only so many people on the planet that do what the people in this blog post do. Maybe you’re one of them that listen to this podcast. People who are operating software or operating infrastructure at scale, there just aren’t that many of us on the planet. So as we add more applications, as more people adopt the cloud native regime or start coming to a place where they can crank out more applications more quickly, we’re going to have to get to a place where we are able to automate the burden of managing those applications, because there just aren’t enough of us to be able to support the load that is coming. There just aren’t enough people on the planet that do this to be able to support that. That’s the thing that excites me most about the operator pattern, is that it gives us a place to start. It gives us a place to actually start thinking about managing that burden over time, because if we don’t start changing the way we think about managing that burden, we’re going to run out of people. We’re not going to be able to do it. [00:22:05] NL: Yeah. It’s interesting. With stateful apps, we keep kind of bringing them – coming back to stateful apps, because stateful apps are hard and stateless apps are easy, and we’ve created all these mechanisms around operating things with state because of how just complicated it is to make sure that your data is ready, accessible and has integrity. That’s the big one that I keep not thinking about as a SysOps person coming into the Dev world. Data integrity is so important and making sure that your data is exactly what it needs to be and was the last time you checked it, is super important. It’s only something I’m really starting to grasp. 
That’s why I was like these things, like operators and all these mechanisms that we keep creating and recreating and recreating keep coming about, because making sure that your stateful apps have the right data at the right time is so important. [00:22:55] BL: Since you brought this up, and we just talked about why state is so hard, I want to introduce the new term to this conversation, the whole CAP theorem, where data would typically be – in a distributed system at least, your data will be consistent or your data can be available, or if your distributed system falls into multiple parts, you can have partition tolerance. This is one of those computer science things where you can actually pick two. You can have it be available and have partition tolerance, but your data won’t be consistent, or you can have consistency and availability, but you won’t have partition tolerance. If your cluster splits into two for some reason, the data will be bad. This is why it’s hard, this is why people have written basically lots of PhD dissertations on this subject, and this is why we are talking about this here today, is because managing state, and particularly managing distributed state, is actually a very, very hard problem. But there’s software out there that will help us, and Kubernetes is definitely part of that and stateful sets are definitely part of that as well. [00:24:05] JR: I was just going to say on those three points, consistency, availability and partition tolerance. Obviously, we’d want all three if we could have them. Is there one that we most commonly tradeoff and give up or does it go case-by-case? [00:24:17] BL: Actually, it’s been proven. You can’t have all three. It’s literally impossible. It depends. If you have a MySQL server and you’re using MySQL to actually serve data out of this, you’re going to most likely get consistency and availability. If you have it replicated, you might not have partition tolerance.
That’s something to think about, and there are different databases and this is actually one of the reasons why there are different databases. This is why people use things like relational databases and they use key value stores not because we really like the interfaces, but because they have different properties around the data. [00:24:55] NL: That’s an interesting point and something that I had recently just been thinking about, like why are there so many different types of databases. I just didn’t know. I only recently heard of CAP theorem as well, just before you mentioned it. I’m like, “Wow! That’s so fascinating.” The whole thing where you only pick two. You can’t get three. Josh, to kind of go back to your question really quickly, I think that partition tolerance is the one that we throw away the most. We’re willing to not be able to segregate our database as much as possible because C and A are just too important, I think. At least that’s what I’m saying, like I am wearing an [inaudible 00:25:26] shirt and [inaudible 00:25:27] is not partition tolerant. It’s bad at it. [00:25:31] BL: This is why Google introduced Spanner, and Spanner in some situations can get all three with tradeoffs and a lot of really, really smart stuff, but most people can’t run at this scale. But we do need to think about partition tolerance, especially with data whenever – Let’s say you run a store and you have multiple instances across the world and someone buys something from inventory, what does your inventory look like at any particular point? You don’t have to answer my question, of course, but think about that. These are still very important problems if fiber gets cut across the Atlantic and now I’ve sold more things than I have. Carlisia, speaking to you as someone who’s only been a developer, have you moved your thoughts on state any further? [00:26:19] CC: Well, I feel that I’m clear on – Well, I think you need to clarify your question better for me.
If you’re asking if I understand what it means, I understand what it means. But I actually was thinking to ask this question to all of you, because I don’t know the answer, if that’s the question you’re asking me. I want to put that to the group. Do you recommend people, as in like now-ish, to run stateful workloads? We need to talk about workloads mean. Run stateful apps or database in sites if they’re running a Kubernetes cluster or if they’re planning for that, do you all as experts recommend that they should already be looking into doing that or they should be running for now their stateful apps or databases outside of the cloud native ecosystem and just connecting the two? Because if that’s what your question was, I don’t know. [00:27:21] BL: Well, I’ll take this first. I think that we should be spending lots of more time than we are right now in coming up community-tested solutions around using stateful sets to their best ability. What that means is let’s say if you’re running a database inside of Kubernetes and you’re using a stateful set to manage this, what we do need to figure out is what happens when my database goes down? The pod just kills? When I bring up a new version, I need to make sure that I have the correct software to verify integrity, rebuilt things, so that when it comes back up, it comes back up correctly. That’s what I think we should be doing. [00:27:59] JR: For me, I think working with customers, at least Kubernetes-oriented folks, when they’re trying to introduce Kubernetes as their orchestration part of their overall platform, I’m usually just trying to kind of meet them where they’re at. If they’re new to Kubernetes and distributed systems as a whole, if we have stateless, let’s call them maybe simpler applications to start with, I generally have them lean into that first, because we already have so much in front of us to learn about. I think it was either Brian or Duffie, you said it introduces a whole bunch more complexity. 
You have to know what you’re doing. You have to know how to operate these things. If they’re new to Kubernetes, I generally will advise start with stateless still. But that being said, so many of our customers that we work with are very interested in running stateful workloads on Kubernetes. [00:28:42] CC: But just to clarify what you said, Josh, because you spoke like an expert, but I still have beginner’s ears. You said something that sounded to me like you recommend that you go stateless. It sounded to me like that. What you really say is that they take out the stateless part of what they have, which they might already have or they might have to change and put the stateless. You’re not suggesting that, “Oh! You can’t do stateful anymore. You need to just do everything stateless.” What you’re saying is take the stateless part of your system, put that in Kubernetes, because that is really well-tested and keep the stateful outside of that ecosystem. Is that right? [00:29:27] JR: I think that’s a better way to put it. Again, it’s not that Kubernetes can’t do stateful. It’s more of a concept of biting off more than you can chew. We still work with a lot of people who are very new to these distributed systems concepts, and to take on running stateful workloads, if we could just delegate that to some other layer, like outside of the cluster, that could be a better place to start, at least in my experience. Nicholas and Duff might have different – [00:29:51] NL: Josh, you basically nailed it like what I was going to say, where it’s like if the team that I’m working with is interested in taking on the complexity of maintaining their databases, their stateful sets and making sure that they have data integrity and availability, then I’m all for them using Kubernetes for a stateful set. Kubernetes can run stateful applications, but there is all this complexity that we keep talking about and maintaining data and all that. 
If they’re willing to take on their complexity, great, it’s there for you. If they’re not, if they’re a little bit kind of behind as – Not behind, but if they’re kind of starting out their Kubernetes journey or their distributed systems journey, I would recommend them to move that complexity to somebody else and start with something a little bit easier, like a stateless application. There are a lot of good services that provide data as a service, right? You’ve got AWS RDS, which is great for running a stateful application. You can leverage it anytime and you’ve got like dedicated wires too. I would point them to there first if they don’t want to take on like complexity. [00:30:51] D: I completely agree with that. An important thing I would add, which is in response to the stateful set piece here, is that as we’ve already described, managing a stateful application like a database does come with some complexity. So you should really carefully look at just what these different models provide you. Whether that model is making use of a stateful set, which provides you like ordinality, ensuring that things start up in a particular order and some of the other capabilities around that stuff. But it won’t, for example, manage some of the complexity. A stateful set won’t, for example, try and issue a command to the new member to make sure that it’s part of an existing database cluster. It won’t manage that kind of stuff. So you have to really be careful about the different models that you’re evaluating when trying to think about how to manage a stateful application like a database. I think that’s actually why the topic of an operator came up kind of earlier, which was that like there are a lot of primitives within Kubernetes in general that provide you a lot of capability for managing things like stateful applications, but they may not entirely suit your needs.
Because of the complexity with stateful applications, you have to really kind of be really careful about what you adopt and where you jump in. [00:32:04] CC: Yeah. I know just from working with Velero, which is a tool for doing backup, recovery, and migration of Kubernetes clusters. I know that we backup volumes. So if you have something mounted on a volume, we can back that up. I know for a fact that people are using that to backup stateful workloads. We need to talk about workloads. But in any case, one thing to – I think one of you mentioned is that you definitely also need to look at a backup and recovery strategy, which is ever more important if you’re doing stateful workloads. [00:32:46] NL: That’s the only time it’s important. If you’re doing stateless, who cares? [00:32:49] BL: Have we defined what a workload is? [00:32:50] CC: Yeah. But let me say something. Yeah, I think we should do an episode on that maybe, maybe not. We should do an episode on GitOps type of thing for related things, because even though you – Things are stateless, but I don’t want to get into it. Your cluster will change state. You can recover in stuff from like a fresh version. But as it goes through a lifecycle, it will change state and you might want to keep that state. I don’t know. I’m not the expert in that area, but let’s talk about workloads, Brian. Okay. Let me start talking about workloads. I never heard the term workload until I came into the cloud native world, and that was about a year ago or when I started looking in this space more closely. Maybe a little bit before a year ago. It took me forever to understand what a workload was. Now I understand, especially today, we’re talking about a little bit before we started recording. Let me hear from you all what it means to you. [00:34:00] BL: This is one of those terms, and I’m sure if you ask any ex-Googlers about this, they’ll probably agree.
This is a Google term that we actually have zero context about why it’s a term. I’m sure we could ask somebody and they would tell us, but workloads to me personally are anything that ultimately creates a pod. Deployments create replica sets, create pods. That whole thing is a workload. That’s how I look at it. [00:34:29] CC: Before there were pods, were there workloads, or is a workload a new thing that came along with pods? [00:34:35] BL: Once again, these words don’t make any sense to us, because they’re Google terms. I think that a pod is a part of a workload, like a deployment is a part of a workload, like a replica set is part of a workload. Workload is the term that encompasses an entire set of objects. [00:34:52] D: I think of a workload as a subset of an application. When I think of an application or a set of microservices, I might think of each of the services that make up that entire application as a workload. I think of it that way because that’s generally how I would divide it up to Brian’s point into different deployment or different stateful sets or different – That sort of stuff. Thinking of them each as their own autonomous piece, and altogether they form an application. That’s my think of it. [00:35:20] CC: To connect to what Brian said, deployment, will always run in the pods, which is super confusing if you’re not looking at these things, just so people understand, because it took me forever to understand that. The connection between a workload, a deployment and a pod. Pods contain – If you have a deployment that you’re going to shift Kubernetes – I don’t know if shift is the right word. You’re going to need to run on Kubernetes. That deployment needs to run somewhere, in some artifact, and that artifact is called a pod. [00:35:56] NL: Yeah. Going back to what Duffie said really quickly. 
A workload to me was always a process, kind of like not just a pod necessarily, but like whatever it is that if you’re like, “I just need to get this to run,” whatever that is. To me that was always a workload, but I think I’m wrong. I think I’m oversimplifying it. I’m just like whatever your process is. [00:36:16] BL: Yeah. I would give you – The reason why I would not say that is because a pod can run multiple containers at once, which ergo is multiple processes. That’s why I say it that way. [00:36:29] NL: Oh! You changed my mind. [00:36:33] BL: The reason I bring this up, and this is probably a great idea for a future show, is about all the jargon and terminology that we use in this land that we just take as everyone knows it, but we don’t all know it, and should be a great conversation to have around that. But the reason I always bring up the whole workload thing is because when we think about workloads and then you can’t have state without workloads, really. I just wanted to make sure that we tied those two things together. [00:36:58] CC: Why can you not have state without workloads? What does that mean? [00:37:01] BL: Well, the reason you can’t have state without workloads is because something is going to have to create that state, whether that workload is running in or out a cluster. Something is going to have to create it. It just doesn’t come out of nowhere. [00:37:11] CC: That goes back to what Nick was saying, that he thinks a workload is a process. Was that was you said, Nick? [00:37:18] NL: It is, yeah, but I’m renegading on that. [00:37:23] CC: At least I could see why you said that. Sorry, Brian. I cut you off. [00:37:28] BL: What I was saying is a workload ultimately is one or more processes. It’s not just a process. It’s not a single process. It could be 10, it could be 1. [00:37:39] JS: I have one final question, and we can bail on this and edit it out if it’s not a good one to end with. 
I hope it’s not too big, but I think maybe one thing we overlooked is just why it’s hard to run stateful workloads in these new systems like Kubernetes. We talked about how there’s more complexity and stuff, but there might be some room to talk about – People have been spinning up an EC2 server, a server on the web and running MySQL on it forever. Why in like the Kubernetes world of like pods and things is it a little bit harder to run, say, MySQL just [inaudible 00:38:10]. Is that something worth diving into? [00:38:13] NL: Yeah, I think so. I would say that for things like, say, applications, like databases particularly, they are less resilient to outages. While Kubernetes itself is dedicated to – Or most container orchestrations, but Kubernetes specifically, are dedicated to running your pods continuously as long as they will, that it is still somewhat of a shifting landscape. You do have priority and preemption. If you don’t set those things up properly or if there’s just like a total failure of your system at large, your stateful application can just go down at any time. Then how do you reconcile the outage in data, whatever data that might have gotten lost? Those sorts of things become significantly more complicated in an environment like Kubernetes where you don’t necessarily have access to a command line to run the commands to recover as easily. You may not, but it’s the same. [00:39:01] BL: Yes. You got to understand what databases do. Disk is slow, whether you have spinning disk or you have disk on chip, like SSD. What databases do in a lot of cases is they store things in memory. So if it goes away, didn’t get stored. In other cases, what databases do is they have these huge transactional logs, maybe they write them out in files and then they process the transaction log whenever they have CPU time. If a database dies just suddenly, maybe its state is inconsistent because it had items that were to be processed in a queue that haven’t been processed.
Now it doesn’t know what’s going on, which is why – [00:39:39] NL: That’s interesting. I didn’t know that. [00:39:40] BL: If you kill MySQL, like kill MySQL D with a -9, why it might not come back up. [00:39:46] JR: Yeah. Going back to Kubernetes as an example, we are living in this newer world where things can get rescheduled and moved around and killed and their IPs changed and things. It seems like this environment is, should I say, more ephemeral, and those types of considerations becoming to be more complex. [00:40:04] NL: I think that really nails it. Yeah. I didn’t know that there were transactional logs about databases. I should, I feel like, have known that but I just have no idea. [00:40:11] D: There’s one more part to the whole stateful, stateless thing that I think is important to cover, but I don’t know if we’ll be able to cover it entirely in the time that we have left, and that is from the network perspective. If you think about the types of connections coming into an application, we refer to some of those connections as stateful and stateless. I think that’s something we could tackle in our remaining time, or what’s everybody’s thought? [00:40:33] JR: Why don’t you try giving us maybe a quick summary of it, Duffie, and then we can end on that. [00:40:36] CC: Yeah. I think it’s a good idea to talk about network and then address that in the context of network. I’m just thinking an idea for an episode. But give us like a quick rundown. [00:40:45] D: Sure. 
A lot of the kind of older monolithic applications, the way that you would scale these things is you would have multiple of them and then you would have some intelligence in the way that you’re routing connections down to those applications that would describe the ability to ensure that when Bob accesses a website and he authenticates, he’s going to authenticate to one specific instance of this application and the intelligence up in the frontend is going to handle the routing to make sure that Bob’s connection always comes back to that same instance. This is an older pattern. It’s been around for a very long time and it’s certainly the way that we first kind of learned to scale applications before we’ve decided to break into microservices and kind of handle a lot of this routing in a more resilient way. That was kind of one of the early versions of how we do this, and that is a pretty good example of a stateful session, and that there is actually some – Perhaps Bob has authenticated and he has a cookie that allows him, that when he comes back to that particular application, a lot of the settings, his browser settings, whether he’s using the dark theme or the light theme, that sort of stuff, is persisted on the server side rather than on the client side. That’s kind of what I mean by stateful sessions. Stateless sessions mean it doesn’t really matter that the user is terminating to the same endpoint, because we’ve managed to keep the state either with the client. We’re handling state on the browser side of things rather than on the server side of things. So you’re not necessarily gaining anything by pushing that connection back to the same specific instance, but just to a service that is more widely available. There are lots of examples of this. I mean, Brian’s example of Google earlier. Obviously, when I come back to Google, there are some things I want it to remember. I want it to remember that I’m logged in as myself.
I want it to remember that I’ve used a particular – I want it to remember my history. I want it to remember that kind of stuff so that I could go back and find things that I looked at before. There are a ton of examples of this when we think about it. [00:42:40] JR: Awesome! All right, everyone. Thank you for joining us in episode 6, Stateful and Stateless. Signing off. I’m Josh Rosso, and going across the line, thank you Nicholas Lane. [00:42:54] NL: Thank you so much. This was really informative for me. [00:42:56] JR: Carlisia Campos. [00:42:57] CCC: This was a great conversation. Bye, everybody. [00:42:59] JR: Our newcomer, Brian Liles. [00:43:01] BL: Until next time. [00:43:03] JR: And Duffie Cooley. [00:43:05] DCC: Thank you so much, everybody. [00:43:06] JR: Thanks all. [00:43:07] CCC: Bye! [END OF EPISODE] [0:50:00.3] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END] See omnystudio.com/listener for privacy information.

The Food Blogger Pro Podcast
231: A Better Experience - Building Engagement, Not Just Traffic with Kingston Duffie


Play Episode Listen Later Dec 3, 2019 66:01


Figuring out your "why," how to determine success based on metrics, and why search is so important on websites with Kingston Duffie.

-----

Welcome to episode 231 of The Food Blogger Pro Podcast! This week on the podcast, Bjork interviews Kingston Duffie about increasing engagement on websites.

A Better Experience

If you’ve been around the food blogging industry for the past few years, you’ve probably heard the word “engagement” more than any other metric. And that’s because brands, bloggers, and creators are putting a greater emphasis on engagement, even over other KPIs or metrics like pageviews or followers.

And that’s why Kingston is here today! His company, Slickstream, is aimed at increasing engagement on bloggers’ sites. And if you’ve visited our sister site, Pinch of Yum, lately, chances are you’ve seen Slickstream in action. Heck, you might have even used Slickstream without even knowing.

This is a really interesting interview focusing on ways that you can increase engagement on your own site – enjoy!

In this episode, you’ll learn:

• How Kingston got his start
• Why you should reflect on your “why”
• Why listening is an important skill as a business owner
• Why it’s important to pivot
• How to determine your success with KPIs
• How Slickstream works
• Why search is so important on websites
• How to use Slickstream on your site

Resources:

• Slickstream
• The Pinch of Yum - Slickstream case study

If you have any comments, questions, or suggestions for interviews, be sure to email them to podcast@foodbloggerpro.com.

Swank Thoughts
Rosavelt Duffie and Desman Crawford

Swank Thoughts

Play Episode Listen Later Nov 27, 2019 75:45


Today we have two brand new guests, Lingo and Dez. We talk about their music journey because they are both aspiring rappers coming out of Texas. Our three main topics were their music journey, snitching, and whether all tattoos have to have a meaning behind them.

The Podlets - A Cloud Native Podcast
Cloud Native Infrastructure (Ep 5)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Nov 13, 2019 51:39


This week on The Podlets Podcast, we talk about cloud native infrastructure. We were interested in discussing this because we’ve spent some time talking about the different ways that people can use cloud native tooling, but we wanted to get to the root of it, such as where code lives and runs and what it means to create cloud native infrastructure. We also have a conversation about the future of administrative roles in the cloud native space, and explain why there will always be a demand for people in this industry. We dive into the expense for companies when developers run their own scripts and use cloud services as required, and provide some pointers on how to keep costs at a minimum. Joining in, you’ll also learn what a well-constructed cloud native environment should look like, which resources to consult, and what infrastructure as code (IaC) really means. We compare containers to virtual machines and then weigh up the advantages and disadvantages of bare metal data centers versus using the cloud. Note: our show changed its name to The Podlets. Follow us: https://twitter.com/thepodlets Website: https://thepodlets.io Feedback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues Hosts: Carlisia Campos Duffie Cooley Nicholas Lane Key Points From This Episode: • A few perspectives on what cloud native infrastructure means. • Thoughts about the future of admin roles in the cloud native space. • The increasing volume of internet users and the development of new apps daily. • Why people in the infrastructure space will continue to become more valuable. • The cost implications for companies if every developer uses cloud services individually. • The relationships between IaC for cloud native and IaC for the cloud in general. • Features of a well-constructed cloud native environment. • Being aware that not all clouds are created equal and the problem with certain APIs. • A helpful resource for learning more on this topic: Cloud Native Infrastructure. 
• Unpacking what IaC is not and how Kubernetes really works. • Reflecting on how it was before cloud native infrastructure, including using tools like vSphere. • An explanation of what containers are and how they compare to virtual machines. • Is it worth running bare metal in the cloud age? Weighing up the pros and cons. • Returning to the mainframe and how the cloud almost mimics that idea. • A list of the cloud native infrastructures we use daily. • How you can have your own “private” cloud within your bare metal data center. Quotes: “This isn’t about whether we will have jobs, it’s about how, when we are so outnumbered, do we as this relatively small force in the world handle the demand that is coming, that is already here.” — Duffie Cooley @mauilion [0:07:22] “Not every cloud that you’re going to run into is made the same. There are some clouds that exist of which the API is people. You send a request and a human being interprets your request and makes the changes. That is a big no-no.” — Nicholas Lane @apinick [0:16:19] “If you are in the cloud native workspace you may need 1% of your workforce dedicated to infrastructure, but if you are in the bare metal world, you might need 10 to 20% of your workforce dedicated just to running infrastructure.” — Nicholas Lane @apinick [0:41:03] Links Mentioned in Today’s Episode:
VMware RADIO — https://www.vmware.com/radius/vmware-radio-amplifying-ideas-innovation/
CoreOS — https://coreos.com/
Brandon Philips on LinkedIn — https://www.linkedin.com/in/brandonphilips
Kubernetes — https://kubernetes.io/
Apache Mesos — http://mesos.apache.org
Ansible — https://www.ansible.com
Terraform — https://www.terraform.io
XenServer (Citrix Hypervisor) — https://xenserver.org
OpenStack — https://www.openstack.org
Red Hat — https://www.redhat.com/
Kris Nova on LinkedIn — https://www.linkedin.com/in/kris-nova
Cloud Native Infrastructure — https://www.amazon.com/Cloud-Native-Infrastructure-Applications-Environment/dp/1491984309
Heptio — https://heptio.cloud.vmware.com
AWS — https://aws.amazon.com
Azure — https://azure.microsoft.com/en-us/
vSphere — https://www.vmware.com/products/vsphere.html
Circuit City — https://www.circuitcity.com
Newegg — https://www.newegg.com
Uber — https://www.uber.com/
Lyft — https://www.lyft.com
Transcript: EPISODE 05 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.1] NL: Hello and welcome to episode five of The Podlets Podcast, the podcast where we explore cloud native topics one topic at a time. This week, we’re going to the root of everything, money. No, I mean, infrastructure. My name is Nicholas Lane and joining me this week are Carlisia Campos. [0:00:59.2] CC: Hi everybody. [0:01:01.1] NL: And Duffie Cooley. [0:01:02.4] DC: Hey everybody, good to see you again. [0:01:04.6] NL: How have you guys been? Anything new and exciting going on? For me, this week has been really interesting. There’s an internal VMware conference called RADIO where a bunch of engineering teams from across the entire company get together and talk about the future of pretty much everything, and so I have been kind of sponging that up this week. It has been really interesting talking about all the interesting ideas and fascinating new technologies that we’re working on. [0:01:29.8] DC: Awesome. [0:01:30.9] NL: Carlisia? [0:01:31.8] CC: My entire team is at RADIO, which is in San Francisco, and I’m not. But I’m sort of glad I didn’t have to travel. [0:01:42.8] NL: Yeah, nothing too exciting for me this week.
Last week I was on PTO and that was great, so this week it has just been kind of getting spun back up and I’m getting back into the swing of things a bit. [0:01:52.3] CC: Were you spun back up with a script? [0:01:57.1] NL: Yeah, I was. [0:01:58.8] CC: With infrastructure I suppose? [0:02:00.5] NL: Yes, absolutely. This week on The Podlets Podcast, we are going to be talking about cloud native infrastructure. Basically, I was interested in talking about this because we’ve spent some time talking about some of the different ways that people can use cloud native tooling, but I wanted to kind of get to the root of it. Where does your code live, where does it run, what does it mean to create cloud native infrastructure? To start us off: to me, cloud native infrastructure is basically any infrastructure tool or service that allows you to programmatically create infrastructure. By that I mean your compute nodes, anything running your application, your networking, software defined networking, storage, Ceph, object store, that sort of thing. You can just spin them up in a programmatic, contract-driven way, and then databases as well, which is very nice. Then I also kind of lump in anything that’s a managed service as part of that. Going back to databases, if you use one of the big cloud providers, they have their RDS or database tooling that provides databases on the fly and then they manage it for you. Those things to me are cloud native infrastructure. Duffie, what do you think? [0:03:19.7] DC: Yeah, I think it’s definitely one of my favorite topics.
I spent a lot of my career working with infrastructure one way or the other, whether that meant racking servers in racks and doing it the old school way, figuring out power budgets and, you know, dealing with networking and all of that stuff, or whether that meant finally getting to a point where I have an API and my customer’s going to come to me and say, I need 10 new servers, and I can be like, one second. Then run a script and they have 10 new servers, versus, you know, having to order the hardware, get the hardware delivered, get the hardware racked, replace the stuff that was dead on arrival, kind of go through that whole process. And yeah, cloud native infrastructure or infrastructure as a service is definitely near and dear to my heart. [0:03:58.8] CC: How do you feel about it if you are an admin? You work for VMware and you are a field engineer now, basically a consultant. But imagine you were back in that role of an admin at a company, and the company was practicing cloud native infrastructure. Basically, what we’re talking about is we go back to this theme of self-sufficiency a lot. I think we’re going to be going back to this a lot too, as we go through different topics. Mainly, if someone wants a server in that environment now, they can run an existing script that maybe you made for them. But do you have concerns that your job is redundant now that one script can do a lot of your work? [0:04:53.6] NL: Yeah, in the field engineering org, we kind of have this mantra that we’re trying to automate ourselves out of a job. I feel like anyone who is really getting into cloud native infrastructure, that is the path that they’re taking as well. If I were an admin in a world that was hybrid or anything like that, where they had on-prem or bare metal infrastructure and they had cloud native infrastructure.
I would be more than ecstatic to hand off any amount of the administrative work, like spinning up new servers, to the cloud native infrastructure. If people just need somewhere they can go click, "I got whatever services I need and they all work together because the cloud makes them work together," awesome. That gives me more time to do other tasks that may be a bit more onerous or less automated. I would be all for it. [0:05:48.8] CC: You’re saying that if you are – because I don’t want the admin people listening to this to stop listening and thinking, screw this. You’re saying, if you’re an admin, there will still be plenty of work for you to do? [0:06:03.8] NL: Yeah, there’s always stuff to do I think. If not, then I guess maybe it’s time to find somewhere else to go. [0:06:12.1] DC: There was a really interesting presentation that really stuck with me when I was working for CoreOS, which is another infrastructure company. It was a presentation by our CTO, Brandon Philips, and Brandon put together a presentation around the idea that every single day, there are, you know, so many thousand new users of the Internet coming online for the first time. That’s so many thousand people who are going to be storing their photos there, getting emails, doing all those things that we do in our daily lives with the Internet. And globally, across the whole world, there are only about, I think it was like 250,000 or 300,000 people that do what we do, that understand the infrastructure at a level that they might even be able to automate it, you know? That work in the IT industry and are able to actually facilitate the creation of those resources on which all of those applications will be hosted, right? This isn’t even taking into account the number of applications per day that are brought onto the Internet or made available to users, right? That in itself is a whole different thing.
How many people are putting up new webpages or putting up new content or what have you every single day. Fundamentally, I think that we have to think about the problem in a slightly different way, this isn’t about whether we will have jobs, it’s about how, when we are so outnumbered, how do we as this relatively small force in the world, handle the demand that is coming, that is already here today, right? Those people that are listening, who are working infrastructure today, you’re even more valuable when you think about it in those terms because there just aren’t enough people on the planet today to solve those problems using the tools that we are using today, right? Automation is king and it has been for a long time but it’s not going anywhere, we need the people that we have to be able to actually support much larger numbers or bigger scale of infrastructure than they know how to do today. That’s the problem that we have to solve. [0:08:14.8] NL: Yeah, totally. [0:08:16.2] CC: Looking from the perspective of whoever is paying the bills. I think that in the past, as a developer, you had to request a server to run your app in the test environment and eventually you’ll get it and that would be the server that everybody would use to run against, right? Because you’re the developer in the group and everybody’s developing different features and that one server is what we would use to push out changes to and do some level of manual task or maybe we’ll have a QA person who would do it. That’s one server or one resource or one virtual machine. Now, maybe I’m wrong but as a developer, I think what I’m seeing is I will have access to compute and storage and I’ll run a script and I boot up that resource just for myself. Is that more or less expensive? You know? If every single developer has this facility to speed things up much quicker because we’re not depending on IT and if we have a script. 
I mean, the reality is not as easy as just one command and you get it, but – if it’s so easy and that’s what everybody is doing, doesn’t it become expensive for the company? [0:09:44.3] NL: It can. I think when cloud native infrastructure really became more popular in the workplace and became more mainstream, there was a lot of talk about the concept of sticker shock, right? It’s the idea that you had a predictable amount of money that was allocated to your infrastructure before. These things cost this much and their value will degrade over time, right? The server you had in 2005 is not going to be as valuable as the server you buy in 2010, but that might be a refresh cycle of five years or so. But you have this predictable amount of money. Suddenly, you have this script that we’re talking about that can spin up an equivalent resource to one of those servers, and if someone just leaves it running, that will run for a long time. So for irresponsible users of the cloud, or even just regular users of the cloud, it can cost a lot of money to use any of these cloud services. Yes, there is some concern about the amount of money that these things cost, because honestly, as we’re exploring cloud native topics, one thing keeps coming up: cloud really just means somebody else’s computer, right? You’re not eating the cost of maintenance or the time it takes to maintain things; somebody else is, and you’re paying that price upfront instead of on a yearly basis, right? It’s less predictable and usually a bit more than people expected, right? But there is value there as well. [0:11:16.0] CC: You’re saying if the user is diligent enough to terminate clusters or the machine, that is how you don’t rack up unnecessary cost?
[0:11:26.6] NL: Right. For a test, say you want to spin up your code really quickly and just need a quick setup of networking and compute resources to test it out. You spin up a small, tiny instance somewhere in one of the clouds, test out your code and then kill the instance. That won’t cost hardly anything. It didn’t really cost you much in time either, right? You had this automated process hopefully, or a manual process that isn’t too onerous, and you get the resources and the things you needed and you’re good. If you aren’t a good player and you keep it going, that can get very expensive very quickly, because the number of resources used per hour, I think, is how most billing happens in the cloud. That can exponentially – I mean, it won’t really grow exponentially, but it will increase over time to a value that you are not expecting to see. You get a bill and you’re like, holy crap, what is this? [0:12:26.9] DC: I think it’s also – I mean, this is definitely where things like orchestration come in, right? With the level of abstraction that you get from things like Kubernetes or Mesos or some other tools, you’re able to provide access to those resources in a more dynamic way, with the expectation, and sometimes an explicit contract, that the workload that you deploy will be deployed on common equipment, allowing for things like bin packing, which is a pretty interesting term when it comes to infrastructure. It means that, for a particular cluster, I might have 10 of those VMs that we talked about having high value. I intend to keep those running, and then my goal is to make sure that I have enough consumers of those 10 nodes to be able to get my value out of them. So when I split up the environments, each developer gets a namespace, right?
This gets me the ability to effectively oversubscribe those resources that I’m paying for in a way that will reduce the overall cost of ownership or cost of – not ownership, maybe cost of operation for those 10 nodes. Let’s take a step back and go down memory lane. [0:13:34.7] NL: When did you first hear about the concept of IaaS or cloud native infrastructure or infrastructure as code? Carlisia? [0:13:45.5] CC: I think in the last couple of years; it pretty much coincided with when I was introduced to cloud native and Kubernetes. I’m not clear on the difference between infrastructure as code for cloud native versus infrastructure as code for the cloud in general. Is there anything about cloud native that has different requirements and solutions? Or are we just talking about the cloud, and the same applies for cloud native? [0:14:25.8] NL: Yes, I think that they’re the same thing. Infrastructure as code for the cloud is inherently cloud native, right? Cloud native just means that whatever you’re trying to do leverages the tools and the contracts that a cloud provides. Basic infrastructure as code is basically just how do I use the cloud, and that’s – [0:14:52.9] CC: In an automated way. [0:14:54.7] NL: In an automated way, or just in a way, right? A properly constructed cloud should have a user interface of some kind that uses an API, right? A contract to create these machines or create these resources, so that the way it creates its own resources is the same way you create the resources if you do it programmatically, right, with an orchestration tool like Ansible or Terraform. The API calls that it makes through its own UI need to be the same, and if we have that, then we have a well-constructed cloud native environment. [0:15:32.7] DC: Yeah, I agree with that.
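The "same API for the UI and for automation" contract Nick describes can be sketched in a few lines. Everything below is hypothetical: the endpoint, field names, and token are not any real provider's API. The point is only that a console click and an automation script should reduce to the exact same call.

```python
# A minimal sketch of an API-first cloud contract: both a human-facing UI
# and an automation script build the identical request against one
# (hypothetical) endpoint. No real provider's API is being modeled here.

def build_create_server_request(name, cpus, memory_gb, api_token):
    """Return the HTTP request that would create a compute instance."""
    return {
        "method": "POST",
        "url": "https://cloud.example.com/v1/servers",  # hypothetical endpoint
        "headers": {"Authorization": f"Bearer {api_token}"},
        "body": {"name": name, "cpus": cpus, "memory_gb": memory_gb},
    }

# A UI click and a provisioning script reduce to the same call:
ui_request = build_create_server_request("web-01", 2, 4, "token-from-ui")
script_request = build_create_server_request("web-01", 2, 4, "token-from-script")
assert ui_request["url"] == script_request["url"]
assert ui_request["body"] == script_request["body"]
```

If the cloud's own console goes through this same API, then anything the console can do, Terraform or Ansible can do too, which is the "well-constructed" property discussed above.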
I think, you know, from the perspective of cloud infrastructure or cloud native infrastructure, the goal is definitely to have – it relies on one of the topics that we covered earlier in the program, around the idea of API first or API driven being such an intrinsic quality of any cloud native architecture, right? Because fundamentally, if we can’t do it programmatically, then we’re kind of stuck in that old world of racking servers or going through some human-managed process, and then we’re right back to the same state that we were in before, and there’s no way that we can actually scale our ability to manage these problems, because we’re stuck in a place where it’s one to one rather than one to many. [0:16:15.6] NL: Yeah, those APIs are critical. [0:16:17.4] DC: The APIs are critical. [0:16:18.4] NL: You bring up a good point, that reminds me of something. Not every cloud that you’re going to run into is made the same. There are some clouds that exist, and I’m not going to specifically call them out, but there are some clouds that exist where the API is people. You send a request and a human being interprets your request and makes the changes. That is a big no-no. Me no like, no good, my brain stopped. That is a poorly constructed cloud native environment. In fact, I would say it is not a cloud native environment at all; it can barely call itself a cloud in a sentence. Duffie, how about you? When was the first time you heard the concept of cloud native infrastructure? [0:17:01.3] DC: I’m going to take this question in the form of, what was the first programmatic infrastructure as a service that I played with.
For me, that was actually back in the Nicira days, when we were virtualizing the network and effectively providing and building an API that would allow you to create network resources. One of the targets that we were developing for was things like XenServer, which at the time had a reasonable API that would allow you to create virtual machines but didn’t really have a good virtual network solution. There were also technologies like KVM, the ability to actually use KVM to create virtual machines, again with an API. Although in KVM it wasn’t quite the same as an API; that’s kind of where things like OpenStack and those technologies came along and wrapped a lot of the capability of KVM behind a RESTful API, which was awesome. But yeah, I would say XenServer was the first one, and that gave me a command line option with which I could stand up and create virtual machines and give them so many resources and all that stuff. You know, from my perspective, Nicira was the first network API that I actually ever saw, and it was also one of the first ones that I worked on, which was pretty neat. [0:18:11.8] CC: When was that, Duffie? [0:18:13.8] DC: Some time ago. I would say like 2006 maybe? [0:18:17.2] NL: The TVs didn’t have color back then. [0:18:21.0] DC: Hey now.
I still don’t know if I grock it fully but I was working at Red Hat at the time so this was probably back in like 2012, 2013 and we were starting to leverage OpenStack more and having like this API driven toolset that could like spin up these VMs or instances was really cool to me but it’s something I didn’t really get into using until I got into Kubernetes and specifically Kubernetes on different clouds such as like AWS or AS Ram. We’ll touch on those a little bit later but it was using those and then having like a CLI that had an API that I could reference really easily to spin things up. I was like, holy crap, this is incredible. That must have been around like the 2015, 16 timeframe I think. I think I actually heard the phrase cloud native infrastructure first from our friend Kris Nova’s book, Cloud Native Infrastructure. I think that really helped me wrap my brain around really what it is to have something be like cloud native infrastructure, how the different clouds interact in this way. I thought that was a really handy book, I highly recommend it. Also, well written and interesting read. [0:19:47.7] CC: Yes, I read it back to back and when I joined what I have to, I need to read it again actually. [0:19:53.6] NL: I agree, I need to go back to it. I think we’ve touched on something that we normally do which is like what does this topic mean to us. I think we kind of touched on that a bit but if there’s anything else that you all want to expand upon? Any aspect of infrastructure that we’ve not touched on? [0:20:08.2] CC: Well, I was thinking to say something which is reflects my first encounter with Kubernetes when I joined Heptio when it was started using Kubernetes for the very first time and I had such a misconception of why Kubernetes was. I’m saying what I’m going to say to touch base on what I want to say – I wanted to relate what infrastructure as code is not. [0:20:40.0] NL: That’s’ a very good point actually, I like that. 
[0:20:42.4] CC: Maybe, what Kubernetes now, I’m not clear what it is I’m going to say. Hang with me. It’s all related. I work from Valero, Valero is out, they run from Kubernetes and so I do have to have Kubernetes running for me to run Valero so we came on my machine or came around on the cloud provider and our own prem just for the pluck. For Kubernetes to run, I need to have a cluster running. Not one instance because I was used to like yeah, I could bring up an instance or an instance or two. I’ve done this before but bringing up a cluster, make sure that everything’s connected. I was like, then this. When I started having to do that, I was thinking. I thought that’s what Kubernetes did. Why do I have to bother with this? Isn’t that what Kubernetes supposed to do, isn’t it like just Kubernetes and all of that gets done? Then I had that realization, you know, that encounter where reality, no, I still have to boot up my infrastructure. That doesn’t go away because we’re doing Kubernetes. Kubernetes, this abstraction called that sits on top of that infrastructure. Now what? Okay, I can do it manually while DJIK has a great episode where Joe goes and installs everything by hand, he does thing like every single thing, right? Every single step that you need to do, hook up the network KP’s. It’s brilliant, it really helps you visualize what happens when all of the stuff, how all the stuff is brought up because ideally, you’re not doing that by hand which is what we’re talking about here. I used for example, cloud formation on AWS with a template that Heptio also has in partnership with AWS, there is a template that you can use to bring up a cluster for Kubernetes and everything’s hooked up, that proper networking piece is there. You have to do that first and then you have Kubernetes installed as part of that template. But the take home lesson for me was that I definitely wanted to do it using some sort of automation because otherwise, I don’t have time for that. 
[0:23:17.8] DC: Ain’t nobody got time for that, exactly. [0:23:19.4] CC: Ain’t got no time for that. That’s not my job. My job is something different. I’m just booting this stuff up to test my software. Automation is absolutely very handy, and for people who are not working with Kubernetes yet, I just wanted to clarify that there is a separation: one thing is getting your infrastructure brought up, and then you have Kubernetes installed on top of that, and then, you know, you might have your application running in Kubernetes, or you can have an external application that interacts with Kubernetes as an extension of Kubernetes, right? Which is what the project that I work on is. [0:24:01.0] NL: That’s a good point and that’s something we should dive into, and I’m glad that you brought this up actually; that’s a good example of a cloud native application using cloud native infrastructure. Kubernetes does a pretty good job of that all around, and so there's the idea that Kubernetes itself is a platform. You have a platform as a service, which is kind of what you’re talking about: I just need to spin up a Kubernetes and then boom, I have a platform, and that comes with the infrastructure part of it. There are more and more of these managed Kubernetes offerings coming out that facilitate that function, and those are an aspect of cloud native infrastructure. Those are the managed services that I was referring to, where the administrators of the cloud are taking it upon themselves to do all that for you and then manage it for you, and I think that’s a great offering for people who just don’t want to get into the weeds or don’t want to worry about the management of their application. The same goes for things like databases: these managed services are an awesome tool. And going back to Kubernetes a little bit as well, it is a great show of how a cloud native application can work with the infrastructure that it’s on.
For instance, in Kubernetes, if you spin up a service of type LoadBalancer, and it’s connected to a cloud properly, the cloud will create that object inside of itself for you, right? A load balancer in AWS is an ELB; it’s just a load balancer in Azure, and I’m not familiar with the terms that the other clouds use. They will create these things for you. I think that’s the dopest thing on the planet, that is so cool, where I’m just like, this tool over here, I created it in this thing, and it told this other thing how to make that work in reality. [0:25:46.5] DC: That is so cool. Orchestration magic. [0:25:49.8] NL: Yeah, absolutely. [0:25:51.6] DC: I agree, and then, actually, I kind of wanted to make a point on that as well, which is that the way I interpreted your original question, Carlisia, was, “What is the difference perhaps between these different models?” Infrastructure as code versus infrastructure as a service versus platform as a service versus containers as a service: what differentiates these things? For my part, I feel like it is effectively an evolution of the API, like what the right entry point for your consumer is. When we consider container orchestration, we consider things like Mesos and Kubernetes and technologies like that. We make the assumption that the right entry point for that user is an API in which they can define those things that we want to orchestrate, those containers or those applications, and we are going to provide, within the form of that platform, capabilities like, you know, service discovery, and being able to handle networking, and all of those things for you, so that all you really have to do is define what needs to be deployed and what other services it needs to rely on, and away you go. Whereas when you look at infrastructure as a service, the entry point is fundamentally different, right?
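The LoadBalancer example above corresponds to a very small manifest. The field names here follow the standard Kubernetes v1 Service schema; the `app` label and port numbers are made up for illustration. It is written as a Python dict, equivalent to the YAML you would apply with kubectl:

```python
import json

# A Service of type LoadBalancer. When the cluster's cloud-provider
# integration sees this object, it provisions the cloud's native load
# balancer (an ELB on AWS, an Azure load balancer on Azure) and points
# it at the pods matching the selector. The cluster-side object is the
# same regardless of which cloud fulfills it.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},                 # illustrative name
    "spec": {
        "type": "LoadBalancer",                  # the cloud reacts to this
        "selector": {"app": "web"},              # pods to route traffic to
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

print(json.dumps(service, indent=2))
```

This is the "tool over here told this other thing how to make that work" moment: the developer only declares the cluster object, and the cloud-specific resource appears on its own.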
We need to be thinking about what the infrastructure needs to look like. I might ask an infrastructure as a service API how many machines I have running, what networks they are associated with, and how much memory and disk is associated with each of those machines. Whereas if I am interacting with a platform as a service, I might ask questions about the primitives that are exposed by that platform. How many deployments do I have? What namespaces do I have access to? How many pods are running right now versus how many I asked to be running? Those kinds of questions and capabilities. [0:27:44.6] NL: Very good point, and yeah, I am glad that we explored that a little bit. Cloud native infrastructure is not nothing, but it is close to useless without a properly leveraged cloud native application of some kind. [0:27:57.1] DC: These APIs all the way down give you the true flexibility and the real functionality that you are looking for in a cloud, because as a developer, I don’t need to care how the networking works or where my storage is coming from or where these things are actually located. The API does all of these things. I want someone to do it for me, and the API does that. Success. That is what cloud native infrastructure gets you. Speaking of the things you don’t care about: what was it like before cloud? What did we have before cloud native infrastructure? The things that come to mind are things like vSphere. I think vSphere is a bridge between the bare metal world and the cloud native world, and that is not to say that vSphere itself is necessarily cloud native; there are some limitations. [0:28:44.3] CC: What is vSphere? [0:28:46.0] DC: vSphere is a tool that VMware created a while back. I think it premiered in the 2000-2001 timeframe, and it was a way to predictably create and manage virtual machines, a virtual machine being a kernel that sits on top of a kernel inside of a piece of hardware.
[0:29:11.4] CC: So is vSphere to virtual machines what Kubernetes is to containers? [0:29:15.9] DC: Not quite the same thing, I don’t think, because fundamentally the underlying technologies are very different. Another way of explaining the difference, now that you have that history, is, you know, back in the day, the 90s, the 2000s, even currently, what we have is a layer of people who are involved in dealing with physical hardware. They rack 10 or 20 servers, and before we had orchestration and virtualization and any of those things, we would actually be installing an operating system and applications on those servers. Those servers would be web servers, and those servers would be database servers, and they would be a physical machine dedicated to a single task, like a web server or a database. Where vSphere and XenServer and then KVM and those technologies come in is that we think about this model fundamentally differently. Instead of thinking about it like one application per physical box, we think about each physical box being able to hold a bunch of these virtual boxes that look like virtual machines. And so now those virtual machines are where we put our application code. I have a virtual machine that is a web server. I have a virtual machine that is a database server. I have that level of abstraction, and the benefit of this is that I can get more value out of those hardware boxes than I could before, right? Before, I had to buy one box for one application; now I can buy one box for 10 or 20 applications. When we take that to the next level, when we get to container orchestration, we realized, “You know what? Maybe I don’t need the full abstraction of a machine. 
I just need enough of an abstraction to give me just enough isolation back and forth between those applications, such that they have their own file system but they can share the kernel, they have their own network but they can share the physical network, that we have enough isolation between them that they can’t interact with each other except for when intent is present, right?” That we have that sort of level of abstraction. You can see that this is a much more granular level of abstraction, and the benefit of that is that we are not actually trying to create a virtual machine anymore. We are just trying to effectively isolate processes in a Linux kernel, and so instead of 20 or maybe 30 VMs per physical box, I can get 110 processes all day long, you know, on a physical box. And again, this takes us back to that concept that I mentioned earlier around bin packing. When we are talking about infrastructure, we have been on this eternal quest to make the most of what we have, to make the most of that infrastructure. You know, what tools and tooling do we need to be able to see the most efficiency for the dollar that we spend on that hardware? [0:32:01.4] CC: That is simultaneously a great explanation of containers and how containers compare with virtual machines, bravo. [0:32:11.6] DC: That was really well put, that was great. [0:32:14.0] CC: Now explain vSphere please, I still don’t understand it. I don’t know what it does. [0:32:19.0] NL: So vSphere is the one that creates the many virtual machines on top of a physical machine. It gives you the capability of having really good isolation between these virtual machines, and inside of these virtual machines you feel like you have a metal box, but it is not a metal box, it is just a process running on a metal box. [0:32:35.3] CC: All right, so it is a system that holds multiple virtual machines inside the same machine. 
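The bin packing idea Duffie keeps returning to can be made concrete with a first-fit sketch. The capacity and workload sizes below are invented, but they show why many small units of isolation (containers) fill a physical box far more densely than a handful of full VMs.

```python
# First-fit bin packing: place each workload into the first box with room,
# opening a new box only when nothing fits. The 64 GB capacity and the 8 GB
# VM / 0.5 GB container sizes are illustrative numbers, not measurements.
def first_fit(box_capacity, workloads):
    boxes = []
    for w in workloads:
        for box in boxes:
            if sum(box) + w <= box_capacity:
                box.append(w)
                break
        else:
            boxes.append([w])  # no existing box had room; rack another one
    return boxes

CAPACITY = 64                 # GB of RAM per physical box (illustrative)
vms = [8] * 20                # 20 apps packaged as 8 GB virtual machines
containers = [0.5] * 20       # the same 20 apps as 0.5 GB containers

print(len(first_fit(CAPACITY, vms)))         # prints: 3
print(len(first_fit(CAPACITY, containers)))  # prints: 1
```

Three physical boxes versus one for the same twenty applications: that is the efficiency-per-dollar argument in miniature.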
[0:32:41.8] NL: Yeah. So think of it in the cloud native infrastructure world: vSphere is essentially the API, or you could think of it as the cloud itself, which is in a sense an AWS, but for your datacenter. The difference being that there isn’t a particularly useful API that vSphere exposes, so it is harder for people to programmatically leverage, which makes it difficult for me to call vSphere a cloud native tool. It is a great tool and it has worked wonders, and still works wonders, for a lot of companies throughout the years, but I would hesitate to lump it into cloud native functionality. So prior to cloud native or infrastructure as a service, we had these tools like vSphere, which allowed us to make smaller and smaller VMs, or smaller and smaller compute resources on a larger compute resource. And going back to something we were talking about, containers and how you spin up these processes: prior to things like containers, in this world of VMs, a lot of times what you would do is create a VM that had your application already installed into it. It is burnt into the image, so that when that VM stood up, it would spin up that process. So that would be one way that you would start processes. The other way would be through orchestration tools, similar to Ansible before Ansible existed, right? That essentially just ran SSH on a number of servers to start up these processes, and that is how you’d get distributed systems prior to things like cloud native and containers. [0:34:20.6] CC: Makes sense. [0:34:21.6] NL: And so before we had vSphere, we already had XenServer. Before we had virtual machine automation, which is what these tools are, virtual machine automation, we had bare metal. We just had Joes like Duffie and me cutting our hands on rack equipment. 
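The pre-container pattern Nick describes, an Ansible-style tool fanning a start command out over SSH, looks roughly like this toy loop. The host names and command are made up, and the ssh invocation is stubbed with a local echo so the sketch actually runs.

```python
# Toy version of SSH-driven orchestration: loop over hosts and run the same
# start command on each. Hosts and the command are invented; a real tool
# would execute `ssh <host> <command>`, but here we stub that with a local
# echo so the sketch is runnable anywhere.
import subprocess

hosts = ["web-01", "web-02", "db-01"]

def start_service(host, command="systemctl start myapp"):
    # Stand-in for: subprocess.run(["ssh", host, command], ...)
    result = subprocess.run(
        ["echo", f"[{host}] {command}"], capture_output=True, text=True
    )
    return result.stdout.strip()

for line in (start_service(h) for h in hosts):
    print(line)
```

Every host is touched imperatively and serially; nothing reconciles state afterward, which is exactly the gap declarative orchestrators later filled.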
A server was never installed properly unless my blood was on it, essentially, because they are heavy and it is all metal and it is sharp in some capacity, and so you would inevitably squash a hand or something. And so you’d rack up the server and then you’d plug in all the things, plug in all the plugs, and then set it up manually, and then it is good to go and then someone can use it at will, or in a logical manner hopefully, and that is what we had to do before. It’s like, “Oh, I need a new compute resource? Okay well, let us call Circuit City or whoever, let us call Newegg and get a new server in, and there you go,” and then there is a process for that. And yeah, I think, I don’t know, I’m dying of blood loss over here. [0:35:22.6] CC: And still a lot of companies are using bare metal as a matter of course, which brings up another question, if one would want to ask, which is: is it worth it these days to run bare metal if you have the clouds, right? One example is we see companies like Uber, Lyft, all of these super high volume companies using public clouds, which is to say paying another company for all the traffic of data, storage of data, and compute and security and everything. And you know, one could say you would save a ton of money to have that in house on bare metal, and other people would say there is no way, that it would cost so much to host all of that. [0:36:29.1] NL: I would say it really depends on the environment. So usually I think that if a company is using only cloud native resources to begin with, it is hard to make the transition into bare metal, because you are used to these tools being in place. 
A company that is more familiar, like they came from a bare metal background and moved to cloud, may leverage both in a hybrid fashion, because they should have tooling, or they can now create tooling, so that their bare metal environment can mimic the functionality of the cloud native platform. It really depends on the company, and also the need for security and data retention and all of these things. If you need this granularity of control, bare metal might be a better approach, because if you need to make sure that your data doesn’t leave your company in a specific way, putting it on somebody else’s computer probably isn’t the best way to handle that. So there is a balancing act: how many resources are you using in the cloud, and how are you using the cloud, and what does that cost look like, versus what does it cost to run a data center, to have the physical real estate and then run the electricity, the HVAC, people’s jobs, like their salaries specifically to manage that location and your compute and network resources and all of these things. That is a question that each company will have to ask. There is no real hard and fast answer, but for many smaller companies something like the cloud is better, because you don’t have to think of all these other costs associated with running your application. [0:38:04.2] CC: But then if you are a small company. I mean, if you are a small company it is a no brainer. It makes sense for you to go to the cloud, but then you said that it is hard to transition from the cloud to bare metal. [0:38:17.3] NL: It can be, and it really depends on the people that you have working for you. If the people you have working for you are good at creating automation and are good at managing infrastructure of any kind, it shouldn’t be too bad, but as we’re moving more and more into a cloud focused world, I wonder if those people are going to start going away. 
[0:38:38.8] CC: For people who are just listening on the podcast, Duffie was heavily nodding as Nick was saying that. [0:38:46.5] DC: I was. I do, I completely agree with the statement that it depends on the people that you have, right? And fundamentally, the point I would add to this is: a company like Uber, or a company like Lyft, how many people do they employ that are focused on infrastructure, right? I don’t know the answer to this question, but I am positing it, right? And this is a pretty hot market for people who understand infrastructure, so they are not going to have a ton of them. So what specific problems do they want those people that they employ, that are focused on infrastructure, to solve, right? And if we are thinking about this as a scale problem: I have 10 people that are going to manage the infrastructure for all of the applications that Uber has, maybe I have 20 people, but it is going to be a small percentage of the people compared to the number of people that I have developing those applications, or doing marketing, or whatever else within the company. I am going to quickly find myself off balance in the number of applications that I need to support, that I need to operationalize to get deployed, right? Versus the number of people that I have doing the infrastructure work to provide that base layer on which those applications will be deployed, right? I look around in the market and I see things like orchestration. I see things like Kubernetes and Mesos and technologies like that, that can provide kind of a multiplier, or even AWS or Azure or GCP. I have these things act as a multiplier for those 10 infrastructure folks that I have, right? Because now they are not trying to figure out how to store, you know, all of this data. 
They are not worried about data centers, they are not worried about servers, they are not worried about networking, right? They can actually have ten people that are decent at infrastructure, who can spin up very large amounts of infrastructure to satisfy the development and operational needs of a company the size of Uber, reasonably. But if we had to go back to a place where we had to do all of that in the physical world, rack those servers, deal with the power, deal with the cooling and the space, deal with all the cabling and all of that stuff, 10 people are probably not enough. [0:40:58.6] NL: Yeah, absolutely. To put it into numbers: if you are in the cloud native workspace, you may need 1% of your workforce dedicated to infrastructure, but if you are in the bare metal world, you might need 10 to 20% of your workforce dedicated just to running infrastructure, because the overall overhead of people’s work is so much greater, and a lot of it is focused on things that are tangible instead of things that are fundamental, like automation, right? So that 1% at Uber, if I am pulling this number totally out of nowhere, but that one percent, their job is usually focused around automating the allocation of resources and figuring out the tools that they can use to better leverage those clouds. In the bare metal environment, those people’s jobs are more like, “Oh crap, suddenly we are at 90 degrees in the data center in San Jose, what is going on?” and then having someone go out there and figure out what physical problem is going on. That is their day to day lives. And something I want to touch on really quickly as well with bare metal: prior to bare metal, we had something kind of interesting that we were able to leverage from an infrastructure standpoint, and that is mainframes, and we are actually going back a little bit to the mainframe idea. The idea of a mainframe is essentially, it is almost like a cloud native technology, but not really. 
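Nick's back-of-the-envelope staffing numbers work out like this for a hypothetical workforce. The 2,000-person company is invented; the 1% and 10 to 20% figures are the ones he quotes above.

```python
# Back-of-the-envelope staffing arithmetic from the discussion above.
# The 2,000-person workforce is a made-up example; the percentages are
# the rough figures quoted in the conversation, not measured data.
workforce = 2000

cloud_native_infra = int(workforce * 0.01)     # ~1% on infrastructure
bare_metal_infra_low = int(workforce * 0.10)   # 10% of workforce...
bare_metal_infra_high = int(workforce * 0.20)  # ...up to 20%

print(cloud_native_infra)                             # prints: 20
print(bare_metal_infra_low, bare_metal_infra_high)    # prints: 200 400
```

Twenty people with API-driven multipliers versus two to four hundred racking and cabling: that is the difference the orchestration layer buys.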
So instead of, you know, being able to spin up X number of resources and networking and making that work, the way a mainframe worked is that everyone used the same compute resource. It was just one giant compute resource. There is no networking needed, because everyone is connected to the same resource, and they would just do whatever they needed: write their code, run it, be done with it, and then come off. And it was a really interesting idea, I think, where the cloud almost mimics the mainframe idea, where everyone just connects to the cloud, does whatever they need, and then comes off, but at a different scale. Duffie, do you have any thoughts on that? [0:43:05.4] DC: Yeah, I agree with your point. I think it is interesting to go back to mainframe days, and I think the difference between what a mainframe is versus the idea of a cluster and those sorts of things is kind of the domain of what you’re looking at. A mainframe considers everything to be within the same physical domain, whereas when you start getting into the larger architectures, or some of the more scalable architectures, you find that, just like any distributed system, we are spreading that work across a number of physical nodes. And so we think about it fundamentally differently, but the parallels between the work that we are doing today and what we were doing in mainframe times are interesting. [0:43:43.4] NL: Yeah, cool. I think we are getting close to wrap up time, but something that I wanted to touch on rather quickly: we have mentioned these things by name, but I want to go over some of the cloud native infrastructures that we use on a day to day basis. Something we mentioned before, Amazon’s AWS, is pretty much the number one cloud, I’m pretty sure, right? It has the most users, and they have a really well structured API. 
A really good CLI and GUI, sometimes there are some problems with it, and it is the thing that people think of when they think cloud native infrastructure. Going back to that point again: agreed, AWS is one of the largest cloud providers and certainly has the most adoption as a cloud native infrastructure, and it is really interesting to see a number of solutions out there. IBM has one, GCP, Azure, there are a lot of other solutions out there now that are really focused on trying to follow the same, maybe not the exact same patterns that AWS has, but certainly providing this consistent API for all of the same resources, or all of the same services, and maybe some differentiating services as well. So yeah, it is definitely sort of like follow the leader. You can definitely tell that AWS stumbled onto a really great solution there and that all of the other cloud providers are jumping on board trying to get a piece of that as well. [0:45:07.0] DC: Yep. [0:45:07.8] NL: And also something we can touch on a little bit as well, from a cloud native infrastructure standpoint: it isn’t wrong to say that a bare metal data center can be a cloud native infrastructure. As long as you have the tooling in place, you can have your own cloud native infrastructure, your own private cloud essentially, and I know that private cloud doesn’t actually make any sense, that is not how cloud works, but you can have a cloud native infrastructure in your own data center. It takes a lot of work, and it takes a lot of management, but it isn’t something that exists solely in the realm of Amazon, IBM, Google or Microsoft and, I can’t remember the other names, you know, the ones that are running clouds as well. 
[0:45:46.5] DC: Yeah, agreed. And actually, one of the questions you asked earlier, Carlisia, that I didn’t get a chance to answer was: do I think it is worth running bare metal today? In my opinion the answer will always be yes, right? Especially if the line that we draw in the sand is container isolation or container orchestration, then there will always be a good reason to run on bare metal, basically to expose resources that are available to us on a single bare metal instance. Things like GPUs or other physical resources like that, or maybe we just need really, really fast disk and we want to make sure that we provide those containers access to the SSDs underneath. And there is technology, certainly introduced by VMware, that exposes real hardware from the underlying hypervisor up to the virtual machine where these particular containers might run. But I always come back to the idea that, when thinking about those levels of abstraction that provide access to resources like GPUs and those sorts of things, you have to consider that simplicity is king here, right? As we think about the different fault domains, or the failure domains, as we are coming up with these sorts of infrastructures, we have to think about what it would look like when they fail, or how they fail, or how we actually go about allocating those resources to each of the machines, and that is why I think that bare metal and technologies like that are not going away. I think they will always be around. But to the next point, and I think as we covered pretty thoroughly in this episode, having an API between me and the infrastructure is not something I am willing to give up. I need that to be able to actually solve my problems at scale, even reasonable scale. 
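One concrete form of the hardware exposure Duffie mentions: a container asking the orchestrator for a GPU through the API rather than owning the whole box. The pod and image names below are invented; `nvidia.com/gpu` is the resource key conventionally registered by NVIDIA's Kubernetes device plugin.

```python
# A Kubernetes Pod requesting a GPU through the API, written as a Python dict
# for illustration. The pod name and image are invented; "nvidia.com/gpu" is
# the extended-resource name NVIDIA's device plugin conventionally registers,
# which lets the scheduler place the pod on a bare-metal node that has one.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "trainer"},
    "spec": {
        "containers": [{
            "name": "trainer",
            "image": "example/train:latest",
            "resources": {"limits": {"nvidia.com/gpu": 1}},
        }]
    },
}

print(pod["spec"]["containers"][0]["resources"]["limits"]["nvidia.com/gpu"])
```

The hardware stays physical and scarce; the API is still the thing you talk to, which is the point Duffie is making.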
[0:47:26.4] NL: Yeah, you mean you don’t want to go back to the bad old days of telnetting into a Juniper switch and setting up your IP, not iptables, it was your IP config commands. [0:47:39.9] DC: Did you just say Telnet? [0:47:41.8] NL: I said Telnet, yeah. Or serial, serial connecting into it. [0:47:46.8] DC: Nice, yeah. [0:47:48.6] NL: All right, I think that pretty much covers it for cloud native infrastructure. Do you all have any finishing thoughts on the topic? [0:47:54.9] CC: No, this was great. Very informative. [0:47:58.1] DC: Yeah, I had a great time. This is a topic that I very much enjoy. Things like Kubernetes and the cloud native infrastructure that we exist in now are what I always wanted us to get to. When I was in university I was like, “Oh man, someday we are going to live in a world with virtual machines,” and I didn’t even have the idea of containers, where people can really easily deploy applications. I was amazed that we weren’t there yet, and I am so happy to be in this world now. Not to say that I think we can stop, that we don’t need to keep improving, of course not. We are not at the end of the journey by far, but I am so happy with where we are right now. [0:48:34.2] CC: As a developer I have to say I am too, and I had this thought in my mind as we were having this conversation, that I am so happy too that we are where we are. Obviously not everybody is there yet, but as people start practicing cloud native development, they will come to realize what it is that we are talking about. I mean, as I said before, I remember the days when, for me to get access to a server, I had to file a ticket and wait for somebody’s approval. Maybe I wouldn’t get an approval, and when I say me, I mean my team; one of us would do the request and see to that. Then you had that server and everybody was pushing to the same server, like one main version of our app, and maybe every day we would get a new fresh copy. 
The way it is now, I don’t have to depend on anyone, and yes, it is a little bit of work to have to run this choreography to put up the clusters, but it is so good that it’s all on me. [0:49:48.6] DC: Yeah, exactly. [0:49:49.7] CC: I am selfish that way. No, seriously, I don’t have to wait for code to merge and be pushed. It is my code that is right there, sitting on my computer. I am pushing it to the cloud, boom, I can test it. Sometimes I can test it on my machine, but some things I can’t. So when I am doing volumes and I have to push it to the cloud provider, it is amazing. I don’t know, I feel very autonomous and that makes me very happy. [0:50:18.7] NL: I totally agree, for that exact reason. Like testing things out: maybe something isn’t ready for somebody else to see, it is not ready for prime time, so I need something really quick to test it out. But also for me, I am the avatar of unwitting chaos, meaning basically everything I touch will eventually blow up. So it is also nice that whatever I do that’s weird isn’t going to affect anybody else either, and that is great. Blast radius is amazing. All right, so I think that pretty much wraps it up for our cloud native infrastructure episode. I had an awesome time. This is always fun, so please send us your concepts and ideas in the GitHub issue tracker. You can see our existing episodes and suggestions, and you can add your own, at github.com/heptio/thekubelets. Go to the issues tab and file a new issue or see what is already there. All right, we’ll see you next time. [0:51:09.9] DC: Thank you. [0:51:10.8] CC: Thank you, bye. [END OF INTERVIEW] [0:51:13.9] KN: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the https://thepodlets.io website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END] See omnystudio.com/listener for privacy information.

The Podlets - A Cloud Native Podcast

Welcome to the first episode of The Podlets Podcast! On the show today we’re kicking it off with some introductions to who we all are, how we got involved with VMware and a bit about our career histories up to this point. We share our vision for this podcast and explain the unique angle from which we will approach our conversations, a way that will hopefully illuminate the concepts we discuss in a much greater way. We also dive into our various experiences with open source, share what some of our favorite projects have been, and then we define what the term “cloud native” means to each of us individually. The contribution that the Cloud Native Computing Foundation (CNCF) is making in the industry is amazing, and we talk about how they advocate for the projects they adopt and just generally impact the community. We are so excited to be on this podcast and to get feedback from you, so do follow us on Twitter and be sure to tune in for the next episode!

Note: our show changed name to The Podlets. Follow us: https://twitter.com/thepodlets

Hosts:
Carlisia Campos
Kris Nóva
Josh Rosso
Duffie Cooley
Nicholas Lane

Key Points from This Episode:
An introduction to us, our career histories and how we got into the cloud native realm.
Contributing to open source and everyone’s favorite project they have worked on.
What the purpose of this podcast is and the unique angle we will approach topics from.
The importance of understanding the “why” behind tools and concepts.
How we are going to be interacting with our audience and create a feedback loop.
Unpacking the term “cloud native” and what it means to each of us.
Differentiating between cloud native apps and cloud native infrastructure.
The ability to interact with APIs as the heart of cloud native.
More about the Cloud Native Computing Foundation (CNCF) and their role in the industry.
Some of the great things that happen when a project is donated to the CNCF. 
The code of conduct that you need to adopt to be part of the CNCF.
And much more!

Quotes:
“If you tell me the how before I understand what that even is, I’m going to forget.” — @carlisia [0:12:54]
“I firmly believe that you can’t – that you don’t understand a thing if you can’t teach it.” — @mauilion [0:13:51]
“When you’re designing software and you start your main function to be built around the cloud, or to be built around what the cloud enables us to do and the services a cloud can offer you, that is when you start to look at cloud native engineering.” — @krisnova [0:16:57]

Links Mentioned in Today’s Episode:
Kubernetes — https://kubernetes.io/
The Podlets on Twitter — https://twitter.com/thepodlets
VMware — https://www.vmware.com/
Nicholas Lane on LinkedIn — https://www.linkedin.com/in/nicholas-ross-lane
Red Hat — https://www.redhat.com/
CoreOS — https://coreos.com/
Duffie Cooley on LinkedIn — https://www.linkedin.com/in/mauilion
Apache Mesos — http://mesos.apache.org/
Kris Nova on LinkedIn — https://www.linkedin.com/in/kris-nova
SolidFire — https://www.solidfire.com/
NetApp — https://www.netapp.com/us/index.aspx
Microsoft Azure — https://azure.microsoft.com/
Carlisia Campos on LinkedIn — https://www.linkedin.com/in/carlisia
Fastly — https://www.fastly.com/
FreeBSD — https://www.freebsd.org/
OpenStack — https://www.openstack.org/
Open vSwitch — https://www.openvswitch.org/
Istio — https://istio.io/
The Kubelets on GitHub — https://github.com/heptio/thekubelets
Cloud Native Infrastructure on Amazon — https://www.amazon.com/Cloud-Native-Infrastructure-Applications-Environment/dp/1491984309
Cloud Native Computing Foundation — https://www.cncf.io/
Terraform — https://www.terraform.io/
KubeCon — https://www.cncf.io/community/kubecon-cloudnativecon-events/
The Linux Foundation — https://www.linuxfoundation.org/
Sysdig — https://sysdig.com/opensource/falco/
OpenEBS — https://openebs.io/
Aaron Crickenberger — https://twitter.com/spiffxp

Transcript:

[INTRODUCTION]

[0:00:08.1] ANNOUNCER: Welcome to 
The Podlets Podcast, a weekly show that explores cloud native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.3] KN: Welcome to the podcast. [0:00:42.5] NL: Hi. I’m Nicholas Lane. I’m a cloud native architect. [0:00:45.0] CC: Who do you work for, Nicholas? [0:00:47.3] NL: I work for VMware, formerly of Heptio. [0:00:50.5] KN: I think we’re all VMware, formerly Heptio, aren’t we? [0:00:52.5] NL: Yes. [0:00:54.0] CC: That is correct. It just happened that way. Now Nick, why don’t you tell us how you got into this space? [0:01:02.4] NL: Okay. I originally got into the cloud native realm working for Red Hat as a consultant. At the time, I was doing OpenShift consultancy. Then my boss, Paul, Paul London, left Red Hat and I decided to follow him to CoreOS, where I met Duffie and Josh. We were on the field engineering team there, the sales engineering team. Then from there, I found myself at Heptio and now with VMware. Duffie, how about you? [0:01:30.3] DC: My name is Duffie Cooley. I’m also a cloud native architect at VMware, also recently Heptio and CoreOS. I’ve been working in technologies like cloud native for quite a few years now. I started my journey moving from virtual machines into containers with Mesos. I spent some time working on Mesos and actually worked with a team of really smart individuals to try and develop an API in front of that crazy Mesos thing. Then we realized, “Well, why are we doing this? There is one that’s called Kubernetes. We should jump on that.” That’s the direction that my time with containerization and cloud native stuff has taken. How about you, Josh? [0:02:07.2] JR: Hey, I’m Josh. 
I, similar to Duffie and Nicholas, came from CoreOS and then to Heptio and then eventually VMware. I actually got my start in the middleware business, oddly enough, where we worked on the Egregious Spaghetti Box, or the ESB as it’s formally known. I got to see over time how folks were doing a lot of these, I guess, more legacy monolithic applications, and that sparked my interest into learning a bit about some of the cloud native things that were going on. At the time, CoreOS was at the forefront of that. It was a natural progression based on the interests, and I had a really good time working at Heptio with a lot of the folks that are on this call right now. Kris, you want to give us an intro? [0:02:48.4] KN: Sure. Hi, everyone. Kris Nova. I’ve been doing SRE, DevOps and infrastructure for about a decade now. I used to live in Boulder, Colorado. I came out of a couple startups there. I worked at SolidFire, we went to NetApp. I used to work on the Linux kernel there some. Then I was at Deis for a while, when I first started contributing to Kubernetes. We got bought by Microsoft. I left Microsoft, the Azure team, where I was working on the original managed Kubernetes. Left that team, joined up with Heptio, met all of these fabulous folks. I think I wrote a book and I’ve been doing a lot of public speaking and some other junk along the way. Yeah. Hi. What about you, Carlisia? [0:03:28.2] CC: All right. I think it’s really interesting that all the guys are lined up on one call and all the girls on another call. [0:03:34.1] NL: We should have probably broken it up more. [0:03:36.4] CC: I am a developer and have always been a developer. Before joining Heptio, I was working for Fastly, which is a CDN company, helping them build the latest generation of their TLS management system. At some point during my stay there, Kevin Stuart was posting on Twitter; he had joined Heptio. At this point, Heptio was, I don’t know, between six months and a year old. 
I saw those tweets go by and I’m like, “Yeah, that sounds interesting, but I’m happy where I am.” I have a very good friend, Kennedy actually. He saw those tweets and he kept saying to me, “You should apply. You should apply, because they are great people. They do great things. Kubernetes is so hot.” I’m like, “I’m happy where I am.” Eventually, I contacted Kevin and he also said, “Yeah, it would be a perfect match.” Two months later I decided to apply. I did think that Kubernetes was really hard, but my decision came down to two things. The people are amazing, and some people who were working there I already knew from previous opportunities. Some of the people that I knew – I mean, I love everyone. The other thing was that it was an opportunity for me to work with open source. I definitely could not pass that up. I could not be happier to have made that decision, and now with VMware acquiring Heptio, like everybody here I’m at VMware. Still happy. [0:05:19.7] KN: Has everybody here contributed to open source before? [0:05:22.9] NL: Yup, I have. [0:05:24.0] KN: What’s everybody’s favorite project they’ve worked on? [0:05:26.4] NL: That’s an interesting question. From a business aspect, I really like Dex. Dex is an identity provider, or a middleware for identity providers. It provides an OIDC endpoint for multiple different identity providers, so you can absorb them into Kubernetes. Since Kubernetes only accepts OIDC JWTs for authentication, that functionality that Dex provides is probably my favorite thing. Although, if I’m going to be truly honest, I think right now the thing that I’m the most excited about working on is my own project, which ties into my interest in doing chaos engineering. What about you guys? What’s your favorite? [0:06:06.3] KN: I understood some of those words. NL: Those are things we’ll touch on in different episodes. [0:06:12.0] KN: Yeah. 
I worked on FreeBSD for a while. That was my first welcome to open source. I mean, that was back in the olden days of IRC clients and writing C. I had a lot of fun, and I'm still really close with a lot of folks in the FreeBSD community, so that always has a special place in my heart. That was my first experience of, "Oh, this is how you work on a team, and you work collaboratively together, and it's okay to fail and be open." [0:06:39.5] NL: Nice. [0:06:40.2] KN: What about you, Josh? [0:06:41.2] JR: I worked on a project at CoreOS. Well, a project that's still out there, called the ALB ingress controller. It was a way to bring the AWS ALBs, which are just layer 7 load balancers, together with the Kubernetes ingress API, attaching those two so that the ALB could serve ingress. The reason that it was the most interesting, technology aside, is just that it went from something that we started, just myself and a colleague, and eventually gained community adoption. We had to go through the process of just being us two worrying about our own concerns, to having to bring on a large community that had their own business requirements and needs, and having to say no at times, and having to encourage individuals to contribute when they had ideas and issues, because we didn't have the bandwidth to solve all those problems. It was interesting not necessarily from a technical standpoint, but just to see what it actually means when something starts to gain traction. That was really cool. Yeah, how about you, Duffie? [0:07:39.7] DC: I've worked on a number of projects, but I find that generally where I fit into the ecosystem is basically helping other people adopt open source technologies. I spent quite a bit of my time working on OpenStack, and I spent some time working on Open vSwitch, and recently on Kubernetes.
Generally speaking, I haven't found myself to be much of a contributor of code to those projects per se; more that my work is just enabling people to adopt those technologies, because I understand the breadth of the project more than the detail of some particular aspect. Lately, I've been spending some time working more on the SIG Network and SIG Cluster Lifecycle stuff. Some of the projects that have really caught my interest are things like kind, which is Kubernetes in Docker, and working on kubeadm itself, just making sure that we don't miss anything obvious in the way that kubeadm is being used to manage the infrastructure. [0:08:34.2] KN: What about you, Carlisia? [0:08:36.0] CC: It happens that what I'm working on at VMware is coincidentally the open source project that is my favorite. I didn't have a lot of experience with open source, just minor contributions here and there before this project. I'm working on Velero. It's a disaster recovery tool for Kubernetes. Like I said, it's open source. We're coming up on version 1 pretty soon. The other maintainers are amazing, super knowledgeable and very experienced, mature. It is such a joy to work with them. My favorite. [0:09:13.4] NL: That's awesome. [0:09:14.7] DC: Should we get into the concept of cloud native and start talking about what we each think of this thing? Seems like a pretty loaded topic. There are a lot of people who would think of cloud native as just a generic term; we should probably try and nail it down here. [0:09:27.9] KN: I'm excited for this one. [0:09:30.1] CC: Maybe we should talk about what this podcast show is going to be? [0:09:34.9] NL: Sure. Yeah. Totally. [0:09:37.9] CC: Since this is our first episode. [0:09:37.8] NL: Carlisia, why don't you tell us a little bit about the podcast? [0:09:40.4] CC: I will be glad to. The idea that we had was to have a show where we can discuss cloud native concepts.
As opposed to talking about particular tools or particular projects, we are going to aim to talk about the concepts themselves, and approach them from the perspective of a distributed system idea, or issue, or a cloud native concept. From there, we can talk about what this problem really is, what people or companies have this problem, what the usual solutions are, and what the alternative ways to solve this problem are. Then we can talk about tools that are out there that people can use. I don't think there is a show that approaches things from this angle. I'm really excited about bringing this to the community. [0:10:38.9] KN: It's almost like TGIK, but turned inside out, or flipped around, where in TGIK we do tools first and we talk about what exactly this tool is and how you use it. I think in this one, we're spinning that around and we're saying, "No, let's pick a broader idea and then let's explore all the different possibilities with this broader idea." [0:10:59.2] CC: Yeah, I would say so. [0:11:01.0] JR: From the field standpoint, I think this is something we oftentimes run into with people who are just getting started with larger projects, like Kubernetes perhaps, or anything really, where a lot of times they hear something like the word Istio come out, or some technology. Oftentimes, the why behind it isn't really considered upfront; it's just this tool exists, it's being talked about, clearly we need to start looking at it. Really diving into the concepts and the why behind it hopefully will bring some light to a lot of these things that we're all talking about day-to-day. [0:11:31.6] CC: Yeah. Really focusing on the what and the why. The how is secondary. That's what my vision of this show is. [0:11:41.7] KN: I like it. [0:11:43.0] NL: That's something that really excites me, because there are a lot of these concepts that I talk about in my day-to-day life, but some of them, I don't actually think that I understand very well.
It's those words that you've heard a million times, so you know how to use them, but you don't actually know the definition of them. [0:11:57.1] CC: I'm super glad to hear you say that, mister, because as a developer not having a sysadmin background – of course, I did sysadmin things as a developer, but it wasn't my day-to-day thing ever – when I started working with Kubernetes, a lot of things I didn't quite grasp, and that's a super understatement. I mean, I can ask questions. No problem. I will dig through and find out and learn. The problem is that in talking to experts, a lot of the time – I think, but let me talk about myself – a lot of the time when I ask a question, the experts jump right to the how. What is this? "Oh, this is how you do it." I don't know what this is. Back off a little bit, right? Back up. I don't know what this is. Why is it doing this? I don't know. If you tell me the how before I understand what that even is, I'm going to forget. That's what's going to happen. I mean, it's great you're trying to make an effort and show me how to do something, but this is personal, the way I learn. I need to understand the what first. This is why I'm so excited about this show. It's going to be awesome. This is what we're going to talk about. [0:13:19.2] DC: Yeah, I agree. This is definitely one of the things that excites me about this topic as well. I find my secret superpower is troubleshooting. That means being able to understand what the expected relationships between things should be, right? Without really digging into the actual problem and what the people who were developing the code were trying to solve, or how they thought about it, it's hard to get to the point where you fully understand that distributed system. I think this is a great place to start.
The other thing I'll say is that I firmly believe that you don't understand a thing if you can't teach it. This podcast for me is about that. Let's bring up all the questions, and we should enable our audience to ask us questions somehow, and get to a place where we can get as many perspectives on a problem as we can, such that we can really dig into the detail of what the problem is before we ever talk about how to solve it. Good stuff. [0:14:18.4] CC: Yeah, absolutely. [0:14:19.8] KN: Speaking of a feedback loop from our audience, and taking the problem first and the solution second, how do we plan on interacting with our audience? Do we want to maybe start a GitHub repo, or what are we thinking? [0:14:34.2] NL: I think a GitHub repo makes a lot of sense. I also wouldn't mind doing some social media malarkey, maybe having a Twitter account that we run or something like that, where people can ask questions too. [0:14:46.5] CC: Yes. Yes to all of that. Yeah. Having an issue list in a repo that people can just add comments, praise, thank-yous, questions, suggestions for concepts to talk about, and say, "Hey, I have no clue what this means. Can you all talk about it?" Yeah, we'll talk about it. Twitter. Yes. Interact with us on Twitter. I believe our Twitter handle is TheKubelets. [0:15:12.1] KN: Oh, we already have one. Nice. [0:15:12.4] NL: Yes. See, I'm learning something new already. [0:15:15.3] CC: We already have. I thought you all were joking. We have a GitHub repo called – [0:15:22.8] NL: Oh, perfect. [0:15:23.4] CC: heptio/thekubelets. [0:15:27.5] DC: The other thing I like that we do in TGIK is this HackMD thing. Although, I'm trying to figure out how we could really make that work for us in a show that's recorded every week like this one.
I think maybe what we could do is have it so that when people listen to the recording, they could go to the HackMD document, put in questions, or comments around things they would like to hear more about, or maybe share their perspectives on these topics. Then in the following week, we could just go back and review what came in during that period of time, or during the next session. [0:15:57.7] KN: Yeah. Maybe we merge the HackMD comments on the next recording. [0:16:01.8] DC: Yeah. [0:16:03.3] KN: Okay. I like it. [0:16:03.6] DC: Josh, you have any thoughts? Friendster, MySpace, anything like that? [0:16:07.2] JR: No. I think we could pass on MySpace for now, but everything else sounds great. [0:16:13.4] DC: Do we want to get into the meat of the episode? [0:16:15.3] KN: Yeah. [0:16:17.2] DC: Our true topic: what does cloud native mean to all of us? Kris, I'm interested to hear your thoughts on this. You might have written a book about this? [0:16:28.3] KN: I co-authored a book called Cloud Native Infrastructure. Cloud native means a lot of things to a lot of people. It's one of those umbrella terms, like DevOps. It's up to you to interpret it. I think in the past couple of years of working in the cloud native space, and working directly with the CNCF as a CNCF ambassador – the Cloud Native Computing Foundation, they're the open source nonprofit folks behind this term cloud native – the best definition I've been able to come up with is: when you're designing software and you start your main function to be built around the cloud, or to be built around what the cloud enables us to do and the services a cloud has to offer you, that is when you start to look at cloud native engineering. I think all cloud native infrastructure is, is designing software that manages and mutates infrastructure in that same way. I think the underlying theme here is we're no longer catting configuration files to disk and doing systemd restarts.
Now we're just sending HTTPS API requests and getting messages back. Hopefully, if the cloud has done what we expect it to do, that broadcasts some broader change. As software engineers, we can count on those guarantees to design our software around. I really think that you need to understand that it's starting with the main function first and completely engineering your app around these new ideas and these new paradigms, and not necessarily a migration of a non-cloud native app. I mean, you technically could go through and do it. Sure, we've seen a lot of people do it, but I don't think that's technically cloud native. That's cloud alien. Yeah. I don't know. That's just my thought. [0:18:10.0] DC: Are you saying that the cloud native approach is a greenfield approach, generally? To be a cloud native application, you're going to take that into account in the DNA of your application? [0:18:20.8] KN: Right. That's exactly what I'm saying. [0:18:23.1] CC: It's interesting that you mentioned cloud alien, because that plays into the way I would describe the meaning of cloud native. I think Nova described it beautifully, and it really shows her know-how. For me, if I have to describe it, I will just parrot things that I have read, including her book. What it means to me, really – I'm going to use a metaphor to explain what it means to me. Given my accent, I'm obviously not American-born, and so I'm a foreigner. Although I do speak English pretty well, I'm not native. English is not my native tongue. I speak English really well, but there are certain hiccups that I'm going to have every once in a while. There are things that I'm not going to know how to say, or it's going to take me a bit long to remember. I rarely run into not understanding something in English, but it happens sometimes. It's the same with a cloud native application.
If it hasn't been built to run on cloud native platforms and systems, you can migrate an application to a cloud native environment, but it's not going to fully utilize the environment like a native app would. That's my take. [0:19:56.3] KN: Cloud immigrant. [0:19:57.9] CC: Cloud immigrant. Is Nick a cloud alien? [0:20:01.1] KN: Yeah. [0:20:02.8] CC: Are they cloud native aliens? Yeah. [0:20:07.1] JR: On that point, I'd be curious if you all feel there is a need to discern the notion of cloud native infrastructure, or platforms, from the notion of cloud native apps themselves. Where I'm going with this – it's funny hearing the greenfield thing and what you said, Carlisia, with the immigration notion, if you will. Oftentimes, you see these very cloud native platforms, things like Kubernetes, or even Mesos, or whatever it might be. Then you see the applications themselves. Some people are using these platforms that are cloud native as a forcing function, to make a lot of their legacy stuff adopt more cloud native principles, right? There's this push and pull. It's like, "Do I make my app more cloud native? Do I make my infrastructure more cloud native? Do I do them both at the same time?" I'd be curious what your thoughts are on that, or if that resonates with you at all. [0:21:00.4] KN: I've got a response here, if I can jump in. Of course, Nova with opinions. Who would have thought? I think what I'm hearing here, Josh, is that as we're using these cloud native platforms, we're forcing the hand of our engineers. In a world where we may be used to just sending this blind DNS request out to whatever, and we would be ignorant of where that was going, now in the cloud native world, we know there's a specific DNS implementation that we can count on. It has this feature set that we can guarantee our software around.
I think it's a little bit of both, and I think that there is definitely an art to understanding, yes, this is a good idea to do for both applications and infrastructure. I think that's where you get into what it means to be a cloud native engineer. Just like in a traditional legacy infrastructure stack, there are going to be good engineering choices you can make and there are going to be bad ones, and there are many different schools of thought over: do I go minimalist? Do I go all in at once? What does that mean? I think we're seeing a lot of folks try a lot of different patterns here. I think there are pros and cons, though. [0:22:03.9] CC: Do you want to talk about those pros and cons? Do you see patterns that are more successful for some kinds of companies versus others? [0:22:11.1] KN: I mean, going back to the greenfield thing that we were talking about earlier, I think if you are lucky enough to build out a greenfield application, you're able to bake in greenfield infrastructure management as well. That's where you get these really interesting hybrid applications, just like Kubernetes, that span the course of infrastructure and application. If we were to go into Kubernetes and say, "I want to define a service of type LoadBalancer," it's actually going to go and create a load balancer for you and actually mutate that underlying infrastructure. The only way we were able to get that power and that paradigm is because on day one, we said we're going to do that as software engineers. In the past, the infrastructure was hidden behind the firewall, or hidden behind the load balancer; the software would have no way to reason about it. It was blind. Greenfield really is going to make or break your ability to even take advantage of the infrastructure layers.
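Kris's LoadBalancer example is worth making concrete. A Service manifest is just data handed to the API; a controller watching the API reacts by mutating real infrastructure, and then reports what it did back through the object's status. The sketch below fakes that flow in a few lines (the manifest shape follows Kubernetes conventions; the "cloud" dict and the controller logic are stand-ins, not the real cloud-controller-manager):

```python
# A Service of type LoadBalancer, expressed as plain data, the way the
# Kubernetes API would receive it.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {"type": "LoadBalancer", "ports": [{"port": 80}]},
}

def reconcile_service(svc: dict, cloud: dict) -> dict:
    """If the service asks for a load balancer, 'provision' one in a fake cloud.

    A real controller would call a cloud provider's API here; this toy just
    records the load balancer and writes the result back into the status.
    """
    if svc["spec"]["type"] == "LoadBalancer":
        name = svc["metadata"]["name"]
        cloud.setdefault("load_balancers", {})[name] = {"ports": svc["spec"]["ports"]}
        # The controller reports what it did through the object's status.
        svc["status"] = {"loadBalancer": {"ingress": [{"ip": "203.0.113.10"}]}}
    return svc
```

The point of the pattern is that the application-facing API (the manifest) and the infrastructure mutation (the load balancer) are connected by software, not by a ticket to an operations team.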
[0:23:04.3] NL: I think that's a good distinction to make, because something that I've been seeing in the field a lot is that users will follow cloud native practices, but they'll use a tool to do the cloud native part for them, right? They'll use something along the lines of HashiCorp's Terraform to create the VMs and the load balancers for them. Something I think people forget is that the applications themselves can ask for these resources as well. Terraform is just using an API, and your code can use the same API, in fact. I think that's an important distinction. It forces the developer to think a little bit like a sysadmin sometimes. I think that's a good melding of dev and operations into this new word. Regrettably, that word doesn't exist right now. [0:23:51.2] KN: That word can be cloud native. [0:23:53.3] DC: Cloud native to me breaks down into a different set of topics as well. I remember seeing a talk by Brandon Philips a few years ago. In his talk, he had some numbers up on the screen, and he was talking about the fact that we were going to quickly become overwhelmed by the desire to continue to develop and put out more applications for our users. His point was that every day, there's another 10,000 new users of the Internet, new consumers showing up on the Internet, right? Globally, I think there are only something to the tune of about 350,000 of the people in this room, right? People who understand infrastructure, people who understand how to interact with applications, or how to build them, those sorts of things. There really aren't a lot of people in that space today, right? We're surrounded by them all the time, but globally there really just aren't that many. His point is that if we don't radically change the way that we think about the development, the deployment and the management of all of these applications that we're looking at today, we're going to quickly be overrun, right?
There aren't going to be enough people on the planet to solve that problem without thinking about it in a fundamentally different way. For me, that's where the cloud native piece comes in. With that comes a set of primitives, right? You need some way to automate, or to write software that will manage other software. You need the ability to manage the lifecycle of that software in a resilient way. There are lots of platforms out there that have thought about this problem, right? There are things like Mesos, there are things like Kubernetes. There have been a number of different shots on goal here, lots of things that have really tried to think about that problem in a fundamentally different way. I think of those primitives as being able to actually manage the lifecycle of software; being able to think about packaging that software in such a way that it can be truly portable; the idea that you have some API abstraction that brings, again, that portability, such that you can make use of resources that may not be hosted on your own personal infrastructure, but also in the cloud. How do we actually make that API contract so complete that you can just take that application anywhere? These are all part of that cloud native definition, in my opinion. [0:26:08.2] KN: This is so fascinating, because the human race totally already learned this lesson with the Linux kernel in the 90s, right? We had all these hardware manufacturers coming out and building all these different hardware components with different interfaces. Somebody said, "Hey, you know what? There's a lot of noise going on here. We should standardize these and build a contract." That contract then implemented control loops, just like in Kubernetes and in Mesos. Poof, we have the Linux kernel. Now we're just building the distributed Linux kernel, version 2.0. The human race is here repeating itself all over again. [0:26:41.7] NL: Yeah.
It seems like the blast radius of Linux kernel 2.0 is significantly higher than that of the Linux kernel itself. That made it sound like I was pooh-poohing what you're saying. It's more like, we're learning the same lesson, but at a grander scale now. [0:27:00.5] KN: Yeah. I think that's a really elegant way of putting it. [0:27:03.6] DC: You do raise a good point. If you are embarking on a cloud native infrastructure, remember that little changes are big changes, right? Because you're thinking about managing the lifecycle of a thousand applications now, right? If you're going full-on cloud native, you're thinking about operating at scale; it's a byproduct of that. Little changes that you might be able to make to your laptop are now big changes that are going to affect a fleet of a thousand machines, right? [0:27:30.0] KN: We see this in Kubernetes all the time, where a new version of Kubernetes comes out and something totally unexpected happens when it is run at scale. Maybe it worked on 10 nodes, but when we need to fire up a thousand nodes, what happens then? [0:27:42.0] NL: Yeah, absolutely. That actually brings up something that, to me, defines cloud native as well. A lot of my definition of cloud native follows suit with Kris Nova's book, because your book was what introduced me to the phrase cloud native. It makes sense that your opinion informs my opinion. Something that we were just starting to talk about a little bit is also the concept of stability. Cloud native applications and infrastructure mean coding with instability in mind. You're not guaranteed that your VM will live forever, because it's on somebody else's hardware, right? Their hardware could go down, so what do you do? Your application has to move over really quickly, and the guarantees of its API and its endpoints have to be the same no matter what.
All of these things have to exist for your application to live in the cloud. That's something that I find to be very fascinating, and that really excites me: not trying to make a barge, but rather trying to make a schooner when you're making an app. Something that, instead of taking on the waves, can be buffeted by the waves and still continue. [0:28:55.6] KN: Yeah. It's a little more reactive. I think we see this in Kubernetes a lot. When I interviewed Joe Beda a couple years ago for the book, to get a quote from him, he said this magic phrase that has stuck with me over the past few years, which is "goal-seeking behavior." If you look at Kubernetes objects, they all use this concept in Go called embedding. Every Kubernetes object has a status and a spec. The status is what's actually going on, versus the spec, which is what did I tell it, what do I want to go on. Then all we're doing, just like you said with your analogy, is trying to be reactive to that and build to that. [0:29:31.1] JR: That's something I wonder if people don't think about a lot. They think about the spec, but not the status part. I think the status part is as important, or maybe more important, than the spec. [0:29:41.3] KN: It totally is. Because I mean, if you have one potentiality for status, your control loop is going to be relatively trivial. As you start understanding more of the problems that you could see, your code starts to mature and harden, those statuses get more complex, you get more edge cases, and your code matures and your code hardens. Then we can take that globally into these larger cloud native patterns. It's really cool. [0:30:06.6] NL: Yeah. Carlisia, you're a developer who's now just getting into the cloud native ecosystem. What are your thoughts on developing with cloud native practices in mind? [0:30:17.7] CC: I'm not sure I can answer that.
When I started developing for Kubernetes, I was like, "What is a pod?" What comes first? How does this all fit together? I joined the project [inaudible 00:30:24]. I don't have to think about that. It's basically moving the project along. I don't have to think about what I have to do differently from the way I did things before. [0:30:45.1] DC: One thing that I think you probably ran into in working with the application is the management of state, and how that relates to where you actually end up coupling that state. Before, in development, you might just assume that there is a database somewhere that you would have to interact with. That database is a way of pushing that state off of the code that you're actually going to work with. In this way, you might write multiple consumers of state, or multiple things that are going to mutate state, all sharing that same database. This is one of the patterns that comes up all the time when we start talking about cloud native architectures, because we have to be really careful about how we manage that state, mainly because one of the other big benefits is the ability to horizontally scale things that are going to mutate, or consume, state. [0:31:37.5] CC: My brain is in its infancy as it relates to Kubernetes. All that I see is APIs all the way down. It's just APIs all the way down. It's not very different for me as a developer; it's not much more complex than developing against a database that sits behind it. Ask me again a year from now and I will have a more interesting answer. [0:32:08.7] KN: This is so fascinating, right? I remember a couple years ago when Kubernetes was first coming out, listening to some of the original "elders of Kubernetes," and even some of the stuff that we were working on at the time.
One of the things that they said was, we hope one day somebody doesn't have to care about what's past these APIs and gets to look at Kubernetes as APIs only. Then to hear that come from you authentically – it's like, "Hey, that's our success statement there. We nailed it." It's really cool. [0:32:37.9] CC: Yeah. I don't understand their patterns, and I probably should be more cognizant about what these patterns are, even if it's just to articulate them. To me, my day-to-day challenge is understanding the API, understanding which library call I make to make this happen – which is just programming 101, almost. Not different from any other regular project. [0:33:10.1] JR: Yeah. That is something that's nice about programming with Kubernetes in mind, because a lot of times you can use the source code as documentation. I hate to say that, particularly as a non-developer. I'm a sysadmin first getting into development, and documentation is key in my mind. There have been more than a few times where I'm like, "How do I do this?" You can look in the source code for pretty much any application that you're using that's in Kubernetes, or around the Kubernetes ecosystem. The API for that application is there, and it'll tell you what you need to do, right? It's like, "Oh, this is how you format your config file. Got it." [0:33:47.7] CC: At the same time, I don't want to minimize that knowing what the patterns are is very useful. I haven't had to do any design for Velero, for our project. Maybe if I had, I would have been forced to look into that. I'm still getting to know the codebase and developing features, but no major design that I had to lead, at least. I think with time, I will recognize those patterns, and it will make it easier for me to understand what is happening. What I was saying is that not understanding the patterns behind the design of those APIs doesn't preclude me at all from coding against them.
[0:34:30.0] KN: I feel this is the heart of cloud native. I think we totally nailed it. The heart of cloud native is in the APIs and your ability to interact with the APIs. That's what makes it programmable, and that's what gives you the interface for you and your software to interact with it. [0:34:45.1] DC: Yeah, I agree with that. API first. On the topic of cloud native, what about the Cloud Native Computing Foundation? What are our thoughts on the CNCF, and what is the CNCF? Josh, do you have any thoughts on that? [0:35:00.5] JR: Yeah. I haven't really been as close to the CNCF as I probably should be, to be honest with you. One of the great things that the CNCF has put together are programs around getting projects into this – I don't know if you would call it a vendor-neutral type program. Maybe somebody can correct me on that. Effectively, there are a lot of different categories, like networking and storage and runtimes for containers and things of that nature. There's a really cool landscape that can show off a lot of these different technologies. A lot of the categories, I'm guessing, we'll be talking about on this podcast too, right? Things like, what does it mean to do cloud native networking, and so on and so forth? That's my purview of the CNCF. Of course, they put on KubeCon, which is the most important thing to me. I'm sure someone else on this call can talk at a deeper organizational level about what they do. [0:35:50.5] KN: I'm happy to jump in here. I've been working with them for, I think, three years now. I think first, it's important to know that they are a subsidiary of the Linux Foundation. The Linux Foundation is the original open source nonprofit here, and the CNCF is one of many – Apache is another one – that are underneath the broader Linux Foundation umbrella. I think the whole point of the CNCF is to be this neutral party that can help us as we start to grow and mature the ecosystem. Obviously, money is going to be involved here.
Obviously, companies are going to be looking out for their best interests. It makes sense to have somebody managing the software who is outside, or external to, these revenue-driven companies. That's where I think the CNCF comes into play. I think that's its main responsibility: what happens when somebody from company A and somebody from company B disagree about the direction the software should go? The CNCF can come in and say, "Hey, you know what? Let's find a happy medium here, let's find a solution that works for both folks, and let's try to do this the best we can." I think a lot of this came from lessons we learned the hard way with Linux. In a weird way, we are in version 2.0, but we were able to take advantage of some of the prior art here. [0:37:05.4] NL: Do you have any examples of a time the CNCF jumped in and mediated between two companies? [0:37:11.6] KN: Yeah. I think the steering committee, the Kubernetes steering committee, is a great example of this. It's a relatively new thing. It hasn't been around for a very long time. You look at the history of Kubernetes, and we used to have this incubation process that has since been retired. We've tried a lot of solutions, and the CNCF has been pretty instrumental in guiding the shape of how we're going to solve governance for such a monolithic project. As Kubernetes grows, the problem space grows and more people get involved. We're having to come up with new ways of managing that. That's not necessarily a concrete example of two specific companies, but it's more that as people get involved, the things that used to work for us in the past are no longer working, and the CNCF is able to recognize that and guide us out of it. [0:37:57.2] DC: Cool. That's a very good perspective on the CNCF that I didn't have before. Because like Josh, my perspective on the CNCF was, well, they put on that really cool party three times a year.
[0:38:07.8] KN: I mean, they definitely are great at throwing parties. [0:38:12.6] NL: They are that. [0:38:14.1] CC: My perspective of the CNCF is from participating in the Kubernetes meetup here in San Diego. I'm trying to revive our meetup, which is really hard to do, but that's a different topic. I know that they try to make it easier for people to find meetups, because on meetup.com they have an organization. I don't know what the proper name is, but if you go there and you put in your zip code, you'll find any meetup that's associated with them. My meetup here in San Diego is associated, so it can be easily found. They try to give a little bit of money for swag. We also give out ads for the meetup. They offer help for finding speakers and they also have a speaker catalog on their website. They try to help in those ways, which I think is very helpful, very valuable. [0:39:14.9] DC: Yeah, I agree. I know about the CNCF mostly just from interacting with folks who are working on its behalf. I've met a bunch of the people who are working on the Kubernetes project on behalf of the CNCF, folks like Ihor and people like that, who constantly amaze me with the amount of work that they do on behalf of the CNCF. I think it's been really good seeing what it means to provide governance over a project. I think that's really highlighted by the way that Kubernetes itself is managed. I think a lot of us on the call have probably worked with OpenStack and remember some of the crazy battles that went on between vendors around particular components in that stack. I've yet to actually see that level of noise creep into the Kubernetes situation. I think that falls squarely on the CNCF for managing governance, and also on the community for making it accessible enough that people can plug into it, without actually having to get into a battle about taking ownership of CNI, for example. Nobody should own CNI. 
That should be its own project under its own governance. How you satisfy the needs for something like container networking should be a project that you develop as a company, and you can make the very best one that you can and attract as many customers to it as you want. Fundamentally though, the way that you interface with that larger project should be abstracted in such a way that it isn't owned by any one company. There should be a contract and an API, that sort of thing. [0:40:58.1] KN: Yeah. I think the best analogy I ever heard was like, “We're just building USB plugs.” [0:41:02.8] DC: That's actually really great. [0:41:05.7] JR: To that point Duffie, I think what's interesting is more and more companies are looking to the CNCF to determine what they're going to place their bets on from a technology perspective, right? Because they've been so burned historically by some project owned by one vendor, where they don't really know where it's going to end up and so on and so forth. It's really become a very serious thing when people consider the technologies they're going to bet their business on. [0:41:32.0] DC: Yeah. When a project is absorbed into the CNCF, or donated to the CNCF, I guess. There are a number of projects that this has happened to. Obviously, if you see that eye chart that is the CNCF landscape, there's just tons of things happening inside of there. It's a really interesting process, but for my part, I remember recently seeing Sysdig donate Falco to the CNCF, and that was probably one of the first times that I've actually really tried to see what happens when that happens. I think some of the neat stuff that happens is that now this is an open source project under the governance of the CNCF. It feels to me like a more approachable project, right? I don't feel I have to deal with Sysdig directly to interact with Falco, or to contribute to it. 
It opens that ecosystem up around this idea, or the genesis of the idea that they built around Falco, which I think is really powerful. What do you all think of that? [0:42:29.8] KN: I think, to look at it from a different perspective, that's one example of when the CNCF helps a project liberate itself. There's plenty of other examples out there where the CNCF is an opt-in feature that is only there if we need it. Take Cluster API, which I'm sure we're going to talk about in a later episode. I mean, just as a quick overview, it's a lot of different vendors implementing the same API and making that composable and modular. Nowhere along the way in the history of that project has the CNCF had to come and step in. We've been able to operate independently of that. I think because the CNCF is even there, we all are under this working agreement that we're going to take everybody's concerns into consideration, take everybody's use case into consideration, and work together as an ecosystem. I think it's just having that in place; whether you use it or not is a different story. [0:43:23.4] CC: Do you all know any project under the CNCF? [0:43:26.1] KN: I have one. [0:43:27.7] JR: Well, I've heard of this one. It's called Kubernetes. [0:43:30.1] CC: Is it called Kubernetes or Kubernetes? [0:43:32.8] JR: It's called Kubernetes. [0:43:36.2] CC: Wow. That's not what Duffie thinks. [0:43:38.3] DC: I don't say it that way. No, it's been pretty fascinating seeing just the breadth of projects that are under there. In fact, I was just recently noticing that OpenEBS is up for joining the CNCF. It's fascinating that the things that are being generated through the CNCF and going through that life cycle as a project sometimes overlap with one another, and it seems it's a delicate balance that the CNCF has to play to keep from playing favorites. Because part of the charter of the CNCF is to promote the project, right? 
I'm always curious and fascinated to see how this plays out as we see projects that are normally competitive with one another under the auspices of the same organization, like the CNCF. How do they play this in such a way that they remain neutral? It seems like it would take a lot of intention. [0:44:39.9] KN: Yeah. Well, there's a difference between just being a CNCF project and being an official project, or a graduated project. There's different tiers. For instance, with Kubicorn, a tool that I wrote, we just adopted the CNCF code of conduct, I think, and there was another file I had to include in the repo and poof, we're magically CNCF now. It's easy to get onboard. Once you're onboard, there's legal implications that come with that. There totally is this tiered ladder structure that I'm not even super familiar with. That's how officially CNCF you can be as your product grows and matures. [0:45:14.7] NL: What are some of the code of conduct requirements that you have to meet to be part of the CNCF? [0:45:20.8] KN: There's a repo on it. I can maybe find it and add it to the notes after this, but there's this whole tutorial that you can go through and it tells you everything you need to add and what the expectations are and what the implications are for everything. [0:45:33.5] NL: Awesome. [0:45:34.1] CC: Well, Velero is a CNCF project. We follow the – what is it? The covenant? [0:45:41.2] KN: Yeah, I think that's what it is. [0:45:43.0] CC: Yes. Which is the same one that Kubernetes follows. I am not sure if there are others that can be adopted, but this is definitely one. [0:45:53.9] NL: Yeah. According to Aaron Crickenberger, who was the Release Lead for Kubernetes 1.14, the CNCF code of conduct can be summarized as don't be a jerk. [0:46:06.6] KN: Yeah. I mean, there's more to it than that, but – [0:46:10.7] NL: That was him. [0:46:12.0] KN: Yeah. 
This is something that I remember seeing in open source my entire career; open source comes with this implication that you need to be well-rounded and polite and listen and be able to take others' thoughts and concerns into consideration. I think we are just getting used to working like that as an engineering industry. [0:46:32.6] NL: Agreed. Yeah. Which is a great point. It's something that I hadn't really thought of. The idea of development back in the day, before there was such a thing as the CNCF or cloud native, it seemed that things were combative, or people were just trying to push their agenda as much as possible. Bully their way through. It doesn't seem like that happens as much anymore. Do you guys have any thoughts on that? [0:46:58.9] DC: I think what you're highlighting is more the open source piece than the cloud native piece. Open source, I think, has been described a few times as a force multiplier for software development and software adoption. I think both of these things are very true. If you look at a lot of the big successful closed source projects, the way that people in this room and maybe people listening to this podcast might perceive them is just fundamentally different from some open source project. Mainly, because it feels it's more of a community-driven thing and it also feels you're not in a place where you're beholden to a set of developers that you don't know, who are not interested in what's best for you, or in your organization achieving whatever it set out to do. With open source, you can be a part of the voice of that project, right? You can jump in and say, “You know, it would really be great if this thing had this feature, or I really like how you did this thing.” It really feels a lot more interactive and inclusive. 
[0:48:03.6] KN: I think that that is a natural segue to this idea of: we build everything behind the scenes and then, hey, it's this new open source project, everything is done. I don't really think that's open source. We see some of these open source projects out there. If you go look at the git commit history, it's all everybody from the same company, or the same organization. To me, that's saying that while the source code might be technically open source, the actual act of engineering and architecting the software is not done as a group with multiple parties buying into it. [0:48:37.5] NL: Yeah, that's a great point. [0:48:39.5] DC: Yeah. One of the things I really appreciate about Heptio actually is that for all of the projects that we developed there, the developer chat was all kept in some neutral space, like the Kubernetes Slack, which I thought was really powerful. Because it means that not only is it open source and you can contribute code to a project, but if you want to talk to people who are also being paid to develop that project, you can just go to the channel and talk to them, right? It's more than open source. It's open community. I thought that was really great. [0:49:08.1] KN: Yeah. That's a really great way of putting it. [0:49:10.1] CC: With that said though, I hate to be a party pooper, but I think we need to say goodbye. [0:49:16.9] KN: Yeah. I think we should wrap it up. [0:49:18.5] JR: Yeah. [0:49:19.0] CC: I would like to re-emphasize that you can go to the issues list and add requests for what you want us to talk about. [0:49:29.1] DC: We should also probably link our HackMD from there, so that if you want to comment on something that we talked about during this episode, feel free to leave comments in it and we'll try to revisit those comments maybe in our next episode. [0:49:38.9] CC: Exactly. That's a good point. We will drop a link to the HackMD page on the corresponding issue. 
There is going to be an issue for each episode, so just look for that. [0:49:51.8] KN: Awesome. Well, thanks for joining everyone. [0:49:54.1] NL: All right. Thank you. [0:49:54.6] CC: Thank you. I'm really glad to be here. [0:49:56.7] DC: Hope you enjoyed the episode and I look forward to a bunch more. [END OF EPISODE] [0:50:00.3] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and at https://thepodlets.io, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END]

Leader Dialogue
LEADER DIALOGUE: Transforming Healthcare Financial Billing – Deep Dive


Play Episode Listen Later Sep 27, 2019


In this week’s follow-up episode of Leader Dialogue, Jennifer, Lisa and Duffie will dig deeper into topics such as patient engagement, culture and consumerism with special guest Alan Nalle of Patientco. Alan Nalle/Patientco Alan Nalle is Chief Strategy Officer at Patientco, developing strategies and supporting plans for Patientco to deliver unique value to its clients. Prior […] The post LEADER DIALOGUE: Transforming Healthcare Financial Billing – Deep Dive appeared first on Business RadioX ®.

Leader Dialogue
LEADER DIALOGUE: Strategies to High Reliability – Deep Dive


Play Episode Listen Later Sep 13, 2019


Craig Clapper joins hosts Ben, Lisa and Duffie for a “deep dive” discussion about strategies and opportunities for helping healthcare organizations achieve and sustain High Reliability Organization (HRO) status, and about understanding how consumerism impacts the inter-relations among quality improvement, process improvement, the Baldrige Framework and high reliability in health systems. Craig Clapper/Press Ganey […] The post LEADER DIALOGUE: Strategies to High Reliability – Deep Dive appeared first on Business RadioX ®.

Leader Dialogue
LEADER DIALOGUE: Georgia Lt. Governor Geoff Duncan – Deep Dive


Play Episode Listen Later Aug 23, 2019


Georgia Lt. Governor Geoff Duncan joins hosts Ben, Jennifer, Lisa and Duffie for a “deep dive” discussion about leadership, government, education, healthcare, technology and more on Leader Dialogue. Lt. Governor Geoff Duncan A former professional baseball player and successful entrepreneur, Geoff Duncan was elected Georgia's Lieutenant Governor in November of 2018. Geoff's faith inspired him to a […] The post LEADER DIALOGUE: Georgia Lt. Governor Geoff Duncan – Deep Dive appeared first on Business RadioX ®.

Leader Dialogue
LEADER DIALOGUE: Georgia Lt. Governor Geoff Duncan


Play Episode Listen Later Aug 16, 2019


Georgia Lt. Governor Geoff Duncan joins Ben, Jennifer, Lisa and Duffie to discuss leadership, education, healthcare, technology and more on “Leader Dialogue”. Lt. Governor Geoff Duncan A former professional baseball player and successful entrepreneur, Geoff Duncan was elected Georgia's Lieutenant Governor in November of 2018. Geoff's faith inspired him to a life of leadership and […] The post LEADER DIALOGUE: Georgia Lt. Governor Geoff Duncan appeared first on Business RadioX ®.

Leader Dialogue
Leader Dialogue: Performance Excellence Challenges in the Age of Consumerism – Deep Dive


Play Episode Listen Later Jul 26, 2019


A deep dive discussion about Performance Excellence Challenges in the Age of Consumerism is provided by the hosts Lisa, Jennifer and Duffie, with special guest Niki Buchanan, Venture Leader of Population Insights & Care at Philips Wellcentive. Niki Buchanan, Philips Niki Buchanan serves as Venture Leader, Population Insights & Care at Philips. A dynamic, strategic, mission-driven and […] The post Leader Dialogue: Performance Excellence Challenges in the Age of Consumerism – Deep Dive appeared first on Business RadioX ®.

Leader Dialogue
LEADER DIALOGUE: The Power of the Baldrige Framework – Deep Dive


Play Episode Listen Later Jun 28, 2019


A deep dive discussion about the power of the Baldrige framework is examined by the hosts Ben, Lisa and Duffie, with special guest and former healthcare industry CEO Lowell Kruse. Lowell Kruse/Chair of Communities of Excellence Lowell C. Kruse is the former CEO of Heartland Health in St. Joseph, Missouri. He retired in July 2009 […] The post LEADER DIALOGUE: The Power of the Baldrige Framework – Deep Dive appeared first on Business RadioX ®.

Black Face White Place
FEAR And Growth


Play Episode Listen Later May 1, 2019 22:18


FEAR And Growth by Duffie

The Coaches Office
NFL Playoffs, Daytona and Sandarvis Duffie


Play Episode Listen Later Apr 8, 2019 45:55


We've got a Daytona race preview with our good buddy Cody Early, we're talking NFL playoffs AND we welcome in a friend to the coaches office, Sandarvis Duffie of the LA Rams. Listen in, hit those 5 stars, and enjoy! --- Support this podcast: https://anchor.fm/thecoachesoffice/support

Leader Dialogue
LEADER DIALOGUE: Transformation Requires Innovation Deep Dive with Roger Spoelman


Play Episode Listen Later Nov 6, 2018


In part 2 of the Innovation discussion, Ben and Duffie are joined by Roger Spoelman, SVP of Strategic and Operational Integration at Trinity Health, CEO of Mercy Health, and interim CEO of Loyola University Medical Center in Chicago. (Roger is Ben's executive coach.) In part 1 of this 2-part Innovation series, Roger can be seen in the Mercy Health […] The post LEADER DIALOGUE: Transformation Requires Innovation Deep Dive with Roger Spoelman appeared first on Business RadioX ®.

Taste Radio
Insider Ep. 4: How Sandows Is Using the Power of Design to Pave a Path for Cold Brew Across the Pond


Play Episode Listen Later Oct 19, 2018 37:14


While cold brew has been the fastest-growing segment of the U.S. coffee business in recent years, it’s just now starting to heat up across the pond. London-based Sandows is one of a handful of British coffee companies paving a path for cold brew in the U.K. and was first to market in the country, according to co-founder Hugh Duffie. Taking their cue from speciality coffee brands in the U.S., Duffie and co-founder Luke Suddards created a foundation for Sandows based on direct trade sourcing and ultra-high quality brewing. But Duffie and Suddards knew that the liquid was only one part of the equation. To elicit interest from U.K. consumers who are mostly still unaware of the concept of cold brew coffee, they developed distinctive branding and packaging that could, in Duffie’s words, “sell the product without people even needing to try it.” “We wanted people to get excited the way that we did when we would order these products from the U.S.,” Duffie said in an interview included in this episode. “We wanted to create something that felt true to our experience of that specialty coffee culture.” Hear more from Duffie in the following interview in which he discussed the impact Sandows’ package design has had on awareness and sales. He also spoke about the company’s efforts to expand the market for cold brew in the U.K., how it is educating British consumers about premium coffee and its product and innovation strategy. Show notes: 1:25: Keep It Elevated -- The hosts discussed the recent and abrupt shuttering of Pilotworks and riffed on recent office visits, sights and libations from BevNET CEO John Craven’s visit to London, and notable new products sent to BevNET HQ. Please note that after episode 5, Taste Radio Insider will only be available on a single feed. If you haven't already, subscribe today and don’t miss an episode. 
18:22: Interview: Hugh Duffie, Co-Founder, Sandows -- In an interview recorded in London, BevNET CEO John Craven sat down with Duffie for a wide-ranging interview that includes his background in the coffee business and what inspired him and co-founder Luke Suddards to launch Sandows. They also discussed the gradual development of coffee culture in the U.K. and how Sandows is positioning itself in the nascent space, the development of the brand's packaging and name, and the reasoning behind the company's wide range of products. Brands in this episode: 6AM Health, SpiritFruit, Belgian Boys, OHi Superfood Bars, Neimand Dry Gin, Splinter Group Spirits, Batisite Rhum, R.W. Garcia, Fritos, Sandows

Take FL1GHT
#016 - Hugh Duffie - From the Kitchen to Selfridges & Sainsbury's (The 'Sandows' Coffee Story)


Play Episode Listen Later Sep 9, 2018 91:05


Hugh is the founder of 'Sandows Coffee', the first cold brew company to hit the streets of London.

Leader Dialogue
LEADER DIALOGUE: Clarifying Strategy Execution Failure Modes & Myths with CHIME – Deep Dive


Play Episode Listen Later Sep 7, 2018


On this follow-up deep dive episode of “Leader Dialogue“, CHIME CEO Russ Branzell again joins Ben, Jennifer and Duffie to discuss the two common Strategy Execution failure modes resulting from five (5) commonly held strategy execution myths. (Adapted from the research of: Donald Sull, MIT Sloan School of Management, and Rebecca Homkes fellow at the […] The post LEADER DIALOGUE: Clarifying Strategy Execution Failure Modes & Myths with CHIME – Deep Dive appeared first on Business RadioX ®.

Leader Dialogue
LEADER DIALOGUE: The Art & Science of Medicine with Hamilton Health - Deep Dive


Play Episode Listen Later Aug 24, 2018


In follow-up to the last “Leader Dialogue” episode with Sandy McKenzie, Chief Operating Officer at Hamilton Health, Duffie and Jennifer dive into the content of leadership and culture while Ben is vacationing in Hawaii. This episode explores some of the crucial mistakes made by leaders and teams that inhibit performance, such as fear and lack […] The post LEADER DIALOGUE: The Art & Science of Medicine with Hamilton Health - Deep Dive appeared first on Business RadioX ®.

Leader Dialogue
LEADER DIALOGUE: The Baldrige Journey vs. The Destination – Deep Dive


Play Episode Listen Later Aug 1, 2018


On this follow-up deep dive episode of “Leader Dialogue“, Core Values Partners President Paul Grizzell again joins Ben, Jennifer and Duffie to expound upon his experiences and tips for organizations launching or continuing their journey toward the prestigious Malcolm Baldrige National Quality Award. The hosts discuss how the transformation journey is like developing a health […] The post LEADER DIALOGUE: The Baldrige Journey vs. The Destination – Deep Dive appeared first on Business RadioX ®.

Leader Dialogue
LEADER DIALOGUE: The Baldrige Journey vs. The Destination


Play Episode Listen Later Jul 23, 2018


On this episode of “Leader Dialogue“, Core Values Partners President Paul Grizzell joins Ben, Jennifer, and Duffie to share experiences and tips for organizations launching or continuing their journey toward the prestigious Malcolm Baldrige National Quality Award. The hosts introduce how the Baldrige Performance Excellence Framework aligns directly with the Organizational Hierarchy of Needs and […] The post LEADER DIALOGUE: The Baldrige Journey vs. The Destination appeared first on Business RadioX ®.

The Podcast and Chill Show
#25 - The DuffNoBeer Interview


Play Episode Listen Later Mar 25, 2018 83:54


DuffNoBeer is a director/podcaster from the "Da Bottom" section of Philadelphia. "Duffie" currently co-hosts a podcast called "The Podcast and Chill Show" and has many other ventures in the works, including short films and musical projects. Other topics in this episode, along with our interview of DuffNoBeer: the Sacramento shooting, the Austin bombings, and a Trump vs. Biden MMA match. Follow DuffNoBeer on IG/Twitter: @DuffNoBeer Follow The Podcast and Chill Show on IG: @Podandchillshow Follow us on: Website: podandchillshow.com Apple Podcasts: https://itunes.apple.com/us/podcast/the-podcast-and-chill-show/id1293417290?mt=2#episodeGuid=tag%3Asoundcloud%2C2010%3Atracks%2F363153086 Google Play: https://play.google.com/music/listen?u=0#/ps/I4ez3oes2ptgqbb7pqnilhv4jqq Stitcher: https://www.stitcher.com/podcast/kelley-kemp/the-podcast-and-chill-show TuneIn: https://tunein.com/radio/The-Podcast-and-Chill-Show-p1085590/ Instagram: instagram.com/podandchillshow Twitter: twitter.com/podandchillshow Facebook: facebook.com/podandchillshow

Making Money in the Music Business
MMMB Podcast 22 - Interview With Entertainment Executive Kendall Duffie


Play Episode Listen Later Oct 21, 2017 46:58


In this podcast episode our hosts interview artist, producer and entertainment executive Kendall Duffie about his amazing career in the music industry.

AIR IT OUT RADIO
EXCLUSIVE INTERVIEW WITH SONYA MC DUFFIE


Play Episode Listen Later Apr 22, 2017 130:00


 EXCLUSIVE INTERVIEW WITH SONYA MCDUFFIE

Black Face White Place
Black Face White Place Episode 1- Allow Me To Introduce Myself


Play Episode Listen Later Mar 4, 2016 10:51


Welcome to the inaugural episode of Black Face White Place a life and pop culture podcast hosted by yours truly Duffie.

Cloud Jazz Smooth Jazz
Cloud Jazz Nº 561 (Smooth Jazz & Brothers) - Episodio exclusivo para mecenas


Play Episode Listen Later Aug 15, 2014 58:49


Thank this podcast for so many hours of entertainment and enjoy exclusive episodes like this one. Support it on iVoox! Music starring brothers, united by ties of blood and by music: the Mann, Johnson, Brecker, Duffie, Brown, East, Doky, Braxton, Grainger, Peterson and Whalum brothers. Listen to this full episode and get access to all the exclusive content of Cloud Jazz Smooth Jazz. Be the first to discover new episodes, and join the exclusive listener community at https://go.ivoox.com/sq/27170

WAGTi Radio
Kloud 9


Play Episode Listen Later Nov 6, 2009 62:43


Twin brothers Kelvis and Kendall Duffie formed Kloud 9 in the tradition of such artists as The Isley Brothers, The Whispers and Maze Featuring Frankie Beverly, all of whom the group would go on to appear with on tour. Growing up in a single-parent home in Denver, CO where their mother was a church musician, Kendall and Kelvis started singing in church at an early age. By the age of fourteen they captured the attention of producer Jerry Weaver (former guitarist with Aretha Franklin and producer of Janet Jackson's debut album). With Weaver, Kendall and Kelvis laid the foundation for a music career and began learning the ins and outs of the music business. Weaver formed a gospel group called Meekness which included the Duffies, but the group experienced only minor success. When Meekness disbanded, the Duffie twins headed for Nashville to try to further themselves in the music industry. Their experiences there propelled them into careers as record executives with major gospel labels. However, their thirst to perform and record never left them, and so they decided to form a vocal duo. The name Kloud 9 was derived out of the concept of wanting people to experience a mood of ultimate pleasure when listening to their music. The spelling of Kloud with a "K" was a nod to the fact that both their names began with that letter. After shopping their music and being constantly told they should push their music in a more edgy and raunchy style, Kendall decided to look abroad for audiences who appreciated quality soul music. "I remember driving to work one day and pulling over at a grocery store parking lot and literally breaking down in tears," states Kendall. "I felt so empty and unhappy and I knew I had to do music and do it at a level that would satisfy the desire in my soul. I called my job and resigned on the spot, bought a one-way ticket to London where I knew music had a different appreciation, knowing that being an American would open doors for me over there. 
I told my family that I would not return until I found an outlet to release my music. I realized I didn't have enough money to travel on public transportation every day so I bought a map of London, got numbers and addresses to all the indie and major labels in London, and my game plan every day for the first few months I was there was walking 4 - 5 hours each way for 20-30 minute meetings. At one point I recall feeling rocks on my foot and looked under my shoe and saw I had literally walked a hole in my shoes!" "During my time in London I met a producer named Ray Hayden who produced Maysa's first album. He allowed me to work in his studio to help him with production on various projects. I remember leaving his studio one morning around 3AM with not a penny in my pocket and knowing I had several hours to walk back to where I was staying, but I was finally feeling happy again. I was creating music. It was through Ray that I eventually met Maysa and through Maysa that I met Incognito leader Bluey. Shortly thereafter we got a call from Expansion Records in the UK to do an album." Kloud 9's debut CD ON KLOUD 9 was released in 2002 in the UK and was critically acclaimed. The album topped the UK Sweet Rhythms Soul Chart for three weeks and spent twelve weeks on Finland's Soul charts. Kloud 9's other UK releases include 2005's YEARNING TO LOVE YOU and KLOUD 9 PRESENTS THE VIBE ROOM, a compilation of Kendall's productions, both of which reached #1 on the UK soul music charts. ENJOY THE RIDE is truly Kloud 9's coming out event in America. The duo has been building a following in America as an opening act for The Isley Brothers, The Whispers, Maze featuring Frankie Beverly and of course Maysa. They recently won a Soultracks award for "Best duo/group" and have been making an impression on their peers and mentors in the business. As R & B legend Walter Scott of the Whispers says, "These are some super talented brothers, no question about it. The world is gonna hear from them!" 
posted by April Sims / http://www.wearegreaterthani.com