POPULARITY
Qu'est-ce que le BPA ?Le bisphénol A, plus connu sous le nom de BPA, est un composé chimique utilisé depuis les années 1960 pour fabriquer certains plastiques, notamment les polycarbonates. On le retrouve dans de nombreux objets du quotidien : gourdes, boîtes alimentaires, revêtements de canettes ou encore tickets de caisse. Le BPA est un perturbateur endocrinien. Il imite l'action des hormones, en particulier les œstrogènes, et peut ainsi dérégler le système hormonal humain. Sommes-nous vraiment exposés au Bisphénol A ?L'exposition au BPA est réelle, surtout lorsque les plastiques sont chauffés, rayés ou usés. Voici quelques exemples concrets d'utilisation à risque : un tupperware passé au micro-ondes, une bouteille d'eau restée au soleil. Une gourde en plastique abîmée.Dans ces cas, le plastique peut libérer des microparticules de BPA dans les aliments ou les boissons. De nombreuses études ont établi un lien entre cette exposition et divers problèmes de santé. Comme des déséquilibres hormonaux, des troubles de la fertilité, une puberté précoce, des risques accrus de syndrome métabolique et certains cancers hormonodépendants.Face à ces risques, l'Union européenne a interdit le BPA dans les biberons dès 2011, puis a limité son usage dans les matériaux au contact des aliments.“Sans BPA” : une fausse sécurité ?De nombreux produits sont aujourd'hui étiquetés “BPA-free”, mais cela ne garantit pas une innocuité totale. BPS et BPF remplacent souvent le BPA. Certaines études récentes soulignent que ces alternatives peuvent présenter des risques comparables, voire supérieurs.C'est ce qu'on appelle l'effet cocktail : ce ne sont pas uniquement les doses individuelles qui posent problème, mais l'accumulation de petites expositions répétées au quotidien.Comment limiter les risques au quotidien ?Voici donc quelques gestes simples pour réduire l'exposition aux perturbateurs endocriniens liés aux plastiques :Ne jamais chauffez les aliments dans des contenants en plastique.Privilégiez les matériaux alternatifs comme le verre, l'inox ou le silicone alimentaire.Jetez les boîtes plastiques abîmées, rayées ou déformées.Ne laissez pas vos gourdes en plastique au soleil ou dans une voiture chaude.Soyez encore plus vigilant pour les enfants et les femmes enceintes, dont le système hormonal est particulièrement sensible.Hébergé par Ausha. Visitez ausha.co/politique-de-confidentialite pour plus d'informations.
We are digging into a superpower inside your Linux Kernel. How eBPF works, and how anyone can take advantage of it.Sponsored By:Tailscale: Tailscale is a programmable networking software that is private and secure by default - get it free on up to 100 devices! 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps. River: River is the most trusted place in the U.S. for individuals and businesses to buy, sell, send, and receive Bitcoin. Support LINUX UnpluggedLinks:
We're taking on some of the toughest critiques of the Linux desktop, then taking a look at CachyOS and what makes it feel like a million bucks.Sponsored By:Tailscale: Tailscale is a programmable networking software that is private and secure by default - get it free on up to 100 devices! 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps. Support LINUX UnpluggedLinks:
What if your daily cup of coffee from the drive-thru or even your organic grass-fed steak is quietly sabotaging your health? Plastics and their toxic counterparts have invaded everything from our food to our homes, but the good news is, you're not powerless to fight back. In this hard-hitting episode of Blasphemous Nutrition, Aimee takes on the pervasive problem of plastic contamination in our food and environment, breaking down the shocking ways BPA, BPS, phthalates, and other toxins wreak havoc on your health. From hormone disruption and infertility to insulin resistance and increased disease risk, the stakes are high—but so are the opportunities to take control.You'll learn:Why “BPA-free” isn't the hero you think it is—and how alternatives like BPS and BPF might be even worse.The real health risks of phthalates hiding in everything from makeup to fast food.How chronic plastic exposure impacts fertility, metabolic health, and even your kids' cognitive development.Why popular detox diets fall short and how to support your body's natural detox pathways the right way, including the top detox-supporting foods.Why sweating, pooping, and hydrating are the detox heroes you never knew you needed.Resources:Find Research Citations and Transcript at Blasphemous Nutrition on SubstackWork with AimeeGet on the Glorious Greens Waitlist for an Interactive Experience in March with Yours Truly! Click here.Get access to the Greens Challenge now for a DIY experience (and bonus cookbook) Click here.Episode 33: How Dehydration Hacks Your HealthPhotography by: Dai Ross PhotographyPodcast Cover Art: Lilly Kate CreativeFound this episode eye-opening? Share it with a friend who could use a crash course on detoxing the right way. And don't forget to rate and review the podcast to help spread the word!CHAT ME UP: let me know what's on your mind by texting here! How to Leave a Review on Apple Podcasts Via iOS Device1. Open Apple Podcast App (purple app icon that says Podcasts).2. Go to the icons at the bottom of the screen and choose “search”3. Search for “Blasphemous Nutrition”4. Click on the SHOW, not the episode.5. Scroll all the way down to “Ratings and Reviews” section6. Click on “Write a Review” (if you don't see that option, click on “See All” first)7. Rate the show on a five-star scale (5 is highest rating) and write a review!8. Bask in the glow of doing a good deed that makes a difference!
Your post op day #4 right pneumonectomy patient is suddenly coughing up large volumes of serosanguinous sputum! What are you worried about and what do you need to do? Join your Swedish thoracic surgery team, Drs. Chloe Hanson, Peter White, and Brian Louie as we discuss the management of this dangerous and frustrating surgical complication. Hosts: Chloe E. Hanson, M.D., PGY3 Brian E. Louie, MD, Thoracic Attending Peter T. White, MD, Thoracic Attending Learning Objectives: What is a bronchopleural fistula (BPF) and what different ways do they present? Describe the acute management of an early BPF. Describe the differences in operative considerations between an early and late BPF. Describe different options for closure of a pneumonectomy space. References: - Sugarbaker's Adult Chest Surgery, 3e Sugarbaker DJ, Bueno R, Burt BM, Groth SS, Loor G, Wolf AS, Williams M, Adams A. Sugarbaker D.J., & Bueno R, & Burt B.M., & Groth S.S., & Loor G, & Wolf A.S., & Williams M, & Adams A(Eds.),Eds. David J. Sugarbaker, et al. https://shc.amegroups.org/article/view/3787/html - Dal Agnol G, Vieira A, Oliveira R, Ugalde Figueroa PA. Surgical approaches for bronchopleural fistula. Shanghai Chest 2017;1:14. Please visit https://behindtheknife.org to access other high-yield surgical education podcasts, videos and more. If you liked this episode, check out our recent episodes here: https://app.behindtheknife.org/listen
C'est interdit de retirer des données du dossier de lot. Vraiment ?
The Linux 6.12 kernel isn't just another update — it's a game-changer that deserves our full attention, from performance improvements to fascinating new features.Sponsored By:Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free!Tailscale: Tailscale is a programmable networking software that is private and secure by default - get it free on up to 100 devices! 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps. Support LINUX UnpluggedLinks:
Plastic has been the go-to material for baby bottles for decades, but recent concerns have raised questions about its safety. Although BPA was banned in baby bottles over a decade ago, similar chemicals like BPF and BPS are still used as replacements, and research on their long-term effects is ongoing. Additionally, the issue of microplastics in plastic bottles has become a growing concern, with studies suggesting that babies may be exposed to tiny plastic particles that could impact their development. In this episode, you'll learn practical tips to minimize risks associated with plastic baby bottles, from safe handling and cleaning practices to the importance of avoiding excessive heat. We'll also discuss alternatives, including the pros and cons of glass, silicone, and stainless steel bottles. Whether you're already using plastic bottles or exploring other options, this episode will empower you to make a well-informed choice for your baby's health and safety. Thank you to our sponsor Try AG1 and get a FREE 1-year supply of immune-supporting Vitamin D AND 5 FREE AG1 travel packs with your first purchase. That's a $48 value for FREE! Just one daily scoop provides whole-body benefits like gut, immune, and stress support. AG1 sources bioavailable ingredients that actually work with your body. Plus, their formula has all non-GMO ingredients and contains no added sugar. With AG1, I know I am filling any nutrient gaps and supporting my gut for healthy digestion. (As a friendly reminder, pregnant or nursing women should seek professional medical advice before taking this or any other dietary supplement.) Read the full article and resources that accompany this episode. Join Pregnancy Podcast Premium to access the entire back catalog, listen to all episodes ad-free, get a copy of the Your Birth Plan Book, and more. Check out the 40 Weeks podcast to learn how your baby grows each week and what is happening in your body. Plus, get a heads up on what to expect at your prenatal appointments and a tip for dads and partners. For more evidence-based information, visit the Pregnancy Podcast website.
In this episode, my guest is Dr. Shanna Swan, Ph.D., professor of environmental medicine and reproductive health at the Mount Sinai School of Medicine. Dr. Swan is the world's leading expert on the harmful impact of chemicals in our food, water, cosmetics, and various household and consumer products on our hormones, and the consequences for fertility and overall reproductive health. She explains how exposure to phthalates and other endocrine disruptors adversely impacts fetal development, puberty, and the adult brain and body. We discuss the global decline in human fertility due to disruptive environmental toxins, such as pesticides, and certain foods and beverages we consume. We discuss practical strategies to minimize exposure to harmful chemicals, such as phthalates, bisphenol A (BPA), BPS, and PFAS. This includes reducing disposable plastic use, making healthier food preparation, consumption, and storage choices, and selecting personal and household products that don't contain harmful toxins. This episode allows you to assess your risk of exposure to endocrine disruptors accurately and empowers you to take control of your hormone health and fertility. Access the full show notes for this episode at hubermanlab.com. Thank you to our sponsors AG1: https://drinkag1.com/huberman LMNT: https://drinklmnt.com/huberman ROKA: https://roka.com/huberman BetterHelp: https://betterhelp.com/huberman Function: https://functionhealth.com/huberman Timestamps 00:00:00 Dr. Shanna Swan 00:02:58 Sponsors: LMNT, ROKA & BetterHelp 00:06:49 Environmental Chemicals, Fertility, Hormones, Phthalates 00:13:30 Phthalate Syndrome, Animal Data, Male Offspring 00:19:11 Phthalate Syndrome in Humans, Pregnancy & Babies 00:27:30 Hyenas; Phthalate Syndrome in Males 00:32:49 Sponsor: AG1 00:34:22 Polycystic Ovary Syndrome (PCOS), Mothers & Female Offspring 00:39:03 Anogenital Distance & Sperm Count 00:45:03 Sperm Count & Fertility 00:49:24 Sponsor: Function 00:51:11 Sperm Count Decline 00:58:19 Sperm Quality & Pesticides 01:04:12 Atrazine, Amphibians, Sexual Dimorphism, Behavior 01:09:00 Preschoolers, Phthalate Exposure, Sexually Dimorphic Behaviors 01:14:08 Tools: Lowering Exposure to Endocrine Disruptors, Fertility 01:24:52 Tools: BPA, BPS, BPF & Can Linings; Drinkware; Plastics & Microwave 01:30:07 Tools: Buying Organic; Skin Products, Fragrance; Sunscreens, Consumer Guides 01:32:58 Funding 01:34:31 Tools: Distilling Water, Shoes, Clothing, Food Sourcing; Building Materials 01:40:12 Europe vs. US Chemical Safety, REACH Program 01:46:20 Tool: Pregnancy & Fetal Health 01:49:23 Plastics & Environmental Concern; Fertility 01:55:26 Sperm Quality, Fertility, Cell Phone, Temperature 01:58:04 Other Animals & Fertility Decline, Ecosystems 02:01:58 Advancing Technologies, Fertility, Offspring & Adverse Effects 02:06:02 Tool: Consumer Guides, Personal & Household Products 02:09:39 Tool: Receipts; Thyroid System; Non-Stick Pans 02:15:18 Zero-Cost Support, YouTube, Spotify & Apple Follow & Reviews, Sponsors, YouTube Feedback, Protocols Book, Social Media, Neural Network Newsletter Disclaimer & Disclosures
According to the Banking and Payments Federation, a measly 28% considered moving their home loans over the past year, even though interest rates are now coming down. Even though it may not be in the best interests of all its members the BPFI is launching a new website to promote switching called InYourInterest.ie and promoting a standardised salary certificate. We got the details from Brian Hayes the Chief Executive of the BPF.
Après plusieurs enregistrements ratés, le voici enfin, l'épisode sur la formation BPF ! Comme j'ai beaucoup de boulot en ce moment je ne fais pas trop de blabla et je poste l'épisode vite fait. Bonne écoute !!
Les donneurs d'ordre empoisonnent souvent la vie de leurs sous-traitants.
London is brimming with capital waiting to be invested in housing, facing a millionaire exodus, and undergoing significant changes in the rental market. Key insights cover Melanie Leech's vision for unlocking capital, Labour's private school tax raid, non-traditional PCL areas, voluntary rent increases, and mortgage rate cuts. Melanie Leech, Chief Executive of the British Property Federation (BPF), envisions a new housing boom in London, akin to past transformative periods. Since the 1980s, London's population has grown by almost half, yet housing supply has lagged, contributing to high rents and declining home ownership. The BPF's manifesto, "Building our Future," calls for critical changes to the planning system, aiming to provide homes for diverse demographics and budgets. To expedite development, more planners are needed. Political parties are committing to providing more resources, and BPF members are willing to pay higher planning fees for faster, more effective systems. With significant capital available, smart investment can unlock more build-to-rent (BtR) properties, housing for older people, student housing, and social housing. BtR properties offer long-term leases, predictable rental increases, and well-maintained facilities, providing greater security and quality for tenants. With the right policies, the BPF predicts that BtR output could double to 30,000 homes annually across the UK. Investing in purpose-built student accommodation (PBSA) is essential, enhancing the student experience and alleviating pressure on the wider rental market. Private sector capital is increasingly invested in social housing, with partnerships between not-for-profit and for-profit sectors emerging to boost supply. The BPF's manifesto offers the next government an opportunity to unlock investment, power London's economy, and maintain its global status. For property investors, non-traditional prime central London (PCL) areas like Camden, Notting Hill, Shoreditch, and South Bank are becoming attractive. These areas have experienced significant urban regeneration, boosting desirability and property availability. Young, affluent buyers are drawn to these neighborhoods for their cultural, culinary, and entertainment options, alongside potential for strong investment returns. Notting Hill, with its colorful houses and famous market, has seen property values increase by 101% over the past decade. Shoreditch has undergone major transformation, with significant price rises and a growing number of high-value properties. Camden and Kentish Town have seen substantial rental price growth, making them attractive for investors seeking higher returns. Labour's plans to address the rental market include preventing landlords from creating bidding wars, though renters can still "voluntarily" offer higher prices. Inspired by New Zealand's policies, Labour aims to ensure advertised rental prices match final charges, limiting competitive bidding. Critics argue this creates a loophole, as competitive bidding environments persist due to supply shortages. Labour also proposes limiting the amount of rent tenants can pay upfront. Scotland's government has implemented rent rise limits and eviction moratoriums, offering a model for tenant protection. With Labour leading in polls, their plans could significantly impact the rental market. In mortgage news, NatWest has reduced fixed-rate mortgage costs ahead of a potential Bank of England rate cut in August. The bank's five-year fixed rates for remortgage now start from 4.26% with a £1,495 fee. Virgin Money and Suffolk Building Society have also adjusted rates, offering various reductions for residential Maximize your property wealth with London Property. Turn challenges into opportunities. With expert knowledge and reach, we tackle the complexities and inefficiencies of the property market with you.
With a General Election now imminent Sam Stafford thought that it might be interesting to try to compare what is being offered by the main political parties in relation to housing, planning and development with what the housing, planning and development sector would like to see being offered. In a conversation recorded at Outset Studios in Shoreditch Sam speaks to new friends of the podcast Richard Blyth, Tony Mulhall, Marie Chadwick and Ian Fletcher, and old friend of the podcast Paul Brocklehurst, about the policy proposals that their respective organisations are promulgating. Richard is Head of Policy & Practice at the RTPI; Tony is a Senior Specialist at RICS; Marie is Policy Leader at the NHF; Paul is Chair of the LPDF; and Ian is Director of Real Estate Policy at the BPF. Sam invites them all to outline their respective manifestos and then they focused on two key areas that everybody agreed need to be addressed: the need to get more resources into LPAs and the need to reintroduce strategic planning whilst at the same time getting local plans moving again. Towards the end of the episode Sam also asks Marie about the issue of RPs not bidding for S106 sites, which is a very live one at present. Some accompanying reading. Blue belt, grey belt, wild belt – the manifestos compared https://lichfields.uk/blog/2024/june/20/blue-belt-grey-belt-wild-belt-the-manifestos-compared RICS' Land & Rural Manifesto overview https://www.rics.org/news-insights/rics-uk-general-election-land-and-rural-manifesto-review The BPF General Election Manifesto https://bpf.org.uk/our-work/general-election-2024/ LPDF's 10 Point Plan for a Step Change in Delivery https://lpdf.co.uk/latest-lpdf-publications RTPI's Planifesto https://www.rtpi.org.uk/new/our-campaigns/rtpi-planifesto-2024/ Some accompanying viewing. NHF's campaign for a Plan for Housing https://www.youtube.com/watch?v=WmM3WLCjcwQ Some accompanying listening. Manifesto by Roxy Music https://www.youtube.com/watch?v=fjkVYOArUQM 50 Shades - T-Shirts! If you have listened to Episode 45 of 50 Shades of Planning you will have heard Clive Betts say that... 'In the Netherlands planning is seen as part of the solution. In the UK, too often, planning is seen as part of the problem'. Sam said in reply that that would look good on a t-shirt and it does. Further details can be found here: http://samuelstafford.blogspot.com/2021/07/50-shades-of-planning-t-shirts.html
On a mis plein de trucs et c'est assez compliqué comme ça, ça fera prendre du temps l'inspecteur. Horrible. Avec cette stratégie on fait d'une pierre deux coups : on montre que l'on ne maitrise pas bien nos processus et on énerve l'inspecteur. Le processus déviation est la vitrine de la maitrise du processus. Le rapport d'inspection est la vitrine de notre processus déviation. Il doit être uniforme d'une déviation à l'autre, facile à comprendre pour nimporte qui et aller directement au but. Avec une methode d'investigation cadrée et un template de rapport officiel, les rapports d'investigation sont bon du premier coup et une fois clôturé on ne devrait pas avoir besoin de revenir dessus. Bref le rapport d'investigation n'est pas une partie à négliger et l'important est de faire figurer en bonne place l'analyse d'impact et la recherche de cause qu'elle soit primaire ou secondaire. On en parle avec plus de détails dans l'épisode 21 : N'hésitez pas à nous contacter pour plus d'informations ! Dans l'épisode 22 on vous montrera des exemples de SCS et de rapport d'investigation ! Bonne écoute et à bientot ! Retrouvez notre site : https://ac-qualite-efficace.com/ Abonnez vous à notre Newsletter : https://acdelaqualitefficace.substack.com/ Good manufacturing podcast : https://www.podcastics.com/podcast/634/link/ C'est marqué dans les BPF : https://www.podcastics.com/podcast/4330/link/ La Chaine youtube : https://www.youtube.com/channel/UC3_l-vgXrXIucvv99bYdYrw
Episode 5: Follow the money: Liz Peace, former chief executive of the British Property Federation and now senior independent governor at the RICS, is this week's guest on the Home Truths podcast. Having focused on commercial property during her 13 years at the BPF until 2014, she has since taken up a series of high-profile roles including becoming the chair of the Old Oak and Park Royal Development Corporation, overseeing the delivery of thousands of homes in west London. In today's podcast she talks about the many housing and planning reviews she has witnessed during her time in the property industry, and how successive governments have failed to tackle the fundamental problems in the housing market. Jackie Sadek and Peter Bill, the podcasts' co-hosts, ask Liz to explain her proposed financial model that would leverage private finance in order to build more homes that would be genuinely affordable. This was the fifth and final episode of the series, all our previous episodes are available to download. Introduced and edited by Chloe McCulloch, editorial director for Building and Housing Today. Audio production by Tariq Aziz. Home Truths is a Building Talks series produced for Building and Housing Today. Subscribe for news and analysis at www.building.co.uk & www.housingtoday.co.uk LinkedIn: Building Magazine & Housing Today X (formerly Twitter): @BuildingNews & @housing_today Email: newsdesk@assemblemediagroup.co.uk
Dans un système qualité, il y a 2 processus vraiment essentiels.
Allons au labo de microbio poser nos questions sur les BPF !
You know that we love at-home body tests. At Million Marker, we have the Detect & Detox Test Kit. We measure the levels of BPA, BPS, BPF, parabens, phthalates, and oxybenzone in your urine. Many of these chemicals mimic estrogen, which can impact fertility. What we put in and on our bodies can greatly affect the health of our unborn children. That's why we are so excited to chat today with Cheryl Sew Hoy, the CEO/Founder of Tiny Health. Tiny Health is the first microbiome test company for the whole family. They support families through preconception and postpartum. It's such a unique company providing an important service, and we are so excited to learn more about Tiny Health. Learn more about Tiny Health's services: https://www.tinyhealth.com/ Get tested for BPA, phthalates, parabens, and other hormone-disrupting chemicals with Million Marker's Detect & Detox Test Kit: https://www.millionmarker.com/
in this episode I dive deep into the toxic effects of Bisphenol compounds (BPA, BPS and BPF) and how they are destroying the health of humans, animals and our planet. Download my candida guide: https://holisticspring.com/product/seasonal-candida-parasite-cleanseJoin My Holistic Community: www.holisticspring.orgINSTA: @wholistichomeopath
Episode 2441 - On this Friday's show Vinnie Tortorich welcomes Dr. Anthony Jay and they continue their discussion about plastics, hormone disruptors, and more. https://vinnietortorich.com/hormone-disruption-dr-anthony-jay-episode-2441 PLEASE SUPPORT OUR SPONSORS YOU CAN WATCH THIS EPISODE ON YOUTUBE - Hormone Disruptors Anthony's book, "Estrogeneration," is still selling strong. (2:00) Did you know that 79% of foods in supermarkets have BPA and 99% have phthalates—hormone-disrupting chemicals? (3:30) BPA doesn't necessarily mean contaminant-free; others are called BPS, BPF, and others. Vinnie recounts Mike Keen's story about kayaking around Greenland and gathering seal poo for research. (9:00) In one of the remotest places on earth, even plastic was found in seal poo. Pay attention to your products like sunscreens, soaps, and other hygiene items. (18:00) Dr. Anthony Jay reviews his professional experience. (24:00) They discuss various other products that can cause hormone dysregulation, particularly estrogens. Anthony discusses how ubiquitous various plastics are; they are in all kinds of paper products because they are used as coatings. We are seeing the chemicals affect the levels of testosterone and estrogen in young males. (33:30) The effects even show up as apathy in behavior. What does low testosterone mean? Vinnie asks Anthony his thoughts. (40:00) They discuss natural testosterone replacement. (45:00) Testosterone has a lot of functions in the body. Soy estrogens: there are conflicting opinions as to whether it is beneficial or not. It's a holistic approach to increasing testosterone: lifestyle, exercise, whole foods, avoiding plastics, etc. should be considered together. can be purchased here. Dr. Jay's consulting company is . Vinnie's NSNG® VIP Community is open again for a short time, so join while you have a chance! Go to to sign up! [the_ad id="20253"] PURCHASE BEYOND IMPOSSIBLE (2022) The documentary launched on January 11! Order it TODAY! This is Vinnie's third documentary in just over three years. Get it now on Apple TV (iTunes) and/or Amazon Video! Link to the film on Apple TV (iTunes): Then, Share this link with friends, too! It's also now available on Amazon (the USA only for now)! Visit my new Documentaries HQ to find my films everywhere: REVIEWS: Please submit your REVIEW after you watch my films. Your positive REVIEW does matter! FAT: A DOCUMENTARY 2 (2021) Visit my new Documentaries HQ to find my films everywhere: Then, please share my fact-based, health-focused documentary series with your friends and family. The more views, the better it ranks, so please watch it again with a new friend! REVIEWS: Please submit your REVIEW after you watch my films. Your positive REVIEW does matter! FAT: A DOCUMENTARY (2019) Visit my new Documentaries HQ to find my films everywhere: Then, please share my fact-based, health-focused documentary series with your friends and family. The more views, the better it ranks, so please watch it again with a new friend! REVIEWS: Please submit your REVIEW after you watch my films. Your positive REVIEW does matter!
Alex Notay is Placemaking & Investment Director at Thriving Investments, which aims to be the most socially responsible asset management company in the housing sector, and is part of Places for People. She's an internationally recognised expert on Build to Rent and ESG in residential, and she has helped lead residential and ESG committees for leading industry bodies ULI, BPF and AREF. So she was the perfect guest to explain how we should be measuring ESG risks and impacts in residential. We covered topics including: What are the key trends shaping how we approach sustainability? Which are the best of 500+ real estate and sustainability benchmarks for residential? Key lessons from writing (my personal favourite ESG report of recent years), ‘Environmental, Social and Governance (ESG): Why it matters and how to get started - A Guide for the Institutional PRS (British Property Federation) BPF Residential ESG Guidance report: https://bpf.org.uk/media/4534/bpf-residential-esg-guidance.pdf Guest website: https://www.thrivinginvestments.co.uk/ Guest LinkedIn: https://www.linkedin.com/in/alexandranotayparr/ Host LinkedIn: https://www.linkedin.com/in/annaclareharper/ Host website: https://www.greenresi.com/
In the latest episode of the politics and property podcast, EG's senior writer Piers Wehner and former government minister Mark Prisk are joined by special guest Melanie Leech, chief executive of the BPF, to recap the party conference season and take a look at some of the policies unveiled. Are we seeing a new level of focus on planning and development from both the government and opposition? Will the sledgehammer taken to HS2 North prove disastrous for developers? And just what is Labour hoping to deliver if it forms the next government?
Solfate Podcast - Interviews with blockchain founders/builders on Solana
Follow the @SolfatePod show on Twitter for updates. Thanks for listening frens :)Notes from the showThe creator and lead developer of Solang, Sean Young, a compiler that allow developers to write Solana programs (aka smart contracts) in the Solidity programming language. This has been a multi year effort to allow existing Solidity developers, like all those existing in the Ethereum ecosystem, to use their existing language knowledge to write Solidity smart contracts on the Solana blockchain.Sean describes how he started his developer journey in the blockchain space, starting as writing his own compiler for the Solidity programming language for a EVM compatible blockchain for the purpose of processing traditional documents.Sean began hitting roadblocks when he was trying to add new features into the Solidity language, which is effectively only used for Ethereum and EVM compatible blockchains and maintained by the Ethereum community.As a general overview, Sean describes how a compiler actually works. Including how compilers like Solang and even native Solana uses LLVM toolkit (Low Level Virtual Machine) to maximize compatibility for multiple programming languages.Words and acronyms used throughout the episodesolidity - A statically-typed curly-braces programming language designed for developing smart contracts that run on Ethereum and most EVM compatible blockchains.EVM - the Ethereum Virtual Machine - essentially the portion of any Ethereum based blockchain that actually runs/executes smart contracts written in the Solidity programming languageEIP - Ethereum Improvement Proposals - standards specifying potential new features or processes for EthereumWASM - Web Assembly - is a binary instruction format for a stack-based virtual machineLLVM - Low Level Virtual Machine - a set of compiler and toolchain technologies that can be used to develop a frontend for any programming language and a backend for any instruction set architecture.Solana specific terms (or at least common in the Solana ecosystem): BPF - Berkeley Packet Filter - a technology used in certain computer operating systems for programs that need to, among other things, analyze network traffic.SBF (aka SBPF) - Solana Berkeley Packet Filter - this is a custom implementation of BPF with tweaks for the Solana runtime and SVMSVM - Solana Virtual Machine - the portion of the Solana runtime that actually runs/executes code on the Solana blockchainIDL - Interface Definition Language - generic term for a language that lets a program or object written in one language communicate with another program written in an unknown languageFind Sean and Solang onlineFollow Sean on twitterSolang's documentationSolang getting started guideFollow us aroundNicktwitter: @nickfrostygithub: github.com/nickfrostywebsite: https://nick.afJamestwitter: @jamesrp13github: github.com/jamesrp13Solfate Podcasttwitter: @SolfatePodmore podcast episodes: solfate.com/podcast
systemd is a service manager for Linux. It is the first process that runs on many Linux distributions and manages all other user processes. It includes utilities for logging, process isolation, process dependencies, socket activation, and many other tasks. psystemd is a python library to communicate with systemd over dbus from python as an alternative to shelling out from an application to control services. Anita Zhang is an engineerd managerd at Meta and Alvaro Levia is a production engineer at Meta. I attended their systemd workshop at the Southern California Linux Expo. Topics covered: What's systemd? Giving talks and workshops cgroups and namespaces systemd timers vs cron Migrating from CentOS 6 to 7 Production engineers need to go lower in the stack to debug applications Meta's Linux userspace team Use of public cloud at Meta Meta's bootcamp Pystemd Mastodon Anita Zhang Alvaro Leiva Workshop systemd workshop Conference talks Journey into the Heart of systemd - Scale 19x Systemd: why you should care as a Python developer - PyCon 2018 Move Fast without Breaking things - Scale 18x Solving All the Problems with systemd - LISA18 Using systemd to high level languages - All Systems Go! The Curious Case of Memory Growth - Scale 19x Related Links systemd psystemd systemd-run systemd-timers Transcript You can help edit this transcript on GitHub. Introductions [00:00:00] Jeremy: So today I'm talking to Avaro Leiva and Anita Zhang. Avaro is the author of the pystemd library and he's a production engineer at Meta. And Anita is an engineerd managerd at Meta, and I'll let her explain that further. [00:00:19] Jeremy: But thank you both for joining me today. [00:00:21] Anita: Yeah, thanks for having us. [00:00:24] Jeremy: I guess where we could start, Anita, maybe you could explain a little bit your, your title that I just gave you there. engineerd managerd [00:00:31] Anita: Yeah, so by default I, I should be a software engineering manager, but when I transitioned to management, I was not, Ready to go public with, um, my transition. So I kind of hid it by, changing the title. we have some weird systems in place that grep on like the word engineer. So I had to keep engineer in there somehow. and so I kind of polled my friends what I should change my title to, and they're like, oh, you're gonna support the systemd team, so you should change it to like managerd. So I was like, sounds good. engineerd, managerd. I didn't wanna get kicked out of any workplace groups, for example, that required me to be an engineer. [00:01:15] Jeremy: Oh, okay. [00:01:17] Anita: Or like engineering function, I guess. [00:01:19] Jeremy: Yeah. Yeah. And you just gotta title it yourself, so as long as you got engineer in it, you're good. [00:01:24] Anita: Yeah, pretty much. Some people have really fun titles like Chief Potato Officer and things like that. [00:01:32] Jeremy: So what groups does the, uh, the potato officer get to go in? [00:01:37] Anita: Yeah. Not the C level ones. (laughs) What's systemd? [00:01:42] Jeremy: I guess maybe to, to start, we should explain to people who aren't familiar, uh, what systemd is. So if either of you wanna wanna take that one. [00:01:52] Alvaro: so people who doesn't know, right? So systemd is today is your init system, right? Is the thing that manage your, your process. and the best way to understand this, it is like when your computer, it needs to execute something. And that's something is what we call pid one. And that pid one is the thing that is gonna manage everything from now from there on, right? Uh, in the most basic level, if you remember how to, how does program start, how does like an idea becomes a program? Uh, you need to fork exec, right? So that means that something has to be at the top of that tree and that is systemd. now that can be anything, right? So there was a time where that was like systemv and there was also like upstart, uh, today's systemd is the thing that, uh, it's shipped in most distributions. [00:02:37] Jeremy: Yeah, because I, I definitely remember when I first started working with Linux, uh, it was with CentOS 6, and when I would want to run a service, I would have to go and write a bash script and kind of have all these checks for, is this thing running? Does it have permission to these things, which user is it running as, and so there was a lot of stuff that I remember having to do before systemd came out. [00:03:08] Alvaro: The good old days as we call them, [00:03:11] Jeremy: Or the bad old days. [00:03:13] Anita: Yeah. Depending on who you ask. [00:03:15] Alvaro: Yeah. So, so that is super interesting because, um, During those time, like you said, you have to write a first script. That means that you were basically yourself, your own service manager, right? So ideas as simple as, is my program running? There was no real answer. You have to figure it out, right? So if you run a program, uh, you maybe would create a pid file which hold the p or the pid of the process, of the main process, right? And then something needs to check, oh, is this file exist? Does the file exist and does the content of this file actually match to a process? And then you grab the process. So it was all these ideas that you had to do, and then for, you have to do it for every single software that you would deploy on your machine, right? That also makes really hard to parallelize stuff, right? Because you have no concept of dependencies. So if your computer has to put, uh, I, I dunno if you remember like long time ago, like Linux machine would, takes like five minutes to boot like your desktop. I remember like openSUSE. I can't remember, like 2008, 2007. Uh, it would take like five minutes to boot and then Ubuntu came and, and it start like immediately. And it was because, you can parallelize things, but you cannot do that if all you're running are bash script. Why was systemd chosen to be included in Linux distributions? [00:04:26] Jeremy: I remember before the Linux distributions didn't include it. And I wonder if you have any insight into how systemd got chosen to be the thing to manage our processes and basically how we got to where we are today. [00:04:44] Anita: I mean, we can kind of speculate a little bit. at the time when Lennart started systemd, um, with. Kai Sievers probably messed up his name there. Um, they were all at Red Hat and Red Hat manages fedora these days and I believe fedoras kind of like the bleeding edge for a lot of the new software ideas. Um, and when they picked up systemd as the defaults, um, eventually it started to trickle down to the rest of their distributions through RHEL and to CentOS and at the same time, I think other distributions started to see how useful it was in terms of managing all the different processes and services. Um, I know Debian at one point had kind of a vote on like whether they should make systemd either default or like, make it easy to switch between both. And then they decided to just stick with systemd because it's, I mean, the public agrees that it's like easy to use and it's more useful. It abstracts away a lot of things that they had to manually do before Who is interested in systemd? Who comes to your talks and workshops? [00:05:43] Jeremy: Something I've been kind of curious about. So just this year at SCaLE uh, you ran a, a workshop teaching people how to use systemd and, and sort of what it is about. I guess when, when you get people coming to these workshops, what are they typically, where are they typically coming from? Are they like system administrators or are they software developers? Like when you run these workshops, who are you looking for as your audience? [00:06:13] Alvaro: To be fair, this was the first time that we actually did a workshop for this. But we have like, talk about this in, in many like conferences. here's what happened, right? So every time that you put systemd on the title of, uh, of a talk, you are like baiting people into coming in, right? Because you do want to hear like some people who are still like reluctant from that war that happened like a few years ago. Between systemd and Ups tart right? most of the people who we get are, I would say like, software engineers, people who do software, and at least the question that I always get a lot, it is like, why should I care about systemd um, if I run everything on my containers in my Docker containers, right? The other type of audience that you get, you do get system administrators. Uh, but in general those people only cares about starting and stopping services don't really care about like the, like the nice other features that systemd has to offer. And then you get people who just wanna start like flame wars and I'm here for them. Why give talks and workshops on systemd? [00:07:13] Jeremy: In previous years, you've given conference talks and, and things like that related to systemd. And I wonder for, for both of you where, where the, the interests came from, where this is something that you feel strongly enough about that you wanna give talks about it. Because it's like, a lot of times when people give a conference talk, it's about, like new front end technology or some, you know, new shiny thing. Whereas systemd is like, it's like very valuable, but it's something that I feel like a lot of people don't think about. And so I'm just kind of curious where the interest came for, for both of you. [00:07:52] Anita: I think I just like giving talks and teaching in general. So if I have work that I found really exciting or interesting, then I'd want to like tell people about it and like teach them and like show them something cool. I think systemd is kind of a really good topic in that case because a lot of people want to learn more about it. Today there's like lots of new developments going on in systemd. So there's like a lot of basic stuff that you can learn, but also a lot of new advanced topics that are changing every year as well. aside from that, there's also like more generally applicable things. Like everyone wants to know how to debug something if you're like a software engineer or developer or even a sysadmin. Um, so last year I did a debugging talk. there's a lot of overlap I'd say how about you Alvaro? [00:08:48] Alvaro: For me, it, my interest in systemd started in, back when I was working on Instagram, we needed to migrate from CentOS6 to CentOS7. and that was the transition where you would have like a random init system to systemd, right? So we needed to migrate all of our scripts from like shell script to whatever shell script is going to interact with systemd. And that's when I was like, I don't like this. So I also have a thing where if I find something that doesn't have an Python API for it, I go and create a Python api. So I, I create pystemd like during that time. And I guess for me, the first reaction was when I was digging up systemd was like, whoa, can systemd do that? Like, like really, like I can like manage, network firewalls, right? Can I, can I stop my service from actually accessing the internet without having to deal with iptables at the time? So that's kind of like the feeling that I wanted to show people when I, when we do these these talks and, and these workshops, right? It's why like most of our talks, eh, have light demos in them because we do want to show people like, Hey, like, this is real. You can use it. [00:09:55] Jeremy: I don't know if this was a conscious decision on your part, but the thing about things like systemd is they, they feel like more foundational things that don't change that quickly. Like if you look at front end development, for example, at at meta you've got React, and that ecosystem changes so often that it's like there's always this new thing, you learn the way to do it and then it changes, right? Whereas I feel like when you're in the Linux user space and you're with systemd, like they're adding new things, but the, the. Foundations kind of stay the same. I'm not sure if that sounds accurate to both of you. [00:10:38] Anita: Yeah, I'd say a lot of the, there are a lot of stable building blocks in systemd, but at Meadow we also have a kernel team, which is working on like new kernel features all the time. They take years possibly to adopt, but with systemd, if we're able to influence the community and like get those kernel features in earlier, then like we can start to really shape what the future of operating systems look like. So it's not, it's very like not short term, uh, work that we're doing. It's a lot of long term, uh, work. [00:11:11] Jeremy: Yeah, that's, that's interesting in that I didn't even think about the fact that you are sitting at the, the user level with systemd, but you kind of know what you want. And so if there's things that the kernel can do to support that, you're having that involvement. With the open source community, make sure that you have your, your say get put in there. Yeah. [00:11:33] Anita: Mm-hmm. [00:11:35] Alvaro: It, it goes both way, right? So one part it is like, yeah, sure, we want features and we create them. Um, and we actually wanted to those to be upstream because we like, one thing that you should, you should never do is manage internal patches for like, things like the kernel, because that's rebase hell. Um, but you also want to be like part of the community and, and, and, and get the benefit of like, being part of it. Who should care about systemd? [00:11:59] Jeremy: And so, like one thing you mentioned ear earlier, Alvaro, is that people will sometimes ask you, I'm running my application in, in Docker containers. Why do I care about systemd? So, so maybe you could explain like, how you would respond to that. Yeah. [00:12:17] Alvaro: Well for more, for most people who actually run their application container I'd say like, no, you probably shouldn't care. Right? Like, you're good where you are. But in general, like, like system is foundational in the sense that it is the first thing that your computer boots your computer doesn't boot off of Docker or Kubernetes or, or any like that. So like something has to run these applications. there's also like a lot of value is that not all applications exist in the vacuum. Like, uh, like let me give you an example. Like if you have a web server, When people are uploading stuff to the web server, you will upload temporary things and then you have to clean it up after a while. So you may want to take advantage of systemd timers or cron or, or whatever you want, right? While the classical container view is that your pid one of the container is the application that you're running, right? So you do want to have like this whole ecosystem, Not all companies can run on containers. not everything can run in containers. So that's basically where all the things start to, to getting into shape. There's a lot of value in understanding how programs actually like exist, right? With the thing that I told you at the beginning of how an idea becomes a program understanding like, like you hit, you are in your bash, right? And you hit ls Star full enter, right? What happened in your machine? Understanding all the things, uh, there is a lot of value and understanding how systemd works. It's, it, it provides, uh, like that knowledge for you. [00:13:39] Jeremy: So for the average engineer at Meta who is relying on your team to deploy their, their code, I guess, if that's the right term, do you think that they're ever needing to think about systemd or is that kind of more like the responsibility of your team and they're just worried about like, I put my thing into my container and I don't, I don't worry about it. [00:14:04] Anita: I think there's like a whole level of the stack that sh ideally we should not even care or know that we're running systemd below them. I think that's, say we're doing our job well, cuz then the abstraction is good enough that they don't have to worry about it. But there's like a whole class of engineers below that that have to, you know, support the systems that run our on bare metal and infrastructure and make it happen. And those are the people who really care about what we're putting in systemd or like what the corner cases are and things like that. [00:14:37] Jeremy: Yeah, that, that makes sense. I mean, one of the talks that was at SCaLE was, uh, Brian Cantrill um, he gave a talk about the forgotten operator, and he was talking about how people forget that there are actual servers behind all the things we're deploying to, right? [00:14:55] Anita: Mm-hmm. [00:14:55] Jeremy: There is a person that you're racking the machines and plugging the power, and like, even though there's all these abstractions in front, that still exists. And so it sounds like things happening at the kernel level and the Linux user space and systemd that's also true because all this infrastructure that people are using to deploy their software on your team is the one who has to keep that running and to keep that running, they need to understand, uh, systemd and, and all these foundational Linux pieces. Yeah. [00:15:27] Anita: Mm-hmm. Yeah. [00:15:29] Alvaro: Like with that said um, I, and maybe it's because I'm very close to to, to the source. Um, and, and you know, like, like I said, like when, when all your tool is a hammer, everything looks like a nail? Well, that hammer for me, a lot of the times it is like even like cgroups or, or namespaces or even like systemd itself, right? there is a lot of times where, um, like for instance, a few years ago we have not, like, like last year or something, uh, we had an application that was very was very hard to load, right? It used a lot of memory. And so we start with, with a model where we would load like a, like a parent process and then child process would deal with, with, um, with the actual work of the thing, the classical model of our server. Now, the thing is that each of the sub process that would run would need to run, uh, on a separate set of privileges, right? So it would really need to run as different users. And that was like very easy to do. But now we actually wanted to some process to run with a, with only view of the file system while the parent process actually doesn't have to do that, right? Uh, or we want to limit the amount of CPU that a child process would use. So like all of these things, we were able like to, to swap it out uh, with using like systemd and, and, uh, like, like a good, Strategy for like, you create a process, you create a new cgroup, you put that into the cgroup, you create the namespace, uh, you add this process into that namespace, and then you have like all this architecture, and it's pretty free because forking it's free in general. [00:17:01] Anita: Actually, Alvaro's comment reminded me of like why we even ended up building the systemd team in the first place. It's kind of like if we have all these teams trying to touch cgroups on their own or like manage processes on their own, they're all gonna do it a different way and not, all of them will be ideal or like, to put it bluntly, I guess, we're really aiming to try and provide like a unified, really good foundational experience, for the layers above us. And so, systemd and the other things that go into the operating system are a step to get there. What are cgroups and namespaces? [00:17:40] Jeremy: And so for someone who's not familiar with the concept of cgroups or of namespaces, could you kind of give like a brief description? [00:17:50] Anita: so namespaces are, uh, we're talking about the kernel feature where, um, there are different ways to isolate, uh, different resources to the process or like, so that they have their own view of certain things, the network or, the processes and things like that. Um, and Cgroup stand for control groups. It's, at meta we only use Cgroups v2 which is a way to organize your processes into, Kind of like a directory view. but processes will be grouped into different, folders, shall you say, but that allows you to, uh, manage the resources between different groups of processes, which is how systemd does its services. [00:18:33] Alvaro: So a, a control group will allow you to impose restrictions on how each system uses the resources, right? So with a cgroup, you can say, only use 20% of cpu, and the, and the kernel will take care of that. Uh, while namespace it is basically how you view the system around you. So like your mount directory like, like where does your home points to? that's, I would say it's more on the namespace side of things. So one is the view then one is the actual, the restrictions. And like Anita said, like systemd does a very clever thing. It doesn't have two, is not the. It's not why cgroups exist, but every time that you start a systemd service, systemd will create a cgroup for that service and will put every process in that cgroup, even though all cgroups would end up being the same, for instance. But eh, you can now like have a consolidated list of what process belong to a service. So a simple question like, like what services has my Apache web service started? That's show you how old I am. But yeah, you can answer that now because you just look at the cgroup, you don't look at the process tree. [00:19:42] Jeremy: So it, it sounds like the, the namespacing is maybe more for the purposes of security, like you said, giving you a certain view of your, your system. and the cgroups are more for restricting resources, but also, like you said, being able to see what are all the processes, um, are associated. Um, so that you, you don't have a process that spins up other processes and then you don't know who owns those, and then you don't know how to shut 'em all down. That, that takes care of that for you. [00:20:17] Alvaro: So I, I always reluctant to use the word security or privacy. I would like to use the word isolation. Yeah. And then if people want to impose the idea of security and privacy to those, that's fine, but it's, but it's mostly about isolation. [00:20:32] Anita: Yeah. Namespaces are what back all the container technologies are. Anytime you run things in a container, it's probably using some kind of name spacing. But yeah, you, you kind of hit the nail in the head. Isolation versus like resource control [00:20:46] Alvaro: As Anita just said that's what fits on containers, uh, namespaces and cgroup like a big mix of those. But that doesn't mean that the only reason why those things exist are for containers. You can take advantage of those technologies without actually having to think of a container. systemd timers vs cron [00:21:04] Jeremy: Something you had mentioned a little bit earlier is, is how systemd has other features and one of them was, was timers. And I was kind of curious, cuz you said you could, you wanna schedule a job, you can run it using cron or you can run it using systemd timers. And it, I feel like whenever I see people scheduling jobs, they're always talking about cron but, but not so much about systemd timers. So I was curious if you had any thoughts on that. [00:21:32] Anita: I don't know. I feel like it's used pretty interchangeably these days. Um, like even when people say cron they're actually running a systemd timer with the cron format, for their time. [00:21:46] Alvaro: So the, the advantage of of systemd timers over cron is, is basically two, right? The first one it is that, you get more control on the time, right? So you have monotonic and absolute times, right? Which is basically like, you can say like this, start five minutes after the previous run. Or you can say this, start after five minutes after the vote, right? So those are two type of time, that is the first one, uh, which may be irrelevant for most people, but that's it. Uh, the other one is that you actually have advantage over the, you take full advantage of systemd, right? In current you say run this process, right? And how that process run, it's basically controlled by the process itself, right? So if you, uh, like if the crontab is for the user, that's good for you, but if you want to like nice it or make it use less cpu, that's what it is. Well, with systemd you say, This cron will start the service and the service, you take full fledged advantage of all the things a service can do. [00:22:45] Jeremy: From what I could tell, looking at the, the timers api, it, it felt like it would be a lot easier to kind of see when things ran, get, you know, get a log of, I ran this time job and it, it failed. Um, it seemed like systemd had a lot more kind of built in to, to kind of look into that. but, uh, yeah, like Anita was saying, like when you, you hear kind of cron all the time, but like you said, maybe it's, maybe they're not actually using cron all the time. They're just saying cron [00:23:18] Alvaro: Well, I would say this for cron like the, the time, the time, uh, syntax for it, it's pretty, it's pretty easy to understand, even though I never remember where, I remember where weekday is, right? The fourth, which one is which? [00:23:32] Jeremy: I, I'm with Anita. I need to look it up whenever I'm gonna use it. (laughs) [00:23:36] Anita: Yeah. I use a cron translator when I have to use cron format. [00:23:41] Alvaro: This is like, like a flags to tar, right? Like, I never remember which, which flags to put. [00:23:48] Anita: Yeah, that's true. [00:23:50] Alvaro: We didn't talk about this, we haven't talked about systemd-run, but one of the advantages of the, one of the advantages of using timers is that you can schedule them on demand, right? So like cron if you wanna schedule something over time, you need to modify the cron the cron file. Uh, and that's, it's problem right? With systemd, you can have like ephemeral units and so you can say like, just for now, go and run this process five hours from now. Like, and after that, just forget about it. [00:24:21] Jeremy: Yeah, the, during the workshop you mentioned systemd-run and I hadn't even heard of it. And after I saw that I was like, wow, this, this could be really useful. [00:24:32] Alvaro: It is quite useful. How have things changed at meta? [00:24:34] Jeremy: One of the things you had mentioned, I, I guess you've, you've been at Meta for, for quite a while and you were talking about how you started with having all these scripts you were running on CentOS 6 and getting off of that to something more standard. I wonder if you could speak a little bit to that, that process. Like what did things look like then and, and how have they they changed over the years? [00:25:01] Alvaro: I would say the following thing, right? Like Anita said, like for most engineers, the day to day of things don't really change that much, because this is foundational things, right? So if you have to fundamentally change the way that you run applications every couple of years, then you waste a lot of time, right? It's not the same as you say, like react where, or, or in the old days, angular where angular one, angular two, angular three, and then it's gone, right? Like, so, so I, I would say it like for the average engineers things don't change that much, uh, for the other type of engineers, like, like us who we, who that we really care about, like how things run. like having a, an API where you can like query the state of your service. Like if like asking like, is my service running with an API that returns true or false, that is actually like a volume value that you can like, Transferring in your application, uh, that, that helps a lot on, on distributed systems. a lot of like our container infrastructure that we use internally at Meta is based on a lot of these ideas and technologies. [00:26:05] Anita: Yeah, thinking back to the CentOS 6 to 7 migration, I wasn't on like the any operating systems team at the time, but I was working with them and I also was on a team that had to migrate, figure out how to migrate our scripts and things over. so the one thing that did make it easy is that the OS team, uh, we deploy all our things using Chef. Maybe you've heard like Puppet and Ansible, that's our version, the Open Source Chef code. Um, and they wrote some really good documentation on how to migrate, from Runit, which is what we were using before to systemd. it was. a very large scale effort across multiple teams to kind of make sure their stuff works, do the OS upgrade and then get used to using systemd. [00:26:54] Jeremy: And so the, the team who is performing this migration, that's not the product team. That would be the, is it production engineering? Is that, is that what you called that? [00:27:09] Alvaro: So, so I was at the other side of, of that, of that table where I, the same as Anita, we were doing the migration more how most things work at Facebook is that it's a combination of the team that is responsible for the technology and the teams who uses the technology. Right. So we are a company, so we. Can like, move together. it's the same thing when you upgrade kernels. Most of the time the kernel team will do the effort to upgrade the kernels, and when they hit a roadblock or something, they will call for the owner of the service and the owner of the service can help debug uh, for the case of CentOS 6 and CentOS 7, eh, I was the PE at Instagram P Stand for Production Engineer. I was the PE at Instagram who did most of the migration of our fleet. So I, I rewrote most of the things because I understand how our things work, and the OS team provide like the support to understanding like, like when can I use some things, when can I use not other things. There was the equivalent of ChatGPT at those days, right? I was just ask them how to do stuff. They will gimme recipes. so, so it it's kind of like, like a mix, uh, work, uh, between those two teams. Uh, Anita, maybe you can talk a little bit about what you talk when you were upgrading the version of systemd and you found a bug? [00:28:23] Anita: Oh, the, like regular systemd upgrades nowadays? I, I'd say it's a lot easier these days. I mean, since the, at the time when we did the CentOS 6 to 7 migration, it was like, our fleet was a lot more fragmented. I'd say nowadays it's a lot more homogenous, which makes, which makes it easier. yeah, in the early versions there were some kind of obscure like, interactions with the kernel or like, um, we, we make pretty heavy use of systemd to run our container system. So, uh, if we run into any corner cases, um, like pretty obscure stuff sometimes, because we make pretty heavy use of the resource control properties. we usually those end up on the GitHub tracker, things like that. [00:29:13] Alvaro: That's the side effect of hiring very smart people. They do very smart things that are very hard to understand. (laughs) [00:29:21] Jeremy: That's kind of an interesting point about you, you saying you're using these, these features, you know, of the kernel very heavily because, you're kind of running your own infrastructure, I think even your own data centers, so you're kind of forced to go to this level, it sounds like just because of the sheer number of services you're running and the fact that like, you have to find a way to pack 'em all onto the same machine. Does that, does that sound right? [00:29:54] Anita: Yeah, I'd say at, at our scale, like it's more cost effective to act, own the servers and run all everything on it ourselves versus like, you know, leasing from, uh, AWS or something, which we've also explored in the past. But that also means we need more engineers to build and run things on our servers. [00:30:16] Jeremy: Yeah. So the, the distinction between, let's say you're a, a small company or a mid-size company and you pay AWS or, or Google to, to do your hosting for you, then you may not necessarily get exposed to a lot of the, the kernel level problems or even the Linux user space problems because you're, you're working at a higher level and that's why you don't necessarily encounter those kinds of things. [00:30:46] Anita: I'd say not, not necessarily. I think, once you get even like slightly lower in the stack where you're just like on your own server, Then you will want to start really looking into like what systemd's doing, how does it interact with other, uh, services, um, on your server, and how can you like connect these different features together? [00:31:08] Alvaro: One of the things that every developer who who works like has to worry about is log right, and that, and that's the first time that you actually start interacting with systemdata available, right? So you have to understand, like maybe it's not just tail /var/log foo, but log right. Maybe it's just journalctl and it's like, what? But yeah. [00:31:32] Jeremy: Yeah. That's a good point too about whenever you're working with the operating system, like you're deploying onto a Linux machine. Regardless of the distribution, if you're the person who's responsible for that, you, you need to know this stuff. Right. Otherwise it's kind of like, you're just putting stuff out there and hoping for the best. Yeah. [00:31:54] Alvaro: Yeah. There, there's also another thing that, I dunno if I've said this before, but, a lot of the times you don't have to know these technologies, but knowing them will help you do your work better. [00:32:05] Jeremy: Yeah, totally. I mean, I think that that applies to pretty much anything in, in development, right? I, I've heard often that some people will say, you take the level that you work at currently and then kind of just go down one level. Right. And then, so you can kind of see what's underneath that. And you don't necessarily need to keep digging, cuz eventually if you keep digging, you're getting into, you know, machine instructions and whatnot. But, um, Yeah, maybe just one level is, is good to, to give you a better sense of what's happening. Production engineers need to go lower in the stack to be able to debug applications [00:32:36] Alvaro: Um, every time that I, that I, that somebody ask me like, what is the difference between a PE and a SWE, uh, software engineer, production engineer, typical conference, uh, one of the biggest difference that I, that I say is that a PE would tends to ask a lot of questions going the same thing that you're saying, we're trying to go down the stack, right? And I always ask the following question, eh, do you know how time dot sleep is implemented? Right? Do you like, like if you, if you were to see time dot sleep on your Python program, like do you actually know what is doing under the hood, right? Is it a while true? While the time, is it doing a signal interrupt? Is it doing a select on a file descriptor with a timeout? Like what is it doing? would you be able to implement it? And the reason why I say this, because like when you're debugging an application, like somebody something's using your cpu, right? And then you see that line on your code, you. You can debug every single line of your code. But also there's a lot of value to say like, no time.sleep doesn't cause CPU to spike. Right. Because it's implemented in a way that it would not be possible to do that. Meta's linux user space team [00:33:39] Jeremy: Another thing that I think might be kind of interesting to talk about is, so Meta has this Linux user space team. And I, I wonder like including your role in it, but just as a whole, like what does that actually mean day to day? Like, what are the kinds of problems people are facing that, a user space team would be handling? [00:34:04] Anita: Hmm. It's kind of large cuz now that the team's grown out to encompass a few other things as well. But I'll focus on the Linux user space part. the team started off, on the software engineering side as the systemd developer team. So our job was really to contribute to the community. and both, you know, help with, problems and bugs that show up in upstream, um, while also bringing in new features, that we think would be useful both at Meta and to like, folks, in the Linux community as a whole. so we still play a heavy role in, systemd. We also support it, uh, within the fleet, like we roll out new releases and things like that. but we're also working on a few other projects in. User space. Um, BP filter is one of them, which is, uh, how can we convert like IP tables and network filtering, into BPF programs. Um, on the production engineering side, they focus a lot on, the community engagements. So in addition to supporting CentOS they also handle, or they like support several packages in Fedora, Debian and other distributions, really figuring out how we can, be a better member of the open source community, and, you know, make connections there and things like that. [00:35:30] Jeremy: And, and what was your, your process for getting in involved with this team? Because it sounded like maybe it either didn't exist at the start, or it was really small and, and now it's really, really grown. [00:35:44] Anita: So I was kind of the first member of like the systemd team, if you would call it that. Um, it spun out of containers. So my manager at the time, who's now my director, was he kind of made a call out on workplace looking for people who'd be willing to, contribute to systemd. He was, supporting the containers team at the time who after the CentOS 7 migration, they realized the potential that systemd could have, making their jobs a lot easier when it came to developing the container backend. and so along with that, they also needed someone to help, you know, fix bugs, put in new features and things that would, tie into the goals of the containers team. Um, and eventually now our host management team, I was the first person who reached out to him and said, Hey, I wanna give this a try. I was on the security team at the time and I always had dreams of going back into like, operating systems development and getting better at it. So yeah, that's kind of how I ended up in this space. A few years later, he decided, Hey, we should build a team and you should like hire some people who will also do this with you and increase our investments in systemd. so that's how we kind of built out the Linux user space team to encompass systemd and more like operating system, projects. Working on the internal security team vs the linux userspace team [00:37:12] Jeremy: And so when you were working on the security team before, was that on software internal to meta or were you also involved with, you know, the open source, user space side as well? [00:37:24] Anita: That was all internal at the time. Which was kind of a regret because there was a lot of stuff that I would've liked to talk about externally. But I think, moving to Linux user space made me realize like, oh, there's so much more potential in open source projects, in security, which is still like very closed source from our side. [00:37:48] Jeremy: And, and so like in your experience, what have been some of the big differences? I mean, definitely getting to talk about it is a big one. but like in terms of your day-to-day, what are the big differences between working on something internal versus something that that's open source? [00:38:04] Anita: I have to talk more with external folks. we're, pretty regular members of like the systemd like conclave sync that we have with the other upstream maintainers. Um, Oh yeah. There's a lot more like cross company or an external open source community building that we have to do. it kind of puts into perspective like how we manage our time and also our relationships versus like internally, like everyone you work with works at Meta. we kind of have, uh, some shared leadership at the top. it is a little faster to turn around, um, because, you know, you can just ping people on work chat. But the, all of the systems there are closed source. So, um, there's not like this swath of people outside that you can ask about when it comes to open source things. [00:38:58] Jeremy: You can't, can't look in, discord or whatever for questions about, internal meta infrastructure to other people. It's gotta be. all in the same place. Yeah. [00:39:10] Anita: Yeah. And I'd say with like the open source projects, there's a lot of potential to tap into, expertise and talent that just doesn't exist internally. That's what I found really valuable, cuz people have really great ideas outside as well. Um, and we should like, listen to them and figure out how to build that into their systems and also ours Alvaro's work at meta [00:39:31] Jeremy: And, Avaro, I don't know when you first started, was that on internal, infrastructure and tooling as well? [00:39:39] Alvaro: Yeah, so, um, my path is different than Anita and actually my path and Anita doesn't share any common edges. so I, I don't work at the user space or the Linux kernel or anything. I always work in teams adjacent to it. Uh, but. It's always been very interesting to know these technologies, right? So I started working on Instagram and then I did a lot of the work in containers in migrations at where, where we build psystemd and also like getting to know more about that technologies. We did, uh, a small pilot on using casync which is a very old tool that like, it's only for the fans, (laughs) it's still on systemd repository, I dunno if that's used or anything, but it was like a very cool idea of how to distribute images. Uh, and in Instagram we do very fast deployments. So we deploy, or back then we used to deploy the source code, of Instagram every seven minutes, right? So every seven minutes, every time that a developer did commit to master, uh, we pushed that into production in less than an hour and we did that every seven minutes. So we were like planning to, to use those technologies for that. Um, And then I moved to another team inside of Meta, which is called Cloud Foundation, where we do a lot of like cloud infrastructure, uh, like public cloud. Uh, that's the area, that is very much not talked much about. but I keep like contributing to, to like this world. never really work on, on, on those teams inside of Meta. [00:41:11] Jeremy: So I guess it's your, your team is responsible for working with the engineers who work on product to be able to take their code and, and deploy it. And it's kind of like you work in combination with the user space team or the systemd team to make sure that what you're doing can be supported by them. Is that kind of an accurate description? [00:41:35] Alvaro: Yeah, that's, that's, that's definitely not an exhaustive description, but yeah, that's the, we, we, we do that. Public cloud at meta [00:41:42] Jeremy: It's interesting that you're, you're talking about public cloud now. So when you move to public cloud, are you using VMs kind of like you would in a data center, or is it, you're actually looking at the more managed services and things like that? [00:41:57] Alvaro: So I'm gonna take a small detour and say like, something that is funny. When I got hired by Facebook, we were working on Instagram. So Instagram was just an acquisition for, for, for meta right. And Instagram ran on AWS. So why wasn't the original team who were moving stuff from AWS into the internal data centers at Meta? On the team that I work now, uh, we work to support workloads that cannot run on meta infrastructure either for legal reasons, or for, for practical reasons. Right, because we don't have the hardware, uh, capability or legal resource because the government ask us, like, this cannot be on, on your data center or security, right? We don't wanna run this, this binary that we don't understand on our network. We do want to work in isolation. and the same thing that Anita was saying, where their team are building the common ways of using these tools, like systemd, and user space. we do the same thing, but for using cloud technologies. So in a way that is more similar to meta. So that's the detour now the, to answer your actual question, uh, we do a potpourri of things, right? So since we manage infrastructure and then teams deploy their code, they are better suited to know how their code, gets to run. Uh, with that said, we do have our preferred ways of how you would run stuff. and it's a combination of user containers, uh, open source containers, and and also like VMs There's a big difference between VMs and meta and in public cloud [00:43:23] Jeremy: So it, it sounds like in this case, you're, you're still using VMs even in public cloud, so the way that you do deployments, the location is different, but the actual software and infrastructure that you're running is, is similar. [00:43:39] Alvaro: So there's there's a lot of difference. Between the two things, right. So, the uniformity of hardware at Facebook, or our data centers, makes deploying things very simple, right? while in, in the cloud, you first, you don't get that uniformity because everybody like builds their AMIs as, as they want to build it. But also like a meta, we use, one operating system, in the cloud, you are a little bit more free of what you want. And one of the reasons why you want to go to the cloud is because you can run stuff on. On, on, on the way that that meta will run. Right? So, so even though we have something that are similar, it's not as simple like, oh, just change your deployment from like this data center to like whatever us is one think you would run. [00:44:28] Jeremy: Can, can you give an example of something where you wouldn't be able to run it on Meta's, image that they would choose to go to public cloud to run a different image for? [00:44:41] Alvaro: So, um, so in, in general, like if the government ask us, like, this is not necessarily like, like the US government, right? So, and like if the government ask us like, hey, like you need to keep this transaction on, on our territory, right? for logs, for all the reasons, for whatever, right? like, and, and we wanted to be in the place, we would have to comply. And that's where we will probably use this, this kind of technologies security is another one that is pretty good. And the other one, it is like, in it general, like, like, uh, like disaster recovery, right? If, if meta is down in a way where we cannot communicate with each other using metas technologies, right? Like you would need to have like a bootstrap point. [00:45:23] Jeremy: Is, is it the case where you are not able to put, uh, meta's image up into public cloud? Because you were, The examples you gave was more about location, right? Where you're saying we need to host in public cloud because it needs to be in this country, but then I think you were also saying the, the actual images you would use on AWS right. Would be. I don't know, maybe you'd be using Amazon Linux or maybe you'd be using a different, os entirely. And is that mainly because you're just not able to deploy the same images you have, uh, in-house? [00:46:03] Alvaro: So in, in, in general, uh, this is kind of like very hard to to explain, but, but, uh, if, if we would have to deploy code to a, machine and that machine would, would, would be accessed by people who are not like meta employees and we have no way of getting them to sign NDAs then we would not deploy meta code into that machine. Uh, because that's Sorry. No, not Pi PI's personal information. I was, uh, ip, sorry, that's that's the word. Yeah. Yeah. [00:46:31] Jeremy: So, okay. So if there's, so if you're in public cloud, there's certain things that you just won't put there just because. Those are only allowed to run on Metas own infrastructure. [00:46:44] Alvaro: Yeah Meta's bootcamp [00:46:44] Jeremy: Earlier you were talking about Instagram was an acquisition and they were in AWS were, were you there at the time or you joined, after? [00:46:54] Alvaro: No, I joined. I joined after I joined to, to meta. The way that Meta does hiring, at least for my area, is that you get hired as a production engineer, but you don't get assigned to a team. So you go through a process called boot camp where you get to try different teams and figure out what things you like. I try a couple of different teams, turns out that I like it to work at the Instagram. [00:47:15] Jeremy: And so at that time they were already running on Facebook's internal infrastructure and they had migrated off of AWS [00:47:24] Alvaro: We were on the process of finishing that migration. [00:47:28] Jeremy: So by the time you were there, yeah. Basically get, getting everything out of AWS and then into meta's internal. [00:47:35] Alvaro: Yeah. And, and, and everything is, is a very hard terms to, to define. Uh, I would say like, like most of all, like the bulk of things we were putting it in inside, like, at least what we call our Django servers. Like they were all just moving into internal infrastructure. How Anita started [00:47:52] Jeremy: This kind of touches on the, the whole boot camp thing, but, Anita, I saw that you, you interned at Facebook and then you took a position there, when you ended up taking a position, I'm kind of curious what were the different projects you looked at or, or how did you end up settling on the one you chose? [00:48:11] Anita: Yeah, I interned, um, and I joined straight out of university. I went into bootcamp similar to Alvaro and I got the chance to explore several different teams. I knew I was never gonna do UI that was just like not my thing. Um, so I focused, uh, my search on all like backend infrastructure teams. Um, obviously security, uh, was one of them because that's the team I was in interning on. Um, I also explored, the kind of testing infra team. we call it sandcastle. It runs our internal like unit tests and things. and I also explored one of the, ads infrastructure backend teams. so it was mainly just, you know, getting to know the people, um, seeing which projects appealed to me the most. Um, and then, you know, I kind of chose based on that, I, I think I've always chosen. My work based on how interesting the project sounded, uh, which has worked out in my favor as far as I could tell. How Alvaro started [00:49:14] Jeremy: How, how about, you Alvaro what were the, the different projects you looked at when you first started? [00:49:20] Alvaro: So, As a PE you do have a more restrictive, uh, number of teams that you can, that you can join. Uh, like I don't get an option to work in ui. Not that I wanted, but, (laughs) I, I, it's, it's so long ago. Uh, I remember I did look at, um, at MySQL as a team, uh, that was also one of the cool team. Uh, we had at that time, uh, distribute, uh, engine, uh, to, to run work, like if, like celery or something like that. But internally, I really like the constable distribute like workloads, um, and. I can't remember. I think I did put, come with the Messenger team, that I, I ended up having like a good relationship with their TL their tech lead, uh, but never actually like joined that team. And I believe because she have me have a, a PHP task and it was like, no, I'm not down for doing PHP [00:50:20] Jeremy: Only Python. Huh? [00:50:21] Alvaro: Exactly. Python. Python. Because it's just above C level. Psystemd [00:50:27] Jeremy: I mean related to that, you, you started the, the psystemd project. And so I wonder if you could explain what the context behind that was. Like what sparked I need to make this, this library? [00:50:41] Alvaro: So it's, it's a confluence of two things. The first one, it is like, again, if I see something that doesn't have a Python API for it, I. Feels the strong urge to create one. I have done this a couple of times, mostly internally, but also externally. that was one. And when, while we were doing the migration, I, I, I honestly, I really hate text processing. So the classical thing was like, if you wanna know if your application's running, you do systemctl, you shell out to systemctl status, then parse the output, find the, find the status column. Okay. And I didn't like that. And I start reading about like, systemd uh, and I got in contact with the or I saw like the dbus implementation of systemd. And that was, I thought that was a very interesting idea how that opened all the doors. Right? Uh, so I got a demo working like in a couple of hours. and then I said like, okay, now how do we make this pythonic? And then I created that and I just created, again, just for migrating Instagram. That was the idea. Then, uh, one of the team members who work with Anita, but also one who doesn't work with us anymore, they saw this and said like, Hey, like this looks like a good thing to open source it. So it was like, sure, like I'm happy to opensource it. So we opensource it and then we went to all System Go, which is a very nice interesting conference that happened in Berlin where like all the head for like user space get together. and, and I talk about it and people seems to like it, and that's the story of that. [00:52:15] Jeremy: And so this was replacing, I guess, like you were saying, a lot of people were shelling out and running cat commands and things like that from their Python scripts. And this was meant to be a layer on top of that. [00:52:30] Alvaro: Yes. So it, it does a couple of things. So first of all, inspecting the processes or, or like the services, getting that information out. That's one of the main usage. But also like starting or stopping or like doing all that operations that you want to do. Uh, knowing the state of, of, of services, uh, that's also another thing that people take advantage of. The other thing that people take advantage of is to modify the status of the, of the processes at runtime, like changing properties, like increasing or decreasing the CPU threshold. because systemd provides a very nice API or interface to modify the cgroups properties that otherwise you would need to kind of understand the tree structure that, uh, that, that whatever. so that's what people tend to use this mostly internally. [00:53:23] Jeremy: And so it, it sounds like at least on the production engineering side, you're primarily working in, in Python. is that something that's the teams before were using Python and so everybody just continues using Python? Or is there kind of like more structure or thought put into that? [00:53:41] Alvaro: I would say the following thing about it, um, like in in general, uh, there's, there's not a direction on which language you should use. It's pretty natural which language you should use, but with without said, there's not a Potpourri of languages inside of, of meta. most teams use c c plus plus Python and rust and that's it. There's go, that appears every once in a while there. Sorry, I should not talk about this like, like, or talk like this about this, but eh, there are team who are actually like very fond of go and they use it and they contribute a lot to that space. It's just not. That much, uh, use internally. I have always gravitated towards Python. That has been the language that teach me how to do real coding. and that's the language that got me a job at meta. So I tends to work mostly on that. Yeah. [00:54:31] Anita: Hey, you forgot hack Alvaro. Our web services. (laughs) [00:54:37] Alvaro: Yes. Yes. Uh, so I would say like, the most used language at Meta is actually PHP it's just like used by, by one particular product. That, that is the Facebook product. Yes. So our, our entire web interface, eh, or web stack uses a combination of hack, which is a compiled php, which is better than uncompiled php, also known as vanilla php. Uh, there is a lot of like GraphQL, React, and, I think that's it. [00:55:07] Anita: Infrastructure is largely like c plus plus Python, and now Rust is getting a huge following as well. [00:55:15] Alvaro: Yeah. Like, like Rust. Rust is, I I would say it's the fastest growing language inside, inside of Meta. And the thing is that there is also what you call like the bootstrap problem. Um, there's like today, if I wanted do my python program and I have a function that fails one every three times, I can add a decorator that is retry, that retries every time that something fails for a timeout, right? And that's built in and it's there used and it's documented. And I can look at source code that uses this to understand how, how works. When you start with a new language, you don't get the things. So people have to build them. So there's the bootstrap problem. [00:55:55] Jeremy: That's also an opportunity as well, right? Like if you are the ones building sort of the foundations, then you, you have an opportunity to be the ones who have the core libraries that people are, are using every day. Whereas if a language has been around a while, it's kind of, some of that stuff is already set, right? And you may or may not like the APIs, but that's what people use. So that's what we, that's what we do. One of the last things I'd kind of like to ask, so Anita, you moved into management in just the last year or two or so, and I'm kind of curious what your experience has. Been like, was that a conscious decision where you wanted to go from engineering, uh, software engineering to management? Or maybe you could talk a little bit to that. [00:56:50] Anita: Oh man, it hasn't even been a year yet. I feel like so much time has passed already. Uh, no, I never had any plans to go into management. I love being an engineer. I love being in the code. but, I'd say my, my current manager and uh, my director, you know, who hired me into the Linux user space team, kind of. Sold me a little bit on the idea of like, Hey, if you wanna like, keep pushing more projects, you wanna build out the team that you wanna see working on these things, um, you can consider going into management, taking it slow in a, what we call a T L M role, which is like a tech lead manager, role where you kind of spend some time doing development, and leading the team while also supporting, the engineers as a manager doing the hiring and the relationship building and things that you do in management. so that actually worked out quite well for me, despite Alvaro shaking his head at first. I really enjoyed being able to split my time into kind of the key projects that I really wanted to work on, um, while also supporting the engineers and having them build out, um, New features in systemd and kind of getting their own foothold in the community as well. but I'd say like in the past few months, it's been pretty crazy. I, I probably naively thought that I'd have a little more control over, I don't know. My destiny has a manager and that's like a hundred percent not true. (laughs) Um, you're, you are kind of at both the whims of your engineers and also the people above you. And you kind of have to strike that balance. But, um, my favorite part still, just being able to hide the nasty stuff away from the engineers, let them focus on their work and enjoy what engineers wanna do best, which is just like coding, designing, and like, you know, doing fun, open source stuff. [00:58:56] Alvaro: I will say like, Anita may laugh about me for, because like she's on the other side, but one thing that I least I find very cool at Meta is that managers are not seen as your boss. Right? They're still like a teammate who just basically has a different roles. This is why like when you're an engineer, you can transition to be a manager and that's, it's not considered a promotion that's considered like a, a like an horizontal step and vice versa, you can come back, right. from a manager into, into like an engineer. Yeah. [00:59:25] Jeremy: That was what I would say. And, uh, I guess when you were shaking your head, I'm guessing this means you, you don't wanna become a manager anytime soon. [00:59:35] Alvaro: So I, I never closed the door on that, but I was checking my head to the work of a tlm. Right. Uh, so the tlm TL stands for Tech Lead and m stands for manager. so you're basically both, but with the time of only one. So, uh, Anita was able to pull it off. I don't think I would be able to pull up like, double duty on that. [00:59:56] Anita: Yeah. Unfortunately I support too many people now to do the TL stuff as deeply as I used to, but I still have find some time to code a little bit here and there. [01:00:09] Jeremy: So you were talking a little bit about how things have been crazy the last few months. If, if someone is making the transition into management, like what are the kinds of things that you would tell them to, to look out for or to be aware that's coming? [01:00:27] Anita: Um, when I, before I transitioned, I talked to a lot of managers about like, oh, what was like, you know, the hardest part about management. And they all have kind of their own horror story about what happened to them when they transitioned or even like, difficult things that happened to them during management. I'd say don't expect it to be easy. you're gonna make a lot of mistakes usually in like the interpersonal relationship side, and it's really just about learning how to learn from your mistakes, pick back up and do better next time. I think, um, you know, if people like books, the Making of a Manager by Julie Jo, she was a designer, and also a manager, at then Facebook. She's no longer here. but she has a really good book on like what you can expect when you transition into management. the other thing I'd say is don't go into management without having a management chain that you can really trust. I'd say that can kind of make or break your first few years as a manager, whether you'll enjoy it or not, or even like whether you'll be able to get through the hard times. [01:01:42] Jeremy: Good point. Yeah. I mean, I think whenever you take on anything new, right? Having the support of the people above you or just around you as well is like, that makes such a big difference, right? Even like the situation can be bad, but if everyone is supportive, then you can, you can get through it. [01:02:02] Anita: Yeah, that's absolutely right. [01:02:04] Jeremy: I think that's a good place to wrap up unless either of you have anything else that you thought we should have talked about. so if people want to check out what you're working on, what you're up to, um, how can they find you? [01:02:20] Anita: well, I guess we're both on matrix now. Uh, I'm Anita Zha on Matrix, a n i t a z h A. we both have Twitters as well. If you just search up our names. Nope. Yeah, you're on Twitter. Yeah. [01:02:36] Alvaro: There is an impostor with my name, right? Actually it's not an impostor. It's just me. I just never log into Twitter anymore. [01:02:40] Anita: We both have Mastodon now as well? Yes. Fosstodon we're both frequently at conferences as well. what's, what's coming up next? I think it's, uh, devconf cZ in the Czech Republic. and then, uh, all systems go in September. [01:02:57] Alvaro: You sent something in Canada? [01:03:01] Anita: Oh, yeah. L F F L F S M M B P F is coming up. That's a, that's more of a kernel conference, though. [01:03:09] Alvaro: An acryonym that is longer than the actual word. Yes. Yeah. [01:03:12] Jeremy: That's a lot. That's a lot of letters. [01:03:14] Anita: It's a, it's a mouthful. (laughs) [01:03:18] Jeremy: That's very neat that you get to, to kind of go to all these different conferences and, and actually get, to meet the people in, in person that are, you know, working with the same things you are and, get to be in the same room. I think that's a, that's a real privilege. Yeah. [01:03:35] Anita: Yeah, for sure. [01:03:38] Jeremy: All right. Well, Anita and Alvaro, thank you so much for chatting with me today. [01:03:43] Alvaro: Thank you for hosting. [01:03:45] Anita: Yeah. Thanks for the opportunity. This is a lot of fun.
Liz Rice, Chief Open Source Officer at Isovalent, joins Corey on Screaming in the Cloud to discuss the release of her newest book, Learning eBPF, and the exciting possibilities that come with eBPF technology. Liz explains what got her so excited about eBPF technology, and what it was like to write a book while also holding a full-time job. Corey and Liz also explore the learning curve that comes with kernel programming, and Liz illustrates why it's so important to be able to explain complex technologies in simple terminology. About LizLiz Rice is Chief Open Source Officer with eBPF specialists Isovalent, creators of the Cilium cloud native networking, security and observability project. She sits on the CNCF Governing Board, and on the Board of OpenUK. She was Chair of the CNCF's Technical Oversight Committee in 2019-2022, and Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, and Learning eBPF, both published by O'Reilly.She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, competing in virtual races on Zwift, and making music under the pseudonym Insider Nine.Links Referenced: Isovalent: https://isovalent.com/ Learning eBPF: https://www.amazon.com/Learning-eBPF-Programming-Observability-Networking/dp/1098135121 Container Security: https://www.amazon.com/Container-Security-Fundamental-Containerized-Applications/dp/1492056707/ GitHub for Learning eBPF: https://github.com/lizRice/learning-eBPF TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Our returning guest today is Liz Rice, who remains the Chief Open Source Officer with Isovalent. But Liz, thank you for returning, suspiciously closely timed to when you have a book coming out. Welcome back.Liz: [laugh]. Thanks so much for having me. Yeah, I've just—I've only had the physical copy of the book in my hands for less than a week. It's called Learning eBPF. I mean, obviously, I'm very excited.Corey: It's an O'Reilly book; it has some form of honeybee on the front of it as best I can tell.Liz: Yeah, I was really pleased about that. Because eBPF has a bee as its logo, so getting a [early 00:01:17] honeybee as the O'Reilly animal on the front cover of the book was pretty pleasing, yeah.Corey: Now, this is your second O'Reilly book, is it not?Liz: It's my second full book. So, I'd previously written a book on Container Security. And I've done a few short reports for them as well. But this is the second, you know, full-on, you can buy it on Amazon kind of book, yeah.Corey: My business partner wrote Practical Monitoring for O'Reilly and that was such an experience that he got entirely out of observability as a field and ran running to AWS bills as a result. So, my question for you is, why would anyone do that more than once?Liz: [laugh]. I really like explaining things. And I had a really good reaction to the Container Security book. I think already, by the time I was writing that book, I was kind of interested in eBPF. And we should probably talk about what that is, but I'll come to that in a moment.Yeah, so I've been really interested in eBPF, for quite a while and I wanted to be able to do the same thing in terms of explaining it to people. A book gives you a lot more opportunity to go into more detail and show people examples and get them kind of hands-on than you can do in their, you know, 40-minute conference talk. So, I wanted to do that. I will say I have written myself a note to never do a full-size book while I have a full-time job because it's a lot [laugh].Corey: You do have a full-time job and then some. As we mentioned, you're the Chief Open Source Officer over at Isovalent, you are on the CNCF governing board, you're on the board of OpenUK, and you've done a lot of other stuff in the open-source community as well. So, I have to ask, taking all of that together, are you just allergic to things that make money? I mean, writing the book as well on top of that. I'm told you never do it for the money piece; it's always about the love of it. But it seems like, on some level, you're taking it to an almost ludicrous level.Liz: Yeah, I mean, I do get paid for my day job. So, there is that [laugh]. But so, yeah—Corey: I feel like that's the only way to really write a book is, in turn, to wind up only to just do it for—what someone else is paying you to for doing it, viewing it as a marketing exercise. It pays dividends, but those dividends don't, in my experience from what I've heard from everyone say, pay off as of royalties on book payments.Liz: Yeah, I mean, it's certainly, you know, not a bad thing to have that income stream, but it certainly wouldn't make you—you know, I'm not going to retire tomorrow on the royalty stream unless this podcast has loads and loads of people to buy the book [laugh].Corey: Exactly. And I'm always a fan of having such [unintelligible 00:03:58]. I will order it while we're on the call right now having this conversation because I believe in supporting the things that we want to see more of in the world. So, explain to me a little bit about what it is. Whatever you talking about learning X in a title, I find that that's often going to be much more approachable than arcane nonsense deep-dive things.One of the O'Reilly books that changed my understanding was Linux Kernel Internals, or Understanding the Linux Kernel. Understanding was kind of a heavy lift at that point because it got very deep very quickly, but I absolutely came away understanding what was going on a lot more effectively, even though I was so slow I needed a tow rope on some of it. When you have a book that started with learning, though, I imagined it assumes starting at zero with, “What's eBPF?” Is that directionally correct, or does it assume that you know a lot of things you don't?Liz: Yeah, that's absolutely right. I mean, I think eBPF is one of these technologies that is starting to be, particularly in the cloud-native world, you know, it comes up; it's quite a hot technology. What it actually is, so it's an acronym, right? EBPF. That acronym is almost meaningless now.So, it stands for extended Berkeley Packet Filter. But I feel like it does so much more than filtering, we might as well forget that altogether. And it's just become a term, a name in its own right if you like. And what it really does is it lets you run custom programs in the kernel so you can change the way that the kernel behaves, dynamically. And that is… it's a superpower. It's enabled all sorts of really cool things that we can do with that superpower.Corey: I just pre-ordered it as a paperback on Amazon and it shows me that it is now number one new release in Linux Networking and Systems Administration, so you're welcome. I'm sure it was me that put it over the top.Liz: Wonderful. Thank you very much. Yeah [laugh].Corey: Of course, of course. Writing a book is one of those things that I've always wanted to do, but never had the patience to sit there and do it or I thought I wasn't prolific enough, but over the holidays, this past year, my wife and business partner and a few friends all chipped in to have all of the tweets that I'd sent bound into a series of leather volumes. Apparently, I've tweeted over a million words. And… yeah, oh, so I have to write a book 280 characters at a time, mostly from my phone. I should tweet less was really the takeaway that I took from a lot of that.But that wasn't edited, that wasn't with an overall theme or a narrative flow the way that an actual book is. It just feels like a term paper on steroids. And I hated term papers. Love reading; not one to write it.Liz: I don't know whether this should make it into the podcast, but it reminded me of something that happened to my brother-in-law, who's an artist. And he put a piece of video on YouTube. And for unknowable reasons if you mistyped YouTube, and you spelt it, U-T-U-B-E, the page that you would end up at from Google search was a YouTube video and it was in fact, my brother-in-law's video. And people weren't expecting to see this kind of art movie about matches burning. And he just had the worst comment—like, people were so mean in the comments. And he had millions of views because people were hitting this page by accident, and he ended up—Corey: And he made the cardinal sin of never read the comments. Never break that rule. As soon as you do that, it doesn't go well. I do read the comments on various podcast platforms on this show because I always tell people to insulted all they want, just make sure you leave a five-star review.Liz: Well, he ended up publishing a book with these comments, like, one comment per page, and most of them are not safe for public consumption comments, and he just called it Feedback. It was quite something [laugh].Corey: On some level, it feels like O'Reilly books are a little insulated from the general population when it comes to terrible nonsense comments, just because they tend to be a little bit more expensive than the typical novel you'll see in an airport bookstore, and again, even though it is approachable, Learning eBPF isn't exactly the sort of title that gets people to think that, “Ooh, this is going to be a heck of a thriller slash page-turner with a plot.” “Well, I found the protagonist unrelatable,” is not sort of the thing you're going to wind up seeing in the comments because people thought it was going to be something different.Liz: I know. One day, I'm going to have to write a technical book that is also a murder mystery. I think that would be, you know, quite an achievement. But yeah, I mean, it's definitely aimed at people who have already come across the term, want to know more, and particularly if you're the kind of person who doesn't want to just have a hand-wavy explanation that involves boxes and diagrams, but if, like me, you kind of want to feel the code, and you want to see how things work and you want to work through examples, then that's the kind of person who might—I hope—enjoy working through the book and end up with a possible mental model of how eBPF works, even though it's essentially kernel programming.Corey: So, I keep seeing eBPF in an increasing number of areas, a bunch of observability tools, a bunch of security tools all tend to tie into it. And I've seen people do interesting things as far as cost analysis with it. The problem that I run into is that I'm not able to wind up deploying it universally, just because when I'm going into a client engagement, I am there in a purely advisory sense, given that I'm biasing these days for both SaaS companies and large banks, that latter category is likely going to have some problems if I say, “Oh, just take this thing and go ahead and deploy it to your entire fleet.” If they don't have a problem with that, I have a problem with their entire business security posture. So, I don't get to be particularly prescriptive as far as what to do with it.But if I were running my own environment, it is pretty clear by now that I would have explored this in some significant depth. Do you find that it tends to be something that is used primarily in microservices environments? Does it effectively require Kubernetes to become useful on day one? What is the onboard path where people would sit back and say, “Ah, this problem I'm having, eBPF sounds like the solution.”Liz: So, when we write tools that are typically going to be some sort of infrastructure, observability, security, networking tools, if we're writing them using eBPF, we're instrumenting the kernel. And the kernel gets involved every time our application wants to do anything interesting because whenever it wants to read or write to a file, or send receive network messages, or write something to the screen, or allocate memory, or all of these things, the kernel has to be involved. And we can use eBPF to instrument those events and do interesting things. And the kernel doesn't care whether those processes are running in containers, under Kubernetes, just running directly on the host; all of those things are visible to eBPF.So, in one sense, doesn't matter. But one of the reasons why I think we're seeing eBPF-based tools really take off in cloud-native is that you can, by applying some programming, you can link events that happened in the kernel to specific containers in specific pods in whatever namespace and, you know, get the relationship between an event and the Kubernetes objects that are involved in that event. And then that enables a whole lot of really interesting observability or security tools and it enables us to understand how network packets are flowing between different Kubernetes objects and so on. So, it's really having this vantage point in the kernel where we can see everything and we didn't have to change those applications in any way to be able to use eBPF to instrument them.Corey: When I see the stories about eBPF, it seems like it's focused primarily on networking and flow control. That's where I'm seeing it from a security standpoint, that's where I'm seeing it from cost allocation aspect. Because, frankly, out of the box, from a cloud provider's perspective, Kubernetes looks like a single-tenant application with a really weird behavioral pattern, and some of that crosstalk gets very expensive. Is there a better way than either using eBPF and/or VPC flow logs to figure out what's talking to what in the Kubernetes ecosystem, or is BPF really your first port of call?Liz: So, I'm coming from a position of perspective of working for the company that created the Cilium networking project. And one of the reasons why I think Cilium is really powerful is because it has this visibility—it's got a component called Hubble—that allows you to see exactly how packets are flowing between these different Kubernetes identities. So, in a Kubernetes environment, there's not a lot of point having network flows that talk about IP addresses and ports when what you really want to know is, what's the Kubernetes namespace, what's the application? Defining things in terms of IP addresses makes no sense when they're just being refreshed and renewed every time you change pods. So yeah, Kubernetes changes the requirements on networking visibility and on firewalling as well, on network policy, and that, I think, is you don't have to use eBPF to create those tools, but eBPF is a really powerful and efficient platform for implementing those tools, as we see in Cilium.Corey: The only competitor I found to it that gives a reasonable explanation of why random things are transferring multiple petabytes between each other in the middle of the night has been oral tradition, where I'm talking to people who've been around there for a while. It's, “So, I'm seeing this weird traffic pattern at these times a day. Any idea what that might be?” And someone will usually perk up and say, “Oh, is it—” whatever job that they're doing. Great. That gives me a direction to go in.But especially in this era of layoffs and as environments exist for longer and longer, you have to turn into a bit of a data center archaeologist. That remains insufficient, on some level. And some level, I'm annoyed with trying to understand or needing to use tooling like this that is honestly this powerful and this customizable, and yes, on some level, this complex in order to get access to that information in a meaningful sense. But on the other, I'm glad that that option is at least there for a lot of workloads.Liz: Yeah. I think, you know, that speaks to the power of this new generation of tooling. And the same kind of applies to security forensics, as well, where you might have an enormous stream of events, but unless you can tie those events back to specific Kubernetes identities, which you can use eBPF-based tooling to do, then how do you—the forensics job of tying back where did that event come from, what was the container that was compromised, it becomes really, really difficult. And eBPF tools—like Cilium has a sub-project called Tetragon that is really good at this kind of tying events back to the Kubernetes pod or whether we want to know what node it was running on what namespace or whatever. That's really useful forensic information.Corey: Talk to me a little bit about how broadly applicable it is. Because from my understanding from our last conversation, when you were on the show a year or so ago, if memory serves, one of the powerful aspects of it was very similar to what I've seen some of Brendan Gregg's nonsense doing in his kind of various talks where you can effectively write custom programming on the fly and it'll tell you exactly what it is that you need. Is this something that can be instrument once and then effectively use it for basically anything, [OTEL 00:16:11]-style, or instead, does it need to be effectively custom configured every time you want to get a different aspect of information out of it?Liz: It can be both of those things.Corey: “It depends.” My least favorite but probably the most accurate answer to hear.Liz: [laugh]. But I think Brendan did a really great—he's done many talks talking about how powerful BPF is and built lots of specific tools, but then he's also been involved with Bpftrace, which is kind of like a language for—a high-level language for saying what it is that you want BPF to trace out for you. So, a little bit like, I don't know, awk but for events, you know? It's a scripting language. So, you can have this flexibility.And with something like Bpftrace, you don't have to get into the weeds yourself and do kernel programming, you know, in eBPF programs. But also there's gainful employment to be had for people who are interested in that eBPF kernel programming because, you know, I think there's just going to be a whole range of more tools to come, you know>? I think we're, you know, we're seeing some really powerful tools with Cilium and Pixie and [Parker 00:17:27] and Kepler and many other tools and projects that are using eBPF. But I think there's also a whole load of more to come as people think about different ways they can apply eBPF and instrument different parts of an overall system.Corey: We're doing this over audio only, but behind me on my wall is one of my least favorite gifts ever to have been received by anyone. Mike, my business partner, got me a thousand-piece puzzle of the Kubernetes container landscape where—Liz: [laugh].Corey: This diagram is psychotic and awful and it looks like a joke, except it's not. And building that puzzle was maddening—obviously—but beyond that, it was a real primer in just how vast the entire container slash Kubernetes slash CNCF landscape really is. So, looking at this, I found that the only reaction that was appropriate was a sense of overwhelmed awe slash frustration, I guess. It's one of those areas where I spend a lot of time focusing on drinking from the AWS firehose because they have a lot of products and services because their product strategy is apparently, “Yes,” and they're updating these things in a pretty consistent cadence. Mostly. And even that feels like it's multiple full-time jobs shoved into one.There are hundreds of companies behind these things and all of them are in areas that are incredibly complex and difficult to go diving into. EBPF is incredibly powerful, I would say ridiculously so, but it's also fiendishly complex, at least shoulder-surfing behind people who know what they're doing with it has been breathtaking, on some level. How do people find themselves in a situation where doing a BPF deep dive make sense for them?Liz: Oh, that's a great question. So, first of all, I'm thinking is there an AWS Jigsaw as well, like the CNCF landscape Jigsaw? There should be. And how many pieces would it have? [It would be very cool 00:19:28].Corey: No, because I think the CNCF at one point hired a graphic designer and it's unclear that AWS has done such a thing because their icons for services are, to be generous here, not great. People have flashcards that they've built for is what services does logo represent? Haven't a clue, in almost every case because I don't care in almost every case. But yeah, I've toyed with the idea of doing it. It's just not something that I'd ever want to have my name attached to it, unfortunately. But yeah, I want someone to do it and someone else to build it.Liz: Yes. Yeah, it would need to refresh every, like, five minutes, though, as they roll out a new service.Corey: Right. Because given that it appears from the outside to be impenetrable, it's similar to learning VI in some cases, where oh, yeah, it's easy to get started with to do this trivial thing. Now, step two, draw the rest of the freaking owl. Same problem there. It feels off-putting just from a perspective of you must be at least this smart to proceed. How do you find people coming to it?Liz: Yeah, there is some truth in that, in that beyond kind of Hello World, you quite quickly start having to do things with kernel data structures. And as soon as you're looking at kernel data structures, you have to sort of understand, you know, more about the kernel. And if you change things, you need to understand the implications of those changes. So, yeah, you can rapidly say that eBPF programming is kernel programming, so why would anybody want to do it? The reason why I do it myself is not because I'm a kernel programmer; it's because I wanted to really understand how this is working and build up a mental model of what's happening when I attach a program to an event. And what kinds of things can I do with that program?And that's the sort of exploration that I think I'm trying to encourage people to do with the book. But yes, there is going to be at some point, a pretty steep learning curve that's kernel-related but you don't necessarily need to know everything in order to really have a decent understanding of what eBPF is, and how you might, for example—you might be interested to see what BPF programs are running on your existing system and learn why and what they might be doing and where they're attached and what use could that be.Corey: Falling down that, looking at the process table once upon a time was a heck of an education, one week when I didn't have a lot to do and I didn't like my job in those days, where, “Oh, what is this Avahi daemon that constantly running? MDNS forwarding? Who would need that?” And sure enough, that tickled something in the back of my mind when I wound up building out my networking box here on top of BSD, and oh, yeah, I want to make sure that I can still have discovery work from the IoT subnet over to whatever it is that my normal devices live. Ah, that's what that thing always running for. Great for that one use case. Almost never needed in other cases, but awesome. Like, you fire up a Raspberry Pi. It's, “Why are all these things running when I'm just want to have an embedded device that does exactly one thing well?” Ugh. Computers have gotten complicated.Liz: I know. It's like when you get those pop-ups on—well certainly on Mac, and you get pop-ups occasionally, let's say there's such and such a daemon wants extra permissions, and you think I'm not hitting that yes button until I understand what that daemon is. And it turns out, it's related, something completely innocuous that you've actually paid for, but just under a different name. Very annoying. So, if you have some kind of instrumentation like tracing or logging or security tooling that you want to apply to all of your containers, one of the things you can use is a sidecar container approach. And in Kubernetes, that means you inject the sidecar into every single pod. And—Corey: Yes. Of course, the answer to any Kubernetes problem appears to be have you tried running additional containers?Liz: Well, right. And there are challenges that can come from that. And one of the reasons why you have to do that is because if you want a tool that has visibility over that container that's inside the pod, well, your instrumentation has to also be inside the pod so that it has visibility because your pod is, by design, isolated from the host it's running on. But with eBPF, well eBPF is in the kernel and there's only one kernel, however many containers were running. So, there is no kind of isolation between the host and the containers at the kernel level.So, that means if we can instrument the kernel, we don't have to have a separate instance in every single pod. And that's really great for all sorts of resource usage, it means you don't have to worry about how you get those sidecars into those pods in the first place, you know that every pod is going to be instrumented if it's instrumented in the kernel. And then for service mesh, service mesh usually uses a sidecar as a Layer 7 Proxy injected into every pod. And that actually makes for a pretty convoluted networking path for a packet to sort of go from the application, through the proxy, out to the host, back into another pod, through another proxy, into the application.What we can do with eBPF, we still need a proxy running in userspace, but we don't need to have one in every single pod because we can connect the networking namespaces much more efficiently. So, that was essentially the basis for sidecarless service mesh, which we did in Cilium, Istio, and now we're using a similar sort of approach with Ambient Mesh. So that, again, you know, avoiding having the overhead of a sidecar in every pod. So that, you know, seems to be the way forward for service mesh as well as other types of instrumentation: avoiding sidecars.Corey: On some level, avoiding things that are Kubernetes staples seems to be a best practice in a bunch of different directions. It feels like it's an area where you start to get aligned with the idea of service meesh—yes, that's how I pluralize the term service mesh and if people have a problem with that, please, it's imperative you've not send me letters about it—but this idea of discovering where things are in a variety of ways within a cluster, where things can talk to each other, when nothing is deterministically placed, it feels like it is screaming out for something like this.Liz: And when you think about it, Kubernetes does sort of already have that at the level of a service, you know? Services are discoverable through native Kubernetes. There's a bunch of other capabilities that we tend to associate with service mesh like observability or encrypted traffic or retries, that kind of thing. But one of the things that we're doing with Cilium, in general, is to say, but a lot of this is just a feature of the networking, the underlying networking capability. So, for example, we've got next generation mutual authentication approach, which is using SPIFFE IDs between an application pod and another application pod. So, it's like the equivalent of mTLS.But the certificates are actually being passed into the kernel and the encryption is happening at the kernel level. And it's a really neat way of saying we don't need… we don't need to have a sidecar proxy in every pod in order to terminate those TLS connections on behalf of the application. We can have the kernel do it for us and that's really cool.Corey: Yeah, at some level, I find that it still feels weird—because I'm old—to have this idea of one shared kernel running a bunch of different containers. I got past that just by not requiring that [unintelligible 00:27:32] workloads need to run isolated having containers run on the same physical host. I found that, for example, running some stuff, even in my home environment for IoT stuff, things that I don't particularly trust run inside of KVM on top of something as opposed to just running it as a container on a cluster. Almost certainly stupendous overkill for what I'm dealing with, but it's a good practice to be in to start thinking about this. To my understanding, this is part of what AWS's Firecracker project starts to address a bit more effectively: fast provisioning, but still being able to use different primitives as far as isolation boundaries go. But, on some level, it's nice to not have to think about this stuff, but that's dangerous.Liz: [laugh]. Yeah, exactly. Firecracker is really nice way of saying, “Actually, we're going to spin up a whole VM,” but we don't ne—when I say ‘whole VM,' we don't need all of the things that you normally get in a VM. We can get rid of a ton of things and just have the essentials for running that Lambda or container service, and it becomes a really nice lightweight solution. But yes, that will have its own kernel, so unlike, you know, running multiple kernels on the same VM where—sorry, running multiple containers on the same virtual machine where they would all be sharing one kernel, with Firecracker you'll get a kernel per instance of Firecracker.Corey: The last question I have for you before we wind up wrapping up this episode harkens back to something you said a little bit earlier. This stuff is incredibly technically nuanced and deep. You clearly have a thorough understanding of it, but you also have what I think many people do not realize is an orthogonal skill of being able to articulate and explain those complex concepts simply an approachably, in ways that make people understand what it is you're talking about, but also don't feel like they're being spoken to in a way that's highly condescending, which is another failure mode. I think it is not particularly well understood, particularly in the engineering community, that there are—these are different skill sets that do not necessarily align congruently. Is this something you've always known or is this something you've figured out as you've evolved your career that, oh I have a certain flair for this?Liz: Yeah, I definitely didn't always know it. And I started to realize it based on feedback that people have given me about talks and articles I'd written. I think I've always felt that when people use jargon or they use complicated language or they, kind of, make assumptions about how things are, it quite often speaks to them not having a full understanding of what's happening. If I want to explain something to myself, I'm going to use straightforward language to explain it to myself [laugh] so I can hold it in my head. And I think people appreciate that.And you can get really—you know, you can get quite in-depth into something if you just start, step by step, build it up, explain everything as you go along the way. And yeah, I think people do appreciate that. And I think people, if they get lost in jargon, it doesn't help anybody. And yeah, I very much appreciate it when people say that, you know, they saw a talk or they read something I wrote and it meant that they finally grokked whatever that concept was that that I was trying to explain. I will say at the weekend, I asked ChatGPT to explain DNS in the style of Liz Rice, and it started off, it was basically, “Hello there. I'm Liz Rice and I'm here to explain DNS in very simple terms.” I thought, “Okay.” [laugh].Corey: Every time I think I've understood DNS, there's another level to it.Liz: I'm pretty sure there is a lot about DNS that I don't understand, yeah. So, you know, there's always more to learn out there.Corey: There's certainly is. I really want to thank you for taking time to speak with me today about what you're up to. Where's the best place for people to find you to learn more? And of course, to buy the book.Liz: Yeah, so I am Liz Rice pretty much everywhere, all over the internet. There is a GitHub repo that accompanies the books that you can find that on GitHub: lizRice/learning-eBPF. So, that's a good place to find some of the example code, and it will obviously link to where you can download the book or buy it because you can pay for it; you can also download it from Isovalent for the price of your contact details. So, there are lots of options.Corey: Excellent. And we will, of course, put links to that in the [show notes 00:32:08]. Thank you so much for your time. It's always great to talk to you.Liz: It's always a pleasure, so thanks very much for having me, Corey.Corey: Liz Rice, Chief Open Source Officer at Isovalent. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that you have somehow discovered this episode by googling for knitting projects.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Jean Yang, CEO of Akita Software, joins Corey on Screaming in the Cloud to discuss how she went from academia to tech founder, and what her company is doing to improve monitoring and observability. Jean explains why Akita is different from other observability & monitoring solutions, and how it bridges the gap from what people know they should be doing and what they actually do in practice. Corey and Jean explore why the monitoring and observability space has been so broken, and why it's important for people to see monitoring as a chore and not a hobby. Jean also reveals how she took a leap from being an academic professor to founding a tech start-up. About JeanJean Yang is the founder and CEO of Akita Software, providing the fastest time-to-value for API monitoring. Jean was previously a tenure-track professor in Computer Science at Carnegie Mellon University.Links Referenced: Akita Software: https://www.akitasoftware.com/ Aki the dog chatbot: https://www.akitasoftware.com/blog-posts/we-built-an-exceedingly-polite-ai-dog-that-answers-questions-about-your-apis Twitter: https://twitter.com/jeanqasaur TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today is someone whose company has… well, let's just say that it has piqued my interest. Jean Yang is the CEO of Akita Software and not only is it named after a breed of dog, which frankly, Amazon service namers could take a lot of lessons from, but it also tends to approach observability slash monitoring from a perspective of solving the problem rather than preaching a new orthodoxy. Jean, thank you for joining me.Jean: Thank you for having me. Very excited.Corey: In the world that we tend to operate in, there are so many different observability tools, and as best I can determine observability is hipster monitoring. Well, if we call it monitoring, we can't charge you quite as much money for it. And whenever you go into any environment of significant scale, we pretty quickly discover that, “What monitoring tool are you using?” The answer is, “Here are the 15 that we use.” Then you talk to other monitoring and observability companies and ask them which ones of those they've replace, and the answer becomes, “We're number 16.” Which is less compelling of a pitch than you might expect. What does Akita do? Where do you folks start and stop?Jean: We want to be—at Akita—your first stop for monitoring and we want to be all of the monitoring, you need up to a certain level. And here's the motivation. So, we've talked with hundreds, if not thousands, of software teams over the last few years and what we found is there is such a gap between best practice, what people think everybody else is doing, what people are talking about at conferences, and what's actually happening in software teams. And so, what software teams have told me over and over again, is, hey, we either don't actually use very many tools at all, or we use 15 tools in name, but it's you know, one [laugh] one person on the team set this one up, it's monitoring one of our endpoints, we don't even know which one sometimes. Who knows what the thresholds are really supposed to be. We got too many alerts one day, we turned it off.But there's very much a gap between what people are saying they're supposed to do, what people in their heads say they're going to do next quarter or the quarter after that and what's really happening in practice. And what we saw was teams are falling more and more into monitoring debt. And so effectively, their customers are becoming their monitoring and it's getting harder to catch up. And so, what Akita does is we're the fastest, easiest way for teams to quickly see what endpoints you have in your system—so that's API endpoints—what's slow and what's throwing errors. And you might wonder, okay, wait, wait, wait, Jean. Monitoring is usually about, like, logs, metrics, and traces. I'm not used to hearing about API—like, what do APIs have to do with any of it?And my view is, look, we want the most simple form of what might be wrong with your system, we want a developer to be able to get started without having to change any code, make any annotations, drop in any libraries. APIs are something you can watch from the outside of a system. And when it comes to which alerts actually matter, where do you want errors to be alerts, where do you want thresholds to really matter, my view is, look, the places where your system interfaces with another system are probably where you want to start if you've really gotten nothing. And so, Akita view is, we're going to start from the outside in on this monitoring. We're turning a lot of the views on monitoring and observability on its head and we just want to be the tool that you reach for if you've got nothing, it's middle of the night, you have alerts on some endpoint, and you don't want to spend a few hours or weeks setting up some other tool. And we also want to be able to grow with you up until you need that power tool that many of the existing solutions out there are today.Corey: It feels like monitoring is very often one of those reactive things. I come from the infrastructure world, so you start off with, “What do you use for monitoring?” “Oh, we wait till the help desk calls us and users are reporting a problem.” Okay, that gets you somewhere. And then it becomes oh, well, what was wrong that time? The drive filled up. Okay, so we're going to build checks in that tell us when the drives are filling up.And you wind up trying to enumerate all of the different badness. And as a result, if you leave that to its logical conclusion, one of the stories that I heard out of MySpace once upon a time—which dates me somewhat—is that you would have a shift, so there were three shifts working around the clock, and each one would open about 5000 tickets, give or take, for the monitoring alerts that wound up firing off throughout their infrastructure. At that point, it's almost, why bother? Because no one is going to be around to triage these things; no one is going to see any of the signal buried and all of that noise. When you talk about doing this for an API perspective, are you running synthetics against those APIs? Are you shimming them in order to see what's passing through them? What's the implementation side look like?Jean: Yeah, that's a great question. So, we're using a technology called BPF, Berkeley Packet Filter. The more trendy, buzzy term is EBPF—Corey: The EBPF. Oh yes.Jean: Yeah, Extended Berkeley Packet Filter. But here's the secret, we only use the BPF part. It's actually a little easier for users to install. The E part is, you know, fancy and often finicky. But um—Corey: SEBPF then: Shortened Extended BPF. Why not?Jean: [laugh]. Yeah. And what BPF allows us to do is passively watch traffic from the outside of a system. So, think of it as you're sending API calls across the network. We're just watching that network. We're not in the path of that traffic. So, we're not intercepting the traffic in any way, we're not creating any additional overhead for the traffic, we're not slowing it down in any way. We're just sitting on the side, we're watching all of it, and then we're taking that and shipping an obfuscated version off to our cloud, and then we're giving you analytics on that.Corey: One of the things that strikes me as being… I guess, a common trope is there are a bunch of observability solutions out there that offer this sort of insight into what's going on within an environment, but it's, “Step one: instrument with some SDK or some agent across everything. Do an entire deploy across your fleet.” Which yeah, people are not generally going to be in a hurry to sign up for. And further, you also said a minute ago that the idea being that someone could start using this in the middle of the night in the middle of an outage, which tells me that it's not, “Step one: get the infrastructure sparkling. Step two: do a global deploy to everything.” How do you go about doing that? What is the level of embeddedness into the environment?Jean: Yeah, that's a great question. So, the reason we chose BPF is I wanted a completely black-box solution. So, no SDKs, no code annotations. I wanted people to be able to change a config file and have our solution apply to anything that's on the system. So, you could add routes, you could do all kinds of things. I wanted there to be no additional work on the part of the developer when that happened.And so, we're not the only solution that uses BPF or EBPF. There's many other solutions that say, “Hey, just drop us in. We'll let you do anything you want.” The big difference is what happens with the traffic once it gets processed. So, what EBPF or BPF gives you is it watches everything about your system. And so, you can imagine that's a lot of different events. That's a lot of things.If you're trying to fix an incident in the middle of the night and someone just dumps on you 1000 pages of logs, like, what are you going to do with that? And so, our view is, the more interesting and important and valuable thing to do here is not make it so that you just have the ability to watch everything about your system but to make it so that developers don't have to sift through thousands of events just to figure out what went wrong. So, we've spent years building algorithms to automatically analyze these API events to figure out, first of all, what are your endpoints? Because it's one thing to turn on something like Wireshark and just say, okay, here are the thousand API calls, I saw—ten thousand—but it's another thing to say, “Hey, 500 of those were actually the same endpoint and 300 of those had errors.” That's quite a hard problem.And before us, it turns out that there was no other solution that even did that to the level of being able to compile together, “Here are all the slow calls to an endpoint,” or, “Here are all of the erroneous calls to an endpoint.” That was blood, sweat, and tears of developers in the night before. And so, that's the first major thing we do. And then metrics on top of that. So, today we have what's slow, what's throwing errors. People have asked us for other things like show me what happened after I deployed. Show me what's going on this week versus last week. But now that we have this data set, you can imagine there's all kinds of questions we can now start answering much more quickly on top of it.Corey: One thing that strikes me about your site is that when I go to akitasoftware.com, you've got a shout-out section at the top. And because I've been doing this long enough where I find that, yeah, you work at a company; you're going to say all kinds of wonderful, amazing aspirational things about it, and basically because I have deep-seated personality disorders, I will make fun of those things as my default reflexive reaction. But something that AWS, for example, does very well is when they announce something ridiculous on stage at re:Invent, I make fun of it, as is normal, but then they have a customer come up and say, “And here's the expensive, painful problem that they solved for us.”And that's where I shut up and start listening. Because it's a very different story to get someone else, who is presumably not being paid, to get on stage and say, “Yeah, this solved a sophisticated, painful problem.” Your shout-outs page has not just a laundry list of people saying great things about it, but there are former folks who have been on the show here, people I know and trust: Scott Johnson over at Docker, Gergely Orosz over at The Pragmatic Engineer, and other folks who have been luminaries in the space for a while. These are not the sort of people that are going to say, “Oh, sure. Why not? Oh, you're going to send me a $50 gift card in a Twitter DM? Sure I'll say nice things,” like it's one of those respond to a viral tweet spamming something nonsense. These are people who have gravitas. It's clear that there's something you're building that is resonating.Jean: Yeah. And for that, they found us. Everyone that I've tried to bribe to say good things about us actually [laugh] refused.Corey: Oh, yeah. As it turns out that it's one of those things where people are more expensive than you might think. It's like, “What, you want me to sell my credibility down the road?” Doesn't work super well. But there's something like the unsolicited testimonials that come out of, this is amazing, once people start kicking the tires on it.You're currently in open beta. So, I guess my big question for you is, whenever you see a product that says, “Oh, yeah, we solve everything cloud, on-prem, on physical instances, on virtual machines, on Docker, on serverless, everything across the board. It's awesome.” I have some skepticism on that. What is your ideal application architecture that Akita works best on? And what sort of things are you a complete nonstarter for?Jean: Yeah, I'll start with a couple of things we work well on. So, container platforms. We work relatively well. So, that's your Fargate, that's your Azure Web Apps. But that, you know, things running, we call them container platforms. Kubernetes is also something that a lot of our users have picked us up and had success with us on. I will say our Kubernetes deploy is not as smooth as we would like. We say, you know, you can install us—Corey: Well, that is Kubernetes, yes.Jean: [laugh]. Yeah.Corey: Nothing in Kubernetes is as smooth as we would like.Jean: Yeah, so we're actually rolling out Kubernetes injection support in the next couple of weeks. So, those are the two that people have had the most success on. If you're running on bare metal or on a VM, we work, but I will say that you have to know your way around a little bit to get that to work. What we don't work on is any Platform as a Service. So, like, a Heroku, a Lambda, a Render at the moment. So those, we haven't found a way to passively listen to the network traffic in a good way right now.And we also work best for unencrypted HTTP REST traffic. So, if you have encrypted traffic, it's not a non-starter, but you need to fall into a couple of categories. You either need to be using Kubernetes, you can run Akita as a sidecar, or you're using Nginx. And so, that's something we're still expanding support on. And we do not support GraphQL or GRPC at the moment.Corey: That's okay. Neither do I. It does seem these days that unencrypted HTTP API calls are increasingly becoming something of a relic, where folks are treating those as anti-patterns to be stamped out ruthlessly. Are you still seeing significant deployments of unencrypted APIs?Jean: Yeah. [laugh]. So, Corey—Corey: That is the reality, yes.Jean: That's a really good question, Corey, because in the beginning, we weren't sure what we wanted to focus on. And I'm not saying the whole deployment is unencrypted HTTP, but there is a place to install Akita to watch where it's unencrypted HTTP. And so, this is what I mean by if you have encrypted traffic, but you can install Akita as a Kubernetes sidecar, we can still watch that. But there was a big question when we started: should this be GraphQL, GRPC, or should it be REST? And I read the “State of the API Report” from Postman for you know, five years, and I still keep up with it.And every year, it seemed that not only was REST, remaining dominant, it was actually growing. So, [laugh] this was shocking to me as well because people said, well, “We have this more structured stuff, now. There's GRPC, there's GraphQL.” But it seems that for the added complexity, people weren't necessarily seeing the value and so, REST continues to dominate. And I've actually even seen a decline in GraphQL since we first started doing this. So, I'm fully on board the REST wagon. And in terms of encrypted versus unencrypted, I would also like to see more encryption as well. That's why we're working on burning down the long tail of support for that.Corey: Yeah, it's one of those challenges. Whenever you're deploying something relatively new, there's this idea that it should be forward-looking and you, on some level, want to modernize your architecture and infrastructure to keep up with it. An AWS integration story I see that's like that these days is, “Oh, yeah, generate an IAM credential set and just upload those into our system.” Yeah, the modern way of doing that is role assumption: to find a role and here's how to configure it so that it can do what we need to do. So, whenever you start seeing things that are, “Oh, yeah, just turn the security clock back in time a little bit,” that's always a little bit of an eyebrow raise.I can also definitely empathize with the joys of dealing with anything that even touches networking in a Lambda context. Building the Lambda extension for Tailscale was one of the last big dives I made into that area and I still have nightmares as a result. It does a lot of interesting things right up until you step off the golden path. And then suddenly, everything becomes yaks all the way down, in desperate need of shaving.Jean: Yeah, Lambda does something we want to handle on our roadmap, but I… believe we need a bigger team before [laugh] we are ready to tackle that.Corey: Yeah, we're going to need a bigger boat is very often [laugh] the story people have when they start looking at entire new architectural paradigms. So, you end up talking about working in containerized environments. Do you find that most of your deployments are living in cloud environments, in private data centers, some people call them private cloud. Where does the bulk of your user applications tend to live these days?Jean: The bulk of our user applications are in the cloud. So, we're targeting small to medium businesses to start. The reason being, we want to give our users a magical deployment experience. So, right now, a lot of our users are deploying in under 30 minutes. That's in no small part due to automations that we've built.And so, we initially made the strategic decision to focus on places where we get the most visibility. And so—where one, we get the most visibility, and two, we are ready for that level of scale. So, we found that, you know, for a large business, we've run inside some of their production environments and there are API calls that we don't yet handle well or it's just such a large number of calls, we're not doing the inference as well and our algorithms don't work as well. And so, we've made the decision to start small, build our way up, and start in places where we can just aggressively iterate because we can see everything that's going on. And so, we've stayed away, for instance, from any on-prem deployments for that reason because then we can't see everything that's going on. And so, smaller companies that are okay with us watching pretty much everything they're doing has been where we started. And now we're moving up into the medium-sized businesses.Corey: The challenge that I guess I'm still trying to wrap my head around is, I think that it takes someone with a particularly rosy set of glasses on to look at the current state of monitoring and observability and say that it's not profoundly broken in a whole bunch of ways. Now, where it all falls apart, Tower of Babelesque, is that there doesn't seem to be consensus on where exactly it's broken. Where do you see, I guess, this coming apart at the seams?Jean: I agree, it's broken. And so, if I tap into my background, which is I was a programming languages person in my very recently, previous life, programming languages people like to say the problem and the solution is all lies in abstraction. And so, computing is all about building abstractions on top of what you have now so that you don't have to deal with so many details and you got to think at a higher level; you're free of the shackles of so many low-level details. What I see is that today, monitoring and observability is a sort of abstraction nightmare. People have just taken it as gospel that you need to live at the lowest level of abstraction possible the same way that people truly believe that assembly code was the way everybody was going to program forevermore back, you know, 50 years ago.So today, what's happening is that when people think monitoring, they think logs, not what's wrong with my system, what do I need to pay attention to? They think, “I have to log everything, I have to consume all those logs, we're just operating at the level of logs.” And that's not wrong because there haven't been any tools that have given people any help above the level of logs. Although that's not entirely correct, you know? There's also events and there's also traces, but I wouldn't say that's actually lifting the level of [laugh] abstraction very much either.And so, people today are thinking about monitoring and observability as this full control, like, I'm driving my, like, race car, completely manual transmission, I want to feel everything. And not everyone wants to or needs to do that to get to where they need to go. And so, my question is, how far are can we lift the level of abstraction for monitoring and observability? I don't believe that other people are really asking this question because most of the other players in the space, they're asking what else can we monitor? Where else can we monitor it? How much faster can we do it? Or how much more detail can we give the people who really want the power tools?But the people entering the buyer's market with needs, they're not people—you don't have, like, you know, hordes of people who need more powerful tools. You have people who don't know about the systems are dealing with and they want easier. They want to figure out if there's anything wrong with our system so they can get off work and do other things with their lives.Corey: That, I think, is probably the thing that gets overlooked the most. It's people don't tend to log into their monitoring systems very often. They don't want to. When they do, it's always out of hours, middle of the night, and they're confronted with a whole bunch of upsell dialogs of, “Hey, it's been a while. You want to go on a tour of the new interface?”Meanwhile, anything with half a brain can see there's a giant spike on the graph or telemetry stop coming in.Jean: Yeah.Corey: It's way outside of normal business hours where this person is and maybe they're not going to be in the best mood to engage with your brand.Jean: Yeah. Right now, I think a lot of the problem is, you're either working with monitoring because you're desperate, you're in the middle of an active incident, or you're a monitoring fanatic. And there isn't a lot in between. So, there's a tweet that someone in my network tweeted me that I really liked which is, “Monitoring should be a chore, not a hobby.” And right now, it's either a hobby or an urgent necessity [laugh].And when it gets to the point—so you know, if we think about doing dishes this way, it would be as if, like, only, like, the dish fanatics did dishes, or, like, you will just have piles of dishes, like, all over the place and raccoons and no dishes left, and then you're, like, “Ah, time to do a thing.” But there should be something in between where there's a defined set of things that people can do on a regular basis to keep up with what they're doing. It should be accessible to everyone on the team, not just a couple of people who are true fanatics. No offense to the people out there, I love you guys, you're the ones who are really helping us build our tool the most, but you know, there's got to be a world in which more people are able to do the things you do.Corey: That's part of the challenge is bringing a lot of the fire down from Mount Olympus to the rest of humanity, where at some level, Prometheus was a great name from that—Jean: Yep [laugh].Corey: Just from that perspective because you basically need to be at that level of insight. I think Kubernetes suffers from the same overall problem where it is not reasonably responsible to run a Kubernetes production cluster without some people who really know what's going on. That's rapidly changing, which is for the better, because most companies are not going to be able to afford a multimillion-dollar team of operators who know the ins and outs of these incredibly complex systems. It has to become more accessible and simpler. And we have an entire near century at this point of watching abstractions get more and more and more complex and then collapsing down in this particular field. And I think that we're overdue for that correction in a lot of the modern infrastructure, tooling, and approaches that we take.Jean: I agree. It hasn't happened yet in monitoring and observability. It's happened in coding, it's happened in infrastructure, it's happened in APIs, but all of that has made it so that it's easier to get into monitoring debt. And it just hasn't happened yet for anything that's more reactive and more about understanding what the system is that you have.Corey: You mentioned specifically that your background was in programming languages. That's understating it slightly. You were a tenure-track professor of computer science at Carnegie Mellon before entering industry. How tied to what your area of academic speciality was, is what you're now at Akita?Jean: That's a great question and there are two answers to that. The first is very not tied. If it were tied, I would have stayed in my very cushy, highly [laugh] competitive job that I worked for years to get, to do stuff there. And so like, what we're doing now is comes out of thousands of conversations with developers and desire to build on the ground tools that I'm—there's some technically interesting parts to it, for sure. I think that our technical innovation is our moat, but is it at the level of publishable papers? Publishable papers are a very narrow thing; I wouldn't be able to say yes to that question.On the other hand, everything that I was trained to do was about identifying a problem and coming up with an out-of-the-box solution for it. And especially in programming languages research, it's really about abstractions. It's really about, you know, taking a set of patterns that you see of problems people have, coming up with the right abstractions to solve that problem, evaluating your solution, and then, you know, prototyping that out and building on top of it. And so, in that case, you know, we identified, hey, people have a huge gap when it comes to monitoring and observability. I framed it as an abstraction problem, how can we lift it up?We saw APIs as this is a great level to build a new level of solution. And our solution, it's innovative, but it also solves the problem. And to me, that's the most important thing. Our solution didn't need to be innovative. If you're operating in an academic setting, it's really about… producing a new idea. It doesn't actually [laugh]—I like to believe that all endeavors really have one main goal, and in academia, the main goal is producing something new. And to me, building a product is about solving a problem and our main endeavor was really to solve a real problem here.Corey: I think that it is, in many cases, useful when we start seeing a lot of, I guess, overflow back and forth between academia and industry, in both directions. I think that it is doing academia a disservice when you start looking at it purely as pure theory, and oh yeah, they don't deal with any of the vocational stuff. Conversely, I think the idea that industry doesn't have anything to learn from academia is dramatically misunderstanding the way the world works. The idea of watching some of that ebb and flow and crossover between them is neat to see.Jean: Yeah, I agree. I think there's a lot of academics I super respect and admire who have done great things that are useful in industry. And it's really about, I think, what you want your main goal to be at the time. Is it, do you want to be optimizing for new ideas or contributing, like, a full solution to a problem at the time? But it's there's a lot of overlap in the skills you need.Corey: One last topic I'd like to dive into before we call it an episode is that there's an awful lot of hype around a variety of different things. And right now in this moment, AI seems to be one of those areas that is getting an awful lot of attention. It's clear too there's something of value there—unlike blockchain, which has struggled to identify anything that was not fraud as a value proposition for the last decade-and-a-half—but it's clear that AI is offering value already. You have recently, as of this recording, released an AI chatbot, which, okay, great. But what piques my interest is one, it's a dog, which… germane to my interest, by all means, and two, it is marketed as, and I quote, “Exceedingly polite.”Jean: [laugh].Corey: Manners are important. Tell me about this pupper.Jean: Yeah, this dog came really out of four or five days of one of our engineers experimenting with ChatGPT. So, for a little bit of background, I'll just say that I have been excited about the this latest wave of AI since the beginning. So, I think at the very beginning, a lot of dev tools people were skeptical of GitHub Copilot; there was a lot of controversy around GitHub Copilot. I was very early. And I think all the Copilot people retweeted me because I was just their earlies—like, one of their earliest fans. I was like, “This is the coolest thing I've seen.”I've actually spent the decade before making fun of AI-based [laugh] programming. But there were two things about GitHub Copilot that made my jaw drop. And that's related to your question. So, for a little bit of background, I did my PhD in a group focused on program synthesis. So, it was really about, how can we automatically generate programs from a variety of means? From constraints—Corey: Like copying and pasting off a Stack Overflow, or—Jean: Well, the—I mean, that actually one of the projects that my group was literally applying machine-learning to terabytes of other example programs to generate new programs. So, it was very similar to GitHub Copilot before GitHub Copilot. It was synthesizing API calls from analyzing terabytes of other API calls. And the thing that I had always been uncomfortable with these machine-learning approaches in my group was, they were in the compiler loop. So, it was, you know, you wrote some code, the compiler did some AI, and then it spit back out some code that, you know, like you just ran.And so, that never sat well with me. I always said, “Well, I don't really see how this is going to be practical,” because people can't just run random code that you basically got off the internet. And so, what really excited me about GitHub Copilot was the fact that it was in the editor loop. I was like, “Oh, my God.”Corey: It had the context. It was right there. You didn't have to go tabbing to something else.Jean: Exactly.Corey: Oh, yeah. I'm in the same boat. I think it is basically—I've seen the future unfolding before my eyes.Jean: Yeah. Was the autocomplete thing. And to me, that was the missing piece. Because in your editor, you always read your code before you go off and—you know, like, you read your code, whoever code reviews your code reads your code. There's always at least, you know, two pairs of eyes, at least theoretically, reading your code.So, that was one thing that was jaw-dropping to me. That was the revelation of Copilot. And then the other thing was that it was marketed not as, “We write your code for you,” but the whole Copilot marketing was that, you know, it kind of helps you with boilerplate. And to me, I had been obsessed with this idea of how can you help developers write less boilerplate for years. And so, this AI-supported boilerplate copiloting was very exciting to me.And I saw that is very much the beginning of a new era, where, yes, there's tons of data on how we should be programming. I mean, all of Akita is based on the fact that we should be mining all the data we have about how your system and your code is operating to help you do stuff better. And so, to me, you know, Copilot is very much in that same philosophy. But our AI chatbot is, you know, just a next step along this progression. Because for us, you know, we collect all this data about your API behavior; we have been using non-AI methods to analyze this data and show it to you.And what ChatGPT allowed us to do in less than a week was analyze this data using very powerful large-language models and I have this conversational interface that both gives you the opportunity to check over and follow up on the question so that what you're spitting out—so what we're spitting out as Aki the dog doesn't have to be a hundred percent correct. But to me, the fact that Aki is exceedingly polite and kind of goofy—he, you know, randomly woofs and says a lot of things about how he's a dog—it's the right level of seriousness so that it's not messaging, hey, this is the end all, be all, the way, you know, the compiler loop never sat well with me because I just felt deeply uncomfortable that an AI was having that level of authority in a system, but a friendly dog that shows up and tells you some things that you can ask some additional questions to, no one's going to take him that seriously. But if he says something useful, you're going to listen. And so, I was really excited about the way this was set up. Because I mean, I believe that AI should be a collaborator and it should be a collaborator that you never take with full authority. And so, the chat and the politeness covered those two parts for me both.Corey: Yeah, on some level, I can't shake the feeling that it's still very early days there for Chat-Gipity—yes, that's how I pronounce it—and it's brethren as far as redefining, on some level, what's possible. I think that it's in many cases being overhyped, but it's solving an awful lot of the… the boilerplate, the stuff that is challenging. A question I have, though, is that, as a former professor, a concern that I have is when students are using this, it's less to do with the fact that they're not—they're taking shortcuts that weren't available to me and wanting to make them suffer, but rather, it's, on some level, if you use it to write your English papers, for example. Okay, great, it gets the boring essay you don't want to write out of the way, but the reason you write those things is it teaches you to form a story, to tell a narrative, to structure an argument, and I think that letting the computer do those things, on some level, has the potential to weaken us across the board. Where do you stand on it, given that you see both sides of that particular snake?Jean: So, here's a devil's advocate sort of response to it, is that maybe the writing [laugh] was never the important part. And it's, as you say, telling the story was the important part. And so, what better way to distill that out than the prompt engineering piece of it? Because if you knew that you could always get someone to flesh out your story for you, then it really comes down to, you know, I want to tell a story with these five main points. And in some way, you could see this as a playing field leveler.You know, I think that as a—English is actually not my first language. I spent a lot of time editing my parents writing for their work when I was a kid. And something I always felt really strongly about was not discriminating against people because they can't form sentences or they don't have the right idioms. And I actually spent a lot of time proofreading my friends' emails when I was in grad school for the non-native English speakers. And so, one way you could see this as, look, people who are not insiders now are on the same playing field. They just have to be clear thinkers.Corey: That is a fascinating take. I think I'm going to have to—I'm going to have to ruminate on that one. I really want to thank you for taking the time to speak with me today about what you're up to. If people want to learn more, where's the best place for them to find you?Jean: Well, I'm always on Twitter, still [laugh]. I'm @jeanqasaur—J-E-A-N-Q-A-S-A-U-R. And there's a chat dialog on akitasoftware.com. I [laugh] personally oversee a lot of that chat, so if you ever want to find me, that is a place, you know, where all messages will get back to me somehow.Corey: And we will, of course, put a link to that into the [show notes 00:35:01]. Thank you so much for your time. I appreciate it.Jean: Thank you, Corey.Corey: Jean Yang, CEO at Akita Software. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry insulting comment that you will then, of course, proceed to copy to the other 17 podcast tools that you use, just like you do your observability monitoring suite.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Our favorite features in Linux 6.2, the Hollywood tool getting open-sourced, and a systemd update you need to know about.
Our favorite features in Linux 6.2, the Hollywood tool getting open-sourced, and a systemd update you need to know about.
In this episode, Daryl and Scott interview Clément Olivier, fellow BizApps MVP, and chat about his original tools for the XrmToolBox, the Delta Plugins, and the new tool which allows you to rename the views in bulk. Some of the highlights: Delta Plugins: Local Assembly vs CRM - to compare the plugins classes Plugin between the local vs the plugin assembly registered in the Dataverse environment Delta Plugin Steps between two environments to compare plugin steps between two of your environments export the results into an Excel sheet Daryl overwhelmed a new grad with a code review Using Preview features in production Pain points with the solution management View Renamer - bulk search and replace to bulk rename the views Homework for Clement from the hosts Clément's Info and other links: https://www.linkedin.com/in/clemolivier @Clement0livier Blog: https://stuffandtacos.azurewebsites.net Delta Plugins: Local Assembly vs CRM: https://github.com/carfup/XTBPlugins.DeltaAssemblyvsCrm Delta Plugin Steps between two environments: https://github.com/carfup/XTBPlugins.DeltaAssemblyvsCrm View Renamer: https://github.com/carfup/XTBPlugins.ViewRenamer Buy him a coffee if you appreciate his contribution to the community: https://www.buymeacoffee.com/clementolivier Previous episodes with Clément The Business Process Flow Manager w/ Clement Olivier & Gus Gonzalez: https://xrmtoolcast.libsyn.com/the-business-process-flow-manager-w-clement-olivier-gus-gonzalez The Quick Edit Form, A PCF Control: https://xrmtoolcast.libsyn.com/the-quick-edit-form-a-pcf-control New XrmToolBox Tool PCF to BPF with Clément Olivier: https://xrmtoolcast.libsyn.com/new-xrmtoolbox-tool-pcf-to-bpf-with-clment-olivier Got questions? Have your own tool you'd like to share? Have a suggestion for a future episode? Contact Daryl and Scott at cast@xrmtoolbox.com. Follow us on LinkedIn and @XrmToolCast for updates on future episodes. Do you want to see us too? Subscribe to our YouTube channel to view the last episodes. Don't forget to rate and leave a review for this show at Podchaser. Your hosts: Daryl LaBar: https://www.linkedin.com/in/daryllabar | @ddlabar Scott Durow: https://www.linkedin.com/in/scottdurow | @ScottDurow Editor: Linn Zaw Win: https://www.linkedin.com/in/linnzawwin | @LinnZawWin Music: https://www.purple-planet.com
From international gaming phenomenon to game-changing initiatives, Dutch-born entrepreneur and innovator, Henk Rogers continues to pave the way in regenerating our home planet and working toward establishing permanent human settlements on the Moon and Mars. Starting his career in computer gaming more than three decades ago, Rogers revolutionized the industry by creating Japan's first Computer Role-Playing Game (RPG) and later bringing the legendary game Tetris, to the world. Since then, Rogers has dedicated his career to research, development, advocacy and implementation of renewable energy sources in his adopted home of Hawaii. His Blue Planet Foundation (BPF) based out of Honolulu, has led efforts to pass the nation's first 100% renewable energy mandate, requiring the State of Hawaii to commit to switching to 100% renewable sources of electricity by 2045. His newest initiative, Blue Planet Alliance (BPA) based out of New York City, is expanding BPF's “mandate first, business model second” approach to international regions and countries. BPA is driving global systemic change by developing projects that change behaviors of people, companies, towns and countries, toward sustainability. In an effort to expand life beyond Earth, to the Moon and Mars, Rogers has also established the International MoonBase Alliance (IMA) with leaders in space exploration across private-public and academic sectors. IMA manages HI-SEAS, a Moon-Mars habitat on Mauna Loa used by NASA for five years of simulated Mars missions, creating business opportunities in Hawaii, while advancing space settlement efforts. Rogers continues to explore renewable energy and space settlement opportunities through his Blue Planet Research, which conducts R&D at his off-grid ranch on Hawaii Island. Rogers has become part of the Renewable Energy Solutions industry with Blue Planet Energy, a Blue Planet Research spin-off, which designs and manufactures the safest, longest lasting and most environmentally friendly energy storage systems in the world.
BPF has an insightful conversation with Mississippi's own, Kristen Ley, founder and owner of Thimblepress - a nationally recognized, creative and innovative line of greeting cards and gifts that spring from her faith-based mission. Listen in to Kristen'a journey with her achievements and challenges that provide some insight for those desiring to pursue their own start up.
BPF wraps up our fascinating time with Mr. Meredith with excerpts from his powerful 1961 letter to the Kennedy Administration regarding his rights as an American citizen.
BPF continues the conversation with James Meredith to discuss the time surrounding his entrance to the University of Mississippi and his time in the Air Force.
BPF sits down with James Meredith to hear about his fascinating life story. Listen as we dive into Meredith's formative years growing up in North MS.
Interview by Manny Akiio https://www.instagram.com/mannyakiio We recently caught up with female rapper Gotti BPF for an exclusive “Off The Porch” interview! During our conversation she talked about life in Henry county, jumping off the porch, facing 20 years in prison, starting to take music seriously after she came home from jail, explains what BPF stands for, making mostly pain music, music being very therapeutic for her, new singles “Spazz” & “Stand Or Fall”, upcoming music, being an independent artist, and much more!
Can a dividend growth investor keep non-growing dividend stocks? We always insist on dividend growth, but what happens if your holding shows no more: should you keep or sell? Mike has built a 5-rule decision process to help investors who find themselves in such a case. Today, we put these rules to the test with concrete examples: GNTX, BPF.UN.TO, DIS, CAE, RCI.A.TO, D, SYZ.TO. For the complete show notes, make sure to check out our website: thedividendguyblog.com/92 Twitter: @TheDividendGuy FB: http://bit.ly/2Z7Q5gF YouTube: http://bit.ly/2Zs6r1r
BPF continues the conversation on what Mississippi can do to prevent an energy and electricity crisis from coming here.
#178: Observability has been around since the dawn of computing. Around 1992, BPF was introduced. It gave us the ability to do network packet filtering. Around 22 years later in 2014, eBPF was included in Linux kernel 3.18, building on top of what was available with BPF. Now in 2022, eBPF is helping to supercharge Kubernetes observability. In this episode, we speak with Shachar Azulay, CEO and Co-Founder of groundcover, about how eBPF is changing how we monitor our Kubernetes clusters in five minutes or less. Shachar's information: LinkedIn: https://www.linkedin.com/in/shahar-azulay-54156bb4/ YouTube channel: https://youtube.com/devopsparadox/ Books and Courses: Catalog, Patterns, And Blueprints https://www.devopstoolkitseries.com/posts/catalog/ Review the podcast on Apple Podcasts: https://www.devopsparadox.com/review-podcast/ Slack: https://www.devopsparadox.com/slack/ Connect with us at: https://www.devopsparadox.com/contact/
BPF had the fortunate opportunity to discuss timely topics with a few of Mississippi's top experts in the energy and electricity industry at the regulatory & policy making levels. Listen in as they dive into the energy & electricity crisis in the U.K. and Europe that's beginning to happen in parts of the U.S. and what Mississippi can do to prevent it from coming here.
BPF continues the conversation with Secretary Michael Watson on cutting regulations to help make Mississippi more business friendly and more Mississippians realize their dreams.
BPF visits with Secretary Watson to find out more on the "Tackle the Tape" initiative and the plan within it titled "29 x 29." The aim is to review every regulation for all 29 occupational licensing boards and commissions by the year 2029.
In this episode we speak to Liz Rice, Chief Open Source Officer at Isovalent, the company behind the open source eBPF product Cilium. We discuss why it's such a revolutionary approach to developing low-level kernel applications, how BPF can be used for observability, networking and security, how developers should think about application security, and why all of these technologies are open source.About Liz RiceLiz Rice is Chief Open Source Officer at eBPF pioneers Isovalent, creators of the Cilium project, which provides cloud native networking, observability and security. Prior to Isovalent she was VP Open Source Engineering with security specialists Aqua Security. She is also Chair of the CNCF's Technical Oversight Committee, has co-chaired the KubeCon / CloudNativeCon and is an Ambassador for Open UK.Other things mentioned:IsovalentBerkeley labDave ThalerKubernetesFirecrackerLambdaM1 MacbookVS CodeLet us know what you think on Twitter:https://twitter.com/consoledotdevhttps://twitter.com/davidmyttonhttps://twitter.com/lizriceOr by email: hello@console.devAbout ConsoleConsole is the place developers go to find the best tools. Our weekly newsletter picks out the most interesting tools and new releases. We keep track of everything - dev tools, devops, cloud, and APIs - so you don't have to. Sign up for free at: https://console.devRecorded: 2022-05-05.
In this week's episode, Allen & Kelly go on vacation! They talk about their best and worst vacations they have taken over the years, what type of person they turn into when they plan a vacation, and what it means to be a BPF. *** Allen's Instagram: @allenohplease Kelly's Twitter, Instagram, & TikTok: @sonickellz Podcast Twitter: @cauldroncrewpod Podcast Instagram: @bitches.brew.pod Podcast Email: cauldroncrewpod@gmail.com
BPF guest podcasters had the opportunity to meet and talk with gallerist and curator Stacy Conde of Conde Contemporary. Listen in as she tells how she makes a pivot from Miami to Mississippi and illuminates Natchez the way only she can.
Starting with Vercel CEO, Guillermo Rauch on 9th June 2022, in season 3 of the Console DevTools Podcast we'll be speaking to 10 interesting people currently working in devtools about a specific technical topic. Upcoming guests:Dev Infra, with Guillermo Rauch (Vercel)BPF, with Liz Rice (Isovalent)OSS & Investing, with Joseph Jacks (OSS Capital)Privacy Engineering, with Cate Huston (DuckDuckGo)Security & Software Supply Chain, with Feross Aboukhadijeh (Socket)Data science, with Ines Montani (Explosion)Containers & Tests, with Sergei Egorov (Atomic Jar)VR, with Elena Kokkinara (Inflight VR)WASM, with Connor Hicks (Suborbital)Engineering Leadership, with Meri Williams (LabGenius, LeadDev & Kindred)Join David for our first episode, on 9th June 2022. In the meantime, subscribe to the Console newsletter for weekly reviews of the best 2-3 devtools.Follow us on Twitter:https://twitter.com/consoledotdevhttps://twitter.com/davidmytton
BPF had the opportunity to sit down with Mrs. King and hear how she's an agent for change in her community through her Natchez Heritage School of Cooking. She's also gearing up for the 3rd annual Soul Food Fusion Festival, (June 17 - 19) where the community celebrates together at a block-long white linen dining table eating the cuisine of the local chefs.
From NPP in Manipur to BPF in Assam and now VIP in Bihar, BJP has maintained a track record of losing allies. And it is not bothered. ----more---- https://theprint.in/opinion/politically-correct/shark-remora-bond-defines-bjps-ties-with-allies-bihars-son-of-mallah-learns-the-hard-way/891571/
Hi! I'm Jacqui Piñol your host and creator of The Canine Condition Podcast. Welcome to Season 2. This is the why, where & how to adopt or help a canine companion. Each episode is a conversation with a trustworthy dog rescue organization or animal welfare advocate (who will leave you inspired and empowered). Our goal is to save homeless dogs and set you up for success with information on how to raise and keep a healthy and well balanced dog. Embark on this journey with me and let's save HUman's best friend together. The Canine Condition. Come. Sit. Stay. My guest on the podcast today is Cathy Bissell, founder of The Bissell Pet Foundation. The foundation's goal is to help reduce the number of animals in shelters and in other rescue organizations. BPF has awarded millions of dollars to their growing partner network of animal welfare organizations and has impacted the lives of an incalculable number of pets by funding pet adoption, spay/neuter, microchipping and medical emergencies. With growing support and lots of boots on the ground they are saving lives in all 50 states and Canada. To connect with: The Canine Condition or The BISSELL Pet Foundation click these links: The Canine Condition: www.thecaninecondition.com Instagram: https://www.instagram.com/thecaninecondition/ Website: https://www.bissellpetfoundation.org Instagram: https://www.instagram.com/bissellpets/ . Facebook: https://www.facebook.com/bissellpets Cathy Bissell on Instagram: https://www.instagram.com/cathy_bissell/ The Canine Condition Podcast Writer & Host: Jacqueline Piñol Sound Editor & Engineer: Joe Crow - The Audio Pro Producer: Jacqueline Piñol Music: Jonny Blu
About LizLiz Rice is Chief Open Source Officer with cloud native networking and security specialists Isovalent, creators of the Cilium eBPF-based networking project. She is chair of the CNCF's Technical Oversight Committee, and was Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, published by O'Reilly.She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, and competing in virtual races on Zwift.Links: Isovalent: https://isovalent.com/ Container Security: https://www.amazon.com/Container-Security-Fundamental-Containerized-Applications/dp/1492056707/ Twitter: https://twitter.com/lizrice GitHub: https://github.com/lizrice Cilium and eBPF Slack: http://slack.cilium.io/ CNCF Slack: https://cloud-native.slack.com/join/shared_invite/zt-11yzivnzq-hs12vUAYFZmnqE3r7ILz9A TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Today's episode is brought to you in part by our friends at MinIO the high-performance Kubernetes native object store that's built for the multi-cloud, creating a consistent data storage layer for your public cloud instances, your private cloud instances, and even your edge instances, depending upon what the heck you're defining those as, which depends probably on where you work. It's getting that unified is one of the greatest challenges facing developers and architects today. It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run any workload, and the footprint to run anywhere, and that's exactly what MinIO offers. With superb read speeds in excess of 360 gigs and 100 megabyte binary that doesn't eat all the data you've gotten on the system, it's exactly what you've been looking for. Check it out today at min.io/download, and see for yourself. That's min.io/download, and be sure to tell them that I sent you.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the interesting things about hanging out in the cloud ecosystem as long as I have and as, I guess, closely tied to Amazon as I have been, is that you learned that you never quite are able to pronounce things the way that people pronounce them internally. In-house pronunciations are always a thing. My guest today is Liz Rice, the Chief Open Source Officer at Isovalent, and they're responsible for, among other things, the Cilium open-source project, which is around eBPF, which I can only assume is internally pronounced as ‘Ehbehpf'. Liz, thank you for joining me today and suffering my pronunciation slings and arrows.Liz: I have never heard ‘Ehbehpf' before, but I may have to adopt it. That's great.Corey: You also are currently—in a term that is winding down if I'm not misunderstanding—you were the co-chair of KubeCon and CloudNativeCon at the CNCF, and you are also currently on the technical oversight committee for the foundation.Liz: Yeah, yeah. I'm currently the chair, in fact, of the technical oversight committee.Corey: And now that Amazon has joined, I assumed that they had taken their horrible pronunciation habits, like calling AMIs ‘Ah-mies' and whatnot, and started spreading them throughout the ecosystem with wild abandon.Liz: Are we going to have to start calling CNCF ‘Ka'Nff' or something?Corey: Exactly. They're very frugal, by which I mean they never buy a vowel. So yeah, it tends to be an ongoing challenge. Joking and all the rest aside, let's start, I guess, at the macro view. The CNCF does an awful lot of stuff, where if you look at the CNCF landscape, for example, like, I think some of my jokes on the internet go a bit too far, but you look at this thing and last time I checked, there were something like four or 500 different players in various spaces.And it's a very useful diagram, don't get me wrong by any stretch of the imagination, but it also is one of those things that is so staggeringly vast that I've got a level with you on this one, given my old, ancient sysadmin roots, “The hell with it. I'm going to run some VMs in a three-tiered architecture just like grandma and grandpa used to do,” and call it good. Not really how the industry is evolved, but it's overwhelming.Liz: But that might be the right solution for your use case so, you know, don't knock it if it works.Corey: Oh, yeah. If it's a terrible architecture and it works, is it really that terrible of an architecture? One wonders.Liz: Yeah, yeah. I mean, I'm definitely not one of those people who thinks, you know, every solution has the same—you know, is solved by the same hammer, you know, all problems are not the same nail. So, I am a big fan of a lot of the CNCF projects, but that doesn't mean to say I think those are the only ways to deploy software. You know, there are plenty of things like Lambda are a really great example of something that is super useful and very applicable for lots of applications and for lots of development teams. Not necessarily the right solution for everything. And for other people, they need all the bells and whistles that something like Kubernetes gives them. You know, horses for courses.Corey: It's very easy for me to make fun of just about any company or service or product, but the thing that always makes me set that aside and get down to brass tacks has been, “Okay, great. You can build whatever you want. You can tell whatever glorious marketing narrative you wish to craft, but let's talk to a real customer because once we do that, then if you're solving a problem that someone is having in the wild, okay, now it's no longer just this theoretical exercise and PowerPoint. Now, let's actually figure out how things work when the rubber meets the road.”So, let's start, I guess, with… I'll leave it to you. Isovalent are the creators of the Cilium eBPF-based networking project.Liz: Yeah.Corey: And eBPF is the part of that I think I'm the most familiar with having heard the term. Would you rather start on the company side or on the eBPF side?Liz: Oh, I don't mind. Let's—why don't we start with eBPF? Yeah.Corey: Cool. So easy, ridiculous question. I know that it's extremely important because Brendan Gregg periodically gets on stage and tells amazing stories about this; the last time he did stuff like that, I went stumbling down into the rabbit hole of DTrace, and I have never fully regretted doing that, nor completely forgiven him. What is eBPF?Liz: So, it stands for extended Berkeley Packet Filter, and we can pretty much just throw away those words because it's not terribly helpful. What eBPF allows you to do is to run custom programs inside the kernel. So, we can trigger these programs to run, maybe because a network packet arrived, or because a particular function within the kernel has been called, or a tracepoint has been hit. There are tons of places you can attach these programs to, or events you can attach programs to.And when that event happens, you can run your custom code. And that can change the behavior of the kernel, which is, you know, great power and great responsibility, but incredibly powerful. So Brendan, for example, has done a ton of really great pioneering work showing how you can attach these eBPF programs to events, use that to collect metrics, and lo and behold, you have amazing visibility into what's happening in your system. And he's built tons of different tools for observing everything from, I don't know, memory use to file opens to—there's just endless, dozens and dozens of tools that Brendan, I think, was probably the first to build. And now this sort of new generations of eBPF-based tooling that are kind of taking that legacy, turning them into maybe more, going to say user-friendly interfaces, you know, with GUIs, and hooking them up to metrics platforms, and in the case of Cilium, using it for networking and hooking it into Kubernetes identities, and making the information about network flows meaningful in the context of Kubernetes, where things like IP addresses are ephemeral and not very useful for very long; I mean, they just change at any moment.Corey: I guess I'm trying to figure out what part of the stack this winds up applying to because you talk about, at least to my mind, it sounds like a few different levels all at once: You talk about running code inside of the kernel, which is really close to the hardware—it's oh, great. It's adventures in assembly is almost what I'm hearing here—but then you also talk about using this with GUIs, for example, and operating on individual packets to run custom programs. When you talk about running custom programs, are we talking things that are a bit closer to, “Oh, modify this one field of that packet and then call it good,” or are you talking, “Now, we launch Microsoft Word.”Liz: Much more the former category. So yeah, let's inspect this packet and maybe change it a bit, or send it to a different—you know, maybe it was going to go to one interface, but we're going to send it to a different interface; maybe we're going to modify that packet; maybe we're going to throw the packet on the floor because we don't—there's really great security use cases for inspecting packets and saying, “This is a bad packet, I do not want to see this packet, I'm just going to discard it.” And there's some, what they call ‘Packet of Death' vulnerabilities that have been mitigated in that way. And the real beauty of it is you just load these programs dynamically. So, you can change the kernel or on the fly and affect that behavior, just immediately have an effect.If there are processes already running, they get instrumented immediately. So, maybe you run a BPF program to spot when a file is opened. New processes, existing processes, containerized processes, it doesn't matter; they'll all be detected by your program if it's observing file open events.Corey: Is this primarily used from a security perspective? Is it used for—what are the common use cases for something like this?Liz: There's three main buckets, I would say: Networking, observability, and security. And in Cilium, we're kind of involved in some aspects of all those three things, and there are plenty of other projects that are also focusing on one or other of those aspects.Corey: This is where when, I guess, the challenge I run into the whole CNCF landscape is, it's like, I think the danger is when I started down this path that I'm on now, I realized that, “Oh, I have to learn what all the different AWS services do.” This was widely regarded as a mistake. They are not Pokémon; I do not need to catch them all. The CNCF landscape applies very similarly in that respect. What is the real-world problem space for which eBPF and/or things like Cilium that leverage eBPF—because eBPF does sound fairly low-level—that turn this into something that solves a problem people have? In other words, what is the problem that Cilium should be the go-to answer for when someone says, “I have this thing that hurts.”Liz: So, at one level, Cilium is a networking solution. So, it's Kubernetes CNI. You plug it in to provide connectivity between your applications that are running in pods. Those pods have to talk to each other somehow and Cilium will connect those pods together for you in a very efficient way. One of the really interesting things about eBPF and networking is we can bypass some of the networking stack.So, if we are running in containers, we're running our applications in containers in pods, and those pods usually will have their own networking namespace. And that means they've got their own networking stack. So, a packet that arrives on your machine has to go through the networking stack on that host machine, go across a virtual interface into your pod, and then go through the networking stack in that pod. And that's kind of inefficient. But with eBPF, we can look at the packet the moment it's come into the kernel—in fact in some cases, if you have the right networking interfaces, you can do it while it's still on the network interface card—so you look at that packet and say, “Well, I know what pod that's destined for, I can just send it straight there.” I don't have to go through the whole networking stack in the kernel because I already know exactly where it's going. And that has some real performance improvements.Corey: That makes sense. In my explorations—we'll call it—with Kubernetes, it feels like the universe—at least at the time I went looking into it—was, “Step One, here's how to wind up launching Kubernetes to run a blog.” Which is a bit like using a chainsaw to wind up cutting a sandwich. Okay, massively overpowered but I get the basic idea, like, “Okay, what's project Step Two?” It's like, “Oh, great. Go build Google.”Liz: [laugh].Corey: Okay, great. It feels like there's some intermediary steps that have been sort of glossed over here. And at the small-scale that I kicked the tires on, things like networking performance never even entered the equation; it was more about get the thing up and running. But yeah, at scale, when you start seeing huge numbers of containers being orchestrated across a wide variety of hosts that has serious repercussions and explains an awful lot. Is this the sort of thing that gets leveraged by cloud providers themselves, is it something that gets built in mostly on-prem environments, or is it something that rides in, almost, user-land for most of these use cases that customers coming to bringing to those environments? I'm sorry, users, not customers. I'm too used to the Amazonian phrasing of everyone as a customer. No, no, they are users in an open-source project.Liz: [laugh]. Yeah, so if you're using GKE, the GKE Dataplane V2 is using Cilium. Alibaba Cloud uses Cilium. AWS is using Cilium for EKS Anywhere. So, these are really, I think, great signals that it's super scalable.And it's also not just about the connectivity, but also about being able to see your network flows and debug them. Because, like you say, that day one, your blog is up and running, and day two, you've got some DNS issue that you need to debug, and how are you going to do that? And because Cilium is working with Kubernetes, so it knows about the individual pods, and it's aware of the IP addresses for those pods, and it can map those to, you know, what's the pod, what service is that pod involved with. And we have a component of Cilium called Hubble that gives you the flows, the network flows, between services. So, you know, we've probably all seen diagrams showing Service A talking to Service B, Service C, some external connectivity, and Hubble can show you those flows between services and the outside world, regardless of how the IP addresses may be changing underneath you, and aggregating network flows into those services that make sense to a human who's looking at a Kubernetes deployment.Corey: A running gag that I've had is that one of the drawbacks and appeals of Kubernetes, all at once, is that it lets you cosplay as a cloud provider, even if you don't happen to work for one of them. And there's a bit of truth to it, but let's be serious here, despite what a lot of the cloud providers would wish us to believe via a bunch of marketing, there's a tremendous number of data center environments out there, hybrid environments, and companies that are in those environments are not somehow laggards, or left behind technologically, or struggling to digitally transform. Believe it or not—I know it's not a common narrative—but large companies generally don't employ people who lack critical thinking skills and strategic insight. There's usually a reason that things are the way that they are and when you don't understand that my default approach is that, oh context that gets missing, so I want to preface this with the idea there is nothing wrong in those environments. But in a purely cloud-native environment—which means that I'm very proud about having no single points of failure as I have everything routing to a single credit card that pays the cloud providers—great. What is the story for Cilium if I'm using, effectively, the managed Kubernetes options that Name Any Cloud Provider will provide for me these days? Is it at that point no longer for me or is it something that instead expresses itself in ways I'm not seeing, yet?Liz: Yeah, so I think, as an open-source project—and it is the only CNI that's at incubation level or beyond, so you know, it's CNCF-supported networking solution; you can use it out of the box, you can use it for your tiny blog application if you've decided to run that on Kubernetes, you can do so—things start to get much more interesting at scale. I mean, that… continuum between you know, there are people purely on managed services, there are people who are purely in the cloud, hybrid cloud is a real thing, and there are plenty of businesses who have good reasons to have some things in their own data centers, something's in the public cloud, things distributed around the world, so they need connectivity between those. And Cilium will solve a lot of those problems for you in the open-source, but also, if you're telco scale and you have things like BGP networks between your data centers, then that's where the paid versions of Cilium, the enterprise versions of Cilium, can help you out. And, as Isovalent, that's our business model to have, like—we fully support or we contribute a lot of resources into the open-source Cilium, and we want that to be the best networking solution for anybody, but if you are an enterprise who wants those extra bells and whistles, and the kind of scale that, you know, a telco, or a massive retailer, or a large media organization, or name your vertical, then we have solutions for that as well. And I think it was one of the really interesting things about the eBPF side of it is that, you know, we're not bound to just Kubernetes, you know? We run in the kernel, and it just so happens that we have that Kubernetes interface for allocating IP addresses to endpoints that happened to be pods. But—Corey: So, back to my crappy pile of VMs—because the hell with all this newfangled container nonsense—I can still benefit from something like Cilium?Liz: Exactly, yeah. And there's plenty of people using it for just load-balancing, which, why not have an eBPF-based high-performance load balancer?Corey: Hang on, that's taking me a second to work my way through. What is the programming language for eBPF? It is something custom?Liz: Right. So, when you load your BPF program into the kernel, it's in the form of eBPF bytecode. There are people who write an eBPF bytecode by hand; I am not one of those people.Corey: There are people who used to be able to write Sendmail configs without running through the M four preprocessor, and I don't understand those people either.Liz: [laugh]. So, our choices are—well, it has to be a language that can be compiled into that bytecode, and at the moment, there are two options: C, and more recently, Rust. So, the C code, I'm much more familiar with writing BPF code in C, it's slightly limited. So, because these BPF programs have to be safe to run, they go through a verification process which checks that you're not going to crash the kernel, that you're not going to end up in some hardware loop, and basically make your machine completely unresponsive, we also have to know that BPF programs, you know, they'll only access memory that they're supposed to and that they can't mess up other processes. So, there's this BPF verification step that checks for example that you always check that a pointer isn't nil before you dereference it.And if you try and use a pointer in your C code, it might compile perfectly, but when you come to load it into the kernel, it gets rejected because you forgot to check that it was non-null before.Corey: You try and run it, the whole thing segfaults, you see the word ‘fault' there and well, I guess blameless just went out the window there.Liz: [laugh]. Well, this is the thing: You cannot segfault in the kernel, you know, or at least that's a bad [day 00:19:11]. [laugh].Corey: You say that, but I'm very bad with computers, let's be clear here. There's always a way to misuse things horribly enough.Liz: It's a challenge. It's pretty easy to segfault if you're writing a kernel module. But maybe we should put that out as a challenge for the listener, to try to write something that crashes the kernel from within an eBPF because there's a lot of very smart people.Corey: Right now the blood just drained from anyone who's listening, in the kernel space or the InfoSec space, I imagine.Liz: Exactly. Some of my colleagues at Isovalent are thinking, “Oh, no. What's she brought on here?” [laugh].Corey: What have you done? Please correct me if I'm misunderstanding this. So, eBPF is a very low-level tool that requires certain amounts of braining in order [laugh] to use appropriately. That can be a heavy lift for a lot of us who don't live in those spaces. Cilium distills this down into something that is all a lot more usable and understandable for folks, and then beyond that, you wind up with Isovalent, that winds up effectively productizing and packaging this into something that becomes a lot more closer to turnkey. Is that directionally accurate?Liz: Yes, I would say that's true. And there are also some other intermediate steps, like the CLI tools that Brendan Gregg did, where you can—I mean, a CLI is still fairly low-level, but it's not as low-level as writing the eBPF code yourself. And you can be quite in-dep—you know, if you know what things you want to observe in the kernel, you don't necessarily have to know how to write the eBPF code to do it, but if you've got these fairly low-level tools to do it. You're absolutely right that very few people will need to write their own… BPF code to run in the kernel.Corey: Let's move below the surface level of awareness; the same way that most of us don't need to know how to compile our own kernel in this day and age.Liz: Exactly.Corey: A few people very much do, but because of their hard work, the rest of us do not.Liz: Exactly. And for most of us, we just take the kernel for granted. You know, most people writing applications, it doesn't really matter if—they're just using abstractions that do things like open files for them, or create network connections, or write messages to the screen, you don't need to know exactly how that's accomplished through the kernel. Unless you want to get into the details of how to observe it with eBPF or something like that.Corey: I'm much happier not knowing some of the details. I did a deep dive once into Linux system kernel internals, based on an incredibly well-written but also obnoxiously slash suspiciously thick O'Reilly book, Linux Systems Internalsand it was one of those, like, halfway through, “Can I please be excused? My brain is full.” It's one of those things that I don't use most of it on a day-to-day basis, but it's solidified by understanding of what the computer is actually doing in a way that I will always be grateful for.Liz: Mmm, and there are tens of millions of lines of code in the Linux kernel, so anyone who can internalize any of that is basically a superhero. [laugh].Corey: I have nothing but respect for people who can pull that off.Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured and fully managed with built in access via key-value, SQL, and full-text search. Flexible JSON documents aligned to your applications and workloads. Build faster with blazing fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing.In your day job, quote-unquote—which is sort of a weird thing to say, given that you are working at an open-source company; in fact, you are the Chief Open Source Officer, so what you're doing in the community, what you're exploring on the open-source project side of things, it is all interrelated. I tend to have trouble myself figuring out where my job starts and stops most weeks; I'm sympathetic to it. What inspired you folks to launch a company that is, “Ah, we're going to be in the open-source space?” Especially during a time when there's been a lot of pushback, in some respects, about the evolution of open-source and the rise of large cloud providers, where is open-source a viable strategy or a tactic to get to an outcome that is pleasing for all parties?Liz: Mmm. So, I wasn't there at the beginning, for the Isovalent journey, and Cilium has been around for five or six years, now, at this point. I very strongly believe in open-source as an effective way of developing technology—good technology—and getting really good feedback and, kind of, optimizing the speed at which you can innovate. But I think it's very important that businesses don't think—if you're giving away your code, you cannot also sell your code; you have to have some other thing that adds value. Maybe that's some extra code, like in the Isovalent example, the enterprise-related enhancements that we have that aren't part of the open-source distribution.There's plenty of other ways that people can add value to open-source. They can do training, they can do managed services, there's all sorts of different—support was the classic example. But I think it's extremely important that businesses don't just expect that I can write a bunch of open-source code, and somehow magically, through building up a whole load of users, I will find a way to monetize that.Corey: A bunch of nerds will build my product for me on nights and weekends. Yeah, that's a bit of an outmoded way of thinking about these things.Liz: Yeah exactly. And I think it's not like everybody has perfect ability to predict the future and you might start a business—Corey: And I have a lot of sympathy for companies who originally started with the idea of, “Well, we are the project leads. We know this code the best, therefore we are the best people in the world to run this as a service.” The rise of the hyperscale cloud providers has called that into significant question. And I feel for them because it's difficult to completely pivot your business model when you're already a publicly-traded company. That's a very fraught and challenging thing to do. It means that you're left with a bunch of options, none of them great.Cilium as a project is not that old, neither is Isovalent, but it's new enough in the iterative process, that you were able to avoid that particular pitfall. Instead, you're looking at some level of making this understandable and useful to humans, almost the point where it disappears from their level of awareness that they need to think about. There's huge value in something like that. Do you think that there is a future in which projects and companies built upon projects that follow this model are similarly going to be having challenges with hyperscale cloud providers, or other emergent threats to the ecosystem—sorry, ‘threat' is an unfair and unkind word here—but changes to the ecosystem, as we see the world evolving in ways that most of us did not foresee?Liz: Yeah, we've certainly seen some examples in the last year or two, I guess, of companies that maybe didn't anticipate, and who necessarily has a crystal ball to anticipate how cloud providers might use their software? And I think in some cases, the cloud providers has not always been the most generous or most community-minded in their approach to how they've done that. But I think for a company, like Isovalent, our strong point is talent. It would be extremely rare to find the level of expertise in, you know, what is a pretty specialized area. You know, the people at Isovalent who are working on Cilium are also working on eBPF itself, and that level of expertise is, I think, pretty unrivaled.So, we're in such a new space with eBPF, we've only in the last year or so, got to the point where pretty much everyone is running a kernel that's new enough to use eBPF. Startups do have a kind of agility that I think gives them an advantage, which I hope we'll be able to capitalize on. I think sometimes when businesses get upset about their code being used, they probably could have anticipated it. You know, if it's open-source, people will use your software, and you have to think of that.Corey: “What do you mean you're using the thing we gave away for free and you're not paying us to use it?”Liz: Yeah.Corey: “Uh, did you hear what you just said?” Some of this was predictable, let's be fair.Liz: Yeah, and I think you really have to, as a responsible business, think about, well, what does happen if they use all the open-source code? You know, is that a problem? And as far as we're concerned, everybody using Cilium is a fantastic… thing. We fully welcome everyone using Cilium as their data plane because the vast majority of them would use that open-source code, and that would be great, but there will be people who need that extra features and the expertise that I think we're in a unique position to provide. So, I joined Isovalent just about a year ago, and I did that because I believe in the technology, I believe in the company, I believe in, you know, the foundations that it has in open-source.It's a very much an open-source first organization, which I love, and that resonates with me and how I think we can be successful. So, you know, I don't have that crystal ball. I hope I'm right, we'll find out. We should do this again, you know, a couple of years and see how that's panning out. [laugh].Corey: I'll book out the date now.Liz: [laugh].Corey: Looking back at our conversation just now, you talked about open-source, and business strategy and how that's going to be evolving. We talked about the company, we talked about an incredibly in-depth, technical product that honestly goes significantly beyond my current level of technical awareness. And at no point in any of those aspects of the conversation did you talk about it in a way that I did not understand, nor did you come off in any way as condescending. In fact, you wrote an O'Reilly book on Container Security that's written very much the same way. How did you learn to do that? Because it is, frankly, an incredibly rare skill.Liz: Oh, thank you. Yeah, I think I have never been a fan of jargon. I've never liked it when people use a complicated acronym, or really early days in my career, there was a bit of a running joke about how everything was TLAs. And you think, well, I understand why we use an acronym to shorten things, but I don't think we need to assume that everybody knows what everything stands for. Why can't we explain things in simple language? Why can't we just use ordinary terms?And I found that really resonates. You know, if I'm doing a presentation or if I'm writing something, using straightforward language and explaining things, making sure that people understand the, kind of, fundamentals that I'm going to build my explanation on. I just think that has a—it results in people understanding, and that's my whole point. I'm not trying to explain something to—you know, my goal is that they understand it, not that they've been blown away by some kind of magic. I want them to go away going, “Ah, now I understand how this bit fits with that bit,” or, “How this works.” You know?Corey: The reason I bring it up is that it's an incredibly undervalued skill because when people see it, they don't often recognize it for what it is. Because when people don't have that skill—which is common—people just write it off as oh, that person's a bad communicator. Which I think is a little unfair. Being able to explain complex things simply is one of the most valuable yet undervalued skills that I've found in this entire space.Liz: Yeah, I think people sometimes have this sort of wrong idea that vocabulary and complicated terms are somehow inherently smarter. And if you use complicated words, you sound smarter. And I just don't think that's accessible, and I don't think it's true. And sometimes I find myself listening to someone, and they're using complicated terms or analogies that are really obscure, and I'm thinking, but could you explain that to me in words of one syllable? I don't think you could. I think you're… hiding—not you [laugh]. You know, people—Corey: Yeah. No, no, that's fair. I'll take the accusation as [unintelligible 00:31:24] as I can get it.Liz: [laugh]. But I think people hide behind complex words because they don't really understand them sometimes. And yeah, I would rather people understood what I'm saying.Corey: To me—I've done it through conference talks, but the way I generally learn things is by building something with them. But the way I really learn to understand something is I give a conference talk on it because, okay, great. I can now explain Git—which was one of my early technical talks—to folks who built Git. Great. Now, how about I explain it to someone who is not immersed in the space whatsoever? And if I can make it that accessible, great, then I've succeeded. It's a lot harder than it looks.Liz: Yeah, exactly. And one of the reasons why I enjoy building a talk is because I know I've got a pretty good understanding of this, but by the time I've got this talk nailed, I will know this. I might have forgotten it in six months time, you know, but [laugh] while I'm giving that talk, I will have a really good understanding of that because the way I want to put together a talk, I don't want to put anything in a talk that I don't feel I could explain. And that means I have to understand how it works.Corey: It's funny, this whole don't give talks about things you don't understand seems like there's really a nouveau concept, but here we are, we're [working on it 00:32:40].Liz: I mean, I have committed to doing talks that I don't fully understand, knowing that—you know, with the confidence that I can find out between now and the [crosstalk 00:32:48]—Corey: I believe that's called a forcing function.Liz: Yes. [laugh].Corey: It's one of those very high-risk stories, like, “Either I'm going to learn this in the next three months, or else I am going to have some serious egg on my face.”Liz: Yeah, exactly, definitely a forcing function. [laugh].Corey: I really want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you?Liz: So, I am online pretty much everywhere as lizrice, and I am on Twitter. I'm on GitHub. And if you want to come and hang out, I am on the Cilium and eBPF Slack, and also the CNCF Slack. Yeah. So, come say hello.Corey: There. We will put links to all of that in the [show notes 00:33:28]. Thank you so much for your time. I appreciate it.Liz: Pleasure.Corey: Liz Rice, Chief Open Source Officer at Isovalent. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment containing an eBPF program that on every packet fires off a Lambda function. Yes, it will be extortionately expensive; almost half as much money as a Managed NAT Gateway.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.