Podcasts about Wavefront

Locus of points at equal phase in a wave

  • 53 podcasts
  • 123 episodes
  • 33 min average episode duration
  • 1 new episode per month
  • Latest episode: Apr 6, 2025

POPULARITY

(Yearly trend chart: 2017–2024)


Latest podcast episodes about Wavefront

The Asianometry Podcast
How the EUV Mirrors are Made

Apr 6, 2025


When the semiconductor industry first started on EUV lithography, almost everyone believed that the optics would be the hardest part. Wavefront error—the deviation of a light wave from its ideal shape—comes from imperfections in mirrors and lenses. A good lens aims for lambda/10 error; EUV optics must hit lambda/50. With EUV's 13.5 nm wavelength, that's 260 picometers. For context, a water molecule is about 275 pm wide. And that is the total error budget for the entire six-mirror system. Because errors add in quadrature, each mirror gets 106 pm rms. But a mirror doubles any wavefront deviation on reflection, so its surface must be twice as accurate: 53 pm rms, the radius of a hydrogen atom. Achieving the same wavefront performance is roughly 20 times harder for an EUV system with six mirrors than for a DUV system with 60 surfaces. In this video, we go back to the machine you guys all know and love (again) and the finest multilayer mirrors ever made.
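For readers who want to check the arithmetic, here is a minimal sketch of the quadrature error-budget split using the figures quoted in the description; the even allocation across the six mirrors is an illustrative assumption, not a statement of how real budgets are apportioned.

```python
import math

# Figures quoted in the episode description (illustrative only):
total_budget_pm = 260.0   # total wavefront error budget for the whole system, pm rms
num_mirrors = 6           # EUV projection optics use six mirrors

# Independent errors add in quadrature, so an even split gives:
#   total^2 = N * per_mirror^2  ->  per_mirror = total / sqrt(N)
per_mirror_wavefront_pm = total_budget_pm / math.sqrt(num_mirrors)

# A reflective surface imprints its figure error on the wavefront twice
# (the path length changes on the way in and on the way out of the bounce),
# so the surface itself must be twice as accurate as the wavefront allowance.
per_mirror_surface_pm = per_mirror_wavefront_pm / 2

print(f"per-mirror wavefront budget: {per_mirror_wavefront_pm:.0f} pm rms")  # ~106
print(f"per-mirror surface budget:   {per_mirror_surface_pm:.0f} pm rms")    # ~53
```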

Screaming in the Cloud
Insights from a Vendor Insider with Ian Smith

Sep 19, 2024 · 33:49


It turns out, you don't need to step outside to observe the clouds. On this episode, we're joined by Chronosphere Field CTO Ian Smith. He and Corey delve into the innovative solutions Chronosphere offers, share insights from Ian's experience in the industry, and discuss the future of cloud-native technologies. Whether you're a seasoned cloud professional or new to the field, this conversation with Ian Smith is packed with valuable perspectives and actionable takeaways.

Show Highlights:
  • (0:00) Intro
  • (0:42) Chronosphere sponsor read
  • (1:53) The role of Chief of Staff at Chronosphere
  • (2:45) Getting recognized in the Gartner Magic Quadrant
  • (4:42) Talking about the buying process
  • (8:26) The importance of observability
  • (10:18) Guiding customers as a vendor
  • (12:19) Chronosphere sponsor read
  • (12:46) What you should do as an observability buyer
  • (16:01) Helping orgs understand observability
  • (19:56) Avoiding toxically positive endorsements
  • (24:15) Being transparent as a vendor
  • (27:43) The myth of "winner take all"
  • (30:02) Short-term fixes vs. long-term solutions
  • (33:54) Where you can find more from Ian and Chronosphere

About Ian Smith
Ian Smith is Field CTO at Chronosphere, where he works across sales, marketing, engineering, and product to deliver better insights and outcomes to observability teams supporting high-scale cloud-native environments. Previously, he worked with observability teams across the software industry in pre-sales roles at New Relic, Wavefront, PagerDuty, and Lightstep.

Links
  • Chronosphere: https://chronosphere.io/?utm_source=duckbill-group&utm_medium=podcast
  • Ian's Twitter: https://x.com/datasmithing
  • Ian's LinkedIn: https://www.linkedin.com/in/ismith314159/

Sponsor
  • Chronosphere: https://chronosphere.io/?utm_source=duckbill-group&utm_medium=podcast

Gig Boss
Two Music Technology Ideas You Could Start Today

Jul 25, 2024 · 66:51


Adam talks with two music technology founders about their software, why they built it, and how they use it. Adam, Jeff, and Anton even discuss some music tech startup ideas that any musician could start building today.

Jeff and Anton have also organized a music technology event called Wavefront (www.wavefrontmn.com) that will feature a bunch of MN-based music technology companies, including Jamstik (https://jamstik.com/) and Campfire Foundation (https://www.cfmusic.org/), among others. Tickets to Wavefront (August 1st, 2024) are free, but you must have tickets to attend. You can find tickets at www.wavefrontmn.com.

Jeff and Anton's company, Caedence (synchronized practice and performance control): https://caedence.io/

Download the Gig Boss app: https://linktr.ee/gigboss
Join the Gig Boss Facebook Group: https://www.facebook.com/groups/gigboss

Interested in hearing music made by our awesome guests? Check out this Spotify playlist, which includes music by all our music-making guests: https://open.spotify.com/playlist/63kUrVwtV5tYFsyqLJHO3a?si=7853ba778dff4654

Intro/Outro music "Far Away From Here": https://adammeckler.hearnow.com/

The Mutual Audio Network
Wavefront Short: Split-Second(071724)

Jul 17, 2024 · 13:54


On a nearly deserted tube running to nowhere, a single figure enters to save the human race. Learn more about your ad choices. Visit megaphone.fm/adchoices

Wednesday Wonders
Wavefront Short: Split-Second

Jul 17, 2024 · 13:54


On a nearly deserted tube running to nowhere, a single figure enters to save the human race. Learn more about your ad choices. Visit megaphone.fm/adchoices

Action and Ambition
Increase Your Team Performance In All Forms with QueryPal – Your Team's New Smart Chat Companion

Apr 24, 2024 · 33:34


Welcome to another episode of The Action and Ambition Podcast! Joining us today is Dev Nag, the Founder and CEO of QueryPal, the smartest way to chat in Slack and MS Teams. QueryPal sifts through your chat history, wikis, knowledge bases, and uploaded files to find the most relevant answers instantly. It seamlessly integrates with the tools you already use, including Slack, Microsoft Teams, Confluence, Notion, and Google Drive. With QueryPal, your team can move past the endless cycle of repetitive questions and enter a world where information is effortlessly accessible. Previously, he founded Wavefront and was a software engineer for PayPal and Google. Tune in to learn more!

VMware Podcasts
Digital Transformation Podcast, Part 12: How VMware IT harnessed seamless Monitoring using Wavefront

Nov 16, 2023 · 13:39


Digital Transformation Podcast, Part 12: How VMware IT harnessed seamless Monitoring using Wavefront by VMware Podcasts

The VFX Artists Podcast
The Man Who Wrote The Book on Compositing with Ron Brinkmann

Jul 8, 2023 · 60:37 · Transcription Available


The Man Who Wrote The Book on Compositing with Ron Brinkmann | TVAP EP56

When I embarked on my compositing journey, Ron Brinkmann's book, "The Art and Science of Digital Compositing," proved to be an invaluable resource. It provided a wealth of visual information and included a DVD with footage for hands-on practice. Even today, this book remains a seminal work and continues to be relevant for artists, thanks to its emphasis on developing the skill of visual perception.

Therefore, it was a tremendous honor to have Ron Brinkmann as a guest on our show. He served as the Compositing Supervisor at Sony Imageworks for films such as Speed, Contact, James and the Giant Peach, and more. Interestingly, despite being widely recognized for his expertise in compositing, Brinkmann started his journey in 3D as a demonstration artist for Wavefront. (Wavefront later merged with Alias and played a significant role in the development of Maya, the renowned 3D software.)

Subsequently, Brinkmann co-founded Nothing Real with his colleagues and created Shake, a compositing application that held sway in the industry for many years, particularly in high-end compositing. (I believe it was most recently used by Dneg for their work on Inception.)

As always, if you liked this episode, please

PaperPlayer biorxiv neuroscience
Guide to the construction and use of an adaptive optics two-photon microscope with direct wavefront sensing

Jan 24, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.01.24.525307v1?rss=1

Authors: Kleninfeld, D., Yao, P., Liu, R., Thunemann, M., Boggini, T.

Abstract: Two-photon microscopy, combined with appropriate optical labeling, has enabled the study of structure and function throughout nervous systems. This methodology enables, for example, the measurement and tracking of sub-micrometer structures within brain cells, the spatio-temporal mapping of spikes in individual neurons, and the spatio-temporal mapping of transmitter release at individual synapses. Yet the spatial resolution of two-photon microscopy rapidly degrades as imaging is attempted at depths more than a few scattering lengths into tissue, i.e., below the superficial layers that constitute the top 300 to 400 um of neocortex. To obviate this limitation, we measure the wavefront at the focus of the excitation beam and use adaptive optics that alter the incident wavefront to achieve an improved focal volume. We describe the construction, calibration, and operation of a two-photon microscope that incorporates adaptive optics to restore diffraction-limited resolution throughout the nearly 900 um depth of mouse cortex. Our realization uses a guide star formed by excitation of red-shifted dye within the blood serum to directly measure the wavefront. We incorporate predominantly commercial optical, optomechanical, mechanical, and electronic components; computer-aided design models of the exceptional custom components are supplied. The design is modular and allows for expanded imaging and optical excitation capabilities. We demonstrate our methodology in mouse neocortex by imaging the morphology of somatostatin-expressing neurons at 700 um beneath the pia, calcium dynamics of layer 5b projection neurons, and glutamate transmission to L4 neurons.

Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC.

Astro arXiv | all categories
Three-sided pyramid wavefront sensor II Preliminary demonstration on the new CACTI testbed

Oct 10, 2022 · 1:05


Three-sided pyramid wavefront sensor II: Preliminary demonstration on the new CACTI testbed, by Lauren Schatz et al., Monday 10 October.

The next generation of giant ground and space telescopes will have the light-collecting power to detect and characterize potentially habitable terrestrial exoplanets using high-contrast imaging for the first time. This will only be achievable if the performance of Giant Segmented Mirror Telescope (GSMT) extreme adaptive optics (ExAO) systems is optimized to its full potential. A key component of an ExAO system is the wavefront sensor (WFS), which measures aberrations from atmospheric turbulence. A common choice in current and next-generation instruments is the pyramid wavefront sensor (PWFS). ExAO systems require high spatial and temporal sampling of wavefronts to optimize performance and, as a result, require large detectors for the WFS. We present a closed-loop testbed demonstration of a three-sided pyramid wavefront sensor (3PWFS) as an alternative to the conventional four-sided pyramid wavefront sensor (4PWFS) for GSMT-ExAO applications on the new Comprehensive Adaptive Optics and Coronagraph Test Instrument (CACTI). The 3PWFS is less sensitive to read noise than the 4PWFS because it uses fewer detector pixels. The 3PWFS has a further benefit: a high-quality three-sided pyramid optic is easier to manufacture than a four-sided pyramid. We detail the design of the two components of the CACTI system, the adaptive optics simulator and the PWFS testbed that includes both a 3PWFS and a 4PWFS. A preliminary experiment was performed on CACTI to study the performance of the 3PWFS relative to the 4PWFS in varying strengths of turbulence, using both the Raw Intensity and Slopes Map signal processing methods. This experiment was repeated for modulation radii of 1.6 lambda/D and 3.25 lambda/D. We found that the performance of the two wavefront sensors is comparable if modal loop gains are tuned.

arXiv: http://arxiv.org/abs/2210.03823v1
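As a back-of-envelope illustration of the read-noise argument above (the numbers and the per-pixel noise model are illustrative assumptions, not values from the paper): if each pupil image covers roughly the same number of detector pixels and read noise is independent per pixel, then reading three pupil images instead of four cuts the total read-noise variance by about a quarter.

```python
# Illustrative comparison of read-noise exposure for a three-sided vs.
# four-sided pyramid wavefront sensor. Assumes (hypothetically) that each
# pupil image covers the same number of pixels and that read noise is
# independent per pixel, so variances add across pixels.
pixels_per_pupil_image = 60 * 60   # hypothetical pupil sampling (pixels)
read_noise_e = 1.5                 # hypothetical read noise per pixel (electrons rms)

def total_read_noise(n_pupil_images: int) -> float:
    """Total read noise (electrons rms) summed over all pupil-image pixels."""
    n_pixels = n_pupil_images * pixels_per_pupil_image
    return (n_pixels * read_noise_e**2) ** 0.5

noise_3pwfs = total_read_noise(3)
noise_4pwfs = total_read_noise(4)
print(f"3PWFS / 4PWFS read-noise ratio: {noise_3pwfs / noise_4pwfs:.2f}")  # ~0.87
```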

PaperPlayer biorxiv neuroscience
Noninvasive, automated and reliable detection of spreading depolarizations in severe traumatic brain injury using scalp EEG

Oct 8, 2022


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.10.07.511376v1?rss=1

Authors: Chamanzar, A., Elmer, J., Shutter, L., Hartings, J. A., Grover, P.

Abstract: Background: Noninvasive detection of spreading depolarizations (SD), a potentially treatable mechanism of worsening brain injury after traumatic brain injury (TBI), has remained elusive. Current methods to detect SDs are based on intracranial recording, an invasive method with limited spatial coverage. Less invasive methods to diagnose SD are needed to improve the generalizability and application of this emerging science and to guide treatment of worsening brain injuries. Here, we demonstrate, for the first time, a signal processing paradigm that can enable automated detection of SDs using noninvasive electroencephalography (EEG). Methods: Building on our previously developed WAVEFRONT algorithm, we have designed a novel automated SD detection method. This algorithm, with learnable parameters and improved velocity estimation, extracts and tracks propagating power depressions, as well as near-DC shifts, using low-density EEG. This modified WAVEFRONT is robust to amplitude outliers and non-propagating depressions on the scalp. We show the feasibility of detecting SD events (700 total SDs) in continuous, low-density scalp EEG recordings (95 ± 42.2 hours with 19 electrodes) acquired from 12 severe TBI patients who underwent decompressive hemicraniectomy (DHC) and intracranial EEG that could be used as ground truth for event detection. We quantify the performance of WAVEFRONT in terms of SD detection accuracy, including true positive rate (TPR) and false positive rate (FPR), as well as the accuracy of estimating the frequency of SDs. Results: WAVEFRONT achieves a best average validation accuracy of 74% TPR (95% confidence interval 70.8%-76.7%), with less than 1.5% FPR, using Delta-band EEG. Preliminary evidence suggests that WAVEFRONT can achieve very good performance (regression with R² ≈ 0.71) in the estimation of SD frequencies. Conclusions: We demonstrate feasibility and quantify the performance of noninvasive SD detection after severe TBI using an automated algorithm. WAVEFRONT can potentially be used for diagnosis and monitoring of worsening brain injuries to guide treatments by providing a measure of SD frequency. Extension of these results to patients with intact skulls requires further study.

Copyright belongs to the original authors. Visit the link for more info. Podcast created by PaperPlayer.
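For readers unfamiliar with the detection metrics quoted above, here is a minimal, generic sketch of how a true positive rate and false positive rate are computed from event counts; the counts are hypothetical placeholders and are not results from the WAVEFRONT study.

```python
# Generic true-positive-rate / false-positive-rate calculation for an event
# detector evaluated against ground-truth labels. All counts below are
# hypothetical, for illustration only.
true_positives = 80     # real events the detector correctly flagged
false_negatives = 20    # real events the detector missed
false_positives = 5     # detections with no corresponding real event
true_negatives = 995    # non-event intervals correctly left unflagged

tpr = true_positives / (true_positives + false_negatives)   # sensitivity / recall
fpr = false_positives / (false_positives + true_negatives)  # false-alarm rate

print(f"TPR = {tpr:.1%}, FPR = {fpr:.1%}")  # TPR = 80.0%, FPR = 0.5%
```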

Astro arXiv | all categories
Adapting the pyramid wavefront sensor for pupil fragmentation of the ELT class telescopes

Sep 19, 2022 · 0:53


Adapting the pyramid wavefront sensor for pupil fragmentation of the ELT class telescopes, by Nicolas Levraud et al., Monday 19 September.

The next generation of Extremely Large Telescopes (24 to 39 m diameter) will suffer from the so-called "pupil fragmentation" problem. Because of their pupil shape complexity (segmentation, large spiders, ...), differential pistons may appear between isolated parts of the full pupil during observations. Although classical AO systems will be able to correct for turbulence effects, they will be blind to these telescope-induced perturbations. Hence, such differential pistons, a.k.a. petal modes, will prevent reaching the diffraction limit of the telescope and will ultimately represent the main limitation of AO-assisted observation with an ELT. In this work we analyse the spatial structure of these petal modes and how it affects the ability of a pyramid wavefront sensor to sense them. We then propose a variation on the classical pyramid concept to increase the WFS sensitivity to these particular modes. Nevertheless, we show that a single WFS cannot accurately and simultaneously measure turbulence and petal modes. We propose a double-path wavefront sensor scheme to solve this problem. We show that such a scheme, combined with spatial filtering of residual turbulence in the second WFS path dedicated to petal-mode sensing, allows both turbulence and fragmentation effects to be fully measured and corrected, and will eventually restore the full capability and spatial resolution of the future ELT.

arXiv: http://arxiv.org/abs/2209.08822v1

Interplace
A Playful Past Allows Us To Last: Part II of II

Sep 18, 2022 · 16:11


Hello Interactors,

I was interviewed! Big thanks to my friend and former Wavefront colleague, Mark Sylvester, who is now the Curator, Host, and Executive Producer at TEDx Santa Barbara.

Check it out: https://tedxsantabarbara.com/.../brad-weed-we-need.../
The unedited version that was streamed live is here on FB: https://fb.watch/fz9nyudo5r/

Last week I left off Part I introducing a new science proposed by two scientists affiliated with my favorite multidisciplinary institution, and leader in studying complex adaptive systems, The Santa Fe Institute. Today I draw from their paper published in August that includes links to a recent book that has shaken the scientific academy. Science is adapting to a new world, a new climate, and a new future. This proposed new scientific field aims to accelerate that adaptation. As interactors, you're special individuals self-selected to be a part of an evolutionary journey. You're also members of an attentive community, so I welcome your participation. Please leave your comments below or email me directly.

Now let's go…

EVOLVING FAST AND SLOW

“What until now has passed for ‘civilization' might in fact be nothing more than a gendered appropriation – by men, etching their claims in stone – of some earlier system of knowledge that had women at its centre.”

These are the words of David Graeber and David Wengrow from their recent epic myth-busting book, The Dawn of Everything: A New History of Humanity. They paint a picture of human history that debunks many assumptions underlying the contributions of theoretical ‘great men' that dominate recollections of history, scientific discovery, and human evolution. But two great women stepped forward in August to offer a new center for systems of knowledge that complements Graeber and Wengrow's theories.

Recent technological and collaborative advances in anthropology, archeology, ecology, geography, and related disciplines are sketching new patterns of interactions of people and place. Complex webs of far-flung and slow-growing networks of social interactions, spanning large swaths of the globe over millennia, are coming into focus.

Graeber and Wengrow claim “the world of hunter-gatherers as it existed before the coming of agriculture was one of bold social experiments, resembling a carnival parade of political forms.” This interpretation offers a radical counter to existing “drab abstractions of evolutionary theory.” Contrary to popular belief, they offer that “Agriculture, in turn, did not mean the inception of private property, nor did it mark an irreversible step towards inequality. In fact, many of the first farming communities were relatively free of ranks and hierarchies. And far from setting class differences in stone, a surprising number of the world's earliest cities were organized on robustly egalitarian lines, with no need for authoritarian rulers, ambitious warrior-politicians, or even bossy administrators.”

Graeber and Wengrow's analysis offers an alternative understanding of the nearly 300,000 years of Homo sapiens' existence. And Stefani Crabtree and Jennifer Dunne, both affiliated with the Santa Fe Institute, wrote a recent opinion piece that builds on their position.
“Towards a science of archeaoecology”, published in the journal, Trends in Ecology & Evolution, calls for integrating elements of archeology and ecology under the term archeaoecology to further understand these pasts.By sharing approaches and data of related fields they hope to form a more complete picture of the unfolding of humanity and ecosystems so that both may continue to unfold into the future. They hope to intertwine two interrelated trends that emerged over the last 60,000 years of humanity. Some findings of which, were also highlighted by Graeber and Wengrow. These two trends are:The slow evident far-flung dispersal of homo sapiens across regions and around the globe.The increasingly rapid development of tools and technologies that enabled it.Together these contributed to the gradual and pervasive spread of complex social networks fueled by the interaction of people and place – and other animal species. However, as Crabtree and Dunne remind us, “As humans spread to new places and their populations grew…their impacts on ecosystems grew commensurately.”ARTIFACTS, ECOFACTS, AND SCALING MATHThe subfield of archeology that studies these impacts is environmental archeology. While much of this research focuses on a reconstruction of past climates, it doesn't always consider the larger ecological context. But the combined fields of paleontology (the study of fossilized plants and animals) and ecology does, under the name of paleoecology. However, it misses human elements of archeology just as environmental archeology sometimes ignores aspects of ecology.But new sensing technologies, increased computing power, advances in ecological modelling, and a growing corpus of digitized archeological records is providing bridges between these disciplines.  Now scientists can construct integrated understandings of how people interacted with place through deep time. Instead of fragments of artifacts, ecofacts, and trash deposits uncovered through disparate stages of time amidst localized climatic conditions, a more thorough and dynamic representation emerges.How do the interactions of people and place impact ecosystems and cultures and in turn influence their respective evolutions? It's questions like this that led Crabtree and Dunne to call on earth and human researchers to “confront pressing questions about the sustainability of current and future coupled natural-human systems” under the banner of archeoecology.It was archaeologists and paleoecologists who first coined this term. It described scientists or studies that relied on varieties of data, like geological morphology or climatology, to form interpretations of the archeological past. But they weren't intent on necessarily forming a systematic understanding of historic dynamic interactions of natural-human systems. Moreover, they weren't, as Crabtree and Dunne propose, providing an “intellectual home” for a new integrative science bridging these three disciplines:Archaeology: the study of past societies by reconstructing physical non-biological environments.Palaeoecology: the reconstruction of past ecosystems based on fossil remains but often excluding humans.Ecology: considerations of the living and nonliving interactions among organisms, mostly non-human, in existing ecosystems.The new home they suggest is filled with a growing assortment of tools and technologies which can be shared among them. 
They range in scale from the microscopic analysis of plants, animals, and tree rings to vast ecological and social networks through the distribution of species amidst cascading patterns of extinction. Computer models can represent everything from cellular structures that mimic behavior of biology to modelling individual and group behaviors based on quantitative data found across a range of space and time. In May I wrote about how this kind of modeling, led by another Santa Fe affiliate, Scott Ortman, uncovered new findings regarding the Scaling of Hunter-Gatherer Camp Size and Human Sociality in my Interplace essay called City Maps and Scaling Math.This array of interdependent tools conspires to generate the Crabtree and Dunne definition of archeocecology:“The branch of science that employs archaeological, ecological, and environmental records to reconstruct past complex ecosystems including human roles and impacts, leveraging advances in ecological analysis, modeling, and theory for studying the earth's human past.”NATURE OR NURTUREThe aim of this new science is to reconstruct interdependent networks of human mediated systems that mutually depend on each other for survival. This offers clues, for example, into just how many plants and animals may have migrated and propagated on their own through earth's natural systems versus being transported and nurtured by highly mobile, creative humans amidst networks of seemingly egalitarian bands. Crabtree and Dunne offer one such example from Cyprus where scientists used archeoecological approaches to discover how that area's current ecosystem came to be.Using species distribution models and food webs the research showed how settlers in the later part of the Stone Age (Neolithic period) “brought with them several nondomesticated animals and plants, including fox (Vulpes vulpes indutus), deer (Dama dama), pistachios (Pistacia vera), flax (Linum sp.), and figs (Ficus carica), to alter the Cyprian ecosystem to meet their needs. These were supplemented with domestic einkorn [early forms of wheat] (Triticum monococcum) and barley (Hordeum vulgare), as well as domesticated pigs (Sus scrofa), sheep (Ovis sp.), goat (Capra sp.), and cattle (Bos sp.).”The coincidental dating of these human settlers, plants, and animals suggests not only the introduction of new species to the area, but the intention to create a niche ecosystem on which they could survive. Elements of that Neolithic ecosystem are alive in Cyprus to this day. Crabtree's own research into the ecological impacts of the removal of Aboriginal populations in Australia corroborates these theories.Her work highlights the need to marry the high-tech scientific approaches of archeoecology with Traditional Ecological Knowledge…otherwise known as Indigenous Knowledge or Indigenous Science. As I wrote last week in Part I, stitching together past and present Western science requires collaborations with Indigenous people, their knowledge, culture, and traditions. 
To strategize the survival of the natural world, of which we humans are linked – amidst a changing and increasingly volatile climate – requires honoring, respecting, and collaborating with people and cultures as varied and complex as the ecosystems on which we coexist.Crabtree and Dunne show how archeoecology can reveal “how humans altered, and were shaped by, ecosystems across deep time.” By collaborating, sharing, and synthesizing diverse bodies of knowledge across artificial academic and cultural boundaries and beliefs we can “explore implications for the future sustainability of anthropogenically modified landscapes.” This is particularly imperative “given scenarios such as changing climate, land-use intensification, and species extinctions.”This treatise on archeoecology by Crabtree and Dunne offers a set of tools necessary to present “a new history of humankind.” Much like Graeber and Wengrow set out to do, it also encourages “a new science of history, one that restores our ancestors to their full humanity.”Collaborative science, like collaborative music and sports, spawns unexpected, serendipitous discovery through systems of human tension, tolerance, intimacy, and cumulative joy and sorrow, setbacks, and steps forward. This is the nature of unbridled egalitarian play observed among young people unaltered by prejudice, politics, fright, and might. It's felt in us all through lifetime acts of negotiation and negation, rejoice and reproach, exaltation and anguish, or creation and destruction. It is the nature of humankind. And it is, like our ecosystems, in constant mutualistic flux.As is the work of Crabtree, Dunne, Graeber (RIP), Wengrow, and others like them. But as they have already shown, “The answers are often unexpected, and suggest that the course of human history may be less set in stone, and more full of playful possibilities, than we tend to assume.” This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit interplace.io

Screaming in the Cloud
The Ever-Changing World of Cloud Native Observability with Ian Smith

Sep 13, 2022 · 41:58


About IanIan Smith is Field CTO at Chronosphere where he works across sales, marketing, engineering and product to deliver better insights and outcomes to observability teams supporting high-scale cloud-native environments. Previously, he worked with observability teams across the software industry in pre-sales roles at New Relic, Wavefront, PagerDuty and Lightstep.Links Referenced: Chronosphere: https://chronosphere.io Last Tweet in AWS: lasttweetinaws.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while, I find that something I'm working on aligns perfectly with a person that I wind up basically convincing to appear on this show. Today's promoted guest is Ian Smith, who's Field CTO at Chronosphere. Ian, thank you for joining me.Ian: Thanks, Corey. Great to be here.Corey: So, the coincidental aspect of what I'm referring to is that Chronosphere is, despite the name, not something that works on bending time, but rather an observability company. Is that directionally accurate?Ian: That's true. Although you could argue it probably bend a little bit of engineering time. But we can talk about that later.Corey: [laugh]. So, observability is one of those areas that I think is suffering from too many definitions, if that makes sense. And at first, I couldn't make sense of what it was that people actually meant when they said observability, this sort of clarified to me at least when I realized that there were an awful lot of, well, let's be direct and call them ‘legacy monitoring companies' that just chose to take what they were already doing and define that as, “Oh, this is observability.” I don't know that I necessarily agree with that. I know a lot of folks in the industry vehemently disagree.You've been in a lot of places that have positioned you reasonably well to have opinions on this sort of question. To my understanding, you were at interesting places, such as LightStep, New Relic, Wavefront, and PagerDuty, which I guess technically might count as observability in a very strange way. How do you view observability and what it is?Ian: Yeah. Well, a lot of definitions, as you said, common ones, they talk about the three pillars, they talk really about data types. For me, it's about outcomes. I think observability is really this transition from the yesteryear of monitoring where things were much simpler and you, sort of, knew all of the questions, you were able to define your dashboards, you were able to define your alerts and that was really the gist of it. And going into this brave new world where there's a lot of unknown things, you're having to ask a lot of sort of unique questions, particularly during a particular instance, and so being able to ask those questions in an ad hoc fashion layers on top of what we've traditionally done with monitoring. So, observability is sort of that more flexible, more dynamic kind of environment that you have to deal with.Corey: This has always been something that, for me, has been relatively academic. Back when I was running production environments, things tended to be a lot more static, where, “Oh, there's a problem with the database. 
I will SSH into the database server.” Or, “Hmm, we're having a weird problem with the web tier. Well, there are ten or 20 or 200 web servers. Great, I can aggregate all of their logs to Syslog, and worst case, I can log in and poke around.”Now, with a more ephemeral style of environment where you have Kubernetes or whatnot scheduling containers into place that have problems you can't attach to a running container very easily, and by the time you see an error, that container hasn't existed for three hours. And that becomes a problem. Then you've got the Lambda universe, which is a whole ‘nother world pain, where it becomes very challenging, at least for me, in order to reason using the old style approaches about what's actually going on in your environment.Ian: Yeah, I think there's that and there's also the added complexity of oftentimes you'll see performance or behavioral changes based on even more narrow pathways, right? One particular user is having a problem and the traffic is spread across many containers. Is it making all of these containers perform badly? Not necessarily, but their user experience is being affected. It's very common in say, like, B2B scenarios for you to want to understand the experience of one particular user or the aggregate experience of users at a particular company, particular customer, for example.There's just more complexity. There's more complexity of the infrastructure and just the technical layer that you're talking about, but there's also more complexity in just the way that we're handling use cases and trying to provide value with all of this software to the myriad of customers in different industries that software now serves.Corey: For where I sit, I tend to have a little bit of trouble disambiguating, I guess, the three baseline data types that I see talked about again and again in observability. You have logs, which I think I've mostly I can wrap my head around. That seems to be the baseline story of, “Oh, great. Your application puts out logs. Of course, it's in its own unique, beautiful format. Why wouldn't it be?” In an ideal scenario, they're structured. Things are never ideal, so great. You're basically tailing log files in some cases. Great. I can reason about those.Metrics always seem to be a little bit of a step beyond that. It's okay, I have a whole bunch of log lines that are spitting out every 500 error that my app is throwing—and given my terrible code, it throws a lot—but I can then ideally count the number of times that appears and then that winds up incrementing counter, similar to the way that we used to see with StatsD, for example, and Collectd. Is that directionally correct? As far as the way I reason about, well so far, logs and metrics?Ian: I think at a really basic level, yes. I think that, as we've been talking about, sort of greater complexity starts coming in when you have—particularly metrics in today's world of containers—Prometheus—you mentioned StatsD—Prometheus has become sort of like the standard for expressing those things, so you get situations where you have incredibly high cardinality, so cardinality being the interplay between all the different dimensions. So, you might have, my container is a label, but also the type of endpoint is running on that container as a label, then maybe I want to track my customer organizations and maybe I have 5000 of those. I have 3000 containers, and so on and so forth. 
And you get this massive explosion, almost multiplicatively.For those in the audience who really live and read cardinality, there's probably someone screaming about well, it's not truly multiplicative in every sense of the word, but, you know, it's close enough from an approximation standpoint. As you get this massive explosion of data, which obviously has a cost implication but also has, I think, a really big implication on the core reason why you have metrics in the first place you alluded to, which is, so a human being can reason about it, right? You don't want to go and look at 5000 log lines; you want to know, out of those 5000 log lines of 4000 errors and I have 1000, OKs. It's very easy for human beings to reason about that from a numbers perspective. When your metrics start to re-explode out into thousands, millions of data points, and unique sort of time series more numbers for you to track, then you're sort of losing that original goal of metrics.Corey: I think I mostly have wrapped my head around the concept. But then that brings us to traces, and that tends to be I think one of the hardest things for me to grasp, just because most of the apps I build, for obvious reasons—namely, I'm bad at programming and most of these are proof of concept type of things rather than anything that's large scale running in production—the difference between a trace and logs tends to get very muddled for me. But the idea being that as you have a customer session or a request that talks to different microservices, how do you collate across different systems all of the outputs of that request into a single place so you can see timing information, understand the flow that user took through your application? Is that again, directionally correct? Have I completely missed the plot here? Which is again, eminently possible. You are the expert.Ian: No, I think that's sort of the fundamental premise or expected value of tracing, for sure. We have something that's akin to a set of logs; they have a common identifier, a trace ID, that tells us that all of these logs essentially belong to the same request. But importantly, there's relationship information. And this is the difference between just having traces—sorry, logs—with just a trace ID attached to them. So, for example, if you have Service A calling Service B and Service C, the relatively simple thing, you could use time to try to figure this out.But what if there are things happening in Service B at the same time there are things happening in Service C and D, and so on and so forth? So, one of the things that tracing brings to the table is it tells you what is currently happening, what called that. So oh, I know that I'm Service D. I was actually called by Service B and I'm not just relying on timestamps to try and figure out that connection. So, you have that information and ultimately, the data model allows you to fully sort of reflect what's happening with the request, particularly in complex environments.And I think this is where, you know, tracing needs to be sort of looked at as not a tool for—just because I'm operating in a modern environment, I'm using some Kubernetes, or I'm using Lambda, is it needs to be used in a scenario where you really have troubles grasping, from a conceptual standpoint, what is happening with the request because you need to actually fully document it. As opposed to, I have a few—let's say three Lambda functions. I maybe have some key metrics about them; I have a little bit of logging. 
You probably do not need to use tracing to solve, sort of, basic performance problems with those. So, you can get yourself into a place where you're over-engineering, you're spending a lot of time with tracing instrumentation and tracing tooling, and I think that's the core of observability is, like, using the right tool, the right data for the job.But that's also what makes it really difficult because you essentially need to have this, you know, huge set of experience or knowledge about the different data, the different tooling, and what influential architecture and the data you have available to be able to reason about that and make confident decisions, particularly when you're under a time crunch which everyone is familiar with a, sort of like, you know, PagerDuty-style experience of my phone is going off and I have a customer-facing incident. Where is my problem? What do I need to do? Which dashboard do I need to look at? Which tool do I need to investigate? And that's where I think the observability industry has become not serving the outcomes of the customers.Corey: I had a, well, I wouldn't say it's a genius plan, but it was a passing fancy that I've built this online, freely available Twitter client for authoring Twitter threads—because that's what I do is that of having a social life—and it's available at lasttweetinaws.com. I've used that as a testbed for a few things. It's now deployed to roughly 20 AWS regions simultaneously, and this means that I have a bit of a problem as far as how to figure out not even what's wrong or what's broken with this, but who's even using it?Because I know people are. I see invocations all over the planet that are not me. And sometimes it appears to just be random things crawling the internet—fine, whatever—but then I see people logging in and doing stuff with it. I'd kind of like to log and see who's using it just so I can get information like, is there anyone I should talk to about what it could be doing differently? I love getting user experience reports on this stuff.And I figured, ah, this is a perfect little toy application. It runs in a single Lambda function so it's not that complicated. I could instrument this with OpenTelemetry, which then, at least according to the instructions on the tin, I could then send different types of data to different observability tools without having to re-instrument this thing every time I want to kick the tires on something else. That was the promise.And this led to three weeks of pain because it appears that for all of the promise that it has, OpenTelemetry, particularly in a Lambda environment, is nowhere near ready for being able to carry a workload like this. Am I just foolish on this? Am I stating an unfortunate reality that you've noticed in the OpenTelemetry space? Or, let's be clear here, you do work for a company with opinions on these things. Is OpenTelemetry the wrong approach?Ian: I think OpenTelemetry is absolutely the right approach. To me, the promise of OpenTelemetry for the individual is, “Hey, I can go and instrument this thing, as you said and I can go and send the data, wherever I want.” The sort of larger view of that is, “Well, I'm no longer beholden to a vendor,”—including the ones that I've worked for, including the one that I work for now—“For the definition of the data. I am able to control that, I'm able to choose that, I'm able to enhance that, and any effort I put into it, it's mine. 
I own that.”Whereas previously, if you picked, say, for example, an APM vendor, you said, “Oh, I want to have some additional aspects of my information provider, I want to track my customer, or I want to track a particular new metric of how much dollars am I transacting,” that effort really going to support the value of that individual solution, it's not going to support your outcomes. Which is I want to be able to use this data wherever I want, wherever it's most valuable. So, the core premise of OpenTelemetry, I think, is great. I think it's a massive undertaking to be able to do this for at least three different data types, right? Defining an API across a whole bunch of different languages, across three different data types, and then creating implementations for those.Because the implementations are the thing that people want, right? You are hoping for the ability to, say, drop in something. Maybe one line of code or preferably just, like, attach a dependency, let's say in Java-land at runtime, and be able to have the information flow through and have it complete. And this is the premise of, you know, vendors I've worked with in the past, like New Relic. That was what New Relic built on: the ability to drop in an agent and get visibility immediately.So, having that out-of-the-box visibility is obviously a goal of OpenTelemetry where it makes sense—Go, it's very difficult to attach things at runtime, for example—but then saying, well, whatever is provided—let's say your gRPC connections, database, all these things—well, now I want to go and instrument; I want to add some additional value. As you said, maybe you want to track something like I want to have in my traces the email address of whoever it is or the Twitter handle of whoever is so I can then go and analyze that stuff later. You want to be able to inject that piece of information or that instrumentation and then decide, well, where is the best utilized? Is it best utilized in some tooling from AWS? Is it best utilized in something that you've built yourself? Is it best of utilized an open-source project? Is it best utilized in one of the many observability vendors, or is even becoming more common, I want to shove everything in a data lake and run, sort of, analysis asynchronously, overlay observability data for essentially business purposes.All of those things are served by having a very robust, open-source standard, and simple-to-implement way of collecting a really good baseline of data and then make it easy for you to then enhance that while still owning—essentially, it's your IP right? It's like, the instrumentation is your IP, whereas in the old world of proprietary agents, proprietary APIs, that IP was basically building it, but it was tied to that other vendor that you were investing in.Corey: One thing that I was consistently annoyed by in my days of running production infrastructures at places, like, you know, large banks, for example, one of the problems I kept running into is that this, there's this idea that, “Oh, you want to use our tool. Just instrument your applications with our libraries or our instrumentation standards.” And it felt like I was constantly doing and redoing a lot of instrumentation for different aspects. It's not that we were replacing one vendor with another; it's that in an observability, toolchain, there are remarkably few, one-size-fits-all stories. 
It feels increasingly like everyone's trying to sell me a multifunction printer, which does one thing well, and a few other things just well enough to technically say they do them, but badly enough that I get irritated every single time.And having 15 different instrumentation packages in an application, that's either got security ramifications, for one, see large bank, and for another it became this increasingly irritating and obnoxious process where it felt like I was spending more time seeing the care and feeding of the instrumentation then I was the application itself. That's the gold—that's I guess the ideal light at the end of the tunnel for me in what OpenTelemetry is promising. Instrument once, and then you're just adjusting configuration as far as where to send it.Ian: That's correct. The organization's, and you know, I keep in touch with a lot of companies that I've worked with, companies that have in the last two years really invested heavily in OpenTelemetry, they're definitely getting to the point now where they're generating the data once, they're using, say, pieces of the OpenTelemetry pipeline, they're extending it themselves, and then they're able to shove that data in a bunch of different places. Maybe they're putting in a data lake for, as I said, business analysis purposes or forecasting. They may be putting the data into two different systems, even for incident and analysis purposes, but you're not having that duplication effort. Also, potentially that performance impact, right, of having two different instrumentation packages lined up with each other.Corey: There is a recurring theme that I've noticed in the observability space that annoys me to no end. And that is—I don't know if it's coming from investor pressure, from folks never being satisfied with what they have, or what it is, but there are so many startups that I have seen and worked with in varying aspects of the observability space that I think, “This is awesome. I love the thing that they do.” And invariably, every time they start getting more and more features bolted onto them, where, hey, you love this whole thing that winds up just basically doing a tail-F on a log file, so it just streams your logs in the application and you can look for certain patterns. I love this thing. It's great.Oh, what's this? Now, it's trying to also be the thing that alerts me and wakes me up in the middle of the night. No. That's what PagerDuty does. I want PagerDuty to do that thing, and I want other things—I want you just to be the log analysis thing and the way that I contextualize logs. And it feels like they keep bolting things on and bolting things on, where everything is more or less trying to evolve into becoming its own version of Datadog. What's up with that?Ian: Yeah, the sort of, dreaded platform play. I—[laugh] I was at New Relic when there were essentially two products that they sold. And then by the time I left, I think there was seven different products that were being sold, which is kind of a crazy, crazy thing when you think about it. And I think Datadog has definitely exceeded that now. And I definitely see many, many vendors in the market—and even open-source solutions—sort of presenting themselves as, like, this integrated experience.But to your point, even before about your experience of these banks it oftentimes become sort of a tick-a-box feature approach of, “Hey, I can do this thing, so buy more. And here's a shared navigation panel.” But are they really integrated? 
Like, are you getting real value out of it? One of the things that I do in my role is I get to work with our internal product teams very closely, particularly around new initiatives like tracing functionality, and the constant sort of conversation is like, “What is the outcome? What is the value?”It's not about the feature; it's not about having a list of 19 different features. It's like, “What is the user able to do with this?” And so, for example, there are lots of platforms that have metrics, logs, and tracing. The new one-upmanship is saying, “Well, we have events as well. And we have incident response. And we have security. And all these things sort of tie together, so it's one invoice.”And constantly I talk to customers, and I ask them, like, “Hey, what are the outcomes that you're getting when you've invested so heavily in one vendor?” And oftentimes, the response is, “Well, I only need to deal with one vendor.” Okay, but that's not an outcome. [laugh]. And it's like the business having a single invoice.Corey: Yeah, that is something that's already attainable today. If you want to just have one vendor with a whole bunch of crappy offerings, that's what AWS is for. They have AmazonBasics versions of everything you might want to use in production. Oh, you want to go ahead and use MongoDB? Well, use AmazonBasics MongoDB, but they call it DocumentDB because of course they do. And so, on and so forth.There are a bunch of examples of this, but those companies are still in business and doing very well because people often want the genuine article. If everyone was trying to do just everything to check a box for procurement, great. AWS has already beaten you at that game, it seems.Ian: I do think that, you know, people are hoping for that greater value and those greater outcomes, so being able to actually provide differentiation in that market I don't think is terribly difficult, right? There are still huge gaps in let's say, root cause analysis during an investigation time. There are huge issues with vendors who don't think beyond sort of just the one individual who's looking at a particular dashboard or looking at whatever analysis tool there is. So, getting those things actually tied together, it's not just, “Oh, we have metrics, and logs, and traces together,” but even if you say we have metrics and tracing, how do you move between metrics and tracing? One of the goals in the way that we're developing product at Chronosphere is that if you are alerted to an incident—you as an engineer; doesn't matter whether you are massively sophisticated, you're a lead architect who has been with the company forever and you know everything or you're someone who's just come out of onboarding and is your first time on call—you should not have to think, “Is this a tracing problem, or a metrics problem, or a logging problem?”And this is one of those things that I mentioned before of requiring that really heavy level of knowledge and understanding about the observability space and your data and your architecture to be effective. 
And so, with the, you know, particularly observability teams and all of the engineers that I speak with on a regular basis, you get this sort of circumstance where well, I guess, let's talk about a real outcome and a real pain point because people are like, okay, yeah, this is all fine; it's all coming from a vendor who has a particular agenda, but the thing that constantly resonates is for large organizations that are moving fast, you know, big startups, unicorns, or even more traditional enterprises that are trying to undergo, like, a rapid transformation and go really cloud-native and make sure their engineers are moving quickly, a common question I will talk about with them is, who are the three people in your organization who always get escalated to? And it's usually, you know, between two and five people—Corey: And you can almost pick those perso—you say that and you can—at least anyone who's worked in environments or through incidents like this more than a few times, already have thought of specific people in specific companies. And they almost always fall into some very predictable archetypes. But please, continue.Ian: Yeah. And people think about these people, they always jump to mind. And one of the things I asked about is, “Okay, so when you did your last innovation around observably”—it's not necessarily buying a new thing, but it maybe it was like introducing a new data type or it was you're doing some big investment in improving instrumentation—“What changed about their experience?” And oftentimes, the most that can come out is, “Oh, they have access to more data.” Okay, that's not great.It's like, “What changed about their experience? Are they still getting woken up at 3 am? Are they constantly getting pinged all the time?” One of the vendors that I worked at, when they would go down, there were three engineers in the company who were capable of generating list of customers who are actually impacted by damage. And so, every single incident, one of those three engineers got paged into the incident.And it became borderline intolerable for them because nothing changed. And it got worse, you know? The platform got bigger and more complicated, and so there were more incidents and they were the ones having to generate that. But from a business level, from an observability outcomes perspective, if you zoom all the way up, it's like, “Oh, were we able to generate the list of customers?” “Yes.”And this is where I think the observability industry has sort of gotten stuck—you know, at least one of the ways—is that, “Oh, can you do it?” “Yes.” “But is it effective?” “No.” And by effective, I mean those three engineers become the focal point for an organization.And when I say three—you know, two to five—it doesn't matter whether you're talking about a team of a hundred or you're talking about a team of a thousand. It's always the same number of people. And as you get bigger and bigger, it becomes more and more of a problem. So, does the tooling actually make a difference to them? And you might ask, “Well, what do you expect from the tooling? What do you expect to do for them?” Is it you give them deeper analysis tools? Is it, you know, you do AI Ops? No.The answer is, how do you take the capabilities that those people have and how do you spread it across a larger population of engineers? And that, I think, is one of those key outcomes of observability that no one, whether it be in open-source or the vendor side is really paying a lot of attention to. 
It's always about, like, “Oh, we can just shove more data in. By the way, we've got petabyte scale and we can deal with, you know, 2 billion active time series, and all these other sorts of vanity measures.” But we've gotten really far away from the outcomes. It's like, “Am I getting return on investment of my observability tooling?” And I think tracing is this—as you've said, it can be difficult to reason about, right? And people are not sure. They're feeling, “Well, I'm in a microservices environment; I'm in cloud-native; I need tracing because my older APM tools appear to be failing me. I'm just going to go and wriggle my way through implementing OpenTelemetry.” Which has significant engineering costs. I'm not saying it's not worth it, but there is a significant engineering cost—and then I don't know what to expect, so I'm going to go and put my data somewhere and see whether we can achieve those outcomes. And I do a pilot and my most sophisticated engineers are in the pilot. And they're able to solve the problems. Okay, I'm going to go buy that thing. But I've just transferred my problems. My engineers have gone from solving problems in maybe logs and grepping through petabytes worth of logs to using some sort of complex proprietary query language to go through your tens of petabytes of trace data, but actually haven't solved any problem. I've just moved it around and probably just cost myself a lot, both in terms of engineering time and real dollars spent as well. Corey: One of the challenges that I'm seeing across the board is that observability, for certain use cases, once you start to see what it is and its potential for certain applications—certainly not all; I want to hedge that a little bit—but it's clear that there is definite and distinct value versus other ways of doing things. The problem is that value often becomes apparent only after you've already done it and can see what that other side looks like. But let's be honest here. Instrumenting an application is going to take some significant level of investment, in many cases. How do you wind up viewing any return on investment that it takes for the very real cost, if only in people's time, to go ahead and instrument for observability in complex environments?
You don't go, “Hey, I'm going to evaluate a replacement for Prometheus or my StatsD without having any data, and I'm simultaneously going to generate my data and evaluate the solution at the same time.” It doesn't make any sense. With tracing, you have decent open-source projects out there that allow you to visualize individual traces and understand sort of the basic value you should be getting out of this data. So, it's a good starting point to go, “Okay, can I reason about a single request? Can I go and look at my request end-to-end, even in a relatively small slice of my environment, and can I see the potential for this? And can I think about the things that I need to be able to solve with many traces?” Once you start developing these ideas, then you can have a better idea of, “Well, where do I go and invest more in instrumentation? Look, databases never appear to be a problem, so I'm not going to focus on database instrumentation. The real problem is my external dependencies. The Facebook API is the one that everyone loves to use. I need to go instrument that.” And then you start to get more clarity. Tracing has this interesting network effect. You can basically just follow the breadcrumbs. Where is my biggest problem here? Where are my errors coming from? Is there anything else further down the call chain? And you can sort of take that exploratory approach rather than doing everything up front. But it is important to do something before you start trying to evaluate what is my end state. End state obviously being a sort of nebulous term in today's world, but where do I want to be in two years' time? I would like to have a solution. Maybe it's an open-source solution, maybe it's a vendor solution, maybe it's one of those platform solutions we talked about, but how do I get there? It's really going to be that I need to take an iterative approach and I need to be very clear about the value and outcomes. There's no point in doing a whole bunch of instrumentation effort in things that are just working fine, right? You want to go and focus your time and attention on that. And also you don't want to go and burn just singular engineers. The observability team's purpose in life is probably not to just write instrumentation or just deploy OpenTelemetry. Because then we get back into the land where engineers themselves know nothing about the monitoring or observability they're doing and it just becomes a checkbox of, “I dropped in an agent. Oh, when it comes time for me to actually deal with an incident, I don't know anything about the data and the data is insufficient.” So, a level of ownership supported by the observability team is really important. On that return on investment, though, it's not just the instrumentation effort. There's product training and there are some very hard costs. People think oftentimes, “Well, I have the ability to pay a vendor; that's really the only cost that I have.” There are things like egress costs, particularly with large volumes of data. There are the infrastructure costs. A lot of the times there will be elements you need to run in your own environment; those can be very costly as well, and ultimately, they're sort of icebergs in this overall ROI conversation. The other side of it—you know, return on investment—the return, there's a lot of difficulty in reasoning about, as you said, what is the value of this going to be if I go through all this effort? 
Everyone knows a sort of, you know, meme or archetype of, “Hey, here are three options; pick two because there's always going to be a trade-off.” Particularly for observability, it's become an element of, I need to pick between performance, data fidelity, or cost. Pick two. And by data fidelity—particularly in tracing—I'm talking about the ability to not sample, right? If you have edge cases, if you have narrow use cases and ways you need to look at your data, if you heavily sample, you lose data fidelity. But oftentimes, cost is a reason why you do that. And then obviously, performance as you start to get bigger and bigger datasets. So, there are a lot of different things you need to balance on that return. As you said, oftentimes you don't get to understand the magnitude of those until you've got the full data set in and you're trying to do this, sort of, for real. But being prepared and iterative as you go through this effort and not saying, “Okay, well, I'm just going to buy everything from one vendor because I'm going to assume that's going to solve my problem,” is probably that undercurrent there. Corey: As I take a look across the entire ecosystem, I can't shake the feeling—and my apologies in advance if this is an observation, I guess, that winds up throwing a stone directly at you folks— Ian: Oh, please. Corey: But I see that there's a strong observability community out there that is absolutely aligned with the things I care about and things I want to do, and then there's a bunch of SaaS vendors, where it seems that they are, in many cases, yes, advancing the state of the art; I am not suggesting for a second that money is making observability worse. But I do think that when the tool you sell is a hammer, then every problem starts to look like a nail—or in my case, like my thumb. Do you think that there's a chance that SaaS vendors are in some ways making this entire space worse? Ian: As we've sort of gone into more cloud-native scenarios and people are building things specifically to take advantage of cloud from a complexity standpoint, from a scaling standpoint, you start to get, like, vertical issues happening. So, you have things like we're going to charge on a per-container basis; we're going to charge on a per-host basis; we're going to charge based off the amount of gigabytes that you send us. These are sort of like more horizontal pricing models, and the way the SaaS vendors have delivered this is they've made it pretty opaque, right? Everyone has experiences, or has heard stories, about overages from observability vendors' massive spikes. I've worked with customers who have used—accidentally used some features and they've been billed a quarter million dollars on a monthly basis for accidental overages from a SaaS vendor. And these are all terrible things. Like, but we've gotten used to this. Like, we've just accepted it, right, because everyone is operating this way. And I really do believe that the move to SaaS was one of those things. Like, “Oh, well, you're throwing us more data, and we're charging you more for it.” As a vendor— Corey: Which sort of erodes your own value proposition that you're bringing to the table. I mean, I don't mean to be sitting over here shaking my fist yelling, “Oh, I could build a better version in a weekend,” except that I absolutely know how to build a highly available Rsyslog cluster. I've done it a handful of times already and the technology is still there. 
Compare and contrast that with, at scale, the fact that I'm paying 50 cents per gigabyte ingested to CloudWatch logs, or a multiple of that for a lot of other vendors, it's not that much harder for me to scale that fleet out and pay a much smaller marginal cost.Ian: And so, I think the reaction that we're seeing in the market and we're starting to see—we're starting to see the rise of, sort of, a secondary class of vendor. And by secondary, I don't mean that they're lesser; I mean that they're, sort of like, specifically trying to address problems of the primary vendors, right? Everyone's aware of vendors who are attempting to reduce—well, let's take the example you gave on logs, right? There are vendors out there whose express purpose is to reduce the cost of your logging observability. They just sit in the middle; they are a middleman, right?Essentially, hey, use our tool and even though you're going to pay us a whole bunch of money, it's going to generate an overall return that is greater than if you had just continued pumping all of your logs over to your existing vendor. So, that's great. What we think really needs to happen, and one of the things we're doing at Chronosphere—unfortunate plug—is we're actually building those capabilities into the solution so it's actually end-to-end. And by end-to-end, I mean, a solution where I can ingest my data, I can preprocess my data, I can store it, query it, visualize it, all those things, aligned with open-source standards, but I have control over that data, and I understand what's going on with particularly my cost and my usage. I don't just get a bill at the end of the month going, “Hey, guess what? You've spent an additional $200,000.”Instead, I can know in real time, well, what is happening with my usage. And I can attribute it. It's this team over here. And it's because they added this particular label. And here's a way for you, right now, to address that and cap it so it doesn't cost you anything and it doesn't have a blast radius of, you know, maybe degraded performance or degraded fidelity of the data.That though is diametrically opposed to the way that most vendors are set up. And unfortunately, the open-source projects tend to take a lot of their cues, at least recently, from what's happening in the vendor space. One of the ways that you can think about it is a sort of like a speed of light problem. Everyone knows that, you know, there's basic fundamental latency; everyone knows how fast disk is; everyone knows the, sort of like, you can't just make your computations happen magically, there's a cost of running things horizontally. But a lot of the way that the vendors have presented efficiency to the market is, “Oh, we're just going to incrementally get faster as AWS gets faster. We're going to incrementally get better as compression gets better.”And of course, you can't go and fit a petabyte worth of data into a kilobyte, unless you're really just doing some sort of weird dictionary stuff, so you feel—you're dealing with some fundamental constraints. And the vendors just go, “I'm sorry, you know, we can't violate the speed of light.” But what you can do is you can start taking a look at, well, how is the data valuable, and start giving the people controls on how to make it more valuable. So, one of the things that we do with Chronosphere is we allow you to reshape Prometheus metrics, right? 
You go and express Prometheus metrics—let's say it's a business metric about how many transactions you're doing as a business—you don't need that on a per-container basis, particularly if you're running 100,000 containers globally. When you go and take a look at that number on a dashboard, or you alert on it, what is it? It's one number, one time series. Maybe you break it out per region. You have five regions, you don't need 100,000 data points every minute behind that. It's very expensive, it's not very performant, and as we talked about earlier, it's very hard to reason about as a human being. So, by giving the tools to be able to go and condense that data down and make it more actionable and more valuable, you get performance, you get cost reduction, and you get the value that you ultimately need out of the data. And it's one of the reasons why, I guess, I work at Chronosphere, which I'm hoping is the last observability [laugh] venture I ever work for. Corey: Yeah, for me, a lot of the data that I see in my logs, which is where a lot of this stuff starts and how I still contextualize these things, is nonsense that I don't care about and will never care about. I don't care about load balancer health checks. I don't particularly care about 200 results for the favicon when people visit the site. I care about other things, but just weed out the crap, especially when I'm paying by the pound—or at least by the gigabyte—in order to get that data into something. Yeah. It becomes obnoxious and difficult to filter out. Ian: Yeah. And the vendors just haven't done any of that because why would they, right? If you went and reduced the amount of log— Corey: Put engineering effort into something that reduces how much I can charge you? That sounds like lunacy. Yeah. Ian: Exactly. Their business models are entirely based off of it. So, if you went and reduced everyone's logging bill by 30%, or everyone's logging volume by 30% and reduced the bills by 30%, it's not going to be a great time if you're a publicly traded company that has built your entire business model on essentially a very SaaS volume-driven—and in my eyes—relatively exploitative pricing and billing model. Corey: Ian, I want to thank you for taking so much time out of your day to talk to me about this. If people want to learn more, where can they find you? I mean, you are a Field CTO, so clearly you're outstanding in your field. But assuming that people don't want to go to farm country, where's the best place to find you? Ian: Yeah. Well, it'll be a bunch of different conferences. I'll be at KubeCon this year. But chronosphere.io is the company website. I've had the opportunity to talk to a lot of different customers, not from a hard sell perspective, but you know, conversations like this about what are the real problems you're having and what are the things that you sort of wish that you could do? One of the favorite things that I get to ask people is, “If you could wave a magic wand, what would you love to be able to do with your observability solution?” That's, A, a really great part, but oftentimes, B, being able to say, “Well, actually, that thing you want to do, I think I have a way to accomplish that,” is a really rewarding part of this particular role. Corey: And we will, of course, put links to that in the show notes. Thank you so much for being so generous with your time. I appreciate it. Ian: Thanks, Corey. It's great to be here. Corey: Ian Smith, Field CTO at Chronosphere on this promoted guest episode. 
I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment, which is going to be super easy in your case, because it's just one of the things that the omnibus observability platform that your company sells offers as part of its full suite of things you've never used. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started. Announcer: This has been a HumblePod production. Stay humble.
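A small, hedged illustration of the pre-aggregation idea Ian describes in the conversation above: a business metric emitted from every container carries a high-cardinality label (the container ID) that nobody looks at on a dashboard, so it can be summed away before storage, leaving one series per region. The metric and label names below are invented for the example, and the helper mirrors what a PromQL "sum without (container)" aggregation does; this is a sketch in Python, not any vendor's actual API.

from collections import defaultdict

# Raw samples as (labels, value) pairs, e.g. scraped from thousands of containers.
raw_samples = [
    ({"metric": "transactions_total", "region": "us-east-1", "container": "c-001"}, 41.0),
    ({"metric": "transactions_total", "region": "us-east-1", "container": "c-002"}, 37.0),
    ({"metric": "transactions_total", "region": "eu-west-1", "container": "c-103"}, 52.0),
]

def aggregate_without(samples, drop_label):
    # Sum samples after removing one label, analogous to PromQL's sum without (<label>).
    totals = defaultdict(float)
    for labels, value in samples:
        key = tuple(sorted((k, v) for k, v in labels.items() if k != drop_label))
        totals[key] += value
    return dict(totals)

per_region = aggregate_without(raw_samples, "container")
for series, value in per_region.items():
    print(dict(series), value)   # only two series remain: one per region

With 100,000 containers spread across five regions, the same reduction turns 100,000 series per scrape into five, which is the kind of cut to both cost and cognitive load the conversation is pointing at.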

Astro arXiv | all categories
Joint optimization of wavefront sensing and reconstruction with automatic differentiation

Astro arXiv | all categories

Play Episode Listen Later Sep 13, 2022 0:49


Joint optimization of wavefront sensing and reconstruction with automatic differentiation by Rico Landman et al. on Tuesday 13 September. High-contrast imaging instruments need extreme wavefront control to directly image exoplanets. This requires highly sensitive wavefront sensors which optimally make use of the available photons to sense the wavefront. Here, we propose to numerically optimize Fourier-filtering wavefront sensors using automatic differentiation. First, we optimize the sensitivity of the wavefront sensor for different apertures and wavefront distributions. We find sensors that are more sensitive than currently used sensors and close to the theoretical limit, under the assumption of monochromatic light. Subsequently, we directly minimize the residual wavefront error by jointly optimizing the sensing and reconstruction. This is done by connecting differentiable models of the wavefront sensor and reconstructor and alternatingly improving them using a gradient-based optimizer. We also allow for nonlinearities in the wavefront reconstruction using Convolutional Neural Networks, which extends the design space of the wavefront sensor. Our results show that optimization can lead to wavefront sensors that have improved performance over currently used wavefront sensors. The proposed approach is flexible, and can in principle be used for any wavefront sensor architecture with free design parameters. arXiv: http://arxiv.org/abs/2209.05904v1
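For a feel of what "joint optimization with automatic differentiation" means in practice, here is a toy Python/JAX sketch of the alternating scheme the abstract describes: a differentiable stand-in for the wavefront sensor and a linear reconstructor are chained, and the residual wavefront error is driven down by gradient descent. The forward model, array sizes, and learning rate are invented for illustration only; the paper's actual model is a physical Fourier-filtering wavefront sensor, and its nonlinear reconstructors are convolutional neural networks.

import jax
import jax.numpy as jnp

n_modes, n_meas = 20, 64

def sense(sensor_params, wavefront):
    # Stand-in for the optical forward model: linear mixing plus a mild nonlinearity.
    return jnp.tanh(sensor_params @ wavefront)

def reconstruct(recon_matrix, measurement):
    # Linear reconstructor; the paper also explores CNN reconstructors.
    return recon_matrix @ measurement

def residual_error(params, wavefronts):
    sensor_params, recon_matrix = params
    estimates = jax.vmap(lambda w: reconstruct(recon_matrix, sense(sensor_params, w)))(wavefronts)
    return jnp.mean((estimates - wavefronts) ** 2)   # mean residual wavefront error

sensor_params = 0.1 * jax.random.normal(jax.random.PRNGKey(0), (n_meas, n_modes))
recon_matrix = jnp.zeros((n_modes, n_meas))
train_wavefronts = jax.random.normal(jax.random.PRNGKey(1), (256, n_modes))

loss_and_grad = jax.jit(jax.value_and_grad(residual_error))
lr = 1e-2
for step in range(1000):
    loss, (g_sensor, g_recon) = loss_and_grad((sensor_params, recon_matrix), train_wavefronts)
    # Alternate updates: even steps improve the sensor, odd steps the reconstructor.
    if step % 2 == 0:
        sensor_params = sensor_params - lr * g_sensor
    else:
        recon_matrix = recon_matrix - lr * g_recon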

Astro arXiv | all categories
Controlling petals using fringes: discontinuous wavefront sensing through sparse aperture interferometry at Subaru SCExAO

Astro arXiv | all categories

Play Episode Listen Later Sep 7, 2022 0:46


Controlling petals using fringes: discontinuous wavefront sensing through sparse aperture interferometry at Subaru SCExAO by Vincent Deo et al. on Wednesday 07 September. Low wind and petaling effects, caused by the discontinuous apertures of telescopes, are poorly corrected -- if at all -- by commonly used workhorse wavefront sensors (WFSs). Wavefront petaling breaks the coherence of the point spread function core, splitting it into several side lobes, dramatically shutting off scientific throughput. We demonstrate the re-purposing of non-redundant sparse aperture masking (SAM) interferometers into low-order WFSs complementing the high-order pyramid WFS, on the SCExAO experimental platform at Subaru Telescope. The SAM far-field interferograms formed from a 7-hole mask are used for direct retrieval of petaling aberrations, which are almost invisible to the main AO loop. We implement a visible light dual-band SAM mode, using two disjoint 25 nm wide channels, that we recombine to overcome the one-lambda ambiguity of fringe-tracking techniques. This enables a control over petaling with sufficient capture range yet without conflicting with coronagraphic modes in the near-infrared. We present on-sky engineering results demonstrating that the design is able to measure petaling well beyond the range of a single-wavelength equivalent design. arXiv: http://arxiv.org/abs/2209.02898v1
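The "one-lambda ambiguity" mentioned above is the usual phase-wrapping problem: a single-channel fringe tracker only recovers a petal piston modulo one wavelength. A standard way to see how combining two spectral channels extends the capture range is the synthetic-wavelength picture, sketched in Python below; this picture is not spelled out in the abstract, and the channel centers used here are illustrative placeholders, not the instrument's actual bands.

import numpy as np

lam1, lam2 = 725e-9, 775e-9                    # two visible-light channel centers (assumed values)
lam_synth = lam1 * lam2 / abs(lam2 - lam1)     # synthetic wavelength, about 11.2 microns

def wrapped_phase(opd, lam):
    # Phase a fringe tracker actually measures, wrapped to (-pi, pi].
    return np.angle(np.exp(2j * np.pi * opd / lam))

opd = 1.8e-6                                   # a petal piston well beyond one wavelength
phi1, phi2 = wrapped_phase(opd, lam1), wrapped_phase(opd, lam2)

est_single = phi1 / (2 * np.pi) * lam1         # ambiguous: only opd modulo lam1 is recoverable
dphi = np.angle(np.exp(1j * (phi1 - phi2)))    # beat phase between the two channels
est_dual = dphi / (2 * np.pi) * lam_synth      # unwraps over the synthetic wavelength

print(f"true piston   : {opd * 1e9:.0f} nm")
print(f"single channel: {est_single * 1e9:.0f} nm (mod {lam1 * 1e9:.0f} nm)")
print(f"dual channel  : {est_dual * 1e9:.0f} nm (capture range roughly +/- {lam_synth / 2 * 1e6:.1f} um)")

The beat between the two wrapped phases behaves like a measurement at lam1 * lam2 / |lam1 - lam2|, so pistons of a few microns can be tracked without ambiguity, which is the kind of extended range the on-sky results demonstrate.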

Data on Kubernetes Community
Operating FoundationDB on Kubernetes (DoK Day EU 2022) // Johannes M. Scheuermann

Data on Kubernetes Community

Play Episode Listen Later May 27, 2022 8:56


https://go.dok.community/slack https://dok.community/ From the DoK Day EU 2022 (https://youtu.be/Xi-h4XNd5tE) FoundationDB is an open-source distributed transactional key-value store that is used by multiple companies like Apple, Snowflake and VMWare Tanzu (previously Wavefront). This talk will cover the design of the FoundationDB operator and lessons learned from operating FoundationDB on Kubernetes. We will discuss some of the missing pieces in Kubernetes to make it easier to operate FoundationDB on top of it and how we solved those challenges in the operator. We will focus on the pieces of the FoundationDB operator that are different from most other operators and why we decided to implement those pieces the way they are. We will also discuss how to run a highly available FoundationDB cluster on top of Kubernetes and what different choices a user has. We will also cover some challenges that arise when running stateful services at scale on top of Kubernetes and how they can be managed. At the end of this talk we will give an outlook for future design changes and planned features in our operator. The main take-away from this talk is to understand how to run and operate FoundationDB on Kubernetes. Johannes started his journey in the Kubernetes ecosystem in early 2015, onboarding projects and applications onto Kubernetes. Since 2020, Johannes has worked as an SRE for FoundationDB at Apple and is co-leading the development of the open source FoundationDB operator.

Sonic Cinema Productions
EVP Podcast #46- Remakes (Name Please, Night Driving)

Sonic Cinema Productions

Play Episode Listen Later Jan 30, 2022 15:53


Some new remakes of classic shows from Wavefront and Deadline: Name Please and Night Driving.

Screaming in the Cloud
Making Multi-Cloud Waves with Betty Junod

Screaming in the Cloud

Play Episode Listen Later Nov 3, 2021 35:13


About Betty Betty Junod is the Senior Director of Multi-Cloud Solutions at VMware, helping organizations along their journey to cloud. This is her second time at VMware, having previously led product marketing for end-user computing products. Prior to VMware, she held marketing leadership roles at Docker and solo.io, following the evolution of technology abstractions from virtualization, to containers, to service mesh. She likes to hang out at the intersection of open source, distributed systems, and enterprise infrastructure software. @bettyjunod  Links: Twitter: https://twitter.com/BettyJunod Vmware.com/cloud: https://vmware.com/cloud Transcript Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: You know how git works, right? Announcer: Sorta, kinda, not really. Please ask someone else! Corey: That's all of us. Git is how we build things, and Netlify is one of the best ways I've found to build those things quickly for the web. Netlify's git-based workflows mean you don't have to play slap and tickle with integrating arcane nonsense and webhooks, which are themselves about as well understood as git. Give them a try and see what folks ranging from my fake Twitter-for-pets startup to global Fortune 2000 companies are raving about. If you end up talking to them, because you don't have to, they get why self-service is important—but if you do, be sure to tell them that I sent you and watch all of the blood drain from their faces instantly. You can find them in the AWS marketplace or at www.netlify.com. N-E-T-L-I-F-Y.com Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high-performance cloud compute at a price that—while sure, they claim it's better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r.com slash screaming. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Periodically, I like to poke fun at a variety of different things, and that can range from technologies or approaches like multi-cloud, and that includes business functions like marketing, and sometimes it extends even to companies like VMware. 
My guest today is the Senior Director of Multi-Cloud Solutions at VMware, so I'm basically spoilt for choice. Betty Junod, thank you so much for taking the time to speak with me today and tolerate what is no doubt going to be an interesting episode, one way or the other.Betty: Hey, Corey, thanks for having me. I've been a longtime follower, and I'm so happy to be here. And good to know that I'm kind of like the ultimate cross-section of all the things [laugh] that you can get snarky about.Corey: The only thing that's going to make that even better is if you tell me, “Oh, yeah, and I moonlight on a contract gig by naming AWS services.” And then I just won't even know where to go. But I'll assume they have to generate those custom names in-house.Betty: Yes. Yes, I think they do those there. I may comment on it after the fact.Corey: So, periodically I am, let's call it miscategorized, in my position on multi-cloud, which is that it's a worst practice that when you're designing something from scratch, you should almost certainly not be embracing unless you're targeting a very specific corner case. And I stand by that, but what that has been interpreted as by the industry, in many cases because people lack nuance when you express your opinions in tweet-sized format—who knew—as me saying, “Multi-cloud bad.” Maybe, maybe not. I'm not interested in assigning value judgment to it, but the reality is that there are an awful lot of multi-cloud deployments out there. And yes, some of them started off as, “We're going to migrate from one to the other,” and then people gave up and called it multi-cloud, but it is nuanced. VMware is a company that's been around for a long time. It has reinvented itself in a few different ways at different periods of its evolution, and it's still highly relevant. What is the Multi-Cloud Solutions group over at VMware? What do you folks do exactly?Betty: Yeah. And so I will start by multi-cloud; we're really taking it from a position of meeting the customer where they are. So, we know that if anything, the only thing that's a given in our industry is that there will be something new in the next six months, next year, and the whole idea of multi-cloud, from our perspective, is giving customers the optionality, so don't make it so that it's a closed thing for them. But if they decide—it's not that they're going to start, “Hey, I'm going to go to cloud, so day one, I'm going to go all-in on every cloud out there.” That doesn't make sense, right, as—Corey: But they all gave me such generous free credit offers when I founded my startup; I feel obligated to at this point.Betty: I mean, you can definitely create your account, log in, play around, get familiar with the console, but going from zero to being fully operationalized team to run production workloads with the same kind of SLAs you had before, across all three clouds—what—within a week is not feasible for people getting trained up and actually doing that. Our position is that meeting customers where they are and knowing that they may change their mind, or something new will come up—a new service—and they really want to use a new service from let's say GCP or AWS, they want to bring that with an application they already have or build a new app somewhere, we want to help enable that choice. And whether that choice applies to taking an existing app that's been running in their data center—probably on vSphere—to a new place, or building new stuff with containers, Kubernetes, serverless, whatever. 
So, it's all just about helping them actually take advantage of those technologies. Corey: So, what's interesting to me about your multi-cloud group, for lack of a better term, is that there are a bunch of things that fall under its umbrella. I believe Bitnami does—or as I insist on calling it, ‘bitten-A-M-I'—I believe that SaltStack—which I wrote a little bit of once upon a time, which tells me you folks did no due diligence whatsoever because everything I've ever written is molten garbage— Betty: Not [unintelligible 00:04:33]. Corey: And—so to be clear, SaltStack is good; just the parts that I wrote are almost certainly terrible because have you met me? Betty: I'll make a note. [laugh]. Corey: You have Wavefront, you have CloudHealth, you have a bunch of other things in the portfolio, and yeah, all those things do work across multiple clouds, but there's nothing that makes using any of those things a particularly bad idea even if you're all-in on one cloud provider, too. So, it's a portfolio that applies to a whole bunch of different places from your perspective, but it can be used regardless of where folks stand ideologically. Betty: Yes. So, this goes back to the whole idea that we meet the customers where they are and help them do what they want to do. So, with that, making sure these technologies that we have work on all the clouds, whether that be in the data center or the different vendors, so that if a customer wants to just use one, or two, or three, it's fine. That part's up to them. Corey: The challenge I've run into is that—and maybe this is a ‘Twitter Bubble' problem, but unfortunately, having talked to a whole bunch of folks in different contexts, I know it isn't—there's almost this idea that you have to be incredibly dogmatic about a particular technology that you're into. I joke periodically about the Rust Evangelism Strikeforce where their entire job is talking about using Rust; their primary IDE is PowerPoint because they're giving talks all the time about it rather than writing code. And great, that's a bit of an exaggeration, but there is this idea of a technology purist who takes, “Things must be this way,” well past the point of being reasonable, and disregards the reality that, yeah, the world is messy in a way that architectural diagrams never are. Betty: Yeah. The architectural diagrams are always 2D, right? Back to that PowerPoint slide: how can I make pretty boxes? And then I just redraw a line because something new came out. But you and I have been in this industry for a long time; there's always something new. And I think that's where the dogmatism gets problematic because if you say we're only going to do containers this way—you know, I could see Swarm and Kubernetes, or all-in on AWS and we're going to use all the things from AWS and there's only this way. Things are generational, and so the idea is that you want to face the reality and say that there is a little bit of everything. And then it's kind of like, how do you help them with a part of that? 
As a vendor, it could be like, “I'm going to help us with a part of it, or I'm going to help address certain eras of it.” That's where I think it gets really bad to be super dogmatic because it closes you off to possibly something new and amazing, new thinking, different ways to solve the same problem.Corey: That's the problem is left to our own devices, most of us who are building things, especially for random ideas, yeah, there's a whole modern paradigm of how I can build these things, but I'm going to shortcut to the thing I know best, which may very well the architectures that I was using 15 years ago, maybe tools that I was using 15 years ago. There's a reason that Vim is still as popular as it is. Would I recommend it to someone who's a new user? Absolutely not; it's user-hostile, but back in my days of being a grumpy sysadmin, you learned vi because it was on everything you could get into, and you never knew in what environment you were going to be encountering stuff. These days, you aren't logging in to remote systems to manage them, in most cases, and when it happens, it's a rarity and a bug.The world changes; different approaches change, but you have to almost reinvent your entire philosophy on how things work and what your career trajectory looks like. And you have to give up aspects of what you've considered to be part of your identity and embrace something new. It was hard for me to accept that, for example, Docker and the wave of containerization that was rolling out was effectively displacing the world that I was deep in of configuration management with Puppet and with Salt. And the world changes; I said, “Okay, now I'll work on cloud.” And if something else happens, and mainframes are coming back again, instead, well, I'm probably not going to sit here railing against the tide. It would be ridiculous to do that from my perspective. But I definitely understand the temptation to fight against it.Betty: Mm-hm. You know, we spend so much time learning parts of our craft, so it's hard to say, “I'm now not going to be an expert in my thing,” and I have to admit that something else might be better and I have to be a newbie again. That can be scary for someone who's spent a lot of time to be really well-versed in a specific technology. It's funny that you bring up the whole Docker and Puppet config management; I just had a healthy discussion over Slack with some friends. Some people that we know and comment about some of the newer areas of config management, and the whole idea is like, is it a new category or an evolution of? And I went back to the point that I made earlier is like, it's generations. We continually find new ways to solve a problem, and one thing now is it [sigh] it just all goes so much faster, now. There's a new thing every week. [laugh] it seems sometimes.Corey: It is, and this is the joy of having been in this industry for a while—toxic and broken in many ways though it is—is that you go through enough cycles of seeing today's shiny, new, amazing thing become tomorrow's legacy garbage that we're stuck supporting, which means that—at least from my perspective—I tend to be fairly conservative with adopting new technologies with respect to things that matter. 
That means that I'm unlikely to wind up looking at the front page of Hacker News to pick a framework to build a banking system in, and I'm unlikely to be the first kid on my block to update to a new file system or database, just because, yeah, if I break a web server, we all laugh, we make fun of the fact that it throws an error for ten minutes, and then things are back up and running. If I break the database, there's a terrific chance that we don't have a company anymore. So, it's the ‘mistakes will show' area and understanding when to be aggressive and when to hold back as far as jumping into new technologies is always a nuanced decision. And let's be clear as well, an awful lot of VMware's customers are large companies that were founded, somehow—this is possible—before 2010. Imagine that. Did people—Betty: [laugh]. I know, right?Corey: —even have businesses or lives back then? I thought we all used horse-driven carriages and whatnot. And they did not build on cloud—not because of any perception of distrust; because it functionally did not exist at the time that they were building these things. And, “Oh, come out into the cloud. It's fine now.” It… yeah, that application is generating hundreds of millions in revenue every quarter. Maybe we treat that with a little bit of respect, rather than YOLO-ing it into some Lambda-driven monster that's constructed—Betty: One hundred—Corey: —out of popsicle sticks and glue.Betty: —percent. Yes. I think people forget that. And it's not that these companies don't want to go to cloud. It's like, “I can't break this thing. That could be, like, millions of dollars lost, a second.”Corey: I write my weekly newsletters in a custom monstrosity of a system that has something like 30-some-odd Lambda functions, a bunch of API gateways that are tied together with things, and periodically there are challenges with it that break as the system continues to evolve. And that's fine. And I'm okay with using something like that as a part of my workflow because absolute worst case, I can go back to the way that my newsletter was originally written: in Google Docs, and it doesn't look anywhere near the same way, and it goes back to just a text email that starts off with, “I have messed up.” And that would be a better story than most of the stuff I put out as a common basis. Similarly, yeah, durability is important.If this were a serious life-critical app, it would not just be hanging out in a single region of a single provider; it would probably be on one provider, as I've talked about, but going multi-region and having backups to a different cloud provider. But if AWS takes a significant enough outage to us-west-2 in Oregon, to the point where my ridiculous system cannot function to write the newsletter, that too, is a different handwritten email that goes out that week because there's no announcement they've made that anyone's going to give the slightest toss about, given the fact that it's basically Cloud Armageddon. So, we'll see. It's about understanding the blast radius and understanding your use case.Betty: Yep. A hundred percent.Corey: So, you've spent a fair bit of time doing interesting things in your career. This is your second outing at VMware, and in the interim, you were at solo.io for a bit, and before that you were in a marketing leadership role at Docker. Let's dive in, if you will. 
Given that you are no longer working at Docker, they recently made an announcement about a pricing model change, where it is free to use Docker Desktop for anyone's personal projects, and for small companies. But if you're a large company, which they define as ten million in revenue a year or 250 employees—those two things don't exactly go together, but okay—then you have to wind up having a paid plan. And I will say it's a novel approach, but I'm curious to hear what you have to say about it. Betty: Well, I'd say that I saw that there was a lot of flutter about that news, and it's kind of a, it doesn't matter where you draw the line in the sand for the tier, there's always going to be some pushback on it. So, you have to draw a line somewhere. I haven't kept up with the details around the pricing models that they've implemented since I left Docker a few years ago, but monetization is a really important part for a startup. You do have to make money because there are people that you have to pay, and eventually, you want to get off of raising money from VCs all the time. Docker Desktop has been something that has been a real gem from a local developer experience, right, giving the—so that has been well-received by the community. I think there was an enterprise application for it, but when I saw that, I was like, yeah, okay, cool. They need to do something with that. And then it's always hard to see the blowback. I think sometimes with the years that we've had with Docker, it's kind of like no matter what they do, the Twitterverse and Hacker News is going to just give them a hard time. I mean, that is my honest opinion on that. If they didn't do it, and then, say, they didn't make the kind of revenue they needed, people would—that would become another Twitter thread and Hacker News blow-up, and if they do it, you'll still have that same reaction. Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure: networking, databases, observability, management, and security. And - let me be clear here - it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers, needed to support the application that you want to build. With Always Free, you can do things like run small-scale applications or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free. No asterisk. Start now. Visit https://snark.cloud/oci-free. That's https://snark.cloud/oci-free. Corey: It seems to be that Docker has been trying to figure out how to monetize for a very long time because let's be clear here; I think it is difficult to overstate just how impactful and transformative Docker was to the industry. I gave a talk “Heresy in the Church of Docker” that listed a bunch of things that didn't get solved with Docker, and I expected to be torn to pieces for it, and instead I was invited to give it at ContainerCon one year. And in time, a lot of those things stopped being issues because the industry found answers to them.
Now, unfortunately, some of those answers look like Kubernetes, but that's neither here nor there. But now it's, okay, so giving everything that you do that is core and central away for free is absolutely part of what drove the adoption that it saw, but goodwill from developers is not the sort of thing that generally tends to lead to interesting revenue streams.So, they had to do something. And they've tried a few different things that haven't seemed to really pan out. Then they spun off that pesky part of their business that made money selling support contracts, over to Mirantis, which was apparently looking for something now that OpenStack was no longer going to be a thing, and Kubernetes is okay, “Well, we'll take Docker enterprise stuff.” Great. What do they do, as far as turning this into a revenue model?There's a lot of the, I guess, noise that I tend to ignore when it comes to things like this because angry people on Twitter, or on Hacker News, or other terrible cesspools on the internet, are not where this is going to be decided. What I'm interested in is what the actual large companies are going to say about it. My problem with looking at it from the outside is that it feels as if there's significant ambiguity across the board. And if there's one thing that I know about large company procurement departments, it's that they do not like ambiguity. This change takes effect in three or four months, which is underwear-outside-the-pants-superhero-style speed for a lot of those companies, and suddenly, for a lot of developers, they're so far removed from the procurement side of the house that they are never going to have a hope of getting that approved on a career-wide timespan.And suddenly, for a lot of those companies, installing and running Docker Desktop just became a fireable offense because from the company's perspective, the sheer liability side of it, if they were getting subject to audit, is going to be a problem. I don't believe that Docker is going to start pulling Oracle-like audit tactics, but no procurement or risk management group in the world is going to take that on faith. So, the problem is not that it's expensive because that can be worked around; it's not that there's anything inherently wrong with their costing model. The problem is the ambiguity of people who just don't know, “Does this apply to me or doesn't this apply to me?” And that is the thing that is the difficult, painful part.And now, as a result, the [unintelligible 00:17:28] groups and their champions of Docker Desktop are having to spend a lot more time, energy, and thought on this than it would simply be for cutting a check because now it's a risk org-wide, and how do we audit to figure out who's installed this previously free open-source thing? Now what?Betty: Yeah, I'll agree with you on that because once you start making it into corporate-issued software that you have to install on the desktop, that gets a lot harder. And how do you know who's downloaded it? Like my own experience, right? I have a locked-down laptop; I can't just install whatever I want. We have a software portal, which lets me download the approved things.So, it's that same kind of model. 
I'd be curious because once you start looking at from a large enterprise perspective, your developers are working on IP, so you don't want that on something that they've downloaded using their personal account because now it sits—that code is sitting with their personal account that's using this tool that's super productive for them, and that transition to then go to an enterprise, large enterprise and going through a procurement cycle, getting a master services agreement, that's no small feat. That's a whole motion that is different than someone swiping a credit card or just downloading something and logging in. It's similar to what you see sometimes with the—how many people have signed up for and paid 99 bucks for Dropbox, and then now all of a sudden, it's like, “Wow, we have all of megacorp [laugh] signed up, and then now someone has to sell them a plan to actually manage it and make sure it's not just sitting on all these personal drives.”Corey: Well, that's what AWS's original sales motion looked a lot like they would come in and talk to the CTO or whatnot at giant companies. And the CTO would say, “Great, why should we pick AWS for our cloud needs?” And the answer is, “Oh, I'm sorry. You have 87 distinct accounts within your organization that we've [unintelligible 00:19:12] up for you. We're just trying to offer you some management answers and unify the billing and this, and probably give you a discount as well because there is price breaks available at certain sizing.” It was a different conversation. It's like, “I'm not here to sell you anything. We're already there. We're just trying to formalize the relationship.” And that is a challenge.Again, I'm not trying to cast aspersions on procurement groups. I mean, I do sell enterprise consulting here at The Duckbill Group; we deal with an awful lot of procurement groups who have processes and procedures that don't often align to the way that we do things as a ten-person, fully remote company. We do not have commercial vehicle insurance, for example, because we do not have a commercial vehicle and that is a prerequisite to getting the insurance, for one. We're unlikely to buy one to wind up satisfying some contractual requirements, so we have to go back and forth and get things like that removed. And that is the nature of the beast.And we can say yes, we can say no on a lot of those questionnaires, but, “It depends,” or, “I don't know,” is the sort of thing that's going to cause giant red flags and derail everything. But that is exactly what Docker is doing. Now, it's the well, we have a sort of sloppy, weird set of habits with some of our engineers around the bring your own device to work thing. So, that's the enterprise thing. Let me be very clear, here at The Duckbill Group, we have a policy of issuing people company machines, we manage them very lightly just to make sure the drives are encrypted, so they—and that the screensaver comes out with a password, so if someone loses a laptop, it's just, “Replace the hardware,” not, “We have a data breach.”Let's be clear here; we are responsible about these things. But beyond that, it's oh, you want to have some personal thing installed on your machine or do some work on that stuff? Fine. By all means. 
It's a situation of we have no policy against it; we understand this is how work happens, and we trust people to effectively be grownups.There are some things I would strongly suggest that any employee—ours or anyone else—not cross the streams on for obvious IP ownership rights and the rest, we have those conversations with our team for a reason. It's, understand the nuances of what you're doing, and we're always willing to throw hardware at people to solve these problems. Not every company is like that. And ten million in revenue is not necessarily a very large company. I was doing the math out for ten million in revenue or 250 employees; assuming that there's no outside investment—which with VC is always a weird thing—it's possible—barely—to have a $10 million in revenue company that has 250 employees, but if they're full time they are damn close to a $15 an hour minimum wage. So, who does it apply to? More people than you might believe.Betty: Yeah, I'm really curious to how they're going to like—like you say, if it takes place in three or four months, roll that out, and how would you actually track it and true that up for people? So.Corey: Yeah. And there are tools and processes to do this, but it's also not in anyone's roadmap because people are not sitting here on their annual planning periods—which is always aspirational—but no one's planning for, “Oh, yeah, Q3, one of our software suppliers is going to throw a real procurement wrench at us that we have to devote time, energy, resources, and budget to figure out.” And then you have a problem. And by resources, I do mean resources of basically assigning work and tooling and whatnot and energy, not people. People are humans, they are not resources; I will die on that hill.Betty: Well, you know, actually resource-wise, the thing that's interesting is when you say supplier, if it's something that people have been able to download for free so far, it's not considered a supplier. So, it's—now they're going to go from just a thing I can use and maybe you've let your developers use to now it has to be something that goes through the official internal vetting as being a supplier. So, that's just—it's a whole different ball game entirely.Corey: My last job before I started this place, was a highly regulated financial institution, and even grabbing things were available for free, “Well, hang on a minute because what license is it using and how is it going to potentially be incorporated?” And this stuff makes sense, and it's important. Now, admittedly, I have the advantage of a number of my engineering peers in that I've been married to a corporate attorney for 11 years and have insight into that side of the world, which to be clear, is all about risk mitigation which is helpful. It is a nuanced and difficult field to—as are most things once you get into them—and it's just the uncertainty that befuddles me a bit. I wish them well with it, truly I do. I think the world is better with an independent Docker in it, but I question whether this is going to find success. That said, it doesn't matter what I think; what matters is what customers say and do, and I'm really looking forward to seeing how it plays out.Betty: A hundred percent; same here. As someone who spent a good chunk of my life there, their mark on the industry is not to be ignored, like you said, with what happened with containers. But I do wish them well. 
There's a lot of good people over there, it's some really cool tech, and I want to see a future for them. Corey: One last topic I want to get into before we wind up wrapping this episode is that you are someone who was nominated to come on the show by a couple of folks, which is always great. I'm always looking for recommendations on this. But what's odd is that you are—if we look at it and dig a little bit beneath the titles and whatnot, you even self-describe your history as marketing leadership positions. It is uncommon for engineering-types to recommend that I talk to marketing folks. Personally, I think that is a mistake; I consider myself more of a marketer than not in some respects, but it is uncommon, which means I have to ask you, what is your philosophy of marketing? Because it very clearly is differentiated in the public eye. Betty: I'm flattered. I will say that—and this goes to how I hire people and how I coach teams—it's that you have to be super curious because there's a ton of bad marketing out there, where it's just kind of like, “Hey, we do these five things and we always do these five things: blah, blah, blah, blah, blah.” But I think it's really being curious about what is the thing that you're marketing? There are people who are just focused on the function of marketing and not the thing. Because you're doing your marketing job in the service of a thing, this new widget, this new whatever, and you've got to be super curious about it. And I'll tell you that, for me, it's really hard for me to market something if I'm not excited about it. I have to personally be super excited about the tech or something happening in the industry, and it's, kind of like, an all-in thing for me. And so in that sense, I do spend a ton of time with engineers and end-users, and I really try to understand what's going on. I want to understand how the thing works, and I always ask them, “Well”—so I'll ask the engineers, like, “So… okay, this sounds really cool. You just described this new feature and you're super excited about it because you wrote it, but how is your end-user, the person you're building this for, how did they do this before? Help me understand. How did they do this before and why is this better?” Just really dig into it because for me, I want to understand it deeply before I talk about it. I think the thing is, it shows a tremendous amount of respect for the builder, and then to try to really be empathetic, to understand what they're doing and then partner with them—I mean, this sounds so business-y the way I'm talking about this—but really be a partner with them and just help them make their thing really successful. I'm like the other end; you're going to build this great thing and now I'm going to make it sound like it's the best thing that's ever happened. But to do that, I really need to deeply understand what it is, and I have to care about it, too. I have to care about it in the way that you care about it. Corey: I cannot effectively market or sell something that I don't believe in, personally. I also, to be clear because you are a marketing professional—or at least far more of one than I ever was—I do not view what I do as marketing; I view it as spectacle. And it's about telling stories to people, it's about learning what the market thinks about it, and that informs product design in many respects. It's about understanding the product itself. 
It's about being able to use the product. And if people are listening to this and think, “Wait a minute, that sounds more like DevRel.” I have news for you. DevRel is marketing, they're just scared to tell you that. And I know people are going to disagree with me on that. You're wrong. But that's okay; reasonable people can disagree. And that's how I see it is that, okay, I'll talk to people building the service, I'll talk to people using the service, but then I'm going to build something with the service myself because until then, it's all a game of who sounds the most convincing in the stories that they tell. But okay, you can tell an amazing story about something, but if it falls over when I try to use it, well, I'm sorry, you're not being accurate in your descriptions of it.Betty: A hundred percent. I hate to say, like, you're storytellers, but that's a big part of it, but it's kind of like you want to tell the story, so you do something so that people believe a certain thing. But that's part of a curated experience because you want them to try this thing in a certain way. Because you've designed it for something. “I built a spoon. I want you to use that to eat your soup because you can't eat soup with a fork.” So, then you'll have this amazing soup-eating experience, but if I build you a spoon and then not give you any directions and you start throwing it at cars, you're going to be like, “This thing sucks.” So, I kind of think of it in that way. To your point of it has to actually work, it's like, but they also need to know, “What am I supposed to use it for?”Corey: The problem I've always had on some visceral level with formal marketing departments for companies is that they can say that a product that they sell is good, they can say that the product is great, or they can choose to say nothing at all about that product, but when there's a product in the market that is clearly a turd, a marketing department is never going to be able to say that, which I think erodes its authenticity in many respects. I understand the constraints behind that, truly I do, but it's the one superpower I think that I bring to the table where even when I do sponsorship stuff, it's: you can buy my attention but not my opinion. Because the authenticity of me being trusted to call them like I see them, for lack of a better term, to my mind at least outweighs any short-term benefit from saying good things about a product that doesn't deserve them. Now, I've been wrong about things, sure. I have also been misinformed in both directions, thinking something is great when it's not, or terrible when it isn't, or not understanding the use case, and I am thrilled to engage in those debates. “But this is really expensive when you run it for this use case,” and the answer can be, “Well, it's not designed for that use case.” But the answer should not be, “No it's not.” I promise you, expensive is in the eye of the customer, not the person building the thing.Betty: Yes. This goes back to I have to believe in the thing. And I do agree it's, like not [sigh]—it's not a panacea. You're not going to make Product A and it's going to solve everything. But being super clear and focused on what it is good for, and then please just try it in this way because that's what we built it for.Corey: I want to thank you for taking the time to have what for some people is no doubt going to be perceived as a surprisingly civil conversation about things that I have loud, heated opinions about. 
If people want to learn more, where can they find you?Betty: Well, they can follow me on Twitter. But um, I'd say go to vmware.com/cloud for our work thing.Corey: Exactly. VM where? That's right. VM there. And we will, of course, put links to that in the [show notes 00:30:07].Betty: [laugh].Corey: Thank you so much for taking the time to speak with me. I appreciate it.Betty: Thanks, Corey.Corey: Betty Junod, Senior Director of Multi-Cloud Solutions at VMware. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with a loud, ranting comment at the end. Then, if you work for a company that is larger than 250 people or $10 million in revenue, please also Venmo me $5.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

BadGeek
Les Cast Codeurs n°265 du 13/10/21 - LCC 265 - Chérie, ça va couper

BadGeek

Play Episode Listen Later Oct 13, 2021 63:39


In this new news episode, Arnaud, Emmanuel, and Audrey look back at Oracle's announcements about the JDK, at Spring One, and also at the small data leaks and the widespread outage that made the news recently. Recorded on October 8, 2021. Episode download: [LesCastCodeurs-Episode-265.mp3](https://traffic.libsyn.com/lescastcodeurs/LesCastCodeurs-Episode-265.mp3) ## News ### Languages [Oracle announces two-year LTS releases](https://inside.java/2021/09/14/moving-the-jdk-to-a-2-year-lts-candence/) * So an LTS every 2 years instead of 3, which means the next one will be 21 rather than 23. * A recent developer survey shows that between a quarter and half of respondents use the six-month releases in development, but less than half of those use them in production * No detail, though, on how long free security patch support lasts. With Oracle's paid support it's 8 years [Oracle offers the Oracle JDK for free, with support for 1 LTS + 1 year (so 3 years)](https://blogs.oracle.com/java/post/free-java-license) * Java 17 and up * Free redistribution too. No click-through. * Under the NFTC license ("Oracle No-Fee Terms and Conditions"). * Are they tired of having competition? [In JDK 18, with JEP 400, the default charset will finally switch to UTF-8](https://inside.java/2021/10/04/the-default-charset-jep400/) * It hadn't really been a problem for a while on macOS or Linux systems, which have defaulted to UTF-8 for quite some time, but it matters mostly on Windows systems, where it is more problematic * JDK 17 introduced the system property `System.getProperty("native.encoding")`, for example if you want to read a file with it * Two mitigation approaches for compatibility issues, * recompile and use this property when opening a file * use -Dfile.encoding=COMPAT without recompiling, which keeps the same behavior as JDK 17 and earlier * The Oracle team suggests testing your applications with -Dfile.encoding=UTF-8 to check that nothing breaks (an illustrative snippet follows at the end of these show notes) ### Libraries [JUnit 5.8](https://junit.org/junit5/docs/current/release-notes/index.html#release-notes-5.8.0) * test classes can be ordered with the Class Order API (by class name, display name, with @Order, or randomly) * nested test classes can be ordered with @TestClassOrder * @ExtendWith can now be used to register extensions via fields or method parameters (constructor, test methods, or lifecycle methods) * @RegisterExtension can now be used on private fields. * `assertThrowsExactly`, a stricter version of assertThrows() * `assertDoesNotThrow()` supports Kotlin suspending functions * `assertInstanceOf` produces better error messages (a replacement for `assertTrue(obj instanceof X)`) * `assertNull` now includes the type of the object if its `toString` method returns null, to avoid messages of the form `expected <null> but was <null>` * @TempDir * can now be used to create multiple temporary directories (the per-context mode can be restored through configuration) * resets the read and write permissions of the root directory and of all contained directories rather than failing to delete them * can now be used on private fields * New UniqueIdTrackingListener that generates a file containing the IDs of the executed tests, which can be used to re-run those tests in a GraalVM native image, for example (a short test sketch follows at the end of these show notes)
[Stephen Colebourne warns Joda Time users not to update the time zone database](https://blog.joda.org/2021/09/big-problems-at-timezone-database.html) * The people responsible for this database want to merge certain zones together, for example Oslo and Berlin, even though these two cities (and others) have not necessarily always had the same time * The database is supposed to record all changes since 1970 * but by merging several zones, the risk is losing the pre-1970 history Spring.io recap: * [Day 1 recap](https://tanzu.vmware.com/content/blog/springone-2021-day-1-recap-and-highlights) * [Day 2 recap](https://tanzu.vmware.com/content/blog/springone-2021-day-2-recap-and-highlights) * [Video recap by Josh Long](https://www.youtube.com/watch?v=VMtUzytjo6Y) * [State of Spring 2021](https://www.youtube.com/watch?v=O0-IhAKnkWM) * the numbers: * 61% of respondents use Spring Boot * 94% of them to build microservices * 35% on reactive architectures * 61% would like to move to native within 2 years * New baseline for Spring Framework 6.0 * Java 17 and Jakarta EE 9 starting with Spring Framework 6.0 M1, arriving Q4 2021 (GA in Q4 2022) * Spring Native moves into Spring Framework * AOT compilation will benefit JVM deployments too * Spring Boot starter for native applications * Spring Boot will offer native build and configuration plugins starting with 3.0 * RSocket and GraphQL support * Spring Observability moves into Spring Framework * unified API for metrics and tracing, compatible with Micrometer, Wavefront, Zipkin, Brave, and OpenTelemetry * consistent integration across the whole portfolio * auto-configuration in Spring Boot 3.0 * core abstractions in Spring Framework 6.0 * [Spring Native](https://springone.io/2021/sessions/spring-native) * [From Spring Framework 5.3 to 6.0](https://springone.io/2021/sessions/from-spring-framework-5-3-to-6-0) ### Infrastructure (more Spring.io announcements) [Tanzu Application Platform](https://tanzu.vmware.com/content/blog/announcing-vmware-tanzu-application-platform): * a platform delivered with the whole toolchain, but configurable if teams prefer to use other tools than the ones provided * compatible with AKS, EKS, GKS, and TKG. * application accelerator (inspired by Spring Initializr) to generate the templates of the applications that will then be deployed * Spring Cloud Gateway for K8s and API Portal for VMware Tanzu [Tanzu Community Edition](https://tanzu.vmware.com/content/blog/vmware-tanzu-community-edition-announcement): * OSS version of Tanzu ### Cloud [Azure installs agents in its Linux images and they are vulnerable through auto-update](https://www.wiz.io/blog/secret-agent-exposes-azure-customers-to-unauthorized-code-execution) * Related to OMI (Open Management Infrastructure, the equivalent of Windows Management Infrastructure (WMI) for UNIX systems), which runs as root with full privileges * As soon as you use services such as Azure Log, they install it in the VMs * The article blames open source and the fact that there are only 20 contributors. That's a bit of BS. 
* In fact, if it is installed via a service, the service will update it * But MS recommends updating manually as well ### Web [Julia Evans explains CORS to us](https://twitter.com/b0rk/status/1445039796804542473) * Julia explains how the browser behaves when it sees that you are trying to reach a URL on a different domain than the one of the loaded web page, and the browser wonders whether it is allowed to load that page * It makes a “preflight” request (with the HTTP OPTIONS method) to find out whether it is allowed or not, and if so, it can then access the resource (a small preflight sketch follows at the end of these show notes) * Julia explains the same-origin policy (i.e. you should only access resources from the domain you are currently visiting in your browser) ### Data [Kafka 3.0](https://blogs.apache.org/kafka/) * Java 8 and Scala 2.12 support is deprecated and will be removed in version 4 * New improvements to KRaft, the consensus mechanism that will eventually replace ZooKeeper ### Tooling [TravisCI accidentally shares your secrets in all the PRs of your repos](https://arstechnica.com/information-technology/2021/09/travis-ci-flaw-exposed-secrets-for-thousands-of-open-source-projects/) * the problem lasted 8 days * secret rotation recommended * Travis initially patched quietly, without disclosure, which caused an uproar ### Architecture [Facebook went down for about 6 hours](https://engineering.fb.com/2021/10/05/networking-traffic/outage-details/) * Facebook plans maintenance on its backbone (routine) * An engineer mistakenly runs a command that declares the whole backbone unreachable * Oops, the audit system that should prevent such a command from running is buggy, and the command goes through... * Facebook's entire infrastructure is now disconnected from the internet. The BGP advertisements stop since the Facebook infrastructure is no longer available, the DNS entries for Facebook are deprovisioned, and the world can no longer reach Facebook * The engineers quickly understand the problem, except that they have lost remote access to the services and most of their internal systems are down because of the DNS withdrawal * So they send staff on site to the data centers to physically bring the infrastructure back up, but physical access to the machines is heavily protected * They eventually manage it, EXCEPT that restarting everything poses a real challenge because of the flood of traffic coming back. They risk taking the data centers down again because of the electrical overload (not to mention higher-level problems such as reloading the caches, etc.) * Fortunately they have a recovery plan that they test regularly, designed rather for a storm that would knock out all or part of the network. That system works well, everything gradually goes back to normal, Facebook is saved, and the planet has lost 5 IQ points again * [Julia Evans explores BGP and how it works in this article](https://jvns.ca/blog/2021/10/05/tools-to-look-at-bgp-routes/) * [Seen from the outside with Cloudflare](https://blog.cloudflare.com/october-2021-facebook-outage/) * The impact was not only on DNS but on the BGP routes themselves. These routes say that an IP (or range of IPs) belongs to a given party. * Fundamentally a trust model. 
* Interesting to see how Facebook's DNS being down added a lot of traffic to the main DNS servers, which do not cache the SERVFAIL ### Security [Massive data leak at Twitch](https://cyberguerre.numerama.com/13464-fuite-sur-twitch-revenus-de-streameurs-4-questions-sur-le-leak-colossal-qui-frapperait-la-plateforme.html) * What? * the entire source code * The revenues (over 3 years) of more than 10,000 Twitch streamers were published on the net. * some AWS access credentials * note that this is part 1; more data could follow soon * How? * Officially, following an error in a configuration change * Unofficially, it is more likely an employee or a former employee * Why? * the 4chan message denounces a "disgusting toxic cesspool", which could refer to the harassment problems and hostile raids targeting streamers because of their ethnic origin, sexual orientation, or gender. * there is also a call for healthier competition in the video game streaming sector ## Conferences [DevFest Nantes, October 21 and 22, 2021](https://devfest.gdgnantes.com/) [DevFest Lille, November 19, 2021](https://devfest.gdglille.org/) [SunnyTech, June 30 and July 1, 2022, in Montpellier](https://sunny-tech.io/) ## Contact us Support Les Cast Codeurs on Patreon [Do a crowdcast or ask a crowdquestion](https://lescastcodeurs.com/crowdcasting/) Contact us via Twitter, on the Google group, or on the website
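To make the JEP 400 item in the Languages section above a bit more concrete, here is a minimal, hedged sketch of reading a legacy file with the platform's `native.encoding` property instead of relying on the default charset; the file name `legacy.txt` is a placeholder invented for this example, not something from the episode.

```java
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class NativeEncodingRead {
    public static void main(String[] args) throws IOException {
        // JDK 17 introduced the native.encoding system property; on JDK 18+ the
        // *default* charset is UTF-8 (JEP 400), which may differ from it on Windows.
        String nativeEncoding = System.getProperty("native.encoding");
        Charset charset = nativeEncoding != null
                ? Charset.forName(nativeEncoding)
                : Charset.defaultCharset(); // fallback for JDKs older than 17

        System.out.println("default charset = " + Charset.defaultCharset());
        System.out.println("native.encoding = " + nativeEncoding);

        // Read a legacy file with the platform encoding instead of the UTF-8 default.
        // "legacy.txt" is a made-up path for this sketch.
        List<String> lines = Files.readAllLines(Path.of("legacy.txt"), charset);
        lines.forEach(System.out::println);
    }
}
```

Running the same class with `-Dfile.encoding=COMPAT` or `-Dfile.encoding=UTF-8`, as the notes above suggest, is one way to compare pre- and post-JEP 400 behavior before upgrading.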
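Likewise, here is a small illustrative JUnit 5.8 test class sketching `assertThrowsExactly`, `assertInstanceOf`, and `@TempDir` on a private field; the class and method names are made up for the example.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertInstanceOf;
import static org.junit.jupiter.api.Assertions.assertThrowsExactly;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;

class Junit58FeaturesTest {

    // Since 5.8, @TempDir may be applied to a private field.
    @TempDir
    private Path workDir;

    @Test
    void exactExceptionType() {
        // Fails if a subclass of IllegalArgumentException is thrown instead.
        assertThrowsExactly(IllegalArgumentException.class,
                () -> { throw new IllegalArgumentException("boom"); });
    }

    @Test
    void instanceOfWithBetterMessage() {
        Object value = "hello";
        // Replacement for assertTrue(value instanceof String), with a clearer failure message.
        String s = assertInstanceOf(String.class, value);
        assertEquals(5, s.length());
    }

    @Test
    void tempDirIsWritable() throws Exception {
        Path file = Files.writeString(workDir.resolve("note.txt"), "hi");
        assertTrue(Files.exists(file));
    }
}
```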
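Finally, for the CORS item in the Web section, a rough sketch of what a browser-style preflight looks like at the HTTP level, sent by hand with `java.net.http.HttpClient`; the URL, origin, and header values are placeholders, and a real browser performs this exchange automatically.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PreflightProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // A CORS preflight is an OPTIONS request announcing the origin, the method,
        // and the headers the real request intends to use.
        // Note: setting the Origin header requires JDK 12+ (older HttpClient versions
        // treated it as a restricted header).
        HttpRequest preflight = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/data"))    // placeholder URL
                .method("OPTIONS", HttpRequest.BodyPublishers.noBody())
                .header("Origin", "https://app.example.org")         // placeholder origin
                .header("Access-Control-Request-Method", "PUT")
                .header("Access-Control-Request-Headers", "content-type")
                .build();

        HttpResponse<Void> response =
                client.send(preflight, HttpResponse.BodyHandlers.discarding());

        // The browser only proceeds if these response headers allow the origin/method.
        System.out.println("status: " + response.statusCode());
        response.headers().firstValue("Access-Control-Allow-Origin")
                .ifPresent(v -> System.out.println("Access-Control-Allow-Origin: " + v));
        response.headers().firstValue("Access-Control-Allow-Methods")
                .ifPresent(v -> System.out.println("Access-Control-Allow-Methods: " + v));
    }
}
```

If the response carries matching `Access-Control-Allow-*` headers, the browser goes ahead with the actual request; otherwise it blocks it, which is the behavior Julia's thread walks through.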


Moonlight Audio Theatre
EVP'S WAVEFRONT ANTHOLOGY: "Voting is Anonymous"

Moonlight Audio Theatre

Play Episode Listen Later Aug 6, 2021 27:42


The Wave Front Anthology series is a science fiction collection that explores futuristic societies, dystopian nightmares, technological travesties, and bleak tomorrows. Voting is Anonymous (SCI-FI) - In a dystopian future, voting is an earned right, and "spoiling" ballots means killing your political opponents. Dedicated to Bill Hollweg, the John Carpenter of Audio Drama. Starring: Jeffrey Billard, Lothar Tuppan, Jan Didur, Pete Lutz, Tanja Milojevic and Jack Ward. Written and Directed: Jack J. Ward Produced: Lothar Tuppan Music: Sharon Bee

8111
Barbara Townsend

8111

Play Episode Listen Later Jul 19, 2021 72:49


Barbara Townsend was born in Florida, spent some time in Arizona, Texas, St. Louis, and lived her formative years in Memphis. She got into radio as a young person in high school with her own radio show. She went to school at Arkansas State where she earned a BS degree in Radio/TV. She moved back to St. Louis where she worked at a local access station running cameras for college football, taping interviews with fans, and doing general production work. She continued in broadcasting and eventually wound up working on the Today Show in NY, Saturday Night Live, and Meet the Press in Washington. She took the Chiron training course in Long Island and it opened doors throughout the industry. She began doing broadcast graphics with Paintbox and Harry systems. She approached her boss and asked to go out to Santa Barbara to train on Wavefront. A connection with Henry LaBounta led to an interview with John Berton in the computer graphics division of ILM. Barbara was hired and worked at the company as a technical director for the next 11 years. Her credits include Baby's Day Out, Star Trek, The Mask, Twister, Mars Attacks!, Men in Black 2, Magnolia, Pearl Harbor, Hulk, and many others. She later spent a number of years working full-time as a mother. As her kids got a little older, she pursued a master's degree in counseling and today works as a full-time therapist engaging with clients in her private practice. She is engaged in psychedelic-assisted therapy, using her systems knowledge, education, and life experience to help people with severe depression. Barbara is an amazing person and it was great to catch up with her and hear about her incredible journey.  

CG Garage
Episode 330 - Patrick Osborne - Animator, Writer, and Director

CG Garage

Play Episode Listen Later Jun 14, 2021 62:04


Patrick Osborne grew up with a passion for Nintendo, Jurassic Park, and Wavefront—and he got to see behind the scenes of movie merchandising via his dad's job as head of design at Kenner Toys. After attending the prestigious Ringling College of Art + Design, he joined Sony Imageworks, then Disney, and polished his skills as an animator. While at Disney, Patrick directed the short film “Feast,” which won an Academy Award, and then moved into directing full-time on the sitcom Imaginary Mary. Patrick talks about the tools he's used throughout his career, what it's like to become a Hollywood director, and his favorite part of the filmmaking process. He also gives a sneak peek into his upcoming experiments with real-time, VR, and in-progress shorts for Love, Death & Robots, and Apple.

The Mutual Audio Network
Wavefront Short: Reservations(053021)

The Mutual Audio Network

Play Episode Listen Later May 30, 2021 15:42


Just released this week and heard on Wednesday Wonders, Jack Ward from Electric Vicuna takes a comedic look at the Afterlife and Time Travel conundrums. Produced by Austin Beach and starring David Ault, John Bell, Angela Young, Jeffrey Billard, Lothar Tuppan and Jan Didur. Learn more about your ad choices. Visit megaphone.fm/adchoices

Sunday Showcase
Wavefront Short: Reservations

Sunday Showcase

Play Episode Listen Later May 30, 2021 15:43


Just released this week and heard on Wednesday Wonders, Jack Ward from Electric Vicuna takes a comedic look at the Afterlife and Time Travel conundrums. Produced by Austin Beach and starring David Ault, John Bell, Angela Young, Jeffrey Billard, Lothar Tuppan and Jan Didur. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Mutual Audio Network
Wavefront Short: Reservations(052621)

The Mutual Audio Network

Play Episode Listen Later May 26, 2021 15:25


This week, another new Electric Vicuna Productions episode. This time Jack Ward takes a comedic look at the Afterlife and Time Travel conundrums. Produced by Austin Beach, this episode stars David Ault, John Bell, Angela Young, Jeffrey Billard, Lothar Tuppan and Jan Didur. Learn more about your ad choices. Visit megaphone.fm/adchoices

Wednesday Wonders
Wavefront Short: Reservations

Wednesday Wonders

Play Episode Listen Later May 26, 2021 15:25


This week, another new Electric Vicuna Productions episode. This time Jack Ward takes a comedic look at the Afterlife and Time Travel conundrums. Produced by Austin Beach, this episode stars David Ault, John Bell, Angela Young, Jeffrey Billard, Lothar Tuppan and Jan Didur. Learn more about your ad choices. Visit megaphone.fm/adchoices

Screaming in the Cloud
The Switzerland of the Cloud with Sanjay Poonen

Screaming in the Cloud

Play Episode Listen Later May 18, 2021 40:46


About Sanjay: Sanjay Poonen is the former COO of VMware, where he was responsible for worldwide sales, services, support, marketing and alliances. He was also responsible for the Security strategy and business at VMware. Prior to VMware, Poonen held executive roles at SAP, Symantec, VERITAS and Informatica, and he began his career as a software engineer at Microsoft, followed by Apple. Poonen holds two patents as well as an MBA from Harvard Business School, where he graduated a Baker Scholar; a master's degree in management science and engineering from Stanford University; and a bachelor's degree in computer science, math and engineering from Dartmouth College, where he graduated summa cum laude and Phi Beta Kappa. Links: VMware: https://www.vmware.com/ leadership values: https://www.youtube.com/watch?v=lxkysDMBM0Q Twitter: https://twitter.com/spoonen LinkedIn: https://www.linkedin.com/in/sanjaypoonen/ spoonen@vmware.com: mailto:spoonen@vmware.com Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org, in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It’s an awesome approach. I’ve used something similar for years. Check them out. But wait, there’s more. They also have an enterprise option that you should be very much aware of: canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It’s awesome. If you don’t do something like this, you’re likely to find out that you’ve gotten breached, the hard way. Take a look at this. It’s one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That’s canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I’m a big fan of this. More from them in the coming weeks.Corey: Let’s be honest—the past year has been a nightmare for cloud financial management. The pandemic forced us to move workloads to the cloud sooner than anticipated, and we all know what that means—surprises on the cloud bill and headaches for anyone trying to figure out what caused them. The CloudLIVE 2021 virtual conference is your chance to connect with FinOps and cloud financial management practitioners and get a behind-the-scenes look into proven strategies that have helped organizations like yours adapt to the realities of the past year. 
Hosted by CloudHealth by VMware on May 20th, the CloudLIVE 2021 conference will be 100% virtual and 100% free to attend, so you have no excuses for missing out on this opportunity to connect with the cloud management community. Visit cloudlive.com/corey to learn more and save your virtual seat today. That’s cloud-l-i-v-e.com/corey to register.Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. I talk a lot about cloud in a variety of different contexts; this show is about the business of cloud. But, fundamentally, where cloud comes from was this novel concept, once upon a time, of virtualization. And that gave rise to a whole bunch of other things that later became, then containers, now it becomes Kubernetes, and if you want to go down the serverless path, you can. But it’s hard to think of a company that has had more impact on virtualization and that narrative than VMware. My guest today is Sanjay Poonen, Chief Operating Officer of VMware. Thank you for joining me.Sanjay: Thanks, Corey Quinn, it’s great to be with you and with your audience on this show.Corey: So, let’s start with the fun slash difficult questions. It’s easy to look at VMware as a way of virtualizing existing bare-metal workloads and moving those VMs around, but in many respects, that is perceived by some—ehem, ehem—to be something of a legacy model of cloud interaction where it solves the problem of on-premises, which is I’m really bad at running data centers so I’m just going to treat the cloud like a data center. And for some companies and some workloads, where, great, that’s fine. But isn’t that, I guess, a V1 vision of cloud, and if it is, why is VMware relevant to that?Sanjay: Great question, Corey. And I think it’s great to be straight up on a topic [unintelligible 00:02:01]. Yeah, I think you’re right. Listen, the ‘V’ in VMware is virtualization. The ‘VM’ is virtual machines. A lot of what is the underpinning of what made the private cloud, as we call it today, but the data center of the past successful was this virtualization technology. In the old days, people would send us electricity bills, before and after VMware, and how much they’re saving. So, this energy-saving concept of virtualization has been profound in the modernization of the data center and the advent of what’s called the private cloud. But as you looked at the public cloud innovate, whether it was AWS or even the SaaS applications—I mean, listen, the most popular capability initially on AWS was EC2 and S3, and the core of EC2 is virtualization. I think what we had to do, as this happened, was the foundation was certainly those services like EC2 and S3, but very quickly, the building phenomenon that attracted hundreds of thousands and I think now probably a few million customers to AWS was the large number of services, probably now 150, 200-odd services, that were built on top of that for everything from data, to AI, to a variety of other things that every year Andy Jassy and the team would build up. So, we had to make sure that over the course of the last, I’d say, certainly the last five to maybe eight years, we were becoming relevant to our customers that were a mix. There were customers who were large—I mean, we have about half a million customers—and in many cases, they have about 80, 90% of their workloads running on-prem and they want to move those workloads to the cloud, but they can’t just refactor and re-platform all of those apps that are running in the on-premise world. 
When they will try to do it by the end of the year—they may have 1000 applications—they got 10 done.Corey: Oh, and it’s not realistic and it’s unfair. I mean, there’s the idea of, “Oh, that’s legacy,” which is condescending engineering speak for it actually makes money because it’s been around for longer than six months. And sure you can have Twitter For Pets roll stuff out every day that you want; when you’re a bank, you have different constraints forced upon you. And I’m very sympathetic to folks who are in scenarios where they aren’t, for whatever reason, able to technically, culturally, or for regulatory reasons, be able to do continuous deployment of everything. I want to be very clear that I’ve in no way passing judgment on an entire sector of enterprise.Sanjay: But while that sector is important, there was also another sector starting to emerge: the Airbnbs, the Pinterests, the modern companies who may not need VMware at all as they’re building native, but may need some of our container in a new open-source capabilities. SaltStack was one of them; we will talk about that, I’m sure. So, we needed to be relevant to both customer communities because the Airbnbs of today, will be the Marriotts of tomorrow. So, we had to really rethink what is the future of VMware, what’s our existence in a public cloud phenomenon? That’s really what led to a complete watershed moment.I called publicly in the past sort of a Berlin Wall moment where Amazon and VMware were positioned pretty much as competitors for a long period of time when AWS was first started. Not that Andy was going around talking negatively about VMware, but I think people view these as two separate doors, and never the twain would meet. But when we decided to partner with them—I then quite frankly, the precursor to that was us divesting our public cloud strategy. We’d tried to build a competitive public cloud called vCloud Air between the period of 2012 and 2015, 2016—we had to reach an end of that movement, and catharsis of that, divest that asset, and it opened the door for a strategic partnership. But now we can go back to those customers and help them move their applications in a way that’s highly efficient, almost like a house on wheels, and then once it’s in that location in AWS—or one of the other public clouds—you can modernize it, too.So, then you get to both get the best of both worlds: get it into the public cloud, maybe retire some of your data centers if that’s what you want to do, and then modernize it with all the beautiful services. And that’s the best of both worlds. Now, if you have 1000 applications, you’re moving hundreds of them into the public cloud, and then using all of the powerful developer services on that VMware stack that’s built on the bare metal of AWS. So, we started out with AWS, but very quickly then, all the other public clouds, maybe the five or six that are named in the Gartner Magic Quadrant, came to us and said, “Well, if you’re doing that with AWS, would you consider doing that with us, too?”Corey: There’s definitely been an evolution of VMware. I mean, it’s in the name; you have the term VM sitting there. It’s easy to, at least from where I sit, think of, “Oh, VMware, back when running virtual machines was novel.” And there was a lot of skepticism around the idea. I’m going to level with you; I was a skeptic around virtualization. Then around cloud. 
Then around containers.And now I’m trying—all right I’m going to be in favor of serverless, which is almost certain to doom it because everything else that I’ve been skeptical of in this sense beyond any reasonable measure. So, there is this idea that VMs are this sort of old-school thinking. And that’s great if you have an existing workload that needs to be migrated, but there are a finite number of those in the world. As we turn towards net-new and greenfield build-outs, a lot of things are a lot more cloud-native than just hosting a bunch of—if you take the AWS example—EC2 instances hanging out in the network talking to other EC2 instances. Taking advantage of native offerings definitely seems to be on the rise. And there have been acquisitions that VMware has made. You talk about SaltStack, which was a great example, given that I wrote part of that very early on, and I don’t think the internet’s ever forgiven me for it. But also Bitnami—or BittenAMI, as I insist on pronouncing it—and you also acquired Wavefront. There’s a lot of interesting stuff that feels almost like a setting up a dichotomy of new VMware versus old VMware. What are the points of commonality there? What is the vision for the next 15 years of the company?Sanjay: Yeah, I think when we think about it, it’s very important that, first off, we acknowledge that our roots are what gives us sustenance because we have a large customer base that uses us. We have 80 million workloads running on that VMware infrastructure, formerly ESX, now vSphere. And that’s our heritage, and those customers are happy. In fact, they’re not, like, fleeing like birds into there, so we want to care for those customers.But we have to have a north star, like a magnet that pulls us into the modern world. And that’s been—you know, I talked about phase one was this really charting of the future of VMware for the cloud. Just as important has been focused on cloud-native and containers the last three, four years. So, we acquired Heptio. As you know, Heptio was founded by some of the inventors of Kubernetes who left Google, Joe Beda, and Craig McLuckie.And with that came a strong I would say relevancy, and trust to the Kubernetes, we’ve become one of the leading contributors to open-source Kubernetes. And that brain trust now, some of whom are at VMWare and many are in the community think of us very differently. And then we’ve supplemented that with many other moves that are much more cloud-native. You mentioned two or three of them: Bitnami, for that sort of marketplace; and then SaltStack for what we have been able to do in configuration management and infrastructure automation; Wavefront for container-based workloads. And we’re not done, and we think, listen, there will be many, many more things that the first 10, 15 years of VMware was very much about optimizing the private cloud, the next 10, 15 years could be optimizing for that app modernization cloud-native world.And we think that customers will want something that can work in a multi-cloud fashion. Now, multi-cloud for us is certainly private cloud and edge cloud, which may have very little to do with hardware that’s in the public cloud, but also AWS, Azure, and two or three other clouds. 
And if you think of each of these public clouds as mini skyscrapers—so AWS has 50 billion in revenue; I’m going to guess Azure is, like, 30, and then Google is I don’t know 12, 13; and then everyone else, and they’re all skyscrapers are different—it’s like, if we can be that company that fills the crevices between them with cement that’s valuable so that people can then build their houses on top of that, you’re probably not going to be best served with a container Stack that’s trapped to just one cloud. And then over time, you don’t have reasonable amount of flexibility if you choose to change that direction. Now, some people might say, “Listen, multi-cloud is—who cares about that?”But I think increasingly, we’re hearing from customers a desire to have more than just one cloud for a variety of reasons. They want to have options, portability, flexibility, negotiating price, in addition to their private cloud. So, it’s a two plus one, sometimes it might be a two plus two, meaning it’s a private cloud and the edge cloud. And I think VMware is a tremendous proposition to be that Switzerland-type company that’s relevant in a private cloud, one or two public clouds, and an edge cloud environment, Corey.Corey: Are you seeing folks having individual workloads that they want to flow from one cloud to another in a seamless way, or is it more aligned along an approach of having workload A lives in this cloud and workload B lives in this cloud? And you’re in a terrific position to opine on that more than most, given who you are.Sanjay: Yeah. We’re not yet as yet seeing these floating workloads that start here and move around, that’s—usually you build an application with purpose. Like, it sits here in this cloud and of course. But we’re seeing, increasingly, interest at customers’ not tethering it to proprietary services only. I mean, certainly, if you’re going to optimize it for AWS, you’re going to take advantage of EC2, S3, and then many of the, kind of, very capable [unintelligible 00:11:24], Aurora, there are others that might be there.But over time, especially the open-source movement that brings out open-source data services, open-source tooling, containers, all of that stuff, give ultimately customers the hope that certainly they should add economic value and developer productivity value, but they should also create some potential portability so that if in the future you wanted to make a change, you’re not bound to that cloud platform. And a particular cloud may not like us saying this, but that’s just the fact of how CIOs today are starting to think much more so as they build these up and as many of the other public clouds start to climb in functionality. Now, there are other use cases where particular SaaS applications of SaaS services are optimized for a particular [unintelligible 00:12:07], for example, Office 365, someone’s using a collaboration app, typically, there’s choices of one or two, you’re either using a G Suite and then it’s tied to Google, or it’s Office 365. But even there, we’re starting to see some nibbling around the edges. Just the phenomenon of Zoom; that wasn’t a capability that Microsoft brought very—and the services from Google, or Amazon, or Microsoft was just not as good as Zoom.And Zoom just took off and has become the leading video collaboration platform because they’re just simple, easy to use, and delightful. It doesn’t matter what infrastructure they run on, whether it’s AWS, I mean, now they’re running some of their workloads on Oracle. Who cares? 
It’s a SaaS service. So, I think increasingly, I think there will be a propensity towards SaaS applications over custom building. If I can buy it why would I want to build a video collaboration app myself internally, if I can buy it as a SaaS service from Zoom, or whoever have you?Corey: Oh, building it yourself would be ludicrous unless that was one of your core competencies.Sanjay: Exactly.Corey: And Zoom seems to have that on lock.Sanjay: Right. And so similarly, to the extent that I think IT folks can buy applications that are more SaaS than custom-built, or even on-prem, I mean, Salesforce—the success of Salesforce, and Workday, and Adobe, and then, of course, the smaller ones like Zoom, and Slack, and so on. So, it’s clear evidence that the world is going to move towards SaaS applications. But where you have to custom build an application because it’s very unique to your business or to something you need to very snap quickly together, I think there’s going to be increasingly a propensity towards using open-source types of tooling, or open-source platforms—Kubernetes being the best example of that—that then have some multi-cloud characteristics.Corey: In a similar note, I know that the term is apparently, at least this week on Twitter, being argued against, but what about cloud repatriation? A lot of noise has been made about people moving workloads from public cloud back to private cloud. And the example they always give is Dropbox moving its centralized storage service into an on-prem environment, and the second example is basically a pile of tumbleweeds because people don’t really have anything concrete to point at. Does that align with your experience? Is there a, I guess, a hidden wave of people doing a reverse cloud migration that just doesn’t get discussed?Sanjay: I think there’s a couple of phenomenons, Corey, that we watch here. Now, clearly a company of the scale of Dropbox has economics on data and storage, and I’ve talked to Drew and a variety of the folks there, as well as Box, on how they think about this because at that scale, they probably could get some advantages that I’m sure they’ve thought through in both the engineering and the cost. I mean, there’s both engineering optimization and costs that I’m sure Drew and the folks there are thinking through. But there’s a couple of phenomena that we do—I mean, if you go back to, I think, maybe three or four quarters ago, Brian Moynihan, the CEO of Bank of America, I think in 2019, mid to late 2019 made a statement in his earnings call, he was asked, “How do you think about cloud?” And he said, “Listen, I can run a private cloud cheaper and better than any of the public clouds, and I save 240%,” if I remember the data right.Now, his private cloud and Bank of America is a key customer [unintelligible 00:15:04] of us, we find that some of the bigger companies at scale are able to either get hardware at really good pricing, are able to engineer—because they have hundreds of thousands—they’re almost mini VMware, right, [unintelligible 00:15:18] themselves because they’ve got so many engineers. They can do certain things that a company that doesn’t want to hire those many—companies, Pinterest, Airbnb may not do. So, there are customers who are going to basically say, even prior to repatriation, that the best opportunity is a private cloud. 
And in that place, we have to work with our private cloud partners, whether it’s Dell or others, to make sure that stack of hardware from them plus the software VMware in the containers on top of that is as competitive and is best cost of ownership, best ROI. Now, when you get to your second—your question around repatriation, what we have found in certain regions outside the US because of sovereign data, sovereign clouds, sometimes some distrust of some of those countries of the US public cloud, are they worried about them getting too big, fear by monopoly, all those types of things, lead certain countries outside the US to think about something that they would need that’s sovereign to their country.And the idea of sovereign data and sovereign clouds does lead those to then investing in local cloud providers. I mean, for example in France, there is a provider called OVH that’s kind of trying to do some of that. In China, there’s a whole bunch of them, obviously, Alibaba being the biggest. And I think that’s going to continue to be a phenomenon where there’s a [federated said 00:16:32], we have a cloud provider program with this 4000 cloud providers, Corey, who built their stack on VMware; we’ve got to feed them. Now, while they are an individual revenue way smaller than the public clouds were, but collectively, they represent a significant mass of where those countries want to run in a local cloud provider.And from our perspective, we spent years and years enabling that group to be successful. We don’t see any decline. In fact, that business for us has been growing. I would have thought that business would just completely decline with the hyperscalers. If anything, they’ve grown.So, there’s a little bit of the rising tide is helping all boats rise, so to speak. And the hyperscaler’s growth has also relied on many of these, sort of, sovereign clouds. So, there’s repatriation happening; I think those sovereign clouds will benefit some, and it could also be in some cases where customers will invest appropriately in private cloud. But I don’t see that—I think if anything, it’s going to be the public cloud growing, the private cloud, and edge cloud growing. And then some of these, sort of, country-specific sovereign clouds also growing. I don’t see this being in a huge threat to the public cloud phenomena that we’re in.Corey: This episode is sponsored in part by our friends at Lumigo. If you've built anything from serverless, you know that if there's one thing that can be said universally about these applications, it's that it turns every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build applications. It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You've created more problems for yourself. Make one of them go away. To learn more, visit lumigo.io. Corey: I want to very clear, I think that there’s a common misconception that there’s this, somehow, ongoing fight between all the cloud providers, and all this cloud growth, and all this revenue is coming at the expense of other cloud providers. I think that it is simultaneously workloads that are being migrated from on-premises environments—yes—but a lot of it also feels like it’s net-new. It’s not just about increasingly capturing ever larger portions of the market but rather about the market itself expanding geometrically. For a long time, it felt like that was what tech was doing. 
Looking at the global IT spend numbers coming out of Gartner and other places, it seems like it’s certainly not slowing down. Does that align with your perception of it? Or are there clear winners and losers that are I guess, differentiating out?Sanjay: I think, Corey, you’re right. I think if you just use some of the data, the entire IT market, let’s just say it’s about $1 trillion, some estimates have it higher than that. Let’s break it down a little bit. Inside that 1 trillion market it is growing—I mean, obviously COVID, and GDP declined last year in calendar 2020 did affect overall IT, but I think let’s assume that we have some kind of U-shape or other kind of recovery, going into the second half of certainly into next year; technology should lead GDP in terms of its incline. But inside that trillion-dollar market, if you add up the SaaS market, it’s about $115 billion market.And these are companies like Salesforce, and Adobe, and Workday, and ServiceNow. You add them all up, and those are growing, I think the numbers were in the order of 15 or 20% in aggregate. But that SaaS market is [unintelligible 00:19:08]. And that’s growing, certainly faster than the on-prem applications market, just evidenced by the growth of those companies relative to on-premise investments in SAP or Oracle. And then if you look at the infrastructure market, it’s slightly bigger, it’s about $125 billion, growing slightly faster—20, 25%—and there you have the companies like AWS, Azure, and Google, and Alibaba, and whoever have you. And certainly, that growth is faster than some of the on-premise growth, but it’s not like the on-premise folks are declining. They’re growing at slower paces.Corey: It is harder to leave an on-premise environment running and rack up charges and blow out the bill that way, but it—not impossible, I suppose, but it’s harder to do than it is in public cloud. But I definitely agree that the growth rate surpasses what you would see if it were just people turning things on and forgetting to turn them off all the time.Sanjay: Yeah, and I think that phenomenon is a shift in spending where certainly last year we saw more spending in the cloud than on-premise. I think the on-premise vendors have a tremendous opportunity in front of them, which is to optimize every last dollar that is going to be spent in the data centers, private cloud. And between us and our partners like Dell and others, we’ve got to make sure we do that for our customer base that we’ve accumulated over last 10, 15 years. But there’s also a significant investment now moving to the edge. When I look at retailers, CPG companies—consumer packaged good companies—manufacturers, the conversation that I’m having with their C-level tech or business executives is all about putting compute in the stores.I mean, listen, what is the retailer concerned about? Fraud, and some of those other things, and empowering a quick self-service experience for a consumer who comes in and wants to check out of a Safeway or Walmart really quickly. These are just simple applications with local compute in the store, and the more that we can make that possible on top of almost like a nano data center or micro data center, running in the store with those applications resident there, talking—you know, you can’t just take all of that data, go back and forth to the cloud, but with resident services and capability right there, that’s a beautiful opportunity for the VMware and the Dells of the world. 
And that’s going to be a significant place where I think you’re going to see expansion of their focus. The edge market today is, I think, projected to be about $6 or $8 billion this year, growing to $25 billion over the next four or five years.

So, much smaller than the previous numbers I shared—you know, $125, $115 billion for SaaS and IaaS—but I think the opportunity there is especially in these industries that are federated: CPG, consumer packaged goods, manufacturing, retail, and logistics, too. You know, FedEx made a big announcement with VMware and Dell a few months ago about how they’re thinking about putting compute and local infrastructure at their distribution sites. I think this phenomenon, Corey, is going to happen in a number of different [unintelligible 00:21:48], and is a tremendous opportunity. Certainly, the public cloud vendors are trying to do that with Outposts and Azure Stack, but I think it does favor the on-premise vendors also having a very strong proposition for the edge cloud.

Corey: I assumed that the whole discussion with FedEx started with someone dramatically misunderstanding what it meant to ship code to production.

Sanjay: [laugh]. I mean, listen, at the end of the day, all of these folks who are in traditional industries are trying to hire world-class developers—like software companies—because all of them are becoming software companies. And I think the open-source movement, and all of these ways in which you have a software supply chain that’s more modernized, is affecting every company. So, if you went into the engineering and product teams of Rob Carter, who runs technology for FedEx, you’ll find that they may not have all of the sophistication of a world-class software company, but they’re getting increasingly very much digital in their focus on the next generation. And the same thing with UPS.

I was talking to the CEO of UPS; we had her come and speak at our kickoff. It’s amazing how much her lingo—she was the former CFO of Home Depot—I felt like I was talking to a software executive, and this is the CEO of UPS, a logistics company. So, I think increasingly, every company is becoming a software company at its core. And you don’t need to necessarily know all the details of containers and virtualization, but you need to understand how software and digital transformation, how technology can power your digital transformation.

Corey: One thing that I’ve noticed, the more I get to talk to people doing different things in different roles: at first, I was excited because I’d get to talk to the people who are really doing it right, where everything’s awesome. And I’m increasingly of the opinion that those sites don’t actually exist. Everyone talks about the great things that they’re doing, aspirationally, in certain areas, in terms of conference-ware, but you get down into the weeds, and everyone views their environment as a burning tire fire of sadness and regret. Everyone thinks other people are doing it way better than they are. And in some cases they’re embarrassed about it, in some cases they’re open about it, but I feel like we’re still in the early days where no one is doing things in the quote-unquote, “right ways,” but everyone thinks everyone else is.

Sanjay: Yeah, I think, Corey, that’s absolutely right. We are very much in the early days of all of this phenomenon.
I mean, listen, even the public cloud, Andy himself would say it’s [laugh]—he wouldn’t say it’s quite day one, but he would say it’s very early [unintelligible 00:24:03], even though they’ve had 15 years of incredible success and a $50 billion business. I would agree. And when you look at the customers and their persona—when I ask a CIO of an established company, not one of the modern ones that are built all cloud-native—what percentage of your workloads are in a public cloud versus private cloud, the vast majority is still in a data center or private cloud.

But with the intent—if it’s 90/10, let’s say 90 private, 10 public—for that to become 70/30, 50/50. But very rarely do I hear one of these large companies say it’s going to be 10/90 the opposite way in three, five years. Now, listen, every company that is more modern as it grows, the Zooms of the world, the Modernas, the Airbnbs, as they get bigger and bigger, they represent a completely new phenomenon of how they are building applications that are all cloud-native. And the beautiful thing for me, just as a former engineer and developer: I mean, I grew up writing code in C and C++, and then came BEA WebLogic, and IBM WebSphere, and [JGUI 00:25:04].

And I was so excited for these frameworks. I’m not writing code, thankfully, anymore because it would create lots of problems if I did. But when I watch the phenomenon, I think to myself, “Man, if I were a 22-year-old entering the workforce now, it’s one of the most exciting times to write code and be a developer because what’s available to you, both in the combination of these cloud frameworks and open-source frameworks, is immense.” To be able to innovate much, much faster than we did 25, 30 years ago when I was a developer.

Corey: It’s amazing, the pace of innovation. If cloud has changed nothing else, from my perspective, it’s been the idea that you can provision things without these hefty waiting periods. But I want to shift gears slightly because we’ve been talking about cloud for a bit in the context of infrastructure, and containers, and the rest, but if we start moving up the stack a little bit, that’s also considered cloud, which just seems to have that naming problem of namespace collision, just to confuse folks. But VMware is also active in this space, too. You’ve got things like Workspace ONE, and you’ve got a bunch of other endpoint options as well that are focused on the security space. Is that aligned? Is that just sort of a different business unit? How does that, I guess, resonate between the various things that you folks do? Because it turns out, you’re kind of a big company, and it’s difficult to keep it all straight from an external perspective.

Sanjay: Well, I think—listen, we’re roughly a little less than $12 billion in revenue last year. You can think of us in two buckets: everything in the first bucket is all that we talked about. Think of that as modernization of applications and cloud infrastructure, or what people might think of as PaaS and IaaS without the underlying hardware; we’re not trying to build servers and storage and networking at the hardware level, you know, and so on. But the software layer is what we’ve been talking about for the last 15, 20 minutes. The second part of our business is where we’re touching end-users and infrastructure, and securing it.

And we think that’s an important part because that also is something that, through software and the cloud, could be optimized.
And we’ve had a long-standing digital workspace business. In fact, when I came to VMware, it was the first business I was running, in terms of all the products in end-user computing. And our thesis was that many of the current tools, whether it’s the virtual desktop technology that people have from existing vendors or, even today, the security tools that they use, are just too cumbersome. Too heavy.

In many cases, people complain about the number of agents they have on their laptops, or that the way in which they secure firewalls is too expensive and there are too many of them. We felt we could radically—VMware gets involved in problems where we can radically simplify things with some disruptive innovation. And the idea, first in the digital workspace, was to radically reduce cost with software that was built for the cloud. And Workspace ONE and all of those things radically reduce the need for disparate technologies for virtual desktops, identity management, and endpoint management. We’ve done very well in that.

We’re a leader in that segment if you look at any of the analyst ratings, whether it’s Gartner or others. But security has been a more recent phenomenon, where we felt like it leads us very quickly into securing those laptops, because on those same laptops you have antivirus, you have a variety of tools, and on average, the CSOs, the Chief Security Officers, tell me they have way too many agents, way too many consoles, way too many alerts. If we could reduce that and have a single agent on a laptop, or maybe even agentless technology that secures this, that’s the Nirvana. And if you look at some of the recent things that have happened with SolarWinds, or Petya and WannaCry in the past, security is of top concern, Corey, to boards. And the more that we can do to clean that up, I think we can emerge—which we’re already starting to—as a cybersecurity layer. So, that’s a smaller part of our business, but, I mean, it’s multi-billion now, and we think it’s a tremendous opportunity for us to take what we’re doing in workspace and security and make that a growth vector.

So, I think both of these core areas, cloud infrastructure and modern applications—topic number one—and workspace and security—topic number two—are both tremendous opportunities for VMware in our journey to grow from a $12 billion company to one day, hopefully, a $20 billion company.

Corey: Would that we all had such problems, on some level. It’s really interesting seeing the evolution of companies going from relatively small companies and humble beginnings to these giants—I guess I want to use the term Colossus, but I’m not sure if that’s insulting or [laugh] not—it’s phenomenal just to see the different areas of business that VMware has expanded into. I mean, I’ve had other folks from your org talking about what a Tanzu is or might be, so we aren’t even going to go down that rabbit hole due to time constraints at this point, but one thing that I do want to get into, slightly, has been a recurring theme on the show, which is where does the next generation of leaders come from? Where do the next-generation engineers come from? And you’ve been devoting a bit of time to this. I think I saw one of your YouTube videos somewhat recently about your leadership values. Talk to me a little bit about that.

Sanjay: Yeah. Corey, listen, I’m glad that we’re closing this out on some of the soft topics, because I love talking to you and other talented analysts and thought leaders about technology. It’s my roots; I’m a technical person at heart. I love technology.
But I think the soft stuff is often the hard stuff. And the hard stuff is often the soft stuff. And what I mean by that is, when all of this peels away, your lasting legacy to the company is the people you invest in, the character you build. And, I mean, as an immigrant who came to this country when I was 18 years old with $50 in my pocket, I was very fortunate to have a scholarship to go to a really nice university, Dartmouth College, to study computer science. I mean, I grew up in India, and if it wasn’t for the opportunity to come here on a scholarship, I wouldn’t have [been here 00:30:32]. So, I consider everything a blessing and a learning opportunity, where I’m looking at life with a growth mindset: what can I learn? And we all need to cultivate more and more aspects of that growth mindset, where we move from being know-it-alls to learn-it-alls.

And one of the key things that I talk about—and all of your listeners listening to this, I welcome you to go to YouTube and search Sanjay Poonen and leadership; it’s a 10-minute video—I’ll pick one of them. Most often, as we get higher and higher in an organization, leaders tend to view things as a pyramid, and they’re kind of like this chief bird sitting at the top of the pyramid, and all these birds below them on branches are looking up, and all they see is crap falling down. Literally. That’s what happens when you look up at the bird. And our job as leaders is to invert that pyramid.

And to actually think about the person who is on the front lines. In a software company, it’s an engineer and a sales rep. They are the folks on the frontline: they’re writing code or selling code. They are the true people who are making things happen. And when we as leaders look at ourselves as the bottom of the pyramid—some people call that “servant leadership.” Whatever way you call it, the phrase isn’t the point—the point is to invert that pyramid and to take obstacles out of the way of people on the frontline. You really become not as interested in your own personal well-being; it’s about ensuring that those people in the middle layers, and certainly at the leaf levels of the organization, are enormously successful. Their success becomes your joy, and it becomes almost like a parent, right? I mean, Corey, you have kids; I’ve got kids. Imagine if you were a parent and you were jealous of your kid’s success.

I mean, I want my three children, my daughter, my two children, to do better than me, running races or whatever it is that they do. And I think as leaders, the more that we celebrate the successes of our teams and people, the more our lasting legacy is not our own success; it’s what we have left behind in other people. I say often there’s no success without successors. So, that mindset takes a lot of work because the natural tendency of the human mind and human behavior is to be selfish and think about ourselves. But yeah, it’s a natural phenomenon.

We’re born that way, we live and act that way, but the more that we start to create that mindset, and take it not just to our team but also to the community, the more we build a better society. And that’s something I’m deeply passionate about; I try to do my small piece for it, and in fact, I’m sometimes more excited about these topics of leadership than even technology.

Corey: It feels like it’s the stuff that lasts; it has staying power.
I could record a video now about technology choices and how to work with those technologies, and unless it’s about Git, it’s probably not going to be too relevant in 10 years. But leadership is one of those eternal things where, once you’ve experienced a certain level of success, you can really see what people do with it. The people that I like to surround myself with generally make it a point to send the elevator back down, so to speak.

Sanjay: I agree, Corey, and I’m glad that you do it. I’m always looking for people that I can learn from, and it doesn’t matter where they are in society. I mean, I think you often—I mean, this is classic Dale Carnegie; one of the books that my dad gave to me at a young age that I encourage everyone to read, How to Win Friends and Influence People, talked about how you can detect a person’s character based on the way they treat the receptionist, or their assistants, the people who might be lower down the totem pole from them. And most often you have people who kiss up and kick down. And I think when you build an organization, that’s typical. A lot of companies are built that way, where they kiss up and kick down, and you actually have an inverted sense of values. And I think you have to go back to some of those old-school ways that Dale Carnegie or Stephen Covey talked about, because you don’t have to build a culture that’s obnoxious; you can build a company that’s both nice and competitive. Nothing we’ve talked about for the last few minutes means that I’m any less competitive or that I don’t want to beat the competition and win a deal. But you can do it nicely. And even that’s something that I’ve had to grow in.

So, I think when we all look at ourselves as sculptures, works in progress, and we’re perfecting our craft, so to speak, both on the technical front, the product front, and customer relationships, but then also on the leadership and personal growth front, we actually become better people and we also build better companies.

Corey: And sometimes that’s really all that we can ask for. If people want to learn more about what you have to say and get your opinion on these things, where can they find you?

Sanjay: Listen, I’m very approachable. You can follow me on Twitter, I’m on LinkedIn [unintelligible 00:34:54], or my email spoonen@vmware.com. I’m out there.

I read voraciously, and I’m probably not as responsive sometimes, but I try—certainly, customers will hear from me within 24 hours because I try to be very responsive to our customers. But you can connect with me on social media. And I’m honored to be on your show, Corey. I’ve been reading your stuff since it first came out, and I’m obviously a fan of the way you’re thinking about things. Sometimes I feel I need to correct your opinion, and some of that we did today. [laugh]. But you’ve been very—

Corey: Oh, I would agree. I come out of this conversation with a different view of VMware than I went into it with. I’m being fully transparent on that.

Sanjay: And you’ve helped us. I mean, quite frankly, your blogs and your focus on this, like, is the V in VMware a bad word? Is it legacy? It’s forced us to think, so I think it’s iron sharpens iron. I’m very delighted that we connected, I don’t know if it was a year or two years ago.

And I’ve been a fan; I watch the stuff that you do at re:Invent, so keep going with what you’re doing. I think all of what you write and what you talk about is hopefully making an impact on people who read and listen.
And I look forward to continuing this dialogue, not just with me, but I think you’re talking to other people at VMware in the future. I’m not the smartest person at VMware, but I’m very fortunate to be [laugh] surrounded by many of them. So hopefully, you get to talk to them, also, in the near future.

Corey: [laugh]. I will, of course, put links to all that in the [show notes 00:36:11]. Thank you so much for taking the time to speak with me today. I really appreciate it.

Sanjay: Thanks, Corey, and all the best to you and your organization.

Corey: Sanjay Poonen, Chief Operating Officer of VMware. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with a condescending comment telling me that, in fact, it is a best practice to ship your code to production via FedEx.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

This has been a HumblePod production. Stay humble.

Sunday Showcase
Wavefront Short: Trans-Humanity

Sunday Showcase

Play Episode Listen Later Apr 25, 2021 18:18


At the twilight of humanity, a robot comes to the cabin door of an old woman to have her join the rest of the human species in the Singularity. Written by Jack J. Ward for the Wave Front Anthology. Produced by Austin Beach, this episode stars Rissa M, Jeffrey Billard, and Lothar Tuppan! Learn more about your ad choices. Visit megaphone.fm/adchoices

The Mutual Audio Network
Wavefront Short: Trans-Humanity(042521)

The Mutual Audio Network

Play Episode Listen Later Apr 25, 2021 18:18


At the twilight of humanity, a robot comes to the cabin door of an old woman to have her join the rest of the human species in the Singularity. Written by Jack J. Ward for the Wave Front Anthology. Produced by Austin Beach, this episode stars Rissa M, Jeffrey Billard, and Lothar Tuppan! Learn more about your ad choices. Visit megaphone.fm/adchoices

Wednesday Wonders
Wavefront Short: Trans-Humanity

Wednesday Wonders

Play Episode Listen Later Apr 21, 2021 19:06


This week, another new Electric Vicuna Productions episode. This time Jack Ward writes about the coming singularity and the end of the human race's "meat" existence in favor of a permanent cyber integration. Produced by Austin Beach, this episode stars Rissa M, Jeffrey Billard, and Lothar Tuppan! Learn more about your ad choices. Visit megaphone.fm/adchoices

The Mutual Audio Network
Wavefront Short: Trans-Humanity(042121)

The Mutual Audio Network

Play Episode Listen Later Apr 21, 2021 19:06


This week, another new Electric Vicuna Productions episode. This time Jack Ward writes about the coming singularity and the end of the human race's "meat" existence in favor of a permanent cyber integration. Produced by Austin Beach, this episode stars Rissa M, Jeffrey Billard, and Lothar Tuppan! Learn more about your ad choices. Visit megaphone.fm/adchoices

The Mutual Audio Network
Sunday Showcase for April 18th, 2021

The Mutual Audio Network

Play Episode Listen Later Apr 18, 2021 2:05


Welcome to your eleventy-first week of Sunday Showcase from the Mutual Audio Network- the world's largest curated collection of audio drama and fiction. We begin with Sonic Society #685 and Red House Rising and finish with "Alien Invasion Cancelled" from EVP's Wavefront anthology! Learn more about your ad choices. Visit megaphone.fm/adchoices

Sunday Showcase
Sunday Showcase for April 18th, 2021

Sunday Showcase

Play Episode Listen Later Apr 18, 2021 2:05


Welcome to your eleventy-first week of Sunday Showcase from the Mutual Audio Network- the world's largest curated collection of audio drama and fiction. We begin with Sonic Society #685 and Red House Rising and finish with "Alien Invasion Cancelled" from EVP's Wavefront anthology! Learn more about your ad choices. Visit megaphone.fm/adchoices

Sunday Showcase
Wavefront Short: Alien Invasion Cancelled

Sunday Showcase

Play Episode Listen Later Apr 18, 2021 18:10


At last! A new release in Electric Vicuna's The Wave Front Anthology series! Jack Ward pens, with tongue firmly in cheek, a look at what happens when aliens arrive in the modern age. Produced by Austin Beach, this episode stars Tanja Milojevic, Duane Noch, and Jack Ward as Modhan. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Mutual Audio Network
Wavefront Short: Alien Invasion Cancelled(041821)

The Mutual Audio Network

Play Episode Listen Later Apr 18, 2021 18:10


At last! A new release in Electric Vicuna's The Wave Front Anthology series! Jack Ward pens, with tongue firmly in cheek, a look at what happens when aliens arrive in the modern age. Produced by Austin Beach, this episode stars Tanja Milojevic, Duane Noch, and Jack Ward as Modhan. Learn more about your ad choices. Visit megaphone.fm/adchoices

Wednesday Wonders
Wavefront Short: Alien Invasion Cancelled

Wednesday Wonders

Play Episode Listen Later Apr 14, 2021 18:26


We return with Electric Vicuna's The Wave Front Anthology series. Jack Ward pens, with tongue firmly in cheek, a look at what happens when aliens arrive in the modern age. Produced by Austin Beach, this episode stars Tanja Milojevic, Duane Noch, and Jack Ward as Modhan. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Mutual Audio Network
Wavefront Short: Alien Invasion Cancelled(041421)

The Mutual Audio Network

Play Episode Listen Later Apr 14, 2021 18:26


We return with Electric Vicuna's The Wave Front Anthology series. Jack Ward pens, with tongue firmly in cheek, a look at what happens when aliens arrive in the modern age. Produced by Austin Beach, this episode stars Tanja Milojevic, Duane Noch, and Jack Ward as Modhan. Learn more about your ad choices. Visit megaphone.fm/adchoices

The I/O Tower
John Grower

The I/O Tower

Play Episode Listen Later Mar 4, 2021


Greetings, programs, and welcome to The I/O Tower: a podcast for all things TRON. I'm your host, David Fleming. In this episode, I talk with TRON scene coordinator John Grower, who details how he and others made the effects pipeline that brought us so many of our favorite scenes in TRON, such as the light cycles and the MCP! John regales us with stories of good times and hard work at Disney. He recalls long nights, dart gun fights, and playing Asteroids with Jeff Bridges and even Terry Gilliam of Monty Python fame - all from his office at Mickey Mouse Avenue and Dopey Drive! And there was that time he got disinvited from Disney's art department by...uhm...Art. John shares how the story of TRON prefigured the Internet today, and that TRON "lit a fire" under him to create better graphics and animation software for future films. John highlights breakthroughs that followed, such as the very first digital matte paintings! From Robert Abel & Associates, through Disney, and on to Wavefront and Santa Barbara Studios, enjoy this ride along with John on TRON and the many films and computer graphics breakthroughs of which he was part. END OF LINE

The Sonic Society
Episode 576- Virtual Votes

The Sonic Society

Play Episode Listen Later Nov 6, 2018 58:37


Tonight with David off, Jack is looking forward to the American midterm elections... Maybe a little too much so! He's got a new Electric Vicuna Productions Wavefront Anthology show entitled "Voting is Anonymous," featuring Jeffrey Billard, Lothar Tuppan, Jan Didur, Tanja Milojevic, and Pete Lutz, thinking back on Bill Hollweg. A second short from Pete Lutz and the 11th Hour Productions group called "A Real Bedtime Gory" and a SPECIAL Bells in the Batfry with John Bell round out the post-Halloween fun!

The Sonic Society
Episode 557: Pixies and other Pets

The Sonic Society

Play Episode Listen Later Apr 25, 2018 48:30


Tonight Rick Coste brings us the first episode of his new show, Pixie! And our second feature is the latest in the EVP collection of Wavefront Shorts, "Pets," starring the lads Rich Wentworth and Michael McQuilkin from Hadron Gospel Hour, and produced by Rich Frolich of Texas Radio Theatre.

dunk!records
Set and Setting • Specular Wavefront of...

dunk!records

Play Episode Listen Later Jan 27, 2017 5:29


Set and Setting • Specular Wavefront of... One hour of post-rock, post-metal, post-anything,...

The Sonic Society
Episode 488- Kraken Awakes

The Sonic Society

Play Episode Listen Later Oct 25, 2016 53:04


Tonight Jack and David present three shows, beginning with the feature Kraken Mare from the good folks at YAP Audio and The Audio Drama Production Podcast. Ending with two shorts- Bells in the Batfry episode #149 and the remake of the classic Wavefront short from EVP, "Name Please," starring Erika Sanderson and David Ault. It's Audio Drama Time!

The Sonic Society
Sonic Speaks- 0112- The Musical Voice of Sharon Bee

The Sonic Society

Play Episode Listen Later Nov 23, 2014 38:40


The incredible singer, musician, and composer Sharon Bee is our special guest this week on Sonic Speaks. Sharon talks to us about writing the music for Electric Vicuña Productions shows like The Dead Line, Gate, Wavefront, and Firefly: Old Wounds.

The Sonic Society
Episode 359- Seeing the Galaxy Master Gift Bell

The Sonic Society

Play Episode Listen Later Dec 18, 2013 59:20


This fantastic, fun-filled, frolicking frequency fiesta includes Seeing Ear Theatre's recently public-domain episode "Into the Sun", Electric Vicuna's Wavefront short "Galaxy Master versus the Varn", David having a fireside chat with the classic Christmas short story by O. Henry, "The Gift of the Magi", and an old favourite podcast special (Shh... we won't spoil it for you, we promise!) 2013- the year of the Sonic!

The Sonic Society
EPISODE 350- Never Alone in the Sonic

The Sonic Society

Play Episode Listen Later Oct 15, 2013 65:43


The end of the Videk Agenda brings us an original full-length Wavefront episode from EVP starring Jack Ward, Genevieve Jones, Tanja Milojevic, David Ault, and John Bell. Production editing and sound design by Michael L. Stokes. Music by Sharon Bee, Michael L. Stokes, and Kevin MacLeod, courtesy of Incompetech. Thank you EVERYONE from all of us at the Sonic Society for 350 regular season episodes. What a triumph!

The Sonic Society
Episode 299- New Frontiers of Sound

The Sonic Society

Play Episode Listen Later Sep 4, 2012 53:10


Jack Ward and David Ault usher in Season 8 of the Sonic Society with two original Electric Vicuna Productions offerings from the Wavefront Anthology: a full-length episode of "Borrowed Time" starring Joe Stofko, John Bell, Lyn Cullen, and Colleen MacIssac, written and directed by Jack J. Ward with post-production by John Bell! And "Name Please," another audio short from Jack J. Ward.