Podcasts about Kibana

  • 80 PODCASTS
  • 134 EPISODES
  • 46m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Dec 16, 2025 LATEST

Latest podcast episodes about Kibana

Unstoppable Mindset
Episode 397 – Unstoppable Purpose Found Through Photography with Mobeen Ansari

Dec 16, 2025 · 66:24


What happens when your voice is built through visuals, not volume? In this Unstoppable Mindset episode, I talk with photographer and storyteller Mobeen Ansari about growing up with hearing loss, learning speech with support from his family and the John Tracy Center, and using technology to stay connected in real time. We also explore how his art became a bridge across culture and faith, from documenting religious minorities in Pakistan to chronicling everyday heroes, and why he feels urgency to photograph climate change before more communities, heritage sites, and ways of life are lost. You'll hear how purpose grows when you share your story in a way that helps others feel less alone, and why Mobeen believes one story can become a blueprint for someone else to navigate their own challenge.

Highlights:
00:03:54 - Learn how early family support can shape confidence, communication, and independence for life.
00:08:31 - Discover how deciding when to capture a moment can define your values as a storyteller.
00:15:14 - Learn practical ways to stay fully present in conversations when hearing is a daily challenge.
00:23:24 - See how unexpected role models can redefine what living fully looks like at any stage of life.
00:39:15 - Understand how visual storytelling can cross cultural and faith boundaries without words.
00:46:38 - Learn why documenting climate change now matters before stories, places, and communities disappear.

About the Guest:
Mobeen Ansari is a photographer, filmmaker and artist from Islamabad, Pakistan. Having a background in fine arts, he picked up the camera during high school and photographed his surroundings and friends, a path that motivated him to be a pictorial historian. His journey as a photographer and artist is deeply linked to a challenge that he has faced since birth.
Three weeks after he was born, Mobeen was diagnosed with hearing loss due to meningitis, and this challenge has inspired him to observe people more visually, which eventually led him to being an artist. He does advocacy for people with hearing loss. Mobeen's work focuses on his home country of Pakistan and its people, promoting a diverse and poetic image of his country through his photos and films. As a photojournalist he focuses on human interest stories and has worked extensively on topics of climate change, global health and migration.

Mobeen has published three photography books. His first one, 'Dharkan: The Heartbeat of a Nation', features portraits of iconic people of Pakistan from all walks of life. His second book, 'White in the Flag', is based on the lives and festivities of religious minorities in Pakistan. Both these books have had two volumes published over the years. His third book, 'Miraas', is also about iconic people of Pakistan and follows 'Dharkan' as a sequel. Mobeen has also made two silent movies: 'Hellhole' is a black and white short film based on the life of a sanitation worker, and 'Lady of the Emerald Scarf' is based on the life of Aziza, a carpet maker in Guilmit in Northern Pakistan. He has exhibited in Pakistan and around the world, namely in the UK, Italy, China, Iraq, and across the US and UAE. His photographs have been displayed in many famous places as well, including Times Square in New York City. Mobeen is also a recipient of the Swedish Red Cross Journalism prize for his photography on the story of FIFA World Cup football manufacture in Sialkot.

Ways to connect with Mobeen:
www.mobeenansari.com
Facebook: www.facebook.com/mobeenart
LinkedIn: https://www.linkedin.com/in/mobeenansari/
Instagram: @mobeenansariphoto
X: @Mobeen_Ansari

About the Host:
Michael Hingson is a New York Times best-selling author, international lecturer, and Chief Vision Officer for accessiBe.
Michael, blind since birth, survived the 9/11 attacks with the help of his guide dog Roselle. This story is the subject of his best-selling book, Thunder Dog. Michael gives over 100 presentations around the world each year, speaking to influential groups such as ExxonMobil, AT&T, Federal Express, Scripps College, Rutgers University, Children's Hospital, and the American Red Cross, just to name a few. He is Ambassador for the National Braille Literacy Campaign for the National Federation of the Blind and also serves as Ambassador for the American Humane Association's 2012 Hero Dog Awards.

https://michaelhingson.com
https://www.facebook.com/michael.hingson.author.speaker/
https://twitter.com/mhingson
https://www.youtube.com/user/mhingson
https://www.linkedin.com/in/michaelhingson/

accessiBe Links:
https://accessibe.com/
https://www.youtube.com/c/accessiBe
https://www.linkedin.com/company/accessibe/mycompany/
https://www.facebook.com/accessibe/

Thanks for listening!
Thanks so much for listening to our podcast! If you enjoyed this episode and think that others could benefit from listening, please share it using the social media buttons on this page. Do you have some feedback or questions about this episode? Leave a comment in the section below!

Subscribe to the podcast
If you would like to get automatic updates of new podcast episodes, you can subscribe to the podcast on Apple Podcasts or Stitcher. You can subscribe in your favorite podcast app. You can also support our podcast through our tip jar: https://tips.pinecast.com/jar/unstoppable-mindset

Leave us an Apple Podcasts review
Ratings and reviews from our listeners are extremely valuable to us and greatly appreciated. They help our podcast rank higher on Apple Podcasts, which exposes our show to more awesome listeners like you. If you have a minute, please leave an honest review on Apple Podcasts.

Transcription Notes:

Michael Hingson  00:00
Access Cast and accessiBe Initiative presents Unstoppable Mindset.
The podcast where inclusion, diversity and the unexpected meet. Hi, I'm Michael Hingson, Chief Vision Officer for accessiBe and the author of the number one New York Times bestselling book, Thunder Dog: the story of a blind man, his guide dog and the triumph of trust. Thanks for joining me on my podcast as we explore our own blinding fears of inclusion, unacceptance and our resistance to change. We will discover the idea that no matter the situation, or the people we encounter, our own fears and prejudices often are our strongest barriers to moving forward. The Unstoppable Mindset podcast is sponsored by accessiBe, that's a c c e s s i capital B e. Visit www.accessibe.com to learn how you can make your website accessible for persons with disabilities, and to help make the internet fully inclusive by the year 2025. Glad you dropped by; we're happy to meet you and to have you here with us.

Michael Hingson  01:20
Well, hi everyone, and welcome to another episode of Unstoppable Mindset. I am your host, Michael Hingson. We're really glad that you are here, and today we are going to talk to Mobeen Ansari, and Mobeen is in Islamabad. I believe you're still in Islamabad, aren't you? There we go. I am, yeah. And so he is 12 hours ahead of where we are. So it is four in the afternoon here, and I can't believe it, but he's up at four in the morning where he is. Actually, I get up around the same time most mornings, but I go to bed earlier than he does. Anyway, we're really glad that he is here. He is a photographer, he speaks, he's a journalist in so many ways, and we're going to talk about all of that as we go forward. Mobeen also is profoundly hard of hearing and uses hearing aids. He was diagnosed as being hard of hearing when he was three weeks old. So I'm sure we're going to talk about that a little bit near the beginning, so we'll go ahead and start. So Mobeen, I want to welcome you to Unstoppable Mindset. We're really glad that you're here.
Mobeen Ansari  02:32
It's a pleasure to be here, and I'm honored to be on your show. Thank you so much.

Michael Hingson  02:37
Well, thank you very much, and I'm glad that we're able to make this work. I should explain that he is able to read what is going on the screen. I use a program called Otter to transcribe, when necessary, whatever I and other people in a meeting, or in this case in a podcast, are saying, and Mobeen is able to read all of that. So that's one of the ways, and one of the reasons, that we get to do this in real time. So it's really kind of cool, and I'm really excited by that. Well, let's go ahead and move forward. Why don't you tell us a little about the early Mobeen growing up? And obviously that's where your adventure starts in a lot of ways. So why don't you tell us about you growing up and all that.

Mobeen Ansari  03:22
So I'm glad you mentioned the captions part, because, you know, that has been really, really revolutionary. That has been quite a lifesaver, be it, you know, Netflix, be it anywhere I go in daily life. I read captions; there's an app on my phone that I use for real-life conversations, and that's where I, you know, get everything. That's where technology is pretty cool. So I do that because of my hearing loss. As you mentioned, when I was three weeks old, I had severe meningitis and, due to it, had lost hearing in both my ears. And so when my hearing loss was diagnosed, it was, you know, around the time we didn't have the resources, the technology that we do today.

Michael Hingson  04:15
When was that? What year was that, about?

Mobeen Ansari  04:19
1986. Okay, sorry, 1987. So yeah, so they figured that I had lost my hearing at three weeks of age, but didn't properly diagnose it until I think I was three months old. So yeah, then January was my diagnosis, okay.

Michael Hingson  04:44
And so how did you function, how did you do things when you were a young child?
Because at that point it was, kind of, well, much before you could use a hearing aid and learn to speak and so on. So what did you do?

Mobeen Ansari  05:00
So my parents would have a better memory of that than I would, but I would say that they were, you know, extra hard-working. They went an extra mile. I mean, I would say, you know, a hundred extra miles. My mother learned to be a speech therapist, and my father, he learned how to read audiograms, to learn the audiology, familiarize himself with hearing aid technology with an engineer's support. My parents worked around me. They went to a lot of doctors. Obviously, I was a very difficult child, but I think that actually laid the foundation in me becoming an artist. Because, you know, today the hearing aid fits right into my ear so you cannot see it, basically because my hair is longer. But back then, hearing aids used to be almost like on a harness, and used to be full of wires, so you would actually stick out like a sore thumb. So, you know, obviously you stand out in a crowd. So I would be very conscious, and I would often, you know, get asked what this is. So I would say, this is a radio. But for most part of my childhood, I was very introverted, but I absolutely loved art. My grandmother was a painter, and she was also a photographer, as well as my grandfather, a hobbyist photographer, and you know, seeing them create all of the visuals in different ways, I was inspired, and I would tell my stories in the form of sketching or making modified action figures. And photography was something I picked up way later on in high school, when the first digital camera had just come out, and I finally started really interacting with the world.

Michael Hingson  07:13
So early on you drew, because you didn't really use the camera yet. And I think it's very interesting how much your parents worked to make sure they could really help you.
As you said, your mother became a speech therapist, and your father learned about the technologies and so on. So when did you start using hearing aids?

Mobeen Ansari  07:42
That's a good question. I think I probably started using it when I was two years old. Okay, yeah, yeah, that's when I started using it, but then, you know, I think I'll probably have to ask my parents about that.

Mobeen Ansari  08:08
You know, go ahead — I think they worked around me. They really improvised on the situation. They learned as they went along, and I think I learned speech gradually. Did a lot of, you know, technical know-how about this. But I would also have to credit the John Tracy Center in Los Angeles, because, you know, back then there were no mobile phones, there were no emails, but my mother was put in touch with the John Tracy Center in LA, and they would send a lot of material back and forth for many years, and they would provide her guidance. They would provide her a lot of articles, a lot of details on how to help me learn speech. A lot of visuals were involved. And because of the emphasis on visuals, I think that kind of pushed me further to become an artist, because I would speak more, but with just, so to

Michael Hingson  09:25
say. So it was sort of a natural progression for you, at least it seemed that way to you, to start using art as a way to communicate, as opposed to talking.

Mobeen Ansari  09:39
Yeah, absolutely. You know, so I would like to fast forward a little bit to my high school. You know, I was always a very shy child up until, you know, my early teens, and the first digital camera had just come out. This was like 2001, 2002. That's when my dad got one, and I would take that to school. Today, you know, everyone has a smartphone; back then, if you had a camera, you were pretty cool. And that is when I started taking pictures of my friends. I started taking pictures of my teachers, of landscapes around me.
And I would even capture, you know, the funniest of things, like my friend getting late for school. And one day, a friend of mine got into a fight because somebody stole his girlfriend, or something like that happened, you know, that was a long time ago, and he lost the fight, and he turned off into a quiet corner to cry, and he was just sort of trying to hide all his vulnerability. I happened to be in the same place as him, and I had my camera, and I was like, should I capture this moment, or should I let this moment go? And well, I decided to capture it, and that is when human emotion truly started to fascinate me. So I was born in a very old city. I live in the capital, Islamabad, right now, but I was born in the city of Rawalpindi, and that is home to lots of old, you know, heritage sites, lots of old places, lots of old, interesting scenes. And you know, that always inspires you, that always makes you feel alive. And I guess all of these things came together, and, you know, I really got into the art of picture storytelling. And by the end of my high school graduation, everybody was given an award. The certificate that I was given was called pictorial historian, and that is what inspired me to really document everything. Document my country, document its people, document landscapes. In fact, that award has been in my studio for, you know, over 21 years, and it inspires me to this day.

Michael Hingson  12:20
So going back to the story you just told, did you tell your friend that you took pictures of him when he was crying?

Mobeen Ansari  12:32
Eventually, yes. You know, you're familiar with the context back then — he was a close friend, I know. So I mean, you know, everyone, we were all kids, so yeah, that was a very normal circumstance. But yeah, you know,

Michael Hingson  12:52
how did he react when you told him?

Mobeen Ansari  12:56
Oh, he was fine.
He was pretty cool about it, okay, but I should probably touch base with him. I haven't spoken to him for many years, yeah.

Michael Hingson  13:08
Well, but obviously you were good friends, and you were able to continue that. So that's pretty cool. So your hearing aids were also probably pretty large and pretty clunky as well, weren't they?

Mobeen Ansari  13:26
Yeah, they were. But you know, with time my hearing aids became smaller. Oh sure. So the hearing aid model that I'm wearing right now, that kind of started coming in place from 1995, 96 onwards. But you know, even today it's called, like, BTE — behind the ear — hearing aid. Even today, I still wear the large format because my hearing loss is more on the profound side, right? Like if I take my hearing aid off, I cannot hear. But that's a great thing, because if I don't want to listen to anybody, right, I can sleep peacefully at night.

Michael Hingson  14:21
Have you ever used bone conduction headphones or earphones?

Mobeen Ansari  14:30
I have actually used something — I forgot what it is called — but these are very specific kinds of earphones that get plugged into your hearing aid. So once you plug into that, you cannot hear anything else. But they discontinued that. So now they use Bluetooth.

Michael Hingson  14:49
Well, bone conduction headphones are devices that, rather than projecting the audio into your ear, actually project it straight into the bone, bypassing most of the ear. And I know a number of people have found them to be useful, like if you want to listen to music and so on, or listen to audio, you can connect them. There are Bluetooth versions, and then there are cable versions, but the sound doesn't go into your ear. It goes into the bone, which is why they call it bone conduction.

Mobeen Ansari  15:26
Okay, that's interesting, I think.

Michael Hingson  15:29
And some of them do work with hearing aids as well.
Mobeen Ansari  15:34
Okay, yeah, I think I've experienced that when they do the audiogram test; they put it, like, at the back of your head or something.

Michael Hingson  15:43
Yeah, the most common one, at least in the United States, and I suspect most places, is made by a company called AfterShokz. I think it's spelled A, F, T, E, R, S, H, O, K, S, but something to think about. Anyway. So you went through high school. Mostly, were your student colleagues and friends — and maybe not always friends — were they pretty tolerant of the fact that you were a little bit different than they were? Did you ever have major problems with people?

Mobeen Ansari  16:22
You know, I've actually had a great support system, and for the most part, I actually had a lot of amazing friends from college who are still my, you know, friends to this day — sorry, from school. I'm actually closer to my friends from school than I am to friends from college. Difficulties — you know, if you're different, you'll always be prone to people who sort of are not sure how to navigate that, or just want, you know, to sort of test things out, so to say. So it wasn't without its problems, but for the most part, surprisingly, I've had a great support system. But, you know, the biggest challenge was actually not being able to understand conversation. So I'm going to go a bit back and forth on the timeline here. So in 2021, I had something known as Meniere's disease. Meniere's disease is an inner ear condition that arises from stress, and what happens is that your hearing drops and is replaced by ringing and buzzing and all sorts of noise, according to my experience. It affects those with hearing loss much more than it affects those with regular, normal hearing. It's almost like tinnitus on steroids. That is how I would describe it.
And I've had about three occurrences of that, either due to stress or being around loud situations and noises, and that is where it became so challenging that it became difficult to hear, even with hearing aids or lip reading. So that is why I use a transcriber app wherever I go, and that has been a lifesaver, you know. So I believe that every time I have evolved through life, every time I have grown up, I've been able to better understand people. Like in the last, you know, four years I've been using this application, and now I think I'm catching up on all the nuances of conversation that I've missed, right? If I would talk to you five years ago, I would probably understand 40% of what you're saying. I would understand it by reading your lips or your body language, or ask you to write or type something for me, but now with this app, I'm able to actually get 99% of the conversation. So I think with time, people have actually become more tolerant and more accepting, and now there is more awareness. I think, awareness, right?

Michael Hingson  19:24
Well, yeah, I was gonna say it's been only like the last four years or so that a lot of this has become very doable in real time, and I think also AI has helped the process. But do you find that the apps and the other technologies, like what we use here — do you find that occasionally it does make mistakes, or do you not even see that very much at all?

Mobeen Ansari  19:55
You know, it does make mistakes, and the biggest problem is when there is no data, when there is no Wi-Fi network, or if it runs out of battery. You know, because now I use it almost 24/7, so my battery just drains very fast. And also because, you know, I travel in remote regions of Pakistan — because I'm a photographer, it's my job to travel to all of these places, all of these hidden corners. So I need to have conversations, especially in those places. And if the app doesn't work there, then we have a problem. Yeah, that is when it's a problem.
Sometimes, depending on accents, it doesn't pick up everything. So, you know, sometimes that happens, but I think technology is improving.

Michael Hingson  20:50
Let me ask the question this way. Certainly we're speaking essentially from two different parts of the world. When you hear or see me speak — because you're able to read the transcriptions — I'm assuming it's pretty accurate. What is it like when you're speaking? Does the system that we're using here understand you well, in addition to understanding me?

Mobeen Ansari  21:18
Well, yes, I think it does. So like, you know, I just occasionally look down to see if it's catching up on everything. Yeah, on that note, I've tried to improve my speech over time. I used to speak very fast, I used to mumble a lot, and so now I've become more mindful of it, especially during covid. You know, during covid, a lot of podcasts started coming out, and I had my own, actually, so I would, like, play myself back. I would look at the recording, and I would see what kind of mistakes I'm making. So I'm not sure if the transcription picks up everything I'm saying, but I do try and improve myself. That's like the next chapter of my life, where I'm trying to improve my speech, my enunciation.

Michael Hingson  22:16
Well, and that's why I was asking. It must be a great help to you to be able to look at your speaking through the eyes of the — well, not translation, but through the eyes of the speech program, so you're able to see what it's doing. And as you said, you can use it to practice. You can use it to improve your speech. Probably it is true that slowing down speech helps the system understand it better as well. Yeah, yeah. So that makes sense. Well, when you were growing up, your parents clearly were very supportive. Did they really encourage you to do whatever you wanted to do? Did they have any preconceived notions of what kind of work you should do when you grew up?
Or did they really leave it to you and say, we're going to support you with whatever you do?

Mobeen Ansari  23:21
Oh, they were supportive in whatever I wanted to do. They were very supportive in what my brothers had gone to do — I have two elder brothers, and they were engineers. And you know, my parents were always, always, you know, very encouraging of whatever career we wanted to follow. So a lot of credit goes to my parents also, because even though they were in very distinct fields, they actually had a great understanding of arts and photography, especially my dad, and that really helped me have conversations, you know, when I was younger, to have a better understanding of art. You know, because my grandmother used to paint a lot, and because she did photography — when she migrated from India to Pakistan in 1947, she took, like, really, really powerful pictures. And I think that instilled a lot of this in me as well. I've had a great support system that way.

Michael Hingson  24:26
Yeah, so your grandmother helped as well.

Mobeen Ansari  24:32
Oh yeah, oh yeah, she did. Very, very ahead of her time. She was very cool, and she made really large-scale paintings. So she was an example of always making the best of life, no matter where you are, no matter how old you are. She actually practiced Ikebana in the 80s. So that was pretty cool. So, you know, yeah, she played a major part in my life.

Michael Hingson  25:05
When did you start learning English? Because — I won't say it was a harder challenge for you, it was a different challenge — but clearly, I assume you learned originally Pakistani and so on. But how did you go about learning English?

Mobeen Ansari  25:23
Oh, so I learned both the languages when I started speech. So the main language is Urdu — U, r, d, u. So I started learning both my mother tongue and English at the same time.
You know, basically both languages were at work; both ran in parallel. But other than that, today I speak a bit of Italian and a few other regional languages of Pakistan. And in my school — I don't know why — we had French as a subject, but now I've completely forgotten French. Yeah, it kind of helped a lot. It's pretty cool, very interesting. But yeah, I mean, I love to speak English, just when I learned speech.

Michael Hingson  26:19
What did you major in when you went to college?

Mobeen Ansari  26:24
So I majored in painting. I went to the National College of Arts, and I did my bachelor's in fine arts. I did my major in painting, and I did my minors in printmaking and sculpture. So my background was always rooted in fine arts. Photography was something that ran in parallel until I decided that photography was the ultimate medium, that I absolutely love doing it. That became kind of the voice of my heart, a medium of expression, to this day.

Michael Hingson  27:11
Did they even have a major in photography when you went to college?

Mobeen Ansari  27:17
No, photography was something that I learned, you know, as a hobby, because I learned it during school, and I was self-taught. One of my uncles is a globally renowned photographer, so he also taught me, you know, the art of lighting. He also taught me how to interact with people, how to set up appointments. He taught me so many things. So you could say that being a painter helped me become a better photographer, and being a photographer helped me become a better painter. So both went hand in hand; both coexisted. Yeah, so photography is something that I don't exactly have a degree in, but something that I learned, because I'm more of an art photographer. I'm more of an artist than I am a photographer.

Michael Hingson  28:17
Okay, but you're using photography as kind of the main vehicle to display or project your art. Absolutely.
Mobeen Ansari  28:30
So what I try to do is I still try to incorporate painting into my photography, meaning I try to use the kind of lighting that you see in painting — all of these subtle colors that Rembrandt or Caravaggio used — so I try to sort of incorporate that. And any time I print my photographs, I don't print them on paper, I print them on canvas. There's a painterly element to it, so that my photos don't come off as just photos, portraits, or commercial in nature, but so that they look like paintings. And I think I have probably achieved that to a degree, because a lot of people ask me, you know, like, okay, is this a painting, or how did you create this painting? So I think, you know, whatever my objective was, I think I'm getting there. Probably that's what my aim is. So photography is my main objective, the main voice that I use, and it has helped me tell stories of my homeland. It has helped me tell stories of my life. It has helped me tell stories of people around me.

Michael Hingson  29:49
But what you do, as I understand it, is you may take pictures, you may capture the images with a camera, but then you put them on canvas.

Mobeen Ansari  30:05
Yeah. Every time I have an exhibition or display pictures — some are present in my room right now — I always print them on canvas, because when you print them on canvas, the colors become richer, right?

Michael Hingson  30:22
But what you're putting on canvas are the pictures that you've taken with your camera.

Mobeen Ansari  30:31
Oh, yeah, yeah, okay. But occasionally I try to do something like — I would print my photos on canvas, and then I would try to paint on them. It's something that I've been experimenting with, but I'm not quite there yet conceptually. Let's see in the future when these two things mesh properly. But for now, photographs.
Michael Hingson  31:02
Yeah, it's a big challenge. I can imagine that it would be a challenge to try to print them on canvas and then do some painting, because it is two different media, in a sense, but it will be interesting to see if you're able to be successful with that in the future. What would you say — it's easier today, though, to print your pictures on canvas, because you're able to do it from digital photographs, as opposed to what you must have needed to do, oh, 20 years ago and so on, where you had film and you had negatives and so on, and printing them like you do today was a whole different thing to do?

Mobeen Ansari  31:50
Oh yeah. It's funny you say that — just yesterday somebody asked me if I do photography on an analog camera, and I have a lot of them, like lots and lots of them. I still have a lot of black and white film, but the problem is nobody can develop it. I don't have a darkroom. Otherwise I would do that very often. Otherwise I have a few functional cameras to attend to; I'm consciously just thinking of reviving that. Let's see what happens with it. So I think it's become very difficult, you know, also because Pakistan has a small community of photographers, so the last person who everybody would go to for developing the film, or making sure that the analog cameras remained functional — he unfortunately passed away a few years ago, so I'm sort of trying to find somebody who can help me do this. It's a very fascinating process, but I haven't done any analog film camera photography for the last 15 years now. It's definitely a different ball game with, you know, film cameras, yeah; back then, you could just take 36 pictures, and today you can just, you know, take 300 and do all sorts of trial and error.
But you know, I think I'm a bit of a purist when it comes to photography, so I kind of try and make sure that I get the shot in the very first photograph, you know, because that's how my dad trained me on analog cameras — because back then, you couldn't see how the pictures were going to turn out until you printed them. So every time my dad took a picture, he would spend maybe two or three minutes on the settings, and he would really make the person in front of him wait a long time. And then he would work on the shutter speed or the aperture or the ISO, and once he would take that picture, it was perfect — no need to do anything to it.

Michael Hingson  34:09
But transferring it from an analog picture back then to canvas must have been a lot more of a challenge than it is today.

Mobeen Ansari  34:24
No, back then — canvas printing was something that I guess I just started discovering from 2014 onwards. So it was, like, well after that; this came later.

Michael Hingson  34:38
But you were still able to do it, because you just substituted canvas for the typical photographic paper that you normally would use, is what I hear you say?

Mobeen Ansari  34:50
Oh yeah, canvas printing was something that I figured out much later on, right.

Michael Hingson  34:59
But you were still able to do it with some analog pictures, until digital cameras really came into existence? Or did you always use it with a digital camera?

Mobeen Ansari  35:11
So basically, when I started off, I started with an analog camera. And obviously, you know, back in the 90s, if somebody asked you to take a picture, or we had to take a picture of something, you just had the analog camera at hand. Yeah. And my grandparents, my dad, they all had, you know, analog cameras. Some of them I still have with me.

Michael Hingson  35:36
But were you able to do canvas printing from the analog cameras? No? Yeah, that's what I was wondering.
Mobeen Ansari  35:43 No, I haven't tried, but I think it must have been possible. I've only tried canvas printing in the digital realm. Michael Hingson  35:53 Are you finding other people do the same thing? Are there a number of people that do canvas printing? Mobeen Ansari  36:02 A lot of them do, but I think it's not very common, because it's very expensive to print on canvas. Once you print, you don't know how it's going to turn out; a lot of images turn out very rough. And if the canvas print is exposed to the sun, then there's the risk of a lot of fading that can happen. So there's a lot of risk involved. Obviously, printing is a lot better now, it can withstand exposure to heat and sun, but canvas printing is not as common as, you know, matte paper printing, non-reflective matte paper. Some photographers do it. It depends on what kind of images you want to get out, what your budget is, and what kind of feel you're hoping to get out of it. My aim is very specific, because I aim to make it very painterly. That's my objective with the canvas. Michael Hingson  37:17 Yeah, you want them to look like paintings? Mobeen Ansari  37:21 Yeah, absolutely. Michael Hingson  37:23 Which I understand. It is a fascinating thing. I hadn't really heard of the whole idea of canvas printing with photography before, but it sounds really fascinating to have that, and it makes you a unique kind of person when you do that. If it works, and you're able to make it work, that's really a pretty cool thing to do. So you've done both painting and photography, and sculpting as well. What was the turning point that made you decide to go to photography as kind of your main way of capturing images? 
Mobeen Ansari  38:12 So it was in high school, because I was still studying, you know, art as a subject back then, and I was still consistently doing that. And then, like I mentioned earlier, my school gave me an award called pictorial historian. That is what inspired me to follow this goal. That is what set me on this path. That is what made me find this whole purpose of capturing history. You know, Pakistan is home to a lot of rich cultures, rich landscapes, incredible heritage sites. And I think that's when I became fascinated, because, you know, so many Pakistanis have these incredible stories of resilience and entrepreneurship, and they have incredible faces. So I guess that's what made me want to capture it, really. So I think, yeah, it was in high school, and then eventually in college, because, you know, both in school and college, I would be asked to take pictures of events. I'd be asked to take pictures of things around me. Where I went to college, it was surrounded by all kinds of, you know, old temples and churches and old houses and very old streets. So that always kept me inspired. Over time, I think it's just always been there in my heart. I decided to really, really go for it during college. Michael Hingson  40:00 Well, you've done pretty well with it, needless to say, which is really exciting and certainly very rewarding. Have you done any pictures that have really been famous, that people regard as exceptionally well done? Mobeen Ansari  40:22 Yes. Obviously, that's for the audience to decide. But right, I understand. Judging from my past exhibitions, and judging from social media, there have been quite a few, including one from just last week: I went to this abandoned railway station from British colonial times, abandoned now, but that became a very, very successful photograph. 
I was pretty surprised to see the feedback. But yes, in my career there have been maybe about 10 to 15 pictures that really, really stood out or transcended barriers. Because my art is about transcending barriers, whether it is cultural or political, anything, right? If a person in another part of the world views a portrait that I've taken in Pakistan and finds a connection with the subject, my mission is accomplished, because that's what I would love to do through art: to connect the world through art, and in the absence of verbal communication, I would like for this to be a visual communication to show where I'm coming from, or the very interesting people that I meet. And that is sort of what I do. So I guess, you know, there have been some portraits I've taken, some landscapes, some heritage sites, and including the subjects that I have photographed for my books, that have probably stood out in the minds of people. Michael Hingson  42:14 So you have published three books so far, right? Yes, but tell me about your books, if you would. Mobeen Ansari  42:24 So my first book is called Dharkan. I will just hold it up for the camera. It is my first book, and the book is about iconic people of Pakistan who have impacted its history, be it philanthropists, be it sports people, be it people in music or in performing arts, or be it even people who are sanitation workers or electricians. It's about people who have impacted the country, whether they are famous or not, but who I consider to be icons. Some of them are really, really famous, very well known people around the world, you know, obviously based in Pakistan. So my book is about chronicling them. It's about documenting them. It's about celebrating them. My second book is called White in the Flag. Michael Hingson  43:29 Okay, most people are going to listen to the podcast anyway, but go ahead. Yeah. 
Mobeen Ansari  43:35 So basically, White in the Flag is about the religious minorities of Pakistan, because, you know, Pakistan is largely a Muslim country. But when people around the world look at Pakistan, they don't realize that it's a multicultural society. There are so many religions. Pakistan is home to a lot of ancient civilizations, a lot of religions. And so this book documents the life and festivities of the religious minorities of Pakistan. You know, in my childhood I actually attended Easter mass, Christmas, and all of these festivities, because my father's best friend was a Christian. So we had that exposure to, you know, different faiths and how people practice them. So I wanted to document that. That's my second book. Michael Hingson  44:39 It's wonderful that you had parents that were willing to not only experience but share experiences with you about different cultures, different people, so that it gave you a broader view of society, which is really cool. Mobeen Ansari  44:58 Yeah, absolutely, absolutely. So my third book is a sequel to my first one, same topic, people who have impacted the country. And you know, Pakistan has a huge, huge population; it has no shortage of heroes and heroines and people who have created history in the country. So my first book has 98 people, which is obviously not enough to feature everybody. So my second book features 115 people, people who are not in the first book. Michael Hingson  45:41 Your third book? Yeah, okay, yeah. Well, you know, I appreciate that there's a very rich culture, and I'm really glad that you're making chronicles, or records, of all of that. Is there a fourth book coming? Have you started working on a fourth book yet? Mobeen Ansari  46:05 You know, in fact, yes, there is. 
Whenever people hear about my books, they assume that there's going to be landscape or portraits or street photography, or something that is more anthropological in nature. That's the photography I truly enjoy doing. These are the photographs that are displayed in my studio right now. But I never really felt ready for it, because Pakistan has, you know, four provinces, and when I started these books, I hadn't really documented everything. You know, I come from an urban city, and I was only taking pictures in the main cities at that time. But now I have taken pictures everywhere. I've been literally to every nook and cranny in the country. So now I have a better understanding, a better visual representation. So a fourth book may be down the line, maybe five years, 10 years, I don't know yet. Michael Hingson  47:13 Well, one thing that I know you're interested in, that you've at least thought about, is the whole idea behind climate change and the environment. And I know you've done some work to travel and document climate change and the environment and so on. Tell us more about that and where that might be going. Mobeen Ansari  47:36 So on that note, Michael, you know, there's a lot of flooding going on in Pakistan. You know, in just one day, almost 314 people died, and many others went missing. We had some of the worst flooding this time round, and we are still reeling from that, and we had some major flooding some years back as well. Well, climate change is no longer a wake-up call. We had to take action years ago, if not, you know, yesterday, and right now we are seeing the effects of it. And you know, Pakistan has a lot of high mountain peaks. It is home to the second highest mountain in the world, K2, and it has a lot of glaciers. You know, people talk about melting polar ice caps. People talk about the effects of climate change around the world, but I think it can be seen everywhere. 
So in Pakistan, especially, climate change is really, really rearing its head. I have traveled to the north to capture melting glaciers, to capture stories of how it affects different communities, the water supply and the agriculture. That is what I'm trying to do. And I take pictures of a desert down south where a sand dune is spreading over agricultural land, which it wasn't doing up until seven months ago. So, you know, climate change is everywhere. Right now we are experiencing rains every day. It's been the longest monsoon. So it has also affected the way of life. It has also affected ancient heritage sites. Some of these heritage sites are over 3,000 years old, and they have withstood, you know, so much, but they are not able to withstand what we are facing right now. And unfortunately, you know, with unregulated construction, with carbon emissions here and around the world, with deforestation, I felt that there was a strong need to document these places, to bring awareness of what is happening, to bring awareness to what we would lose if we don't look after Mother Nature. That's the work I have been doing on climate change, as well as on topics of global health and migration; those two topics are also very close to my heart. Michael Hingson  50:40 Have you done any traveling outside Pakistan? Mobeen Ansari  50:45 Oh, yeah. I mean, I've been traveling abroad since I was very little. I have exhibited in Italy, in the United States. I was just in the US recently. My brother lives in Dallas. So yeah, I keep traveling because of my workshops, because of my book events, or my exhibitions, usually here and around the world. Michael Hingson  51:14 Have you done any photography work here in the United States? Mobeen Ansari  51:19 Yeah, I have. I mean, in the US I mostly don't do photography directly, but I do workshops, because whatever I have captured in Pakistan, I present it there. 
The funny thing is that, you know, when you take so many pictures in Pakistan, you become so used to rustic beauty, a very specific kind of beauty, that you have a hard time capturing what's outside. But I've always enjoyed taking pictures in Mexico, the Netherlands, Italy, and India, because they have that rustic beauty. For the first time, though, I actually spent some time on photography in the US this year. I went to Chicago, and I was able to take pictures of the Chicago landscape, the Chicago cityscape, completely, you know, snowed in. That was a pretty cool kind of palette to work with. I got to take some night pictures with everything snowed in, traveling downtown Chicago. So yeah, sometimes I do photography in the US, but I'm mostly there to do workshops or exhibitions, or to meet my brother. Michael Hingson  52:34 What is your work process? In other words, how do you decide what ideas are worthwhile for you to pursue and record and chronicle? Mobeen Ansari  52:46 So I think it depends on where there's a story, where there is a lot of uniqueness. That is what stands out to me, and obviously beauty. But there has to be some uniqueness, you know. Like, if you look at one of the pictures behind me, this is a person who used to run a library that had been there since 1933, since his father's time. He had this really, really cool library, and, you know, that guy would always maintain it. That library had, you know, these old books on philosophy, on religion, on theology, and there was even a handwritten, 600-year-old copy of the Quran, which is the religious book for Muslims. So, you know, I found these stories very interesting. I found it interesting because he was so passionate about literature, and his library was pretty cool. That's something that you don't get to see. So I love seeing where there is a soul, where there is a connection. 
I love taking pictures of indigenous communities, and obviously, you know, landscapes as well. Also, you know, when it comes to climate change, when it comes to migration, when it comes to global health, that's where I take pictures to raise awareness. Michael Hingson  54:33 Yeah, and your job is to raise awareness. Mobeen Ansari  54:41 So that's what I try to do, if I'm well informed about it, or if I feel it's something that needs a light shone on it; that's when I take my photographs. And also, you know, whatever has this appeal, whatever has a beauty, whatever has a story, that's in the spur of the moment. Sometimes it's determined beforehand. This year particularly helped me understand how to pick my subjects. Even though I've been doing this for 22 years, this year I did not do as much photography as I normally do, and I'm very, very picky about it. Like last week, I went to this abandoned railway station. I decided to capture it because it's very fascinating. It's no longer in service, but the local residents of that area still use it. And if you look at it, it almost looks like a science fiction film. You know, I'm a big Star Wars and Star Trek fan, so yes, I'm in both camps. So I also like something that has these elements of fantasy to it. My work can be all over the place sometimes. Michael Hingson  56:09 Well, as a speaker, it's clearly very important to you to share your own personal journey and your own experiences. Why is that? Why do you want to share what you do with others? Mobeen Ansari  56:28 So earlier, I mentioned to you that the John Tracy Center played a major, major role in my life. They helped my mother. They provided all the materials, you know, in the late 80s and early 90s. So I will tell you what happened. My aunt, my mom's sister, used to live in the US, and when my hearing loss was diagnosed, my mother jumped right into action. 
I mean, both my parents did. So my mother landed in New York, and my aunt lived in New Jersey. So every day she would go to New York, and she ended up at the New York League for the Hard of Hearing. And a lady over there asked my mom, do you want your child to speak, or do you want him to learn sign language? And my mother, without any hesitation, said, I want my child to speak. And so she was put in touch with the John Tracy Center, and the rest is history; they provided everything that was needed. So I am affiliated with the center as an alumnus. And whenever I'm in the US, whenever I'm in LA, I visit the center to see how I can support parents of those with hearing loss. I remember when I went in 2016 and 2018, I gave a little talk to the parents of those with hearing loss, and I got to visit the place as well where I spent part of my childhood. Every time I went there, I saw the same fears, and I saw the same determination in parents of those with hearing loss as I saw in my parents' eyes. And by the end of my talk, they came up to me, and they would tell me, you know, that sharing my experiences helped them. It motivated them. It helped them not be discouraged, because having a child with hearing loss is not easy. And you know, like, there was this lady from Ecuador, and, you know, she spoke in Spanish, and she had a translator, you know, tell me this. So to be able to reach out with those stories, to be able to provide encouragement and any little guidance, or whatever little knowledge I have from my experience, it gave me this purpose. And for a lot of people, I think, you know, you feel less lonely in this. You feel heard, you feel seen. And when you share experiences, then you have sort of a blueprint for how you want to navigate, and one small thing can help the other person. That's fantastic. That's why I share my personal experiences, not just to help those with hearing loss, but those with any challenge. Because, you know, when you have a challenge, when, you know, a person is differently abled, it's a whole community in itself. You know, we lift each other up, and one story can help do that. Because, you know, like for me, my parents told me, never let your hearing loss be seen as a disability. Never let it be seen as a weakness, but let it be seen as a challenge that makes you stronger. And that is what I have aspired to do, be it with my hearing loss all of my life, be it with anything else. So I want to be able to become stronger from it and to share my experiences with it. And that is why I feel it's important to share the story. Michael Hingson  1:00:56 And I think that's absolutely appropriate, and that's absolutely right. Do you have a family of your own? Are you married? Do you have any children or anything? Not yet. Not yet. You're still working on that, huh? Mobeen Ansari  1:01:10 Well, so to say. Yeah, I've just been married to my work for way too long. Michael Hingson  1:01:16 Oh, there you are. There's nothing wrong with that. You've got something that you Mobeen Ansari  1:01:22 kind of get better at after a while, yeah. Michael Hingson  1:01:26 Well, if the right person comes along, then that will happen. But meanwhile, you're doing a lot of good work, and I really appreciate it. And I hope everyone who listens and watches this podcast appreciates it as well. If people want to reach out to you, how do they do that? Mobeen Ansari  1:01:45 They can send me an email, which is out there for everybody on my website. I'm on all my social media as well. My email is mobeen.ansari@gmail.com Michael Hingson  1:01:57 So can you spell that? Mobeen Ansari  Yeah, M, O, B, E... Michael Hingson  Do it once more. Mobeen Ansari  1:02:07 M, O, B, double E, N, dot, A, N, S, A, R, I, at gmail.com Michael Hingson  1:02:17 At gmail.com, okay, and your website is? Mobeen Ansari  1:02:26 The same as my name. 
Michael Hingson  1:02:27 So, okay, so it's mobeen dot ansari... Michael Hingson  1:02:35 Mobeen dot Ansari, or just Mobeen Ansari? Mobeen Ansari  1:02:41 Just mobeenansari.com, no dot. Michael Hingson  1:02:44 No dot between Mobeen and Ansari, okay. So it's www dot, M, O, B, E, E, N, A, N, S, A, R, I, dot com. Yes. Well, great. I have absolutely enjoyed you being with us today. I really appreciate your time and your insights, and I value a lot what you do. I think you represent so many things so well. So thank you for being here with us, and I want to thank all of you who are out there listening and watching the podcast today. I'd love to hear your thoughts. Please email me at michaelhi@accessibe.com, that's M, I, C, H, A, E, L, H, I, at accessibe, A, C, C, E, S, S, I, B, E, dot com, and we'd appreciate it if you would give us a five-star rating wherever you are observing the podcast. Please do that. We value that a great deal. And if you know anyone else who ought to be a guest, please let me know. We're always looking for people, and Mobeen, you as well. If you know anyone else who you think ought to be a guest on the podcast, I would appreciate it if you would introduce us. But for now, I just want to thank you one more time for being here. This has been absolutely wonderful. Thank you for being on the podcast with us today. Mobeen Ansari  1:04:08 Thank you so much. It's been wonderful, and thank you for giving me the platform to share my stories. And I hope that it helps whoever is watching this. Michael Hingson  1:04:26 You have been listening to the Unstoppable Mindset podcast. Thanks for dropping by. I hope that you'll join us again next week, and in future weeks for upcoming episodes. To subscribe to our podcast and to learn about upcoming episodes, please visit www.michaelhingson.com/podcast. Michael Hingson is spelled m i c h a e l h i n g s o n. 
While you're on the site, please use the form there to recommend people who we ought to interview in upcoming editions of the show. And also, we ask you and urge you to invite your friends to join us in the future. If you know of anyone or any organization needing a speaker for an event, please email me at speaker@michaelhingson.com. I appreciate it very much. To learn more about the concept of blinded by fear, please visit www.michaelhingson.com/blindedbyfear, and while you're there, feel free to pick up a copy of my free eBook entitled Blinded by Fear. The Unstoppable Mindset podcast is provided by AccessCast, an initiative of accessiBe, and is sponsored by accessiBe. Please visit www.accessibe.com. AccessiBe is spelled a c c e s s i b e. There you can learn all about how you can make your website inclusive for all persons with disabilities and how you can help make the internet fully inclusive by 2025. Thanks again for listening. Please come back and visit us again next week.

In Numbers We Trust - Der Data Science Podcast
#82: Monitoring in MLOps: Tools, Tipps und Best Practices aus der Praxis

In Numbers We Trust - Der Data Science Podcast

Play Episode Listen Later Oct 9, 2025 44:02


How do you actually keep track of things once data science services are running in production? In this episode, Sebastian and Michelle talk about how to set up a sensible monitoring stack, from logs and metrics to alerts and dashboards. We look at tools like Prometheus, Grafana, Loki, and ELK and clarify how they differ. We also cover best practices for alerting, useful feedback loops, and the question of when and how to integrate monitoring into the development process. **Summary** Goal of monitoring: fast feedback loops between development and production Difference between CI/CD and monitoring: the latter provides feedback after deployment Ideally, plan monitoring as early as the architecture stage Overview of monitoring targets: services, infrastructure, data, models Comparison of cloud vs. self-hosted monitoring (effort, flexibility, cost) Important tools: Prometheus/Grafana/Loki, ELK stack, Nagios/Icinga/Zabbix, Great Expectations, Redash/Metabase Best practices for alerting: sensible thresholds, avoiding "alert fatigue", clear responsibilities Bottom line: monitoring needs clear goals, sensible alerts, and good visualization to deliver real value   **Links** #23: Unsexy aber wichtig: Tests und Monitoring https://www.podbean.com/ew/pb-vxp58-13f311a Prometheus – open-source monitoring system: https://prometheus.io Grafana – visualization of metrics and logs: https://grafana.com Loki – log aggregation for Grafana: https://grafana.com/oss/loki/ ELK stack (Elasticsearch, Logstash, Kibana): https://www.elastic.co/elastic-stack Great Expectations – data validation and monitoring: https://greatexpectations.io Redash – SQL-based dashboards and visualizations: https://redash.io Metabase – self-service BI tool: https://www.metabase.com Nagios – classic system monitoring tool: https://www.nagios.org Icinga – modern Nagios fork: https://icinga.com Zabbix – monitoring platform for networks & servers: https://www.zabbix.com Prometheus Alertmanager: https://prometheus.io/docs/alerting/latest/alertmanager/ PagerDuty – incident response management: https://www.pagerduty.com  
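The alerting best practices the episode mentions (sensible thresholds, avoiding alert fatigue) can be sketched in plain Python. This is an illustrative toy, not one of the tools listed above: the alert fires only when the error rate stays above a threshold for several consecutive samples, so a single spike does not page anyone. The threshold and window values are made-up defaults.

```python
from collections import deque

class ErrorRateAlert:
    """Fire only when the error rate stays above a threshold for a
    sustained number of samples, to avoid alert fatigue from one-off spikes."""

    def __init__(self, threshold=0.05, sustain=3, window=10):
        self.threshold = threshold           # max tolerated error rate
        self.sustain = sustain               # consecutive breaches required
        self.samples = deque(maxlen=window)  # sliding window of (errors, total)
        self.breaches = 0

    def record(self, errors, total):
        """Record one scrape interval; return True if the alert should fire."""
        self.samples.append((errors, total))
        err = sum(e for e, _ in self.samples)
        tot = sum(t for _, t in self.samples)
        rate = err / tot if tot else 0.0
        if rate > self.threshold:
            self.breaches += 1
        else:
            self.breaches = 0                # reset on recovery
        return self.breaches >= self.sustain

alert = ErrorRateAlert()
print(alert.record(1, 100))    # single healthy sample: no alert
print(alert.record(20, 100))   # first breach: still quiet
print(alert.record(20, 100))   # second breach: still quiet
print(alert.record(20, 100))   # sustained breach: alert fires
```

Real stacks express the same idea declaratively, for example a Prometheus alerting rule with a `for:` duration routed through Alertmanager.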

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SANS Stormcast Wednesday, October 8th, 2025: FreePBX Exploits; Disrupting Teams Threats; Kibana and QT SVG Patches

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Play Episode Listen Later Oct 8, 2025 5:57


FreePBX Exploit Attempts (CVE-2025-57819) A FreePBX SQL injection vulnerability disclosed in August is being used to execute code on affected systems. https://isc.sans.edu/diary/Exploit%20Against%20FreePBX%20%28CVE-2025-57819%29%20with%20code%20execution./32350 Disrupting Threats Targeting Microsoft Teams Microsoft published a blog post outlining how to better secure Teams. https://www.microsoft.com/en-us/security/blog/2025/10/07/disrupting-threats-targeting-microsoft-teams/ Kibana XSS Patch CVE-2025-25009 Elastic patched a stored XSS vulnerability in Kibana. https://discuss.elastic.co/t/kibana-8-18-8-8-19-5-9-0-8-and-9-1-5-security-update-esa-2025-20/382449 Qt SVG Vulnerabilities CVE-2025-10728, CVE-2025-10729 The Qt Group fixed two vulnerabilities in the Qt SVG module. One of the vulnerabilities may be used for code execution. https://www.qt.io/blog/security-advisory-uncontrolled-recursion-and-use-after-free-vulnerabilities-in-qt-svg-module-impact-qt
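The FreePBX item above is a SQL injection; whatever the specific FreePBX code path, the generic defense is parameterized queries. A minimal sketch using Python's built-in sqlite3, with an invented table and payload for illustration:

```python
import sqlite3

# In-memory demo database with a toy users table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "x' OR '1'='1"   # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe: the driver binds the value, so the payload stays a literal string
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(vulnerable)  # every row comes back: the injection succeeded
print(safe)        # no rows come back: the payload was treated as data
```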

Kodsnack in English
Kodsnack 652 - The best of nature, with Grace Jansen

Kodsnack in English

Play Episode Listen Later Jul 22, 2025 36:12


Fredrik talks to Grace Jansen about cloud tools, and bringing them to your local machine in a better way. Opentelemetry is a great tool, but it’s not the whole story for observability. Gathering the data is just the first step. In the second half, we leave telemetry and talk about realizing you have things to share and sharing them with other people. Find out what makes you tick, and share experiences around that. Grace also shares some concrete presentation-building tips at the end. Ask the question, and be more you! Recorded during Øredev 2024. Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi. Links Grace Øredev 2024 Grace’s Øredev 2024 presentations: Cloud-native dev tools: bringing the cloud back to earth, and Becoming a cloud-native doctor Opentelemetry Distributed tracing Microprofile - open source specification for distributed tracing Jakarta - the artist previously known as Java EE Reactive messaging Openapi Telemetry Openliberty Quarkus Payara Jboss Prometheus Grafana Kibana Fluid Jaeger - tracing platform Torill Kornfeldt talked about resurrecting mammoths at Øredev 2015 Sven Jungmann - can we teach machines to smell? Support us on Ko-fi! Ants and AI models Holly Cummins Less waste, more joy, and a lot more green: How Quarkus makes Java better - Holly’s Øredev 2024 presentation Titles After-lunch lull So polyglot Ready for microservices (You need) Many minds Now I have a pile (Take) The best of nature The path was being them Something I bring to the table Ask the question A unique presentation
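The point that gathering telemetry is only the first step can be made concrete with a toy version of what a tracer such as Opentelemetry records: nested spans with parentage and timings that something downstream still has to export and interpret. This is a hand-rolled sketch and deliberately not the Opentelemetry API:

```python
import time
from contextlib import contextmanager

spans = []   # collected telemetry; a real system would export this
_stack = []  # tracks the current span nesting

@contextmanager
def span(name):
    """Record a named span with its parent and duration, like a tracer would."""
    parent = _stack[-1] if _stack else None
    _stack.append(name)
    start = time.perf_counter()
    try:
        yield
    finally:
        _stack.pop()
        spans.append({
            "name": name,
            "parent": parent,
            "duration_s": time.perf_counter() - start,
        })

# Simulate one request that fans out into two sub-operations
with span("handle_request"):
    with span("db_query"):
        time.sleep(0.01)
    with span("render"):
        time.sleep(0.01)

# Gathering is done; making sense of the data is the second step
for s in spans:
    print(f"{s['name']} (parent={s['parent']}) took {s['duration_s']:.3f}s")
```

In a real deployment the spans would be shipped to a backend like Jaeger, where the parent/child links become the trace view.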

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SANS Stormcast Friday Mar 7th: Chrome vs Extensions; Kibana Update; PrePw0n3d Android TV Sticks; Identifying APTs (@sans_edu, Eric LeBlanc)

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Play Episode Listen Later Mar 7, 2025 13:53


Latest Google Chrome Update Encourages uBlock Origin Removal The latest update to Google Chrome not only disabled the uBlock Origin ad blocker, but also guides users to uninstall the extension instead of re-enabling it. https://chromereleases.googleblog.com/2025/03/stable-channel-update-for-desktop.html https://www.reddit.com/r/youtube/comments/1j2ec76/ublock_origin_is_gone/ Critical Kibana Update Elastic published a critical Kibana update patching a prototype pollution vulnerability that would allow arbitrary code execution for users with the "Viewer" role. https://discuss.elastic.co/t/kibana-8-17-3-security-update-esa-2025-06/375441 Certified PrePw0n3d Android TV Sticks Wired is reporting on over a million Android TV sticks that were found to be pre-infected with adware https://www.wired.com/story/android-tv-streaming-boxes-china-backdoor/ SANS.edu Research Paper Advanced Persistent Threats (APTs) are among the most challenging to detect in enterprise environments, often mimicking authorized privileged access prior to their actions on objectives. https://www.sans.edu/cyber-research/identifying-advanced-persistent-threat-activity-through-threat-informed-detection-engineering-enhancing-alert-visibility-enterprises/
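Prototype pollution itself is specific to JavaScript's prototype chain, but the underlying hazard, recursively merging untrusted input into shared state, exists in any language. A hypothetical Python sketch of the same pattern (not Kibana code, and the config keys are invented):

```python
# Shared defaults used for every request
DEFAULTS = {"role": "viewer", "features": {"export": False}}

def naive_merge(dst, src):
    """Recursively merge src into dst in place -- the dangerous pattern."""
    for key, value in src.items():
        if isinstance(value, dict) and isinstance(dst.get(key), dict):
            naive_merge(dst[key], value)
        else:
            dst[key] = value
    return dst

# Each request "copies" the defaults, but only shallowly,
# so the nested features dict is still shared with DEFAULTS
user_config = dict(DEFAULTS)
naive_merge(user_config, {"features": {"export": True}})  # attacker-controlled

# The shared defaults were silently mutated for every future user
print(DEFAULTS["features"]["export"])  # True -- pollution of shared state
```

The fix is the same in spirit as in JavaScript: deep-copy before merging, or refuse to merge untrusted keys into shared objects at all.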

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SANS Stormcast Friday Feb 21st: Kibana Queries; Mongoose Injection; U-Boot Flaws; Unifi Protect Camera Vulnerabilities; Protecting Network Devices as Endpoint (Austin Clark @sans_edu)

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Play Episode Listen Later Feb 21, 2025 12:29


Using ES|QL In Kibana to Query DShield Honeypot Logs Using the "Elastic Search Piped Query Language" to query DShield honeypot logs https://isc.sans.edu/diary/Using%20ES%7CQL%20in%20Kibana%20to%20Queries%20DShield%20Honeypot%20Logs/31704 Mongoose Flaws Put MongoDB at Risk The object data modeling (ODM) library Mongoose suffers from an injection vulnerability leading to the potential of remote code execution in MongoDB https://www.theregister.com/2025/02/20/mongoose_flaws_mongodb/ U-Boot Vulnerabilities The open source boot loader U-Boot suffers from a number of issues allowing the bypass of its integrity checks. This may lead to the execution of malicious code on boot. https://www.openwall.com/lists/oss-security/2025/02/17/2 Unifi Protect Camera Update https://community.ui.com/releases/Security-Advisory-Bulletin-046-046/9649ea8f-93db-4713-a875-c3fd7614943f
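The Mongoose item is a query-operator injection: when user-supplied JSON is passed straight into a filter, a value like `{"$gt": ""}` changes the query's meaning (it matches any password rather than comparing one). A sketch of the usual mitigation, rejecting operator keys in untrusted input; this is a hypothetical helper, not Mongoose's actual fix:

```python
def reject_operators(value):
    """Refuse untrusted filter values that smuggle in MongoDB-style operators."""
    if isinstance(value, dict):
        for key, nested in value.items():
            if key.startswith("$"):
                raise ValueError(f"operator {key!r} not allowed in user input")
            reject_operators(nested)
    elif isinstance(value, list):
        for item in value:
            reject_operators(item)

# A well-formed login attempt passes the check
reject_operators({"username": "alice", "password": "secret"})

# An injection attempt is refused before it ever reaches the database
try:
    reject_operators({"username": "alice", "password": {"$gt": ""}})
except ValueError as e:
    print("rejected:", e)
```

The same idea appears in sanitizer middleware for Node.js apps: strip or reject `$`-prefixed keys from request bodies before building queries.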

Atareao con Linux
ATA 644 Un editor online, Traefik y otros servicios self hosted

Atareao con Linux

Play Episode Listen Later Nov 11, 2024 21:00


rustpad is a great #selfhosted service to host on your #vps or on a #linux server with #docker, and it lets several people edit a document together. This past week I attended a very interesting course on Kibana. The course used a tool that was completely unknown to me and that was, no more and no less, a collaborative editor. A tool that made it easy to share text with other people. And this, as you can imagine, caught my attention and made me look for an alternative that I could host on my own server. You might wonder what I want this for; it's very simple, something you have surely done more than once: copying text between different devices easily. For example, passing along a password, a username, or anything else. This led me to review some of the other similar services I have, such as pastebin or opengist, which I will also talk about in this episode. More information, links, and notes at https://atareao.es/podcast/644

Les Cast Codeurs Podcast
LCC 315 - les températures ne sont pas déterministes

Les Cast Codeurs Podcast

Play Episode Listen Later Sep 17, 2024 110:08


JVM summit, virtual threads, application stacks, licenses, determinism and LLMs, quantization, two tools of the episode, and much more. Recorded September 13, 2024. Episode download: LesCastCodeurs-Episode–315.mp3

News

Languages
Netflix uses Java heavily and ran into a problem with virtual threads in Java 21. Netflix engineers analyze the issue in this article: https://netflixtechblog.com/java–21-virtual-threads-dude-wheres-my-lock–3052540e231d Virtual threads can improve performance but bring challenges. A locking problem was identified: virtual threads blocking one another, leading to degraded performance and instability. Netflix is working to resolve these issues and take full advantage of virtual threads.
A syntax to mark a type as nullable or null-restricted may be coming to Java: https://bugs.openjdk.org/browse/JDK–8303099 Foo! would forbid null; Foo? would indicate that null is accepted; Foo?[]! would be a non-null array of nullable values. There are also syntax ideas for initializing null-restricted arrays. JEP: https://openjdk.org/jeps/8303099
The JVM Language Summit 2024 videos are online https://www.youtube.com/watch?v=OOPSU4LnKg0&list=PLX8CzqL3ArzUEYnTa6KYORRbP3nhsK0L1 Project Leyden Update; Project Babylon - Code Reflection; Valhalla - Where Are We?; An Opinionated Overview on Static Analysis for Java; Rethinking Java String Concatenation; Code Reflection in Action - Translating Java to SPIR-V; Java in 2024; Type Specialization of Java Generics - What If Casts Have Teeth? (with our very own Rémi Forax!)
Also "tip or tail" for the whole ecosystem. A few links on Babylon: code reflection to express foreign languages (such as SQL) inside Java: https://openjdk.org/projects/babylon/ and their LINQ-emulation example https://openjdk.org/projects/babylon/articles/linq

Libraries
Micronaut releases version 4.6 https://micronaut.io/2024/08/26/micronaut-framework–4–6–0-released/ essentially a big update of tons of modules to the latest dependency versions.
MicroProfile 7 makes some incompatible changes and evolutions https://microprofile.io/2024/08/22/microprofile–7–0-release/#general It removes Metrics and replaces it with Telemetry (metrics, logs and tracing); Metrics remains a spec, but standalone; MicroProfile 7 depends on the Jakarta Core Profile and no longer packages it; MicroProfile OpenAPI 4 and Telemetry 2 bring incompatible changes.
Quarkus 3.14 with Let's Encrypt and reflection-free Jackson serializers https://quarkus.io/blog/quarkus–3–14–1-released/ Hibernate ORM 6.6; reflection-free Jackson serializers; simple installation of Let's Encrypt certificates (notably via a helper command line, handy together with ngrok to tunnel to your localhost); a walk-back on @QuarkusTestResource vs @WithTestResource following reports of OOMEs and slowness in better-isolated tests.
Structured logging in Spring Boot 3.4 https://spring.io/blog/2024/08/23/structured-logging-in-spring-boot–3–4 Structured logs (often JSON) are easy to ship to backends such as Elastic or AWS CloudWatch, and can feed reporting and alerting. Spring Boot 3.4 supports structured logging out of the box, with the Elastic Common Schema (ECS) and Logstash formats, and it can also be extended with your own formats. You can also enable structured logging to a file.
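A minimal configuration sketch of the Spring Boot 3.4 structured-logging setup described above; the property names are those documented for the 3.4 release, but treat them as assumptions to verify against the Spring Boot reference:

```properties
# application.properties: ECS-formatted structured logs on the console,
# Logstash-formatted structured logs in a file
logging.structured.format.console=ecs
logging.structured.format.file=logstash
logging.file.name=application.log
```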
This can be used, for example, to print human-readable logs on the console while writing structured logs to a file for machine ingestion.

Infrastructure
CockroachDB, which had taken a Business Source License approach (source available, becoming open 3 years later), now moves to a proprietary license with source available https://www.cockroachlabs.com/blog/enterprise-license-announcement/ The Polyform project offers standardized licenses for various free-vs-paid needs https://polyformproject.org/

Cloud
Azure Functions: how cold starts are optimized https://www.infoq.com/articles/azure-functions-cold-starts/?utm_campaign=infoq_content&utm_source=twitter&utm_medium=feed&utm_term=Cloud Functions have naturally high latency, but not all long latencies hurt the business. Cold starts can be measured with the cloud provider's tools, so make use of them. In their experiment: 381 ms cold versus 10 ms afterwards, with end-to-end latency tracing. Strategies: keep-alive pings (waking the function at regular intervals to stay warm); in the function code, initializing connections and assembly loading during initialization; in host.json, configuring batching, disabling file-system logging, etc.; deploying functions as zips; reducing the size of code and files (which are copied to the cold server); on .NET, enabling ReadyToRun, which helps the JIT compiler; Azure instances with more CPU and memory cost more but lower the cold start; dedicated Azure instances for your functions (not shared with other tenants). The article then walks through concrete examples.

Web
Vue.js 3.5 is out https://blog.vuejs.org/posts/vue–3–5 Vue.js 3.5 key points: performance and memory optimizations, with significantly reduced memory consumption (–56%) and improved performance for large reactive arrays.
Fixes for stale computed values and memory leaks. New features: Reactive Props Destructure, simplifying prop declarations with default values; Lazy Hydration, controlling hydration of async components; useId(), generating stable unique IDs for SSR apps; data-allow-mismatch, suppressing hydration-mismatch warnings; custom-element improvements (app configuration, APIs to access the host and shadow root, mounting without Shadow DOM, nonces for tags); useTemplateRef(), obtaining template refs through the useTemplateRef() API; deferred Teleport, teleporting content to elements rendered after the component mounts; onWatcherCleanup(), registering cleanup callbacks in watchers.

Data and Artificial Intelligence
We often hear about quantized Large Language Models, i.e. using, say, 8-bit integers rather than 32-bit floats to reduce GPU memory needs while keeping accuracy close to the original. This article explains the quantization process very visually and intuitively: https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-quantization
Guillaume continues sharing his adventures with the LangChain4j framework.
How to do text classification: https://glaforge.dev/posts/2024/07/11/text-classification-with-gemini-and-langchain4j/ using LangChain4j's TextClassification class, which relies on vector embeddings to compare similar texts; using few-shot prompting, in several variants, in this other article: https://glaforge.dev/posts/2024/07/30/sentiment-analysis-with-few-shots-prompting/ and also how to go multimodal with LangChain4j (with the Gemini model) to analyze text and images, but also videos, audio content, and PDF files: https://glaforge.dev/posts/2024/07/25/analyzing-videos-audios-and-pdfs-with-gemini-in-langchain4j/
To vary the predictability or creativity of LLMs, some hyperparameters can be tuned, such as temperature, top-k and top-p. But do you really know how these parameters work? Two very clear and intuitive articles explain them: https://medium.com/google-cloud/is-a-zero-temperature-deterministic-c4a7faef4d20 https://medium.com/google-cloud/beyond-temperature-tuning-llm-output-with-top-k-and-top-p–24c2de5c3b16 Temperature rescales the probability of the next token, but variables remain: floating-point approximations, different stacks making these choices differently, and what to do when two tokens have equal probability. There are other ways to shape an LLM's output: top-k (which avoids infrequent tokens) and top-p (keeping the top tokens that together account for p% of the probability mass). Temperature is applied first, then top-k, then top-p; the articles explain what to use when.
The OSI proposes a definition of open-source AI https://www.technologyreview.com/2024/08/22/1097224/we-finally-have-a-definition-for-open-source-ai/ after months of heated debate: usable for any purpose without needing permission; researchers can inspect the components and study how the system works; the system can be modified for any purpose, including changing its behavior, and shared with others, with or without modification, for any use. It also defines levels of transparency (training data, source code, weights).
A long PostgreSQL retrospective at massive volumes and the locking problems involved https://ardentperf.com/2024/03/03/postgres-indexes-partitioning-and-lwlocklockmanager-scalability/ an article to reassure you that you will probably never hit this problem, told as a post-mortem, with advice on avoiding these cliffs.

Tooling
A first look at Gradle's future declarative notation https://blog.gradle.org/declarative-gradle-first-eap an article showing what this new declarative Gradle syntax looks like (alongside Groovy and Kotlin). A few videos show the support in Android Studio for now, as well as in an experimental tool, pending support in all IDEs. The idea is to avoid scripting and keep only a description of your build, which should improve IDE support for Gradle and enable fast completion, etc. Is it just me, or do we already have Maven for that?
Firefox support in Puppeteer https://hacks.mozilla.org/2024/08/puppeteer-support-for-firefox/ Puppeteer, the browser-automation library, now officially supports Firefox as of version 23. Developers can write automation scripts and run end-to-end tests interchangeably on Chrome and Firefox. The Firefox integration is based on WebDriver BiDi, a cross-browser protocol being standardized at the W3C. WebDriver BiDi eases multi-browser support and paves the way for simpler, more efficient automation.
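The sampling-parameter pipeline discussed above (temperature, then top-k, then top-p) can be sketched in a few lines. This is a minimal, framework-free Python illustration of the mechanics, not any particular vendor's implementation:

```python
import math

def sample_filter(logits, temperature=1.0, top_k=None, top_p=None):
    """Apply temperature, then top-k, then top-p to raw logits.
    Returns the renormalized probabilities of the surviving tokens."""
    # Temperature rescales logits: low T sharpens, high T flattens.
    scaled = [l / temperature for l in logits]
    # Softmax (shifted by the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = {i: e / total for i, e in enumerate(exps)}
    # Top-k: keep only the k most probable tokens.
    if top_k is not None:
        kept = sorted(probs, key=probs.get, reverse=True)[:top_k]
        probs = {i: probs[i] for i in kept}
    # Top-p: keep the smallest set of tokens whose cumulative mass >= p.
    if top_p is not None:
        cum, kept = 0.0, []
        for i in sorted(probs, key=probs.get, reverse=True):
            kept.append(i)
            cum += probs[i]
            if cum >= top_p:
                break
        probs = {i: probs[i] for i in kept}
    # Renormalize the survivors before the final random draw.
    z = sum(probs.values())
    return {i: p / z for i, p in probs.items()}
```

As temperature approaches zero the distribution collapses toward the argmax token; top-k and top-p then prune the tail before the final draw, which is why the order of application matters.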
Puppeteer's main features, such as log capture, device emulation, network interception and script preloading, are now available for Firefox. Mozilla sees WebDriver BiDi as an important step toward a better cross-browser testing experience. Experimental CDP (Chrome DevTools Protocol) support in Firefox will be removed at the end of 2024 in favor of WebDriver BiDi. Although Firefox is now officially supported, some APIs remain unsupported and are future work.
Guillaume created a @Retry annotation for JUnit 5, to re-run a "flaky" test https://glaforge.dev/posts/2024/09/01/a-retryable-junit–5-extension/ Guillaume had not found a default extension in JUnit 5 to replace JUnit 4's Retry rules, but an interesting discussion followed on social media, with links to extensions implementing this approach, such as JUnit Pioneer, which offers plenty of useful extensions https://junit-pioneer.org/docs/retrying-test/ or the rerunner extension https://github.com/artsok/rerunner-jupiter Arnaud also suggested configuring Maven Surefire to automatically re-run failed tests https://maven.apache.org/surefire/maven-surefire-plugin/examples/rerun-failing-tests.html The philosophical question: are intermittently failing tests tolerable?

Architecture
A former GraphQL fan is done with the technology and reflects on the alternatives https://bessey.dev/blog/2024/05/24/why-im-over-graphql/ GraphQL problems: security (authorization attacks, hard rate limiting, parsing of malicious queries); performance (the N+1 problem for data fetching and authorization, memory impact when parsing invalid queries); added complexity (coupling between business logic and the transport layer, harder maintenance and testing). Solutions considered: adopting REST APIs conforming to OpenAPI 3.0+, better documentation and type safety, tools to generate typed client/server code. Two OpenAPI implementation approaches: "implementation first" (generating the specification from the code) and "specification first" (generating the code from the specification). An interesting take from someone who does not use GraphQL daily; these were problems that were supposed to be fixed as the ecosystem and tooling matured, but for this person they showed their limits.
A 1980 presentation by Grace Hopper on the future of computers, recently declassified https://youtu.be/AW7ZHpKuqZg?si=w_o5_DtqllVTYZwt The modernity of what she describes is striking: problems we still have today, positive leadership, and the advantages of systems made of several computers.
Leader election with conditional writes on S3/GCS/Azure buckets https://www.morling.dev/blog/leader-election-with-s3-conditional-writes/ Leader election is the process of choosing one node among several to perform a task. Traditionally it relies on a distributed lock service such as ZooKeeper. Amazon S3 recently added support for conditional writes, which enables leader election without a separate service. The algorithm has nodes race to create a lock file in S3. The lock file includes an epoch number, incremented each time a new leader is elected. Nodes can determine whether they are the leader by listing the lock files and checking the epoch number.
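The leader-election algorithm described above can be sketched as follows; this is a minimal Python illustration in which an in-memory store stands in for S3's conditional put (put_object with IfNoneMatch="*"), so it models the idea rather than being production code:

```python
class Store:
    """In-memory stand-in for a bucket supporting conditional creates,
    analogous to S3 put_object(..., IfNoneMatch="*")."""
    def __init__(self):
        self.objects = {}

    def put_if_absent(self, key, value):
        # Succeeds only if the key does not exist yet (the conditional write).
        if key in self.objects:
            return False
        self.objects[key] = value
        return True

    def list_keys(self, prefix):
        return sorted(k for k in self.objects if k.startswith(prefix))


def try_acquire_leadership(store, node_id):
    """Race to create the lock file for the next epoch.
    Returns the epoch if this node became leader, else None."""
    existing = store.list_keys("lock/")
    # Epoch is incremented each time a new leader is elected.
    last_epoch = max((int(k.split("/")[1]) for k in existing), default=0)
    epoch = last_epoch + 1
    if store.put_if_absent(f"lock/{epoch}", node_id):
        return epoch
    return None  # another node won the race for this epoch
```

In the real setup the epoch number also serves as a fencing token, so downstream systems can reject writes from a stale leader.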
Beware: several leaders can end up elected at the same time (clocks that have drifted), so that has to be handled as well.

Methodologies
Guillaume Laforge interviewed by Sfeir, where he talks about the importance of curiosity, sharing, and code quality, sprinkled with a few photos of the Cast Codeurs! https://www.sfeir.dev/success-story/guillaume-laforge-maestro-de-java-et-esthete-du-code-propre/

Security
How CrowdStrike brought Windows and many companies to their knees https://next.ink/144464/crowdstrike-donne-des-details-techniques-sur-son-fiasco/ The incident came from a configuration update to Falcon, CrowdStrike's EDR https://www.crowdstrike.com/blog/falcon-update-for-windows-hosts-technical-details/ What is an EDR? An Endpoint Detection and Response system monitors your machine (network access, logs, ...) to detect unusual behavior. This watcher has to interact with the low-level layers of the system (network, sockets, system logs) and therefore hooks into the operating-system kernel. It reports information live to a platform that can then adapt responses in real time. Although the incident lasted less than an hour and a half on CrowdStrike's side, more than 8 million machines ended up out of service, stuck on the Blue Screen of Death, according to Microsoft https://blogs.microsoft.com/blog/2024/07/20/helping-our-customers-through-the-crowdstrike-outage/ This is not the first time; it had already happened a few months earlier on Linux. Since that was a kernel incompatibility, it mattered less, as IT departments handle such problems better on Linux https://stackdiary.com/crowdstrike-took-down-debian-and-rocky-linux-a-few-months-ago-and-no-one-noticed/
The CIS benchmarks, a pillar for the security of our cloud environments, and not only those!
(Katia HIMEUR TALHI) https://blog.cockpitio.com/security/cis-benchmarks/ CIS is a non-profit organization that develops standards to improve cybersecurity. The CIS benchmarks are a set of recommendations and best practices for securing IT systems. They can be used to harden security, comply with regulations, and standardize practices.

Law, society and organization
Microsoft signs an agreement with OVHcloud to have them drop their antitrust complaint https://www.politico.eu/article/microsoft-signs-antitrust-truce-with-ovhcloud/ The complaint was filed in Europe in the summer of 2021. The deal lets customers more easily deploy Microsoft solutions on the cloud provider of their choice; previously, running MS solutions elsewhere was more expensive and uncompetitive versus MS itself.
Elasticsearch and Kibana are open source again, adding the AGPL license to their other existing licenses https://www.elastic.co/fr/blog/elasticsearch-is-open-source-again The market of three years ago has changed: AWS is now a good partner, and the confusion between Elasticsearch and AWS's product has cleared up, hence the return to open source via the AGPL (Affero GPL). Elastic never stopped believing in open source, according to its founder Shay Banon. The move to AGPL is an additional option, not a replacement of any of the existing licenses. And right after, Elastic announced disappointing results, sending the stock down 25% https://siliconangle.com/2024/08/29/elastic-shares-plunge–25-lower-revenue-projections-amid-slower-customer-commitments/ https://unrollnow.com/status/1832187019235397785 and https://www.elastic.co/pricing/faq/licensing for a summary of Elastic's licenses.

Tools of the episode
MailMate, an email client with Markdown support that handles lots of email https://medium.com/@nicfab/mailmate-a-powerful-client-email-for-macos-markdown-integrated-email-composition-e218fe2accf3 Emmanuel uses it on his secondary mailboxes; a bit slow to start (sync), fast otherwise; virtual mailboxes (driven by queries); SpamSieve; macOS only, I believe.
Trippy, a network analyzer https://github.com/fujiapple852/trippy It combines traceroute and ping in a single CLI.

Conferences
The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors: September 17, 2024: We Love Speed - Nantes (France); September 17-18, 2024: Agile en Seine 2024 - Issy-les-Moulineaux (France); September 19-20, 2024: API Platform Conference - Lille (France) & Online; September 20-21, 2024: Toulouse Game Dev - Toulouse (France); September 25-26, 2024: PyData Paris - Paris (France); September 26, 2024: Agile Tour Sophia-Antipolis 2024 - Biot (France); October 2-4, 2024: Devoxx Morocco - Marrakech (Morocco); October 3, 2024: VMUG Montpellier - Montpellier (France); October 7-11, 2024: Devoxx Belgium - Antwerp (Belgium); October 8, 2024: Red Hat Summit: Connect 2024 - Paris (France); October 10, 2024: Cloud Nord - Lille (France); October 10-11, 2024: Volcamp - Clermont-Ferrand (France); October 10-11, 2024: Forum PHP - Marne-la-Vallée (France); October 11-12, 2024: SecSea2k24 - La Ciotat (France); October 15-16, 2024: Malt Tech Days 2024 - Paris (France); October 16, 2024: DotPy - Paris (France); October 16-17, 2024: NoCode Summit 2024 - Paris (France); October 17-18, 2024: DevFest Nantes - Nantes (France); October 17-18, 2024: DotAI - Paris (France); October 30-31, 2024: Agile Tour Nantais 2024 - Nantes (France); October 30-31, 2024: Agile Tour Bordeaux 2024 - Bordeaux (France); October 31-November 3, 2024: PyCon.FR - Strasbourg (France); November 6, 2024: Master Dev De France - Paris (France); November 7, 2024: DevFest Toulouse - Toulouse (France); November 8, 2024: BDX I/O - Bordeaux (France); November 13-14, 2024: Agile Tour Rennes 2024 - Rennes (France); November 16-17, 2024: Capitole Du Libre - Toulouse (France); November 20-22, 2024: Agile Grenoble 2024 - Grenoble (France); November 21, 2024: DevFest Strasbourg - Strasbourg (France); November 21, 2024: Codeurs en Seine - Rouen (France); November 27-28, 2024: Cloud Expo Europe - Paris (France); November 28, 2024: Who Run The Tech? - Rennes (France); December 2-3, 2024: Tech Rocks Summit - Paris (France); December 3, 2024: Generation AI - Paris (France); December 3-5, 2024: APIdays Paris - Paris (France); December 4-5, 2024: DevOpsRex - Paris (France); December 4-5, 2024: Open Source Experience - Paris (France); December 5, 2024: GraphQL Day Europe - Paris (France); December 6, 2024: DevFest Dijon - Dijon (France); January 22-25, 2025: SnowCamp 2025 - Grenoble (France); January 30, 2025: DevOps D-Day #9 - Marseille (France); February 6-7, 2025: Touraine Tech - Tours (France); April 3, 2025: DotJS - Paris (France); April 16-18, 2025: Devoxx France - Paris (France)

Contact us
To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs Contact us via Twitter https://twitter.com/lescastcodeurs Do a crowdcast or a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all info at https://lescastcodeurs.com/

The Cloud Pod
274: The Cloud Pod is Still Not Open Source

The Cloud Pod

Play Episode Listen Later Sep 11, 2024 68:02


Welcome to episode 274 of The Cloud Pod, where the forecast is always cloudy! Justin, Ryan and Matthew are your hosts this week as we explore the world of SnapShots, Maia, Open Source, and VMware - just to name a few of the topics. And stay tuned for an installment of our continuing Cloud Journey Series to explore ways to decrease tech debt, all this week on The Cloud Pod. Titles we almost went with this week: The Cloud Pod in Parallel Cluster; The Cloud Pod cringes at managing 1000 AWS accounts; The Cloud Pod welcomes Imagen 3 with less Wokeness; The Cloud Pod wants to be instantly snapshotted; The Cloud Pod hates tech debt. A big thanks to this week's sponsor: We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our Slack channel for more info. General News 00:32 Elasticsearch is Open Source, Again Shay Banon is pleased to call Elasticsearch and Kibana "open source" again. He says everyone at Elastic is ecstatic to be open source again; it's part of his and Elastic's DNA. They're doing this by adding AGPL as another license option next to ELv2 and SSPL in the coming weeks. They never stopped believing or behaving like an OSS company after they changed the license, but being able to use the term open source, backed by AGPL, an OSI-approved license, removes any questions or FUD people might have. Shay says the change 3 years ago was because they had issues with AWS and the market confusion their offering was causing. So, after trying all the other options, changing the license - all while knowing it would result in a fork with a different name - was the path they took. While it was painful, they said it worked. 3 years later, Amazon is fully invested in their OpenSearch fork, the market confusion has mostly gone, and their partnership with AWS is stronger than ever.
They are even being named partner of the year with AWS. They want to "make life of our users as simple as possible," so if you're OK with the ELv2 or the SSPL, then you can keep using that license. They aren't removing anything, just giving you another option with AGPL. He calls out trolls and people who will pick at this announcement, so they are attempting to address the trolls in advance. "Changing the license was a mistake, and Elastic now backtracks from it." We removed a lot of market confusion when we changed our license 3 years ago. And because of our actions, a lot has changed. It's an entirely different landscape now. We aren't living in the past. We want to build a better future for our users. It's because we took action then, that we are in a position to take action now. "AGPL i

Ask Noah Show
Ask Noah Show 406

Ask Noah Show

Play Episode Listen Later Sep 4, 2024 53:51


This week we dig back into home automation, we talk a bit about choosing cameras for a large camera system, and of course we answer your questions! -- During The Show -- 00:52 Intro Home automation Weekend of learning 03:48 Monitoring Remote Location (Cameras) - Rob Powerline adapters might work Ubiquiti Nano Beam Synology Surveillance Station (https://www.synology.com/en-global/surveillance) Frigate Do not put the NVR on the internet Privacy File server upload Home Assistant events 17:18 Camera Systems for Tribal Lands - William NDAA compliant cameras and NVRs ReoLink NVR banned ReoLink Cameras depends - bad idea NDAA compliant brands 360 Vision Technology (360 VTL) Avigilon Axis Communications BCD International Commend FLIR Geutebrück iryx JCI/Tyco Security Mobotix Pelco Rhombus Systems Seek Thermal Solink Vaion/Ava WatchGuard Main 3 NVR in use Exac Vision Avigilon Milestone NDAA conversation Noah's favorites Axis FLIR #### 25:09 Charlie Finds e-ink android - Charlie Boox Palma (https://shop.boox.com/products/palma) Why a camera? Nice for reading Lineage or Graphene will NOT work 27:57 ESPDevices for Light Switches - Avri Shelly's are ESP32 devices Devices can talk to each other 30:00 Beaming podcasts to Volumio and Roku - Tiny Pulse Audio Write in! 
31:40 News Wire 4MLinux 46 - opensourcefeed.org (https://www.opensourcefeed.org/4mlinux-46-release/) Debian Bookworm 12.7 - debian.org (https://www.debian.org/News/2024/20240831) Porteus 1.6 - porteus.org (https://forum.porteus.org/viewtopic.php?t=11426) Rhino Linux 2nd Release - itsfoss.com (https://news.itsfoss.com/rhino-linux-2024-2-release/) GNU Screen 5 - theregister.com (https://www.theregister.com/2024/09/03/gnu_screen_5/) Wireshark 4.4 - wireshark.org (https://www.wireshark.org/docs/relnotes/wireshark-4.4.0) Bugzilla releases - bugzilla.org (https://www.bugzilla.org/blog/2024/09/03/release-of-bugzilla-5.2-5.0.4.1-and-4.4.14/) Armbian 24.8 - armbian.com (https://www.armbian.com/newsflash/armbian-24-8-yelt/) Elasticsearch and Kibana licensing - businesswire.com (https://www.businesswire.com/news/home/20240829537786/en/Elastic-Announces-Open-Source-License-for-Elasticsearch-and-Kibana-Source-Code) Xe2 Linux Support - wccftech.com (https://wccftech.com/intel-push-out-xe2-graphics-enablement-linux-6-12-kernel/) Cicada3301 - thehackernews.com (https://thehackernews.com/2024/09/new-rust-based-ransomware-cicada3301.html) New Phi-3.5 AI Models - infoq.com (https://www.infoq.com/news/2024/08/microsoft-phi-3-5/) Open-Source, EU AI Act Compliant LLMs - techzine.eu (https://www.techzine.eu/blogs/privacy-compliance/123863/aleph-alphas-open-source-llms-fully-comply-with-the-ai-act/) View on Why AI Models Should be Open and Free for All - businessinsider.com (https://www.businessinsider.com/anima-anandkumar-ai-climate-change-open-source-caltech-nvidia-2024-8) 33:53 Hoptodesk Comparison to Team Viewer Hoptodesk (https://www.hoptodesk.com/) Free & Open Source Cross platform E2E Encryption Can self host the server Wayland is not officially supported 38:05 EmuDeck ArsTechnica (https://arstechnica.com/gaming/2024/08/emudeck-machines-pack-popular-emulation-suite-in-linux-powered-plug-and-play-pc/) Seeking funding Already been doing this on the steamdeck For retro games Drawing
unwanted attention Powered by Bazzite 41:05 Home Automation Zwave Great for nerds/tinkering Not for professional installs RadioRA 2 Licensed dedicated frequency Central planning Never had a failure Designed to be integrated Orbit Panels and Shelly Pro Line Game changer 100% reliable People don't want a wall of dimmers Seeed Studio mmWave Sensor (https://wiki.seeedstudio.com/mmwave_human_detection_kit/) I don't like WiFi for automation Steve's experience -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/406) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)

OpenObservability Talks
Redis is No Longer Open Source. Is Valkey the Successor? - OpenObservability Talks S5E01

OpenObservability Talks

Play Episode Listen Later Jun 27, 2024 60:25


Redis is no longer open source. Just a few months ago, in March 2024, the project was relicensed, leaving its vast community confused. But the community did not give up, and started work to fork Redis to keep it open. In this episode, we delve into the Valkey project, a prominent fork of Redis, established under the Linux Foundation, which brought together important figures from the Redis community, as well as leading industry giants including AWS, Google Cloud, Oracle and others. Valkey has rapidly gained momentum and just reached General Availability (GA). Join us as we explore the motivations behind Valkey's creation, hear first-hand stories on its foundation and journey to GA, and learn of its Redis compatibility, roadmap and implications for the open-source community. Valkey's first Contributor Summit is taking place June 5-6 in Seattle and we will bring you announcements and updates hot off the summit. Our guest is Kyle Davis, the Senior Developer Advocate on the Valkey project, and a past contributor to Redis. Kyle currently works at AWS, a founding member of the Valkey project, and has a long history with open source and with forks. He was a founding contributor to the OpenSearch project, which started as a fork of Elasticsearch and Kibana after the latter's relicensing away from open source. Most recently, Kyle worked to build a community around the Bottlerocket OSS project. The episode was live-streamed on 10 June 2024 and the video is available at youtube.com/live/HQ7TAdQpxu4 OpenObservability Talks episodes are released monthly, on the last Thursday of each month, and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and chime in with your comments and questions on the live chat.
⁠⁠https://www.youtube.com/@openobservabilitytalks⁠   https://www.twitch.tv/openobservability⁠ Show Notes: 01:12 - Episode intro, Kyle Davis' Redis background  05:43 - Redis relicensing off open source  10:10 - Valkey vs. other Redis open source forks 16:50 - drop-in replacement of Redis 19:35 - Redis user experience during the relicensing 28:50 - From fork to GA in less than a month 34:00 - Valkey roadmap and Contributor Summit updates 40:00 - Valkey's Technical Steering Committee and leadership 44:14 - what Valkey latest GA is about  Resources: Valkey announced: https://www.linkedin.com/posts/horovits_redis-opensource-activity-7179186700470861824-Gghq Valkey first GA and new member companies: https://www.linkedin.com/posts/horovits_redis-valkey-valkey-activity-7186263342041198593-fsY3 Announcements from Valkey's first Contributor Summit: https://www.linkedin.com/posts/horovits_valkey-welcomes-new-partners-amid-growing-activity-7209084153718362112-OfdI/ For Kubernetes 10th anniversary - special episode with Kelsey Hightower: https://logz.io/blog/kubernetes-and-beyond-2023-reflection/?utm_source=devrel&utm_medium=devrel Socials: Twitter:⁠ https://twitter.com/OpenObserv⁠ YouTube: ⁠https://www.youtube.com/@openobservabilitytalks⁠ Dotan Horovits ============ Twitter: @horovits LinkedIn: in/horovits Mastodon: @horovits@fosstodon Kyle Davis ======== LinkedIn: linkedin.com/in/kyle-davis-linux/ Mastodon: @linux_mclinuxface@fosstodon.org

Programmers Quickie
Kibana KQL vs. Lucene

Programmers Quickie

Play Episode Listen Later Jun 22, 2024 2:18


WeSpeakCloud
Retour vers le futur d'Elastic

WeSpeakCloud

Play Episode Listen Later Nov 24, 2023 58:40


The famous "ELK" stack (Elasticsearch, Logstash, Kibana) needs no introduction. Born in the early 2010s, Elasticsearch grew rapidly, a rise that accelerated with the creation of the company Elastic NV, which funds its development through a set of paid services. After going through several transformations of the tech world (the move to the Cloud, the rise of "as a Service" offerings, and the emergence of AI), where does Elastic stand today? Where is Elasticsearch headed? With David Pilato, Developer Evangelist at Elastic, let's take the time to look at this solution, which has managed to reinvent itself regularly while keeping its original spirit.

Like and share:
David's music podcast: https://podcasts.apple.com/fr/podcast/dj-dadoo-net-mixes/id505824965
And his website (which streams the podcast directly): https://djdadoo.pilato.fr/

CHAOSScast
Episode 74: Building on Top of CHAOSS Software

CHAOSScast

Play Episode Listen Later Nov 21, 2023 41:53


CHAOSScast – Episode 74

On this episode, our host Georg Link kicks off the discussion, introducing a stellar lineup of panelists including Sean Goggins, Yehui Wang, Mike Nolan, and Cali Dolfi. The topics discussed today are the CHAOSS software, Augur, and GrimoireLab, and the different applications built on top of this software. The panel members discuss the projects they are involved in, such as the Augur project, OSS Compass, and Project Aspen's 8Knot. Then, we'll delve into Mystic's prototype software, aiming to transform how academic contributions are recognized and valued. The discussion dives deep into the role of CHAOSS software in open source and community health, talks about the Augur and GrimoireLab projects, ecosystem-level analysis, and data visualization. Press download now to hear more!

[00:00:58] The panelists each introduce themselves.
[00:03:03] Georg explains the origins of CHAOSS software, particularly Augur and GrimoireLab, and their development. He dives into GrimoireLab's focus on data quality, flexibility, and its identity management tool, Sorting Hat.
[00:05:55] Sean details Augur's inception, its focus on a relational database, and its capabilities in data collection and validation. Georg and Sean recall Augur's early days, focusing on GitHub archive data, and its evolution into a comprehensive system.
[00:09:28] Yehui discusses OSS Compass, its goals, the integration of metrics models, and the choice of using GrimoireLab as a backend. He elaborates on OSS Compass's ease of use and the adoption of new data sources like Gitee.
[00:14:16] Mike inquires about the handling of the vast number of repositories on Gitee, and Yehui explains using a message bus and RabbitMQ for both data handling and parallel processing. Sean clarifies that Gitee is a Git platform similar to GitHub and GitLab, and OSS Compass is the metrics and modeling tool.
[00:15:29] Cali asks about the visualization tool used, and Yehui mentions moving away from Kibana to front-end technologies and libraries like ECharts, an Apache open source project, for creating visualizations.
[00:16:29] Cali describes 8Knot, built under Project Aspen in Plotly Dash and Repel, focusing on mapping open source ecosystems using Augur data. She emphasizes the data science approach to analyzing open source communities and the templated nature of 8Knot for easy visualization creation by data scientists.
[00:20:19] Sean comments on the ease of adding new visualizations with Dash Plotly technology in 8Knot. Cali adds that new visualizations can be easily made and that 8Knot is connected to a maintained Augur database but can also be forked for specific community and company needs.
[00:23:42] Georg underlines the importance of ecosystem-level analysis, especially for software supply chain security. Cali shares the goals of analyzing ecosystems to understand relationships between projects, influenced by Red Hat's interest in investing in interconnected communities.
[00:26:30] The conversation shifts to Mystic, and Mike describes it as a prototype software integrating both GrimoireLab and Augur, with the goal of better integrating these projects through development.
[00:27:30] Mike outlines Mystic's goal to serve as a front-end to data collection systems, with a specific focus on the academic community's contributions to technology research. He envisions Mystic as a tool for academics to measure the community health and impact of their projects, aiding in tenure and promotion cases.
[00:30:52] Yehui asks about the integration of GrimoireLab and Augur within Mystic and the selection of components for the solution. Mike explains the early stages of integration and the plan to combine data collection services from GrimoireLab into Augur to support undergraduate student development.
[00:32:30] Mike details research on Mystic, including interviews with faculty from various departments to understand their digital collaboration and artifact creation. He aims to develop generalized models of collaboration applicable to multiple data sources, allowing systems like Mystic to support diverse academic disciplines.

Value Adds (Picks) of the week:
[00:36:26] Georg's pick is focusing on the slogan, "One day at a time."
[00:37:12] Cali's pick is doing a Friendsgiving this week.
[00:38:08] Sean's pick is the launch of the TV show 'Moonlighting' from the '80s.
[00:38:49] Yehui's pick is riding his bike to work, which is peaceful for him.
[00:39:52] Mike's pick is attending The Turing Way Book Dash.

Panelists: Georg Link, Sean Goggins, Michael Nolan, Cali Dolfi, and Yehui Wang.

Links:
CHAOSS (https://chaoss.community/)
CHAOSS Project X/Twitter (https://twitter.com/chaossproj?lang=en)
CHAOSScast Podcast (https://podcast.chaoss.community/)
podcast@chaoss.community (mailto:podcast@chaoss.community)
Ford Foundation (https://www.fordfoundation.org/)
Georg Link Website (https://georg.link/)
Sean Goggins Website (https://www.seangoggins.net/)
Mike Nolan LinkedIn (https://www.linkedin.com/in/mikenolansoftware/?originalSubdomain=uk)
Cali Dolfi LinkedIn (https://www.linkedin.com/in/calidolfi/)
Yehui Wang GitHub (https://github.com/eyehwan)
Augur (https://github.com/chaoss/augur)
GrimoireLab (https://chaoss.github.io/grimoirelab/)
Perceval-GitHub (https://github.com/chaoss/grimoirelab-perceval)
Gitee (https://gitee.com/)
RabbitMQ (https://www.rabbitmq.com/)
OSS Compass-GitHub (https://github.com/oss-compass)
Kibana (https://www.elastic.co/kibana)
Apache ECharts (https://echarts.apache.org/en/index.html)
8Knot (https://eightknot.osci.io/)
Building an open source community health analytics platform (Mystic) (https://opensource.com/article/21/9/openrit-mystic)
The Turing Way Book Dashes (https://the-turing-way.netlify.app/community-handbook/bookdash.html)

Special Guests: Cali Dolfi, Mike Nolan, and Yehui Wang.

Giant Robots Smashing Into Other Giant Robots
497: Axiom with Seif Lotfy

Giant Robots Smashing Into Other Giant Robots

Play Episode Listen Later Oct 19, 2023 39:13


Victoria is joined by guest co-host Joe Ferris, CTO at thoughtbot, and Seif Lotfy, the CTO and Co-Founder of Axiom. Seif discusses the journey, challenges, and strategies behind his data analytics and observability platform. Seif, who has a background in robotics and was a 2008 Sony AIBO robotic soccer world champion, shares that Axiom pivoted from being a Datadog competitor to focusing on logs and event data. The company even built its own logs database to provide a cost-effective solution for large-scale analytics. Seif is driven by his passion for his team and the invaluable feedback from the community, emphasizing that sales validate the effectiveness of a product. The conversation also delves into Axiom's shift in focus towards developers to address their need for better and more affordable observability tools. On the business front, Seif reveals the company's challenges in scaling across multiple domains without compromising its core offerings. He discusses the importance of internal values like moving with urgency and high velocity to guide the company's future. Furthermore, he touches on the challenges and strategies of open-sourcing projects and advises avoiding platforms like Reddit and Hacker News to maintain focus. Axiom (https://axiom.co/) Follow Axiom on LinkedIn (https://www.linkedin.com/company/axiomhq/), X (https://twitter.com/AxiomFM), GitHub (https://github.com/axiomhq), or Discord (https://discord.com/invite/axiom-co). Follow Seif Lotfy on LinkedIn (https://www.linkedin.com/in/seiflotfy/) or X (https://twitter.com/seiflotfy). Visit his website at seif.codes (https://seif.codes/). Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: VICTORIA: This is the Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. 
I'm your host, Victoria Guido, and with me today is Seif Lotfy, CTO and Co-Founder of Axiom, the best home for your event data. Seif, thank you for joining me. SEIF: Hey, everybody. Thanks for having me. This is awesome. I love the name of the podcast, given that I used to compete in robotics. VICTORIA: What? All right, we're going to have to talk about that. And I also want to introduce a guest co-host today. Since we're talking about cloud, and observability, and data, I invited Joe Ferris, thoughtbot CTO and Director of Development of our platform engineering team, Mission Control. Welcome, Joe. How are you? JOE: Good, thanks. Good to be back again. VICTORIA: Okay. I am excited to talk to you all about observability. But I need to go back to Seif's comment on competing with robots. Can you tell me a little bit more about what robots you've built in the past? SEIF: I didn't build robots; I used to program them. Remember the Sony AIBOs, where Sony made these dog robots? And we would make them compete. There was an international competition where we made them play soccer, and they had to be completely autonomous. They only communicate via Bluetooth or via wireless protocols. And you only have the camera as your sensor as well as...a chest sensor throws the ball near you, and then yeah, you make them play football against each other, four versus four with a goalkeeper and everything. Just look it up: RoboCup AIBO. Look it up on YouTube. And I...2008 world champion with the German team. VICTORIA: That sounds incredible. What kind of crowds are you drawing out for a robot soccer match? Is that a lot of people involved with that? SEIF: You would be surprised how big the RoboCup competition is. It's ridiculous. VICTORIA: I want to go. I'm ready. I want to, like, I'll look it up and find out when the next one is. SEIF: No more Sony robots but other robots. Now, there's two-legged robots. 
So, they make them play as two-legged robots, much slower than four-legged robots, but works. VICTORIA: Wait. So, the robots you were playing soccer with had four legs they were running around on? SEIF: Yeah, they were dogs [laughter]. VICTORIA: That's awesome. SEIF: We all get the same robot. It's just a competition on software, right? On a software level. And some other competitions within the RoboCup actually use...you build your own robot and stuff like that. But this one was...it's called the Standard League, where we all have a robot, and we have to program it. JOE: And the standard robot was a dog. SEIF: Yeah, I think back then...we're talking...it's been a long time. I think it started in 2001 or something. I think the competition started in 2001 or 2002. And I compete from 2006 to 2008. Robots back then were just, you know, simple. VICTORIA: Robots today are way too complicated [laughs]. SEIF: Even AI is more complicated. VICTORIA: That's right. Yeah, everything has gotten a lot more complicated [laughs]. I'm so curious how you went from being a world-champion robot dog soccer player [laughs] programmer [laughs] to where you are today with Axiom. Can you tell me a little bit more about your journey? SEIF: The journey is interesting because it came from open source. I used to do open source on the side a lot–part of the GNOME Project. That's where I met Neil and the rest of my team, Mikkel Kamstrup, the whole crowd, basically. We worked on GNOME. We worked on Ubuntu. Like, most of them were working professionally on it. I was working for another company, but we worked on the same project. We ended up at Xamarin, which was bought by Microsoft. And then we ended up doing Axiom. But we've been around each other professionally since 2009, most of us. It's like a little family. But how we ended up exactly in observability, I think it's just trying to fix pain points in my life. VICTORIA: Yeah, I was reading through the docs on Axiom. 
And there's an interesting point you make about organizations having to choose between how much data they have and how much they want to spend on it. So, maybe you can tell me a little bit more about that pain point and what you really found in the early stages that you wanted to solve. SEIF: So, the early stages of what we wanted to solve we were mainly dealing with...so, the early, early stage, we were actually trying to be a Datadog competitor, where we were going to be self-hosted. Eventually, we focused on logs because we found out that's what was a big problem for most people, just event data, not just metrics but generally event data, so logs, traces, et cetera. We built out our own logs database completely from scratch. And one of the things we stumbled upon was: basically, you have three things when it comes to logging, which is low cost, low latency, and large scale. That's what everybody wants. But you can't get all three of them; you can only get two of them. And we opted...like, we chose large scale and low cost. And when it comes to latency, we say it should be just fast enough, right? And that's where we focused on, and this is how we started building it. And with that, this is how we managed to stand out by just having way lower cost than anybody else in the industry and dealing with large scale. VICTORIA: That's really interesting. And how did you approach making the ingestion pipeline for massive amounts of data more efficient?
Like, every now and then, I think that we can spin this out of the company and make it a new product. But now, eyes on the prize, right? JOE: It's interesting to hear that somebody who spent so much time in the open-source community ended up rolling their own solution to so many problems. Do you feel like you had some lessons learned from open source that led you to reject solutions like Kafka, or how did that journey go? SEIF: I don't think I'm rejecting Kafka. The problem is how Kafka is built, right? Kafka is still...you have to set up all these servers. They have to communicate, et cetera, et cetera. They didn't build it in a way where it's stateless, and that's what we're trying to go to. We're trying to make things as stateless as possible. So, Kafka was never built for the cloud-native era. And you can't really rely on SQS or something like that because it won't deal with this high throughput. So, that's why I said, like, we will sacrifice some latency, but at least the cost is low. So, if messages show after half a second or a second, I'm good. It doesn't have to be real-time for me. So, I had to write a couple of these things. But also, it doesn't mean that we reject open source. Like, we actually do like open source. We open-source a couple of libraries. We contribute back to open source, right? We needed a solution back then for that problem, and we couldn't find any. And maybe one day, open source will have one, right? JOE: Yeah. I was going to ask if you considered open-sourcing any of your high latency, high throughput solutions. SEIF: Not high latency. You make it sound bad.
Like, how, you know, if it's under two, three seconds, everybody's happy, right? If the results come within two, three seconds, everybody is happy. If you're going to build a real-time trading system on top of it, I'll strongly advise against that. But if you're building, you know, you're looking at dashboards, you're more in the observability field, yeah, we're good. VICTORIA: Yeah, I'm curious what you found, like, which customer personas that market really resonated with. Like, is there a particular, like, industry type where you're noticing they really want to lower their cost, and they're okay with this just fast enough latency? SEIF: Honestly, with the current recession, everybody is okay with giving up some of the speed to reduce the money because I think it's not linear reduction. It's more exponential reduction at this point, right? You give up a second, and you're saving 30%. You give up two seconds, all of a sudden, you're saving 80%. So, I'd say in the beginning, everybody thought they need everything to be very, very fast. And now they're realizing, you know, with limitations you have around your budget and spending, you're like, okay, I'm okay with the speed. And, again, we're not slow. I'm just saying people realize they don't need everything under a second. They're okay with waiting for two seconds. VICTORIA: That totally resonates with me. And I'm curious if you can add maybe a non-technical or a real-life example of, like, how this impacts the operations of a company or organization, like, if you can give us, like, a business-y example of how this impacts how people work. SEIF: I don't know how, like, how do people work on that? Nothing changed, really. They're still doing the, like...really nothing because...and that aspect is you run a query, and, again, as I said, you're not getting the result in a second. You're just waiting two seconds or three seconds, and it's there. So, nothing really changed. I think people can wait three seconds. 
And we're still like–when I say this, we're still faster than most others. We're just not as fast as people who are trying to compete on a millisecond level. VICTORIA: Yeah, that's okay. Maybe I'll take it back even, like, a step further, right? Like, our audience is really sometimes just founders who almost have no formal technical training or background. So, when we talk about observability, sometimes people who work in DevOps and operations all understand it and kind of know why it's important [laughs] and what we're talking about. So, maybe you could, like, go back to -- SEIF: Oh, if you're asking about new types of people who've been using it -- VICTORIA: Yeah. Like, if you're going to explain to, like, a non-technical founder, like, why your product is important, or, like, how people in their organization might use it, what would you say? SEIF: Oh, okay, if you put it like that. It's more of if you have data, timestamp data, and you want to run analytics on top of it, so that could be transactions, that could be web vitals, rather than count every time somebody visits, you have a timestamp. So, you can count, like, how many visitors visited the website and what, you know, all these kinds of things. That's where you want to use something like Axiom. That's outside the DevOps space, of course. And in DevOps space, there's so many other things you use Axiom for, but that's outside the DevOps space. And we actually...we implemented as zero-config integration with Vercel that kind of went viral. And we were, for a while, the number one enterprise for self-integration because so many people were using it. So, Vercel users are usually not necessarily writing the most complex backends, but a lot of things are happening on the front-end side of things. And we would be giving them dashboards, automated dashboards about, you know, latencies, and how long a request took, and how long the response took, and the content type, and the status codes, et cetera, et cetera. 
And there's a huge user base around that. VICTORIA: I like that. And it's something, for me, you know, as a managing director of our platform engineering team, I want to talk more to founders about. It's great that you put this product and this app out into the world. But how do you know that people are actually using it? How do you know that people, like, maybe, are they all quitting after the first day and not coming back to your app? Or maybe, like, the page isn't loading or, like, it's not working as they expected it to. And, like, if you don't have anything observing what users are doing in your app, then it's going to be hard to show that you're getting any traction and know where you need to go in and make corrections and adjust. SEIF: We have two ways of doing this. Right now, internally, we use our own tools to see, like, who is sending us data. We have a deployment that's monitoring production deployment. And we're just, you know, seeing how people are using it, how much data they're sending every day, who stopped sending data, who spiked in sending data sets, et cetera. But we're using Mixpanel, and Dominic, our Head of Product, implemented a couple of key metrics for that specifically. So, we know, like, what's the average time until somebody starts going from building their own queries with the builder to writing APL, or how long it takes them from, you know, running two queries to five queries. And, you know, we just start measuring these things now. And it's been going...we've been growing healthy around that. So, we tend to measure user interaction, but also, we tend to measure how much data is being sent. Because let's keep in mind, usually, people go in and check for things if there's a problem. So, if there's no problem, the user won't interact with us much unless there's a notification that kicks off. We also just check, like, how much data is being sent to us the whole time. VICTORIA: That makes sense.
Like, you can't just rely on, like, well, if it was broken, they would write a [chuckles], like, a question or something. So, how do you get those metrics and that data around their interactions? So, that's really interesting. So, I wonder if we can go back and talk about, you know, we already mentioned a little bit about, like, the early days of Axiom and how you got started. Was there anything that you found in the early discovery process that was surprising and made you pivot strategy? SEIF: A couple of things. Basically, people don't really care about the tech as much as they care [inaudible 12:51] and the packaging, so that's something that we had to learn. And number two, continuous feedback. Continuous feedback changed the way we worked completely, right? And, you know, after that, we had a Slack channel, then we opened a Discord channel. And, like, this continuous feedback coming in just helps with iterating, helps us with prioritizing, et cetera. And that changed the way we actually developed product. VICTORIA: You use Slack and Discord? SEIF: No. No Slack anymore. We had a community Slack. We had a community [inaudible 13:19] Slack. Now, there's no community Slack. We only have a community Discord. And the community Slack is...sorry, internally, we use Slack, but there's a community Discord for the community. JOE: But how do you keep that staffed? Is it, like, everybody is in the Discord during working hours? Is it somebody's job to watch out for community questions? SEIF: I think everybody gets involved now just...and you can see it. If you go on our Discord, you will just see it. Just everyone just gets involved. I think just people are passionate about what they're doing. At least most people are involved on Discord, right? Because there's, like, Discord the help sections, and people are just asking questions and other people answering. 
And now, we reached a point where people in the community start answering the questions for other people in the community. So, that's how we see it's starting to become a healthy community, et cetera. But that is one of my favorite things: when I see somebody from the community answering somebody else, that's a highlight for me. Actually, we hired somebody from that community because they were so active. JOE: Yeah, I think one of the biggest signs that a product is healthy is when there's a healthy ecosystem building up around it. SEIF: Yeah, and Discord reminds me of the old days of open source, like IRC, just with memes now. But because all of us come from the old IRC days, being on Discord and chatting around, et cetera, et cetera, just gives us this momentum back, whereas Slack always felt a bit too businessy to me. JOE: Slack is like IRC with emoji. Discord is IRC with memes. SEIF: I would say Slack reminds me somehow of MSN Messenger, right? JOE: I feel like there's a huge slam on MSN Messenger here. SEIF: [laughs] What do you guys use internally, Slack or? I think you're using Slack, right? Or Teams. Don't tell me you're using Teams. JOE: No, we're using Slack. SEIF: Okay, good, because I'll start shit-talking when I start talking about Teams, so...I remember that one thing Google did once, and that failed miserably.
You feel it's alive, whereas Slack is...also because there's no, like, history is forever. So, you always go back, and you're like, oh my God, what the hell is this? VICTORIA: Yeah, I have, like, all of them. I'll do anything. SEIF: They should be using Axiom in the background. Just send data to Axiom; we can keep your chat history. VICTORIA: Yeah, maybe. I'm so curious because, you know, you mentioned something about how you realized that it didn't matter really how cool the tech was if the product packaging wasn't also appealing to people. Because you seem really excited about what you've built. So, I'm curious, so just tell us a little bit more about how you went about trying to, like, promote this thing you built. Or was, like, the continuous feedback really early on, or how did that all kind of come together? SEIF: The continuous feedback helped us with performance, but actually getting people to sign up and pay money it started early on. But with Vercel, it kind of skyrocketed, right? And that's mostly because we went with the whole zero-config approach where it's just literally two clicks. And all of a sudden, Vercel is sending your data to Axiom, and that's it. We will create [inaudible 16:33]. And we worked very closely with Vercel to do this, to make this happen, which was awesome. Like, yeah, hats off to them. They were fantastic. And just two clicks, three clicks away, and all of a sudden, we created Axiom organization for you, the data set for you. And then we're sending it...and the data from Vercel is being forwarded to it. I think that packaging was so simple that it made people try it out quickly. And then, the experience of actually using Axiom was sticky, so they continued using it. And then the price was so low because we give 500 gigs for free, right? You send us 500 gigs a month of logs for free, and we don't care. And you can start off here with one terabyte for 25 bucks. So, people just start signing up. 
Now, before that, it was five terabytes a month for $99, and then we changed the plan. But yeah, it was cheap enough, so people just start sending us more and more and more data eventually. They weren't thinking...we changed the way people start thinking of “what am I going to send to Axiom” or “what am I going to send to my logs provider or log storage?” To how much more can I send? And I think that's what we wanted to reach. We wanted people to think, how much more can I send? JOE: You mentioned latency and cost. I'm curious about...the other big challenge we've seen with observability platforms, including logs, is cardinality of labels. Was there anything you had to sacrifice upfront in terms of cardinality to manage either cost or volume? SEIF: No, not really. Because the way we designed it was that we should be able to deal with high cardinality from scratch, right? I mean, there's open-source ways of doing, like, if you look at how, like, a column store, if you look at a column store and every dimension is its own column, it's just that becomes, like, you can limit on the amount of columns you're creating, but you should never limit on the amount of different values in a column could be. So, if you're having something like stat tags, right? Let's say hosting, like, hostname should be a column, but then the different hostnames you have, we never limit that. So, the cardinality on a value is something that is unlimited for us, and we don't really see it in cost. It doesn't really hit us on cost. It reflects a bit on compression if you get into technical details of that because, you know, high cardinality means a lot of different data. So, compression is harder, but it's not repetitive. 
But then if you look at, you know, oh, I want to send a lot of different types of fields, not values with fields, so you have hostname, and latency, and whatnot, et cetera, et cetera, yeah, that's where limitation starts because then they have...it's like you're going to a wide range of...and a wider dimension. But even that, we, yeah, we can deal with thousands at this point. And we realize, like, most people will not need more than three or four. It's like a Postgres table. You don't need more than 3,000 to 4,000 columns; else, you know, you're doing a lot. JOE: I think it's actually pretty compelling in terms of cost, though. Like, that's one of the things we've had to be most careful about in terms of containing cost for metrics and logs is, a lot of providers will...they'll either charge you based on the number of unique metric combinations or the performance suffers greatly. Like, we've used a lot of Prometheus-based solutions. And so, when we're working with developers, even though they don't need more than, you know, a few dozen metric combinations most of the time, it's hard for people to think of what they need upfront. It's much easier after you deploy it to be able to query your data and slice it retroactively based on what you're seeing.
SEIF: Generally, most metrics products, even VictoriaMetrics, et cetera, are using the Prometheus TSDB data structure, which is based on Gorilla. Influx was doing the same thing; they pivoted to using more and more the kind we use, and Honeycomb uses. So, we might not be as fast on the metrics side as those highly optimized stores, but once we start dealing with high [inaudible 20:49] cardinality, we will be faster than those solutions. And that's on a very technical level.

JOE: That's pretty cool. I realize we're getting pretty technical here. Maybe it's worth defining cardinality for the audience.

SEIF: Defining cardinality to the...I mean, we just did that, right?

JOE: What do you think, Victoria? Do you know what cardinality is now? [laughs]

VICTORIA: All right. Now I'm like, do I know? I think I know what it means. Cardinality is, like, let's say you have a piece of data like an event or a transaction.

SEIF: It's the distinct count on a property; that gives you the cardinality of that property.

VICTORIA: Right. It's like how many pieces of information you have about that one event, basically, yeah.

JOE: But with some traditional metrics stores, it's easy to make mistakes. For example, you could have unbounded cardinality by including response time as one of the labels --

SEIF: Tags.

JOE: And then it's just going to --

SEIF: Oh, no, no. Let me give you a better one. I put in a timestamp at some point in my life.

JOE: Yeah, I feel like everybody has done that one. [laughter]

SEIF: I've put a system timestamp at some point in my life. There was the actual timestamp, and there was a system timestamp that I would put, because I couldn't control the timestamp, and the only timestamp I had was a system timestamp.
SEIF: I would always add the actual timestamp of when that event actually happened into a metric, and yeah, that did not scale.

MID-ROLL AD: Are you an entrepreneur or start-up founder looking to gain confidence in the way forward for your idea? At thoughtbot, we know you're tight on time and investment, which is why we've created targeted 1-hour remote workshops to help you develop a concrete plan for your product's next steps. Over four interactive sessions, we work with you on research, product design sprint, critical path, and presentation prep so that you and your team are better equipped with the skills and knowledge for success. Find out how we can help you move the needle at tbot.io/entrepreneurs.

VICTORIA: Yeah. I wonder if you could share a story about when it's gone wrong and you've suddenly been charged a lot of money [laughs] just to get information about what's happening in the system. Any personal experiences with observability that informed what you did with Axiom?

SEIF: Oof, I have a very, very bad one. I used to work for a company where we had to deploy Elasticsearch on Windows Servers, and it was US-East-1. The combination of Elasticsearch back in 2013, 2014, together with Azure and Windows Server, was not a good idea. So, you see where this is going, right?

JOE: I see where it's going.

SEIF: Eventually we got all these problems, because we used Elasticsearch and Kibana as our observability platform to measure everything around the product we were building. And funny enough, it cost us more than actually maintaining the infrastructure of the product. But not just that, it also kept me up longer, because most of the downtimes I would get were not because of the product going down. It's because my Elasticsearch cluster started going down, and there's reasons for that.
SEIF: Because back then, Microsoft Azure thought it was okay for any VM to lose connection with the rest of the VMs for 30 seconds per day. And then, all of a sudden, you have Elasticsearch with a split-brain problem. There was a phase where I was getting alerted so much that my partner threatened to leave me. So I bought what I think was a shock bracelet or a shock collar that worked over Bluetooth, and I connected it to my phone for any notification. And I bought that off Alibaba, by the way. I would charge it at night, put it on my wrist, and go to sleep. And then, when an alert happened, it would fully discharge the battery on me every time.

JOE: Okay, I have to admit, I did not see where that was going.

SEIF: Yeah, I did that for a while; it definitely did not save my relationship either. But eventually, that was the point where we started looking into other observability tools like Datadog, et cetera. And that's where the actual journey began, where we moved away from Elasticsearch and Kibana to look for something that we don't have to maintain ourselves. So, it wasn't about the cost as much; it was just pain.

VICTORIA: Yeah, pain is a real pain point, actual physical [chuckles] and emotional pain point [laughter]. What motivates you to keep going with Axiom and to keep the wind in your sails to keep working on it?

SEIF: There's a couple of things. I love working with my team. Honestly, I just wake up, and I compliment my team. I just love working with them. They're a lot of fun to work with. And they challenge me, and I challenge them back. And I upset them a lot. And they can't upset me, but I upset them. But I love working with them, and I love working with that team. And the other thing is having this constant feedback from customers; it just makes you want to do more and, you know, close sales, et cetera.
SEIF: It's interesting how I'm a very technical person, yet I'm more interested in sales, because sales means the product works, the technical parts, et cetera. Because if it's not working technically, you can't build a product on top of it. And if you're not selling it, then what's the point? You only sell when the product is good, more or less, unless you're Oracle.

VICTORIA: I had someone ask me about Oracle recently, actually. They're like, "Are you considering going back to it?" And I'm maybe a little allergic to it from having a federal consulting background [laughs]. But maybe they'll come back around. I don't know. We'll see.

SEIF: Did you sell your soul back then?

VICTORIA: You know, I feel like I just grew up in a place where that's what everyone did.

SEIF: It was Oracle, IBM, or HP back in the day.

VICTORIA: Yeah. Well, basically, when you're working on applications that were built in the '80s, Oracle was this hot, new database technology [laughs] that they just got five years ago. So, that's just, yeah, interesting.

SEIF: Although, from a database perspective, they did a lot of the innovations. A lot of first innovations came from Oracle. From a technical perspective, they're ridiculous. I'm not sure how good they are from a product perspective. But I know their sales team is so big, so huge, they don't care about the product anymore. They can still sell.

VICTORIA: I think, you know, everything in tech is cyclical. So, if they have the right strategy and they're making some interesting changes over there, there's always a chance [laughs] for certain use cases. I think the interesting thing about working in technology is that every company is a tech company. And so, there's just a lot of different types of people, personas, and use cases for different types of products. So, I wonder, you mentioned earlier that everyone is interested in Axiom.
VICTORIA: But, I don't know, are you narrowing the market? Or how are you trying to focus your messaging and your sales for Axiom?

SEIF: I'm trying to focus on developers. We're really trying to focus on developers, because the experience around observability is crap, and it's stupid expensive. Sorry for being straightforward, right? And that's what we're trying to change. We're targeting developers mainly. We want developers to like us. And we find all these different types of developers who are using it, and that's the interesting thing. And because of them, we start adding more and more features. Like, we added tracing, and now that enables billions of events pushed through for, again, almost no money: $25 a month for a terabyte of data. And we're doing this with metrics next. And that's just to address the developers who have been giving us feedback, and the market demand. I will sum it up again: the experience is crap, and it's stupid expensive. I think that's the [inaudible 28:07] of observability; that's how I would sum it up.

VICTORIA: If you could go back in time and talk to yourself when you were still a developer, now that you're CTO, what advice would you give yourself?

JOE: Besides avoiding shock collars.

VICTORIA: [laughs] Yes.

SEIF: Get people's feedback quickly so you know you're on the right track. I think that's very, very important. Don't just work in the dark, and don't go too long into stealth mode, because eventually people catch up. Also, ship when you're 80% ready, because 100% is too late. I think it's the same thing here.

JOE: Ship often and early.

SEIF: Yeah, even if it's not fully ready, it's still feedback.

VICTORIA: Ship often and early and talk to people [laughs].
VICTORIA: Just, do you feel like, as a developer, you had the skills you needed to get the most out of that feedback and out of those conversations you were having with people around your product?

SEIF: I still don't think I'm good enough. You're just constantly learning, right? I've accepted I'm part of a team, and I have my contributions. But as an individual, I still don't think I know enough. I think there's more I need to learn at this point.

VICTORIA: I wonder, what questions do you have for me or Joe?

SEIF: How did you start your podcast, and why the name?

VICTORIA: Oh, man, I hope I can answer. So, the podcast was started...I think we're actually about to be at our 500th episode. I've only been a host for the last year, so maybe Joe even knows more than I do. But what I recall is that one person at thoughtbot thought it would be a great idea to start a podcast, and then they did it. And it seems like the whole company is obsessed with robots. I'm not really sure where that came from. There used to be a tiny robot in the office, is what I remember. And people started using that as the mascot. And then, yeah, that's it, that's the whole thing.

SEIF: Was the robot doing anything useful or just being cute?

JOE: It was just cute, and it's hard to make a robot cute.

SEIF: Was it a real robot, or was it like a --

JOE: No, there was, at one point, a toy robot. I actually forget the origin of the name, but the name Giant Robots comes from our blog. So, we named the podcast the same as the blog: Giant Robots Smashing Into Other Giant Robots.

SEIF: Yes, it's like Transformers.

VICTORIA: Yeah, I like it. It's, I mean, now I feel like --

SEIF: [laughs]

VICTORIA: We've got to get more, like, robot dogs involved [laughs] in the podcast.

SEIF: I wanted to add one thing when we talked about what gets me going. I want to mention that I have a six-month-old son now.
SEIF: He definitely adds a lot of motivation for me to wake up in the morning and work. But he also makes me wake up regardless of whether I want to or not.

VICTORIA: Yeah, you said you had invented an alarm clock that never turns off. Never snoozes [laughs].

SEIF: Yes, absolutely.

VICTORIA: I have the same thing, but it's my dog. But he does snooze, actually. He'll just, like, get tired and go back to sleep [laughs].

SEIF: Oh, I have a question. Do dogs have a Tamagotchi phase? Because, like, my son, for the first three months, was like a Tamagotchi. It was easy to read him.

VICTORIA: Oh yeah, uh-huh.

SEIF: Noisy but easy.

VICTORIA: Yes, yes.

SEIF: Now, it's just like, yeah, I don't know; the last month he has opinions, at six months. I think it's because I raised him in Europe. I should take him back to the Middle East [laughs]. No opinions.

VICTORIA: No, dogs totally have, like, a communication style, you know. I pretty much know what he...I mean, I can read his mind, obviously [laughs].

SEIF: Sure, but that's when they grow a bit. But what about when the dog was very young?

VICTORIA: Yeah, I mean, they also learn your stuff, too. So they learn how to get you to do stuff, like, I know she'll feed me if I'm sitting here [laughs].

SEIF: And how much is one dog year, seven years?

VICTORIA: Seven years.

SEIF: Seven years?

VICTORIA: Yeah, seven years?

SEIF: Yeah. So, basically, in one month, he's, you know, seven months old.

VICTORIA: Yeah. In a year, they're, like, teenagers. And then, in two years, they're, like, full adults.

SEIF: Yeah. So, the first month is basically going through the first six months of a human being. So yeah, the first two or three days are the Tamagotchi phase that I'm talking about.
VICTORIA: [chuckles] I read this book, and it was, like, to understand dogs, it's like they're just humans that are trying to maximize the number of positive experiences they have. So, if you think about that framing in all your interactions, like, maybe you're trying to get your son to do something, you can be like, okay, how do I train him that good things happen when he does the things I want him to do? [laughs] That's maybe manipulative but effective. So, you're not learning baby sign language? You're just going off facial expressions?

SEIF: I started. I know what Mama looks like. I know what Dada looks like. I'm slowly learning what "more" looks like. And he already does this thing where I know that when he's uncomfortable, he starts opening and closing his hands. And when he's completely uncomfortable, and basically needs to go to sleep, he starts pulling his own hair.

VICTORIA: [laughs] I do the same thing [laughs].

SEIF: You pull your own hair when you go to sleep? I don't have that. I don't have hair.

VICTORIA: I think I do start, like, touching my head, though, yeah [inaudible 33:04].

SEIF: Azure took the last bit of hair I had! It went away with Azure, Elasticsearch, and the shock collar.

VICTORIA: [laughs]

SEIF: I have none of them left. Absolutely nothing. I should sue Elasticsearch for this shit.

VICTORIA: [laughs] Let me know how that goes. Maybe there's more people who could join your lawsuit, you know, with a class action.

SEIF: [laughs] Yeah. Well, one thing I wanted to also highlight is that, right now, one of the things that also makes the company move forward is we realized that, in a single domain, we proved ourselves very valuable to specific companies. That was a big, big milestone for us. And now we're trying to move into a handful of domains and see which of those work out best for us. Does that make sense?

VICTORIA: Yeah.
VICTORIA: And I'm curious: what are the biggest challenges or hurdles you associate with that?

SEIF: At this point, you don't want just feedback; you want constructive criticism. You want to work with people who will criticize the applic...and you iterate with them based on this criticism. They're not just happy with you; you're trying to create design partners. So, for us, it was very important to have these small design partners who could work with us to actually prove ourselves as valuable in a single domain. Right now, we need to find a way to scale this across several domains. And how do you do that without sacrificing? Like, how do you open into other domains without sacrificing the original domain you came from? So, there's a lot of things [inaudible 34:28]. And we are in the middle of this. Honestly, I Forrest Gumped my way through half of this, right? Like, I didn't know what I was doing. I had ideas. I think it's more luck at this point. And I had luck. No, we did work. We did work a lot. We did sleepless nights and everything. But I think, in the last three years, we became more mature and started thinking more about product. And as I said, our CEO, Neil, and Dominic, our head of product, are putting everything behind being a product-led organization, not just a tech-led organization.

VICTORIA: That's super interesting. I love to hear that that's the way you're thinking about it.

JOE: I was just curious what other domains you're looking at pushing into, if you can say.

SEIF: So, we are going to start moving into ETL a bit more. We're trying to see how we can fit in specific ML scenarios. I can't say more about the others, though.

JOE: Do you think you'll take the same approaches in terms of value proposition, like, low cost, good-enough latency?

SEIF: Yes, that's definitely one thing. But there's also...so, these are the values we're bringing to the customer. But now our internal values are different.
SEIF: Now it's more about moving with urgency and high velocity, as we said before. Think big, work small. The values we're going to take to the customers are the same ones, and maybe we'll add some more, but it's still going to be low cost and large scale. And, internally, we're just becoming more, excuse my French, agile. I hate that word so much. Should be good with Scrum.

VICTORIA: It's painful, but everyone knows what you're talking about [laughs], you know, like --

SEIF: See, I have opinions here about Scrum. I think Scrum should only be used in terms of iceScrum [inaudible 36:04], or something like that.

VICTORIA: Oh no [laughter]. Well, it's a rugby term, right? Like, that's where it should probably stay.

SEIF: I did not know it's a rugby term.

VICTORIA: Yeah, so it should stay there, but --

SEIF: Yes [laughs].

VICTORIA: Yeah, I think it's interesting. I like the being flexible, the continuous feedback, and how you all have set up to talk with your customers. Because you mentioned earlier that you might open source some of your projects. And I'm just curious what goes into that decision for you. What makes you think a project would be good for open source, or when do you think, actually, we need to keep it?

SEIF: So, we open source libraries; we actually do that already. And some other big organizations use our libraries; even our competitors use our libraries. The whole product itself, or at least a big part of the product, like the database, I'm not sure we're going to open source, at least not anytime soon. And if we open source it, it's going to be at a point where the value-add it brings is nothing compared to how good our product is. So, if we can replace the storage engine we have in the back with something else and the product doesn't get affected, that's when we open source it.
VICTORIA: That's interesting. That makes sense to me. But yeah, thank you for clarifying that. I just wanted to make sure to circle back. Since you have this big history in open source, I'm curious if you see...

SEIF: Burning me out?

VICTORIA: Burning you out, yeah [laughter]. Oh, that's a good question. Because, you know, we're about to be in October here. Do you have any advice or strategies as a maintainer for not getting burned out during the next couple of weeks, besides, like, hiding in a cave without internet access [laughs]?

SEIF: Stay away from Reddit and Hacker News. That's my goal for October now, because I'm always afraid of getting too attached to an idea, or too motivated or excited by an idea, that I drift away from what I'm actually supposed to be doing.

VICTORIA: Last question is, is there anything else you would like to promote?

SEIF: Yeah, check out our website; it's at axiom.co. Check it out. Sign up. And comment on Discord and talk to me. I don't bite. I'm sometimes grumpy, but that's just because of lack of sleep in the morning. You know, around midday, I'm good. And if you're ever in Berlin and you want to hang out, I'm more than willing to hang out.

VICTORIA: Whoo, that's awesome. Yeah, Berlin is great. I was there a couple of years ago, but no plans to go back anytime soon. Maybe I'll keep that in mind. You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you can find me on Twitter @victori_ousg. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time.

AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at tbot.io/referral.
Or you can email us at referrals@thoughtbot.com with any questions. Special Guests: Joe Ferris and Seif Lotfy.

CHAOSScast
Episode 71: What's New in CHAOSS: Podcast Reboot Episode


Oct 5, 2023 · 47:23


CHAOSScast – Episode 71 In this episode, the CHAOSScast team is back! Georg Link, Dawn Foster, Sean Goggins, Matt Germonprez, and Elizabeth Barron discuss the relaunch of the podcast after taking a short break. They delve into the fascinating world of open source community health, focusing on metrics, metric models, and the CHAOSS Project's role in measuring the health of open source communities. They share insights on how they're working to make metrics more accessible and how they interpret these metrics within the context of specific projects. Additionally, they highlight the Data Science Initiative, the growth of CHAOSS community chapters worldwide, and their initiative to improve the newcomer experience and promote diversity and inclusion in open source. Download this episode now to find out much more! [00:02:48] We hear more about where CHAOSS is with developing metrics and metric models and the Context Groups they've developed to bring together individuals interested in the health of specific projects or communities. [00:06:06] The Metric Development Process is brought up, which is how the process of defining and releasing metrics has evolved. While some working groups still develop metrics, there's an effort to consolidate and organize metrics to make them more accessible to users, including categorizing and tagging them. [00:08:11] Dawn brings up Metrics Models, which are collections of metrics that provide insights into specific aspects of open source community health. These models help users understand various phenomena in open source software health and use metrics effectively. [00:12:14] Georg brings up something new called the Data Science Initiative within CHAOSS, and Dawn talks about her role as Director of Data Science. The initiative aims to provide guidance to users of CHAOSS metrics and tools for interpreting data effectively, and she tells us all the key areas it's focused on.
[00:16:14] Matt asks Dawn about the balance between maintaining an agnostic stance on metrics and providing more guidance to users in interpreting metrics. Dawn discusses the importance of helping users interpret metrics in the context of their specific projects. [00:17:55] Georg and Dawn talk about using metrics as pointers to prompt users to investigate specific aspects of their communities and projects. [00:18:53] Elizabeth asks if CHAOSS should play a role in advising users on how to make changes in their communities based on metric insights without adversely affecting other metrics. Dawn shares her thoughts, and Sean mentions the experience of CHAOSS members in evaluating different communities and interpreting metrics. [00:20:34] Georg expresses excitement about the future of CHAOSS and its journey. [00:21:54] Sean provides an overview of Augur and its evolution over time, including its ability to capture large volumes of data and the development of an API. [00:24:19] Georg discusses recent developments in GrimoireLab, including multi-tenancy support, scalability improvements, and optimization of data enrichment processes. He also talks about the migration of GrimoireLab from Elasticsearch to OpenSearch for data storage and visualization, and Sorting Hat, a module within GrimoireLab for managing identities. [00:27:40] Dawn asks about the future use of Kibiter, the Kibana fork used in GrimoireLab, and Georg confirms a full migration to OpenSearch and OpenSearch Dashboards, indicating that Kibiter may be phased out. [00:28:52] Matt asks about recent challenges and achievements related to data management and data cleaning in Augur and GrimoireLab. Sean mentions the importance of data in operationalizing metrics and making them tangible. Georg emphasizes two critical aspects of data quality. [00:33:32] Elizabeth shares insight into the growth of the CHAOSS community.
She discusses the challenges of managing the growing community, and a group CHAOSS is partnering with called "All In" to develop badging for open source projects, addressing scalability challenges. [00:41:53] Elizabeth talks about the DEI Reflection Project, which was crucial in identifying blind spots and improving the CHAOSS community. It led to valuable recommendations, including enhancing the newcomer experience and promoting diversity and inclusion.

Value Adds (Picks) of the week:
[00:44:30] Georg's pick is living in his new house that he loves.
[00:45:11] Matt's pick is his cool morning bike rides to his office.
[00:45:44] Dawn's pick is a warm, sunny vacation she took in Malta.
[00:46:15] Elizabeth's pick is seeing her granddaughter getting excited to see flowers, birds, mushrooms, and be out in nature.
[00:46:48] Sean's pick is his daughter, an English PhD student, who published her first academic paper and has another up for a revise and resubmit.

Panelists:
Georg Link
Dawn Foster
Matt Germonprez
Sean Goggins
Elizabeth Barron

Links:
CHAOSS (https://chaoss.community/)
CHAOSS Mastodon (https://fosstodon.org/@chaoss)
CHAOSScast Podcast (https://podcast.chaoss.community/)
podcast@chaoss.community (mailto:podcast@chaoss.community)
Ford Foundation (https://www.fordfoundation.org/)
Georg Link Website (https://georg.link/)
Dawn Foster Twitter (https://twitter.com/geekygirldawn)
Matt Germonprez Twitter (https://twitter.com/germ)
Sean Goggins Twitter (https://twitter.com/sociallycompute)
Elizabeth Barron Twitter (https://twitter.com/elizabethn)
CHAOSS Data Science Working Group (https://github.com/chaoss/wg-data-science)
Data Science Initiative - Raw data from the Understanding Challenges survey (https://github.com/chaoss/wg-data-science/commit/d86a02841f221308b913d08bc9ae644adced69fc)
Augur repositories (https://ai.chaoss.io/)
Project Aspen (https://github.com/oss-aspen#8knot-explorer)
8Knot-Metrix CHAOSS (https://metrix.chaoss.io/)
Bitergia Analytics - GrimoireLab (https://chaoss.biterg.io)
OpenSearch (https://opensearch.org/)
Sorting Hat (https://github.com/chaoss/grimoirelab-sortinghat)
Kibiter (https://github.com/chaoss/grimoirelab-kibiter)
OpenSearch Dashboards (https://opensearch.org/docs/latest/dashboards/index/)
All In (https://allinopensource.org/)
GitHub All In (https://github.com/AllInOpenSource/All-In)
CHAOSS Software (https://chaoss.community/software/)
CHAOSScast Podcast - Episode 54: CHAOSS DEI Reflection Project (https://podcast.chaoss.community/54)

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for June 27th, 2023 - Episode 199


Jun 27, 2023 · 62:26


2023-06-27 Weekly News - Episode 199
Watch the video version on YouTube at https://youtube.com/live/YhGqAVLYZk4?feature=share

Hosts:
Gavin Pickin - Senior Developer at Ortus Solutions
Brad Wood - Senior Developer at Ortus Solutions

Thanks to our Sponsor - Ortus Solutions
The makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there. A few ways to say thanks back to Ortus Solutions:
Like and subscribe to our videos on YouTube.
Help ORTUS reach for the Stars - Star and Fork our Repos.
Star all of your GitHub Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github
Subscribe to our Podcast on your Podcast Apps and leave us a review.
Sign up for a free or paid account on CFCasts, which is releasing new content every week.
BOXLife store: https://www.ortussolutions.com/about-us/shop
Buy Ortus's Books: 102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips); Learn Modern ColdFusion (CFML) in 100+ Minutes - free online at https://modern-cfml.ortusbooks.com/ or buy an EBook or paper copy at https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes

Patreon Support
We have 40 patrons: https://www.patreon.com/ortussolutions

News and Announcements
CFCamp was a blast
Brad said: Back on US soil again, but still smiling from the wonderful experience at CFCamp. It was so good to be back in Germany and see my EU friends again in person. I'd say the first time back since Covid was a smashing success!
Alex Well said: Back at home from my trip to 2023's #CFCamp

Software Sessions
David Cramer on Application Monitoring with Sentry


Jun 14, 2023 · 76:03


Sentry is an application monitoring tool that surfaces errors and performance problems. It minimizes the need to manually look at logs or dashboards by identifying common problems across applications and frameworks. David Cramer is the co-founder and CTO of Sentry. This episode originally aired on Software Engineering Radio.

Topics covered:
What's Sentry?
Treating performance problems as errors
Why you might not need logs
Identifying common problems in applications and frameworks
Issues with OpenTelemetry data
Why front-end applications are difficult to instrument
The evolution of Sentry's architecture
Switching from a permissive license to the Business Source License

Related Links:
Sentry
David's Blog
Sentry 9.1 and Upcoming Changes
Re-Licensing Sentry

Transcript
You can help edit this transcript on GitHub.

[00:00:00] Jeremy: Today I'm talking to David Cramer. He's the founder and CTO of Sentry. David, welcome to Software Engineering Radio.

[00:00:08] David: Thanks for having me. Excited for today's conversation.

What's Sentry?

[00:00:11] Jeremy: I think the first thing we could start with is defining what Sentry is. I know some people refer to it as an error tracker. Some people have referred to it as an application performance monitoring tool. I wonder if you could describe it in your words.

[00:00:30] David: You know, as somebody who doesn't work in marketing, I just tell it how it is. So Sentry started out doing error monitoring, which, depending on who you talk to, you might just think of as logging, right? That's the honest truth. It is just logging, just a different shape or form. These days it's hard not to classify us as an APM tool; that's the industry that exists, the tools people understand. So I would just say it's an APM tool. We do a bunch of things within that space, and maybe it's not, you know, item for item the same as a product like New Relic.
But a lot of the overlap is there, so it's errors and performance, which is latency and throughput, and then we have some stuff that goes a little bit deeper within that. The one thing I would say is different for us versus a lot of these tools is we only do application monitoring. We don't do any systems or infrastructure monitoring, meaning Sentry is not going to tell you when you need to replace a hard drive, or even that you need more disk space, because it's a domain we don't think is relevant for our customers and product.

Application Performance Monitoring is about finding crashes and performance problems that users would associate with bugs

[00:01:31] Jeremy: For people who aren't familiar with the term application performance monitoring, what is that compared to just error tracking?

[00:01:41] David: The way I always reason about it, and this is what I tell new hires and what I would tell my mother if I had to explain what I do, is: you load Uber and it crashes. We all know that's bad, right? That's error monitoring. We capture the crash report and send it to developers. You load Uber and it's a 30-second spinner, a loading indicator. As a customer, same outcome for me: I assume the app is broken. So we also know that's bad, but that's different than a crash. Sentry captures that same thing and sends it to developers. Lastly, the third example we use, which is a little more non-traditional: you load the Uber app and it's a blank screen, or there's no button to log in, or something like this. It's kind of broken, but it maybe isn't erroring, and it's not a slow thing. Same outcome: it's probably a bug of some sort; it's what an end user would describe as a bug.
So for me, APM just translates to: there are bugs, user-perceived bugs, in your application, and we're able to monitor and help the software teams sort of prioritize and resolve those concerns.

[00:02:42] Jeremy: Earlier you were talking about actual crashes, and then your second case is maybe more of, if the app is running slowly, then that's not necessarily a crash, but it's still something that an APM would monitor.

[00:02:57] David: Yeah. Yeah. And I, I think to be fair, APM, historically, it's not a very meaningful term. Like, when I was more of just an individual contributor, I would associate APM with, like, there's a dashboard that will tell me what's slow in my application, which it does. And that is kind of core to APM, but none of the traditional tools, pre-Sentry, would actually tell you why it's broken, like when there's an error, a crash. It was like most of those tools were kind of useless. And I don't know, I do actually know, but I'm gonna pretend I don't know about most people and just say for myself. But most of the time my problems are errors. They are not like, it's fast or slow, you know? And so we just think of it as like it's a holistic thing to say: when I've changed the application and something's broken, or it's a bug, you know, what is that bug? How do we help people fix it? And that comes from a lot of different, like, data signals and things like that. The end result is still the same. You either are gonna fix it or it's not important and you ignore it. I don't know. So it's a pretty straightforward premise for us. But again, most companies in the space, like the traditional company, is when you grow a big company, what happens is like you build one thing and then you build lots of check boxes to sell more things. And so I think a lot of the APM vendors, like, they've created a lot of different products. Like RUM is a good example of another acronym that lives with an APM.
And I would tell you RUM is completely meaningless. It stands for real user monitoring. And so I'm like, well, what's not real about monitoring the application? Well, nothing's not real, but like they created a new category because that's how marketing engines work. And that new category is more like analytics than it is like application telemetry. And it's only because they couldn't collect the application telemetry at the time. And so there's just a lot of fluff, I would say. But at the end of the day too, like developers or engineering teams, it's like: new version of the application, you broke something, let's tell you about it so you can fix it.

You might not need logging or performance monitoring

[00:04:40] Jeremy: And so earlier you were saying how this is a kind of logging, but there's also other companies, other products that are considered like logging infrastructure. Like I would think of companies like Papertrail or Logtail. So what space does Sentry fill that's different than that kind of logging?

[00:05:03] David: Um, so the way I always think about it, and this is both personally true and what I advise other folks, is when you're building something new, when you start from zero, right, you can often take Sentry, put it in, and that's good enough. You don't even need performance monitoring. You just need, like, errors, right? Like you're just causing bugs all the time. And you could do that with logging, but like the delta between error monitoring and logging is night and day. From a user experience, like, error monitoring for us, or what we built at the very least, aggregates the errors. It helps you understand the frequency. It helps you know when they're new versus old. It really gives you a lot of detail where logs don't, and so you don't need logging often. And I will tell you, today at Sentry, engineers do not use logs for the most part.
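The aggregation David describes, grouping repeated errors into one "issue" and counting frequency, can be sketched in a few lines. This is a hypothetical toy, not Sentry's actual grouping algorithm; the fingerprint here is just a hash of the exception type plus the function names in its stack trace:

```python
import hashlib
import traceback
from collections import defaultdict


class ErrorAggregator:
    """Toy error aggregator: groups exceptions by a fingerprint of the
    exception type plus the functions in its stack trace, so repeated
    occurrences count against one issue instead of flooding a log."""

    def __init__(self):
        self.issues = defaultdict(int)  # fingerprint -> occurrence count

    def fingerprint(self, exc):
        frames = traceback.extract_tb(exc.__traceback__)
        # Hash on type + function names, not the message, so
        # "user 7 not found" and "user 9 not found" group together.
        key = type(exc).__name__ + "|" + "|".join(f.name for f in frames)
        return hashlib.sha1(key.encode()).hexdigest()[:12]

    def capture(self, exc):
        fp = self.fingerprint(exc)
        self.issues[fp] += 1
        return fp
```

With this, two errors that differ only in their message land in the same bucket, which is the property that makes "new versus old" and frequency queries possible at all.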
Uh, I had a debate with one of our team members about it, like, why does he use logs recently? But you should not need them, because logs serve a different purpose. Like if you have traces, which tell you, like, fast and slow and a bunch of other network data, and you have this sort of crash report collection or error monitoring thing, logs become like a compliance or an audit trail or like a security forensics tool, and there's just not a lot of value that you would get out of them otherwise. Like once in a while, maybe there's like some weird obscure use case, but generally speaking, you can just pretend that you don't need logs most days. Um, and to me that's like an evolution of the industry. And so when Sentry was getting started, most people were still on logs. And if you go talk to SRE teams, they're like, oh, logging is what we know. Some of that's changed a little bit, but at the end of the day, they should only be needed for more complicated audit trails, because they're just not a good solution to the problem. It's just free-form data. Structured or not, doesn't really matter. It's not aggregated. It's not something that you can really use. And it's why whenever you see logging tools, um, not even the Papertrails of the world, but the bigger ones like Splunk or Kibana, it's like this weird, what we describe as choose your own adventure. Like, go have fun, build your dashboards and try to make the logs useful kind of story. Whereas like something like Sentry, it's just like, why would you waste any time trying to build dashboards when we can just tell you when something new is broken? Like, that's the ideal situation.

[00:06:59] Jeremy: So it sounds like maybe the distinction is, with a more general logging tool, like you mentioned Splunk and Kibana, it's a collection of all this information
of things happening, even though nothing's necessarily wrong. Whereas Sentry is, it's going to log things, but it's only going to log things if Sentry believes something is wrong, either because of a crash or because of some kind of performance issue.

People don't want to dig through logs or dashboards, they want to be told when something is wrong and why

Most software is built the same way, so we know common problems

[00:07:28] David: Yeah. I would say it's about, like, actionability, right? Like, nobody wants to spend their time digging through logs, digging through dashboards. Metrics are another good example of this. Like, just charts with metrics on them. Yeah, they tell me something's happening. If there's lots of log statements, they tell me something's going on, but they're not optimized to, like, help me solve a problem, right? And so our philosophy was always like, we haven't necessarily nailed this in all cases, for what it's worth, but it was like, the goal is we identify an actual problem, like close to like a root cause kind of problem, and we escalate that up, and that's it. Uh, versus asking somebody to, like, go have to build these dashboards, build these things, figure out what data matters and all this. Because most software looks exactly the same. Like if you have a web service, it doesn't matter what language it's written in, it doesn't matter how different you think your architecture is from somebody else's, they're all the same. It's like you've got a request, you've got a database, you've got some cache, you've got all these like known, known quantity things, and the slowness comes from the same places.

Errors are structured while logs are not

[00:08:25] David: The errors come from the same places. They're all exhibiting the same kinds of behavior. So logging is very unstructured. And what I mean by that is like there's no schema.
Like you can hypothetically, like, make it JSON, and everybody does that, but it's still unstructured. Whereas like errors, it's a tight schema. It's like there's a type of error, there's a message for the error, there's a stack trace, there's all these things that you know. Right. And as soon as you know and you define those things, you can just build better products. And so distributed tracing is similar. Hypothetically, it's a little bit abstract to be fair, but hypothetically, distributed tracing is creating a schema out of basically network annotations. And somebody will yell at me for just simplifying it to that. I would tell 'em that's what it is. But same goal in mind. If you know what the data is, you can take action on it. It's not quite entirely true, um, because tracing is much more free-form. For example, it doesn't say if you have a SQL statement, it should be like this, it should be formatted this way, things like that. Whereas like stack traces, there's a file name, there's a line number, there's like all these things, right? And so that's how I think about the delta between what is useful information and what isn't, I guess. And what allows you to actually build things like Sentry versus just build abstract exploration.

Inferring problems rather than having the user identify them

[00:09:36] Jeremy: Kind of paint the picture of how someone would get started with a tool like Sentry. Do they need to tell Sentry anything about their application? Do they need to modify their source code at all? Give us a picture of how that works.

[00:09:50] David: Yeah, like one of our fundamentals, which I think applies for any real business these days, is you've gotta, like, reduce user friction, right? Like you've gotta make it dead simple to use. Uh, and for us there was like kind of a fundamental driving constraint behind that.
So in many situations, um, APM vendors especially will require you to run an agent, basically like some kind of process that runs on your servers somewhere. Well, if you look at modern tech stacks, that doesn't really work, because I don't run the servers. Half my stuff's in the browser, or it's a mobile app or a desktop app. And even if I do have those servers, it's like an entirely different team that controls them. So deploying like a sidecar, an agent, is actually much more complicated. And so we looked at that, and also because, like, it's much easier to have control if you just ship within the application, we're like, okay, let's build like an SDK, a dependency that just injects into the application that runs, set an API key, and then you're done. And so what that translates to for Sentry is we spend a lot of time knowing what Django is or what Rails is or what Express is, like all these frameworks, and just knowing how to plug into the right signals in those frameworks. And then at that point, like, the user doesn't have to do anything. And so like the ideal outcome for Sentry is like you install the dependency in whatever language makes sense, right? You somehow configure the API key, and maybe there's a couple other minor settings you add, and that gives you the bare bones, and that's it. Like it should just work from there. Now there's a lot you can do on top of that to enrich data and whatnot, but for the most part, especially for errors, like, that's good enough. And that's always been a fundamental goal of ours. And I think we actually do it phenomenally well.

[00:11:23] Jeremy: So it sounds like it infers things about the application without manual configuration. Can you give some examples of the kind of things that Sentry knows without the user having to tell it?

[00:11:38] David: Yeah. So a good example. So on the errors side, we know literally everything, because an error object in each language has all these attributes with it.
It gives you the stack trace, it gives you a lot of these things. So that one's straightforward. On the performance side, we use a combination of leveraging some, like, open source, I guess, implementations, like OpenTelemetry, where it's got all this instrumentation already and we can just soak that in, um, as well as we automatically instrument a bunch of stuff. So for example, say you've got like a Python application and you're using, let's say, like SQLAlchemy or something. I don't actually know if this is how our SDK works right now, but we will build something that's aware of that library and make sure it can automatically instrument the things it needs to get the right information out of it. And to be fair, that's always been true for like APM vendors and stuff like that. The delta is, we've often gone a lot deeper. And so for Python, for example, you plug it into an application, we'll capture things like the error object, which is like exception class name, exception value, right? Stack trace, file name, line number, all those normal things, function name. We'll also collect source code. So we'll give you sort of surrounding source code blocks for each line in the stack trace, which makes it infinitely easier to consume. And then in Python and PHP, and I forget if we do this anywhere else right now, we'll actually even allow you to collect what are called stack locals. So it'll give you basically the variables that are defined, almost like a debugger. And that is actually, like, game changing from a development point of view. Because if I can go look in production when there's an incident or a bug and I can actually see the state of the application, I never need to know, like, oh, what was going on here? Oh, like, do I need to go reproduce this somehow? I always have the right information.
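The "stack locals" capture David describes is possible because a Python exception carries its traceback, and each traceback frame exposes its local variables. A simplified sketch of the idea (not the actual Sentry SDK code):

```python
def collect_stack_locals(exc):
    """Walk the traceback attached to an exception and snapshot the
    local variables in each frame, innermost frame last, roughly the
    debugger-like state described in the episode."""
    frames = []
    tb = exc.__traceback__
    while tb is not None:
        frame = tb.tb_frame
        frames.append({
            "function": frame.f_code.co_name,
            "lineno": tb.tb_lineno,
            # repr() the values so the snapshot is serializable and
            # doesn't keep live objects alive.
            "locals": {k: repr(v) for k, v in frame.f_locals.items()},
        })
        tb = tb.tb_next
    return frames
```

A real SDK also has to truncate large values and scrub secrets before shipping this off the box, but the core mechanism is just this traceback walk.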
And so all of that for us is automatic, and we only succeed, like, it's, it's like by definition inside of Sentry, it has to be automatic. Like if we ask the user to do anything whatsoever, we're failing. And so whenever we design any product or anything, and to be fair, this is how every product company should operate, it's gotta be with as little user input as humanly possible. And so you can't always pull that off. Sometimes you have to have users configure stuff, but the goal should always be no input.

Detecting errors through unhandled exceptions

[00:13:42] Jeremy: So you're talking about getting a stack trace, getting the state of variables, source code. That sounds like that's primarily gonna be through unhandled exceptions. Would you say that's the primary way that you get errors?

[00:13:58] David: Yeah, you can integrate in other ways. So you can, like, trigger our API to capture an, uh, an exception. You can also, for better or worse, it's not always good, you can integrate through logging adapters. So if you're already using a logging framework and you log their errors there, we can often capture those. However, I will say in most cases, people use the logging APIs wrong and the data becomes junk. A good example of this is, like, uh, it varies per language, so I'm just gonna go to Python, because Python is like sort of core to Sentry. Um, in Python you have the ability to log messages, you can log them as errors, you can log, like, actual error objects as errors. But what usually happens is somebody does a try-catch. They capture the error, they rescue from it, they create a logging call, like log.error or something, put the error message or value in there, and then they send that upstream. And what happens is the stack trace is gone, because we don't know that it's an error object. And so for example, in Python, there's actually a flag you pass to the logging call to make sure that stack trace stays present.
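The flag David is referring to in Python's standard library logging is `exc_info`. This snippet contrasts the lossy pattern he describes with the one that keeps the traceback attached to the record:

```python
import io
import logging

# Route log output to a string so we can inspect what each call produced.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
logger = logging.getLogger("exc-info-demo")
logger.addHandler(handler)
logger.setLevel(logging.ERROR)
logger.propagate = False  # keep the demo output out of the root logger

try:
    {}["missing"]
except KeyError:
    # Lossy: only a string survives; an aggregator has nothing to hash on.
    logger.error("lookup failed")
    # Lossless: exc_info=True attaches the exception and full traceback.
    logger.error("lookup failed", exc_info=True)

output = stream.getvalue()
```

Inside an `except` block you can also use `logger.exception("lookup failed")`, which is shorthand for an error-level call with `exc_info=True`.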
But if you don't know that, the data becomes junk all of a sudden. And if we don't have a stack trace, we can't actually aggregate data, because, like, there's just not enough information to, like, to run hashing on it. And so there are a lot of ways, I guess, to capture the information, but there are like good ways and there are bad ways, and I think it's in everybody's benefit when they design their app to, like, build some of these abstractions. And so like as an example, whenever I would start a new project these days, I will add some kind of helper function for me to, like, log an exception when I, like, try-catch, and then I can just plug in whatever I need later if I want to enrich the data, or if I wanna send that to Sentry manually, or send it to logs manually. And it just makes life a lot easier versus having to go back and, like, augment every single call in the code base.

[00:15:37] Jeremy: So it sounds like, when you're using a tool like Sentry, there's gonna be the unhandled exceptions, which are ones that you weren't expecting. So those should, I guess, happen without you catching them. And then the ones that you perhaps do anticipate, but you still consider to be a problem, you would catch that and then you would add some kind of logging statement to your code that talks to Sentry directly.

Finding issues like performance problems (N+1 queries) that are not explicit errors

[00:16:05] David: Potentially. Yeah. It becomes a personal choice, to be fair, at that point. But yeah, one of the ways we've been thinking about this lately, because we've been changing our error monitoring product to not just be about errors, so we call it issues, and that's in the guise of like, it's like an issue tracker, a bug tracker. And so we started putting what are effectively, like, almost like static analysis concerns inside of this issue tracker.
So for example, in our performance monitor, we'll do something called, like, detect N plus one queries, which is where you execute a duplicate query in a loop. It's not necessarily an error. It might not be causing a problem, but it could be causing a problem in the future. But it's like, you know, the qualities of it are not the same as an error. Like it's not necessarily causing the user to experience a bug. And so we've started thinking more about this, and this is the same as like logging errors that you handle. It's like, well, they're not really bugs. It's like expected behavior, but maybe you still want to keep it, like, tracked somewhere. And I think about, like, you know, linters and things like that, where it's like, well, I've got some things that I definitely should be fixing. Then I've got a bunch of other stuff that's like informing me that maybe I should take action on or not. But only I, the human, can really know at the end of the day, right, if I should prioritize that or not. And so that's how I kind of think about, like, if I'm gonna try-catch and then log. Yeah, you should probably collect that data. It's probably less important than, like, these other concerns, like an actual unhandled exception. But you do want to know that they're happening and whatnot. And so, I dunno, Sentry has not had a strong opinion on this historically. We're just like, send us whatever you want to capture in this regard, and you can pay for it, that's fine. It's like usage based, you know? We're starting to think a lot more about what should that look like if we go back to, like, what's the opinion we have for how you should use the product, or how you should solve these kinds of software problems.

[00:17:46] Jeremy: So you gave the example of detecting N plus one queries. Is that like being aware of the framework or the ORM the person is using, and that's how you're determining this?
Or is it at more of a lower level than that?

[00:18:03] David: It is, yeah. It's at the framework level. So this is actually where OpenTelemetry causes a lot of harm, uh, for us, because we need to know what a database query is. Uh, we need to know, like, the structure of the query, because we actually wanna parse it out in a lot of cases, 'cause we actually need to identify if it's duplicate, right? And we need to know that it's a database query, not a random annotation that you've added. Um, and so what we do is, within these traces, which is like, if you don't know what a trace is, it's basically just like, it's a tree, like a tree structure. So it's like A calls B, calls C, B also calls D and E, et cetera, right? And so you just, you know, it's a trace. Um, and so we actually just look at that trace data. We try to find these patterns, which is like, okay, B was a SQL query or something, and every single sibling of B is that same SQL query, but sort of removing certain parameters and stuff for the value. So we'll look at that data and we'll try to pull out anomalies. So N plus one is an example of like a fairly obvious anti-pattern that everybody knows is bad and can be optimized. Uh, but there's a lot of others that are a little bit more subjective. I'll give you an example. If you execute three SQL statements back to back, one could argue that you could just batch those SQL statements together. I would argue most of the time it doesn't matter and I don't need to do that. And also it's not guaranteed that that is better. So it becomes much more like, well, in my particular situation this is valuable, but in this other situation it might not be. And that's where I go back to, like, it's almost like a linter, you know? But we're trying to infer all of that from the data stream. So Sentry's kind of, we're kind of a backwards product company.
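The sibling-query heuristic described above, normalize each database span's query text and flag groups of siblings that share one normalized query, can be sketched like this. It is a deliberately simplified illustration: Sentry's real detector is more involved, and the one-line regex here stands in for an actual SQL normalizer:

```python
import re
from collections import Counter


def normalize_query(sql):
    # Replace literal values so "WHERE id = 1" and "WHERE id = 2"
    # compare equal. An illustrative regex, not a real SQL parser.
    return re.sub(r"('[^']*'|\b\d+\b)", "?", sql)


def detect_n_plus_one(spans, threshold=5):
    """spans: list of dicts with parent_span_id, op, and description.
    Flags normalized db queries repeated across siblings of one parent."""
    counts = Counter()
    for span in spans:
        if span["op"] == "db":
            key = (span["parent_span_id"], normalize_query(span["description"]))
            counts[key] += 1
    return [query for (_, query), n in counts.items() if n >= threshold]
```

The threshold matters: as David notes, three back-to-back statements are often fine, so a detector only fires once the repetition is clearly a loop rather than a judgment call.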
So we build our product from a technology vision, not from "customers want this" or we have this great product vision or anything like that. And so in our case, the technology vision is like, there's a lot of application data that comes in, a lot of telemetry, right? Errors, traces. We have a bunch of other streams now. Within that telemetry there is, like, signal. And so one, it's all structured data, so we know what it is, so we can actually interpret it. And then we can identify that signal that might be a problem. And that signal in our case is often going to translate to this issue concept. And then the goal is like, well, can we identify these problems for people and surface them, versus the choose your own adventure model, which is like, we'll just capture everything and feed it to the user and they can figure out what matters. Because again, a web service is a web service. A database is a database. They're all the same problems for everybody. And so that's kind of the model we've built and are continuing to evolve on, and so far it works pretty well to curate a lot of these workflows.

Want to infer everything, but there are challenges

[00:20:26] Jeremy: You talked a little bit about how people will sometimes use tracing. And in cases like that, they may need some kind of session ID to track somebody making a call to a service, and that talks to a database and that talks to other services. And you, inside of your application, you have to instrument some way of tracking: this all came from this one request. Is that something that Sentry can infer, or is there something that the developer has to put into play so that you can track that sort of thing?

[00:21:01] David: Yeah, so it's like a bit of both. And I would say our goal is that we can infer everything. The reality is there is so much complexity, and there's, like, too many technologies in the world.
Like I was complaining about this the other day. Like, the classic example on a web service is, if we have a middleware hook, we kind of know request, response, usually that's how middleware would work, right? And so we can infer a lot from there. Like basically we can infer the boundaries, which is a really big deal. Okay, that's one thing, boundaries is a problem. What we describe that as is a transaction. So like when the request starts, when the request ends, right? That's a very important boundary for everybody to understand, because when I'm working on the API, I care about the API boundary. I actually don't care about what the database is doing at its low level, or what the JavaScript application might be doing above it. I want my boundary. So that's one that we kind of can do. But it's hard in a lot of situations because of the way frameworks and technology have been designed. But at least traditional stuff, like a traditional web stack, it works, like a Rails app or a Django app or a PHP app kind of thing, right? And then within that it becomes, well, how do you actually build a trace versus just have a bunch of arbitrary labels? And so we have a bunch of complicated tech within each language that tries to establish that tree, and then we annotate a lot of things along the way. And so we will either leverage OpenTelemetry, which is an open format spec that ideally has very high quality data. Ideally, not realistically, but ideally it has high quality data, every library author implements it great, everybody's happy, we don't have to do anything ever again. The reality is that data is like all over the map, because there's not like strict requirements for how the data should be labeled and stuff. And not everything even has that data. Like not everything's instrumented with OpenTelemetry.
So we also have a bunch of stuff that, unrelated to using that, we'll say, okay, we know what this library is, we're gonna try to infer some characteristics from this library, or we know what maybe, like, the Django template engine is, so we're gonna try to infer, like, when the template renders, so you can capture that block of information. It is a very imperfect science, and I would tell you, like, even though OpenTelemetry is a very fun topic for people, it is not necessarily good. Like, it's not in a good state. Will it ever be good? I don't know, in all honesty, but like the data quality is like all over the map. And so that's honestly one of our biggest challenges to making this experience that, you know, tells you what's going on in your database, or tells you what's going on with the cache or things like this. It's like, I dunno, the cache might be called something completely random in one implementation and something totally different in another. And so it's a lot of, like, data normalization that you have to deal with. But for the most part, those libraries of things you don't control can and will be instrumented. Now the other interesting thing, which we'll see how this works out. So one thing Sentry tries to do there, we have all these layers of telemetry. So we have errors and traces, right? Those are pretty high level concepts. We also have profiling data, which is very, very, very low level. So it's usually only if you have, like, disk I/O. Like, it's: where is all the CPU time being spent in my application? Mostly not waiting. Like, waiting's usually like a network call, right? But it's like, okay, I have a loop that's doing a lot of math, or I'm writing a bunch of stuff to disk and that's really slow. Like, often those are not instrumented, or it's like these black box areas of performance.
And so what we're trying to do with profiling data, instead of just showing you flame charts and stuff, is actually say, could we fill in these gaps in these traces? Like basically like, hey, I've got a long period of time where the app's doing something. You know, here's an API call, here's the database stuff, but then there's this block. Okay, what's that function or something? Can we pull that out of the profiling data? And so in that case, again, that's just automatic, because the profiler actually knows everything about the application. It has full access to the function and the stack and everything, right? And so the dream is that you would just always have everything filled in, the customer never has to do anything, with one minor asterisk. And the asterisk is what I would call, like, business context. So a good example would be, you might wanna associate requests with a specific customer or something like that. Like you might wanna say, well, it's, uh, I don't know, Goldman Sachs or one of these big companies or something. So you can know, like, well, when Goldman Sachs is having performance issues or whatever it is, oh, maybe I should focus on them, 'cause maybe they pay you a lot of money or something. Right? Sentry would never know that at the end of the day. So we also have these, like, kind of tagging, contextual APIs that will say, like: tell us some information, maybe it's like customer, maybe it's something else that's relevant to your application, and we'll keep that data associated with the telemetry that's, like, present, you know. Um, but at least the telemetry, like, again, applications just work the same. There should be a day in the next few years that it's just all automatic. And again, the only challenge today is, like, can it be high quality and automatic? And so that's like to be determined.
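Sentry's Python SDK exposes helpers such as `sentry_sdk.set_tag` for exactly this business-context case. To keep the example self-contained, here is a hypothetical sketch of the underlying idea using `contextvars`, so tags set during one request stay scoped to that request and get stamped onto outgoing telemetry; this is an illustration, not the SDK's internals:

```python
import contextvars

# Each request/task gets its own tag map via contextvars, so tags set
# while handling one request don't leak into another request's telemetry.
_tags = contextvars.ContextVar("telemetry_tags", default=None)


def set_tag(key, value):
    """Record a piece of business context (e.g. which customer this is)."""
    current = dict(_tags.get() or {})
    current[key] = value
    _tags.set(current)


def attach_tags(event):
    """Merge the ambient tags into an outgoing telemetry event."""
    event = dict(event)
    event["tags"] = dict(_tags.get() or {})
    return event
```

A web framework integration would call `set_tag` early in request handling, and every error or span captured later in that request would pick the tags up automatically.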
[00:25:50] Jeremy: What you're kind of saying is the ideal is being able to look at this profiling information and be able to build a full picture of a call from beginning to end, all the different things it talked to. But I guess, what's the reality today? Like, what is Sentry able to determine in the world we live in right now?

[00:26:11] David: So we've done a lot of this, like, performance detection stuff already. So we actually can do a lot now. We put a lot of time into it, and I will tell you, if you look at other tools trying to do tracing, their approach is much more abstract. It's like your traditional monitoring tool that's like, we're just gonna collect a lot of signals and maybe we'll find magic anomaly detection or something going on in it, which, you know, props to them if they can figure that out. But a lot of what we've done is like, okay, we kind of know what this data looks like. Let's go after this very, like, known quantity problem. Let's normalize the data and let's make it happen. Like, that's today. Um, the enrichment of profiles is new for us, but we actually can already do it. It's not perfect.

Detection of blocking the UI thread in mobile apps

[00:26:49] David: Um, and I think we're launching something in April or May, something around that timeframe, where hopefully for the technologies we can instrument, we're actually able to surface that in a useful way. But as an example, that concept that I was talking about, like, with N plus one queries, the team built something using profiling data. And I think this might be for, like, a mobile app more so than anything, where mobile apps have this problem of, you've got a main thread, and if you block that main thread, the app is basically frozen. You see this on desktop apps all the time. You very rarely see it on web apps anymore.
But it's a really big problem when you have a mobile or desktop app, because you don't want that, like, thing to be non-responsive, right? And so one of the things they did was detect when you're doing, like, file I/O on the main thread, you know, right, when you're writing to disk, which is probably a slow thing or something like that, that's gonna block the whole thing, because you should just do it on a separate thread. It's like an easy fix. Potentially may not be a problem, but it could become a problem. Same thing as N plus one. But what's really interesting about it is what the team did. Like, they used the profiling data to detect it, because we already know threads and everything in there, and then they actually recreated a stack trace out of that profiling data when it's surfaced. So it's actually, like, useful data that I or you as a developer might know how to take and actually be like, oh, this is where it happens at the source code. I can actually figure it out and go fix it myself. And to me, like, as someone who's still very much in the weeds with software, that is like one of the biggest gaps to most things. It just doesn't make it easy to consume or, like, take action on, right? Like if I've got a chart that says my error rate is high, what am I gonna do with that? I'm like, okay, what's breaking? That's immediately my next question. Right? Okay, this is the error. Where is that error happening at? Again, my next question. It's literally just root cause analysis, right? Um, and so that to me is very exciting. And I don't know that we're the first people to do that, I'm not sure. But like, if we can make that kind of data that level of actionable and consumable, that's like a big deal for me. Because I will tell you, like, I have 20 years of software experience, I still hate flame charts and, like, I struggle to use them. Like, they're not a friendly visualization.
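The detector described here, flag main-thread profile samples whose stacks contain known blocking file I/O frames, and keep the stack so the developer sees where in their code the call originates, can be sketched very simply. This is a hypothetical reconstruction of the heuristic from the episode, not Sentry's implementation; the set of "blocking" function names is illustrative:

```python
# Illustrative set of frame names treated as blocking file I/O.
BLOCKING_IO = {"open", "read", "write", "fsync"}


def find_main_thread_io(samples):
    """samples: list of (thread_name, stack) pairs, where stack is a list
    of function names, outermost first. Returns the offending stacks so
    the finding can be surfaced as a recreated stack trace rather than
    just "your app is slow"."""
    findings = []
    for thread, stack in samples:
        if thread != "main":
            continue  # blocking I/O on worker threads is usually fine
        if any(frame in BLOCKING_IO for frame in stack):
            findings.append(stack)
    return findings
```

Keeping the whole stack (rather than just a count) is the point David emphasizes: the output names the app-level function (`save_state` below, in the test data) that should move off the main thread.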
They're almost like a, a hypothetically necessary evil. But I also think one where nobody said like, do we even need to use that? Do we need that to be like the way we operate? And so anyways, like I guess that's my long-winded way of saying like, I'm very excited for how we can leverage that data and change how it's used. [00:29:10] Jeremy: Yeah. So it sounds like in this example, both in the mobile app blocking the UI or the n plus one query, it's the Sentry, I suppose, SDK or instrumentation that's hooked inside of your application. There are certain behaviors that it knows are, are not like ideal, I guess, just based on people's prior experience. Like your own developers know that, hey, if you block the UI thread in this mobile application, then you're gonna have performance problems. And so that way, rather than just telling you, Hey, your app is slow, it can tell you your app is slow and it's because you're blocking the UI thread. Don't just aggregate metrics, the error tracker should have an opinion on what actual problems are [00:29:55] David: Exactly, and I, and I actually think, I don't know why so many people don't recognize this gap, because at the end of the day, like, I don't know, I don't need more people to tell me response times are bad or anything. I need you to have an opinion about what's good, because that's the only way. It's like math education, right? Like, yeah, you learn the basics, but you're not expected to, say, go to calc and then, like, do all the fundamentals. You're like, no, get a calculator and start simplifying the problem. Like, yeah, we're gonna teach you a few of these things so you understand it. We're gonna teach you how to use a calculator and then just use the calculator and then make it easier for everybody else. But we're also not teaching you how to build a calculator, because who cares? Like, that's not the purpose of it.
And so for me, this is like, we should be helping people sort of get to the finish line instead of making them run the entirety of the race over and over if they don't need to. I don't, I don't know if that's a good analogy, but that has been the biggest gap, I think, in so much of this software throughout the industry. And it's, it's, it's common everywhere. And there's no reason for that gap to exist these days. Like the technology's fine. And the technology's been fine for like 10 years. Like Sentry started in oh eight at this point. And I think there was only one other company I recall at the time that was doing anything that was even similar to like error monitoring, and Sentry, when we built it, we're just like, what if we just go deeper? What if we collect all this information that will help you debug the problem, instead of just storing it like a log aggregator or something kind of thing, so we can actually have an opinion about it. And I, I genuinely, it baffles me that more people do not think this way, because it was not a hard problem at the time. It's certainly not hard these days, but there's still very, I mean, a lot more people do it now. They've seen Sentry successful and there's a lot of similar implementations, but it's, it's just amazes me. It's like, why don't you, why don't people try to make the data more actionable and more useful to the teams versus just collecting more of it, you know? 40 people working on learning the common issues with languages and frameworks [00:31:41] Jeremy: It, it sounds like maybe the, the popularity of the stack the person is using or of the framework means that you're gonna have better insights, right? Like if somebody makes a, a Django application or a Rails application, there's all these lessons that your team has picked up in terms of, Hey, if you use the ORM this way, your application is gonna be slow.
Whereas if somebody builds something totally homegrown, you won't know these patterns and you won't be able to like help as much, basically. [00:32:18] David: Yeah. Yeah, that's exactly it, and, and you might think that that is a challenge, but then you look at how many employees exist at like large tech companies and it's, it's not that big of a deal. Like, you might even think collecting all the information for each, like, programming runtime or framework is a challenge. We have like 40 people that work on that and it's totally fine. Like, and, and so I think actually all these scale just fine. Um, but you do have to understand like the domain, right? And so the counter version of this is if you look at say like browser applications, like very rich, uh, single page application type experiences. It's not really obvious like what the opinions are. Like, like if, if you, and this is like real, like if you go to Sentry, it's, it's kind of slow. Like the app is kind of slow. Uh, we even make fun of ourselves for how slow it is, cuz it's a lot of JavaScript and stuff. If you ask somebody internally, Hey, how would we make, pick a page, fast? They're gonna have no clue. Like, even if they have like infinite domain experience, they're gonna be like, I'm not entirely sure. Because there's a lot of like moving parts and it's not even clear what, like, good is, right? Like we know n plus one is bad. So we can say not doing that is the better solution. And so if you have a JavaScript app, which is like where a lot of the slowness will come from is like the render times itself. Like how do you fix it? You, you can't actually build a product that tells you what to fix without knowing how to fix it, right? And so some of these newer and very fast moving targets are, are frankly very difficult for us. Um, and so that's one thing that I think is a challenge for the entire industry.
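The N+1 query pattern Jeremy and David keep returning to, and the shape-based detection a monitoring tool might do over collected queries, can be sketched like this. The normalizer is a toy (it only strips numeric literals), and real detectors like Sentry's also use span timing and parent/child span relationships; none of this is Sentry's actual implementation.

```python
# Hypothetical sketch: detect an N+1 pattern by grouping queries that
# share the same "shape" once literals are stripped out.
import re
from collections import Counter

def detect_n_plus_one(queries, threshold=5):
    """Return query shapes that repeat at least `threshold` times,
    which is what an N+1 loop over an ORM relation tends to produce."""
    def normalize(sql):
        # Replace numeric literals so per-row lookups collapse together.
        return re.sub(r"\b\d+\b", "?", sql)
    counts = Counter(normalize(q) for q in queries)
    return [shape for shape, n in counts.items() if n >= threshold]

# One query for the parent rows, then one query per row: the classic N+1.
queries = ["SELECT * FROM authors"] + [
    f"SELECT * FROM books WHERE author_id = {i}" for i in range(10)
]
print(detect_n_plus_one(queries))  # flags the repeated per-author lookup
```

The usual fix is a single eager-load query (a join or an `IN (...)` list) instead of the per-row loop, which is exactly the kind of opinionated suggestion David argues the tool should surface.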
And so, like, as an example, a lot of the browser folks have latched onto web vitals, which are just metrics that hopefully tell you something about the application, but they're not always actionable either. It'll be like, the idea with like web vitals is like, okay, time to interactive is an important metric. It's like how long until the page loads such that a user can do what they're probably there to do. Okay. Like abstractly, it makes sense to us, but like put into action, how do I optimize time to interactive? Don't block the page. That's one thing. I don't know. Defer assets, that's another thing. Okay. So you've gotta like, you've gotta build a technology that knows these assets could be deferred and aren't. Okay, which ones can be deferred? I don't know. Like, it, it, it's like such a deep rabbit hole. And then the problem is, six months from now, the tech will have completely changed, right? And it won't have like, necessarily solved some of these problems. It will just have changed, and they're now a completely different shape of problem. But still the same fundamental like user experience is the same, you know? Um, and to me that's like the biggest challenge in the industry right now, is that like dilemma of the browser at the end of the day. And so even from our end, we're like, okay, maybe we should step back, focus on servers again, focus on web services. Those are known quantities. We can do that really well. We can sort of change that to be better than it's been in the past and easier to consume with things like our n plus one detections. Um, and then take like a holistic, fresh look at browser and say, okay, now how would we solve this to make sure we can actually really latch onto the problems that like people have and, and we understand, right? And, you know, we'll see when we get there. I don't think any product does a great job these days for helping, uh, solve those problems.
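One way to make a metric like time to interactive actionable, in the "these assets could be deferred and aren't" sense David describes, is a static check over a page's script tags. This is a toy sketch with invented markup, and a real tool would need execution-order and inline-script context that a parser alone cannot see.

```python
# Hypothetical sketch: flag external <script> tags that block HTML
# parsing because they have neither `defer` nor `async` (module
# scripts are deferred by default, so they are skipped too).
from html.parser import HTMLParser

class BlockingScriptFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.blocking = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)  # boolean attrs like `defer` map to None
        if tag == "script" and "src" in attrs:
            if "defer" not in attrs and "async" not in attrs \
                    and attrs.get("type") != "module":
                self.blocking.append(attrs["src"])

page = """
<html><head>
<script src="/analytics.js"></script>
<script src="/app.js" defer></script>
<script src="/widget.js" type="module"></script>
</head></html>
"""
finder = BlockingScriptFinder()
finder.feed(page)
print(finder.blocking)  # ['/analytics.js']
```

Even this tiny check illustrates the rabbit hole: knowing a script *can* be deferred safely requires knowing what it does, which is exactly why "optimize time to interactive" is hard to turn into a concrete recommendation.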
But I think even without the, the products, like I said, like even our team would be like, fixing this is gonna take months, because it's gonna take months just to figure out exactly where the, the common bottlenecks are and all these other things within an application. And so I, I guess what I mean to say with that is there's a lot of opportunity, I think, with the moving landscape of technology. We can find a way, whether it's standardized or Sentry can find a way to make that data actionable, or it's something in between there. There are many ways to build things on the frontend with JavaScript which makes it harder to detect common problems compared to backend [00:35:52] Jeremy: So it sounds like what you're saying, with the, the back end, there's almost like a standard way of doing things, or a way that a lot of people do it the same way. Whereas on the front end, even if you're looking at a React application, you could look at ten React applications and they could all be doing state management a totally different way. They could be like, the, the way that the application is structured could be totally different, and that makes it difficult for you to infer sort of these standard patterns on the front end side. [00:36:32] David: Yeah, that's definitely true. And it, it goes, it's even worse than that, because well, one, there's just like the nature of JavaScript, which is asynchronous in the sense of like, it's a lot of callbacks and things like that. And so that already makes it hard to understand what's going on, uh, where things are happening. And then you have these abstractions like React, which are very good, but like they pull a lot of that away. And so, as an example of a common problem, you load the application, it has to do a lot of stuff to make the page render. You might call that hydration or whatever. Okay. And then there's a completely different state, which is going from, it's already hydrated.
Page one, I, I've done an interaction or something. Or maybe I've navigated to page two. That's an entirely different, like, sort of performance problem. But that hydration time, that's like a known thing. That's kind of like time to interactive, right? But if the problem is in your framework, which a lot of it is, like a lot of the problems today exist because of frameworks. Not because the technology's bad or the framework's bad, but just because it's abstracted and it's really hard to make it work in all these situations. It's complicated. And again, they have the same problem, where it's like changing nonstop. And so if the problem is the framework is somehow incorrectly re-rendering the page, as an example, and this came up recently for some big technology stack, it's re-rendering the page, that's a really bad problem for the, the customer, because it's making the, it's probably actually causing a lot of CPU seconds. This is why like your Chrome browser tabs are using so much memory and CPU, right? How do you fix that? Can you even fix that? Do you just say, I don't know, blame the technology? Is that the solution? Maybe that is right, but how would we even blame the technology? Like, that alone, just to identify why it's happening. And you need to know the why, right? Like, that is such a hard problem these days. And, and personally, I think the only solution is if the industry sort of almost like standardizes on a way to like, on a belief of how this should be optimized and how it should be measured and monitored kind of thing. Because like how errors work is like a standardization, effectively. It may not be like a formal like declaration of like, this is what an error is, but more or less they always have the same attributes, because we've all kind of understood that, like, those are the valuable things, right? Okay. I've got a server rendered application that has client interaction, which is sort of the current generation of the technology.
We need to standardize on what, like, that web request, like, response life cycle is, right? And what are the moving targets within there. And it just, to me, I, I honestly feel like a lot of what we use every day in technology is like beta. Right. And it's, I think it's one of the reasons why we're constantly always having to up, like upgrade and, and refactor and, and, and shift dependencies and things like that, because it is not, it's very much a prototype, right? It's a moving target, which I personally do not think is great for the industry, because like customers do not care. They do not care that you're using some technology that like needs a change every few months and things like that. Now it has improved things, to be fair. Like web applications are much more like interactive and responsive sometimes. Um, but it is a very hard problem, I think, for a lot of people in the world. [00:39:26] Jeremy: And, and when you refer to, to things feeling like beta, I suppose, are, are you referring to the frameworks people are using or the libraries they're using to support their front end development? I, I'm curious what you're, you're thinking there. [00:39:41] David: Um, I think it's everything. Even like the browser APIs are constantly shifting. It's, that's gotten a little bit better. But even, like, TypeScript and stuff, it's just like we're running like basically compilers to make all this code work. And, and so the, even that, they're constantly adding features just because they can, which means behaviors are constantly changing. But like, if you look at a real world example, like React is like the, the most dominant technology. It's very well designed for managing the DOM. It's basically just a rendering engine at the end of the day. It's like it's made to process updates to the DOM. Okay. Makes sense. But we've all learned that these massive single page applications where you build all your application logic and load it into a bundle is a problem.
Like, like, I don't know how big Sentry's bundle is, but it's multiple megs in size and it takes a little while for like a, even on fast fiber here in the Bay Area, it takes a, you know, several seconds for the UI to load. And that's not ideal. Like, it's like at some point half of us became okay with this. So we're like, okay, what we need to do is go back, literally just go back 10 years, and we need to render it on the server. And then we need some stuff that makes interactions, you know, highly responsive in the UI or dynamic content in the UI, you know, bring, it's like bringing back jQuery or something. And so we're kind of going full circle, but that is actually like very complicated, because the way people are trying to do it is like, okay, we wanna, we wanna have the rendering engine operate the same on the server as on the client, right? So it's like we just write one path of code that basically, it's like a template engine to some degree, right? And okay, that makes sense. Like we can all get behind that kind of model. But that is actually really hard to make work with a lot of people's software, and, and I think that's the challenge, and frameworks have adopted it, right? So they've taken this, so for example, it's like, uh, React Server Components, which is basically just like, can we render it on the server and then also keep that same interaction in the UI. But the problem is like frameworks take that, they abstract it, and so it's another layer of complexity on something that is already enormously complex. And then they add their own flavor onto it, like their own opinions for maybe the way the world is going. And I will say like personally, I find those, those flavors to be very hard to adapt to, like, things that are tried and true or, importantly in this context, things that we know how to monitor and fix, right?
And so I, I don't know what, what the be-all end-all is, but my thesis on this is you need to treat the UI like a template engine, and that's it. Remove all like complexity behind it. And so if you think about that, the term I've labeled it as, which I did not come up with, I saw this from somebody at some point, is like, it's like your front end as a service. Like you need to take that application that renders on the server and the front end, and it's just an entirely different application, which is annoying. And it just calls your APIs, and that's how it gets the data it needs. So you're literally just treating it as if it's like a single page application that can't connect to your database. But the frameworks have not quite done that. And they're like, no, no, no. We'll connect to the database and we'll do all this stuff, but then it doesn't work, because you've got, like, it works this way on the back end and this way on the front end. Anyways, again, long-winded way of saying like, it's very complicated. I don't think the technology can solve it today. I think the technology has to change before these problems can actually genuinely become solvable. And that's why I think the whole thing is like a beta. It's like, it's very much like a moving target that eventually we'll get there, and it's definitely had value, but I don't know that, um, responsiveness for low latency connections is where the value has been created. You know, for like folks with bad internet in, say, remote Africa or something, like I'm sure the internet is not a very fun place for them to use these days. Some frontend code runs on the server and some in the browser which creates challenges [00:43:05] Jeremy: I guess one of the things you mentioned is there's this, almost like this split where you have the application running on the server.
It has its own set of rules, because it, like you said, has access to the database and it can do things that you can't do in the browser, and then you have to sort of run the same application in the browser, but it's not quite the same application, because it doesn't have access to the same things in the browser. So you have this weird disconnect, I suppose. [00:43:35] David: Yeah. Yeah. And, and, and then the challenge is, like, as a developer that's actually complicated for you from the experience point of view, cuz you have to know somehow, okay, these things are, these are actually running on the server and only on the server. And like, so I think the two biggest technologies that try to do this, um, or at least do it well enough, or the two that I've used, there might be some others, um, are Next.js and Remix, and they have very different takes on how to do this. But Remix is the one I used most recently, so I, I'll comment on that. But like, there's a, a way that you kind of say, well, this only runs on, I think, the client, as an example. And that helps you a little bit. You're like, okay, this is only gonna render on the client. I can, I actually can think about that and reason about that. But then there's this thing like, okay, sometimes this runs on the server, only this part runs on the server. And it's, it just becomes, like, the mental capacity to figure out what's going on and debug it is like so difficult. And that database problem is like the, the normal problem, right? Like, of like, I can only query the database on the server because I need secure credentials or something. Okay, I understand that as a developer, but I don't understand how to make sure the application is doing what I expect it to do, and how to fix it if something goes wrong. And that, that's why I think, I'm a, I'm a believer in constraints. The only way you make progress is you simplify problems. Like you just give up on solving the complicated thing and you make the problem simpler.
Right? And so for me, that's why I'm like, just take the database outta the equation. We can create APIs from the client, from the server, same security levels. Okay? Make it so it can only do that, and it has to be run as almost like a UI only thing. Now that creates complexity, cuz you have to run this other service, right? And, and like I personally do not wanna have to spin up a bunch of containers just to write like a simple like web application. But again, I, I think the problem has not been simplified yet for a lot of folks. Like React did this, to be fair. Um, it made it a lot easier to, to build UI that was responsive and, and just updated values when they changed, you know, which was a big deal for a long period of time. But I feel like everything after has not quite reached that, that area where it's simple, and even React is hard to debug when it doesn't do what you want it to do. So I don't know, there, there's some gaps, I guess, is what I would say. And hopefully, hopefully, you know, in the next five years we'll kind of see this come to completion, because it does feel like it's, it's getting closer to that compromise. You know, where like we used to have pure server rendered apps with some weird janky JavaScript on top. Now we've got this bridge of really complicated, you know, JavaScript on top, and the server apps are also complicated, and it's just, it's a nightmare. And then this newer generation of these frameworks that work for some types of technology, but not all. And, and we're kind of almost coming full circle to like server rendered, you know, everything, but with like allowing the same level of interactions that we've been desiring, I guess, on the web. So, and I, fingers crossed this gets better, but right now I do not see like a clear like, oh, it's definitely there. I can see it coming. I'm like, well, we're kind of making progress. I don't love being the beta tester of the whole thing, but we're kind of getting there.
And so, you know, we'll see. There are multiple ways to write mobile apps as well (Flutter, React Native, web views) [00:46:36] Jeremy: I guess you, you've been saying this whole shifting landscape of how front end works has made it difficult for Sentry to provide like automatic instrumentation and things like that for, for mobile apps. Is that a different story? Like is it pretty standardized in terms of how you instrument an Android app or an iOS app? [00:46:58] David: Sort of, but also no. Like, a good example here is like early days mobile, it's a native application. You ship a binary, known quantity, right? Or maybe you embedded a web browser, but like, that was like a very different thing. Okay. And then they did things where like, okay, more of it's like embedded web browser type stuff, or dynamically rendered content. So that's now a moving target. The current version of that, which, I'm not a mobile dev, so like people have strong opinions on both sides of this fence, but it's like, okay, do you use like a, a hybrid framework which allows you to build, say, uh, React Native, which like allows you to sort of write a JavaScript-ish thing and it runs on both Android and iOS, but not really well on either. Um, or do you write a native, native app, which is like a known quantity, but then you maintain like two code bases, have two degrees of expertise and stuff. Flutter's the same thing. So there's still that version of complexity that goes on within it. And I, I think people care less about mobile, cuz it impacts people less. Like, you know, there's that whole generation of like, oh, mobile's the future, everything's gonna be mobile. That did not become true. Uh, mobile's very important, but like we have desktops still. We use web software all the time, half the time on mobile we're just using the web software at the end of the day, so at least we know that's a thing. And I think, so I think that investment in mobile has died down some.
Um, but some companies, like, mobile is like their main experience or one of their driving experiences. Like a, like a company like DoorDash, mobile is as important as web, if not more, right? Because of like the types of customers. Spotify, probably same thing. But I don't know, Sentry, we don't need a mobile app, who cares? It's irrelevant to the problem space, right? And so I, I think it's just not quite taken on. And so mobile is still like this secondary citizen at a lot of companies, and I think the evolution of it has been like complicated. And so I, I think a lot of the problems are known, but maybe people care less, or there's just less customers. And so the weight doesn't, like, the weight is wildly different. Like JavaScript's probably like a hundred times the size from an investment point of view for everyone in the world than say mobile applications are, is how I would think about it. And so whether mobile is or isn't solved is almost irrelevant to the, the, the like general problem at hand. And I think at the very least, like mobile applications, there's like, there's like a tool chain where you can debug a lot of stuff that works fairly well and hasn't changed over the years, whereas like the web, you have like browser tools, but that's about it. So. Mobile apps can have large binaries or pull in lots of dependencies at runtime [00:49:16] Jeremy: So I guess with mobile, um, I was initially thinking of native apps, but you're, you're bringing up that there's actually people who would make a native app that's just a web view for a webpage, or there's React Native or there's Flutter, so there's actually, it really isn't standard how to make a mobile app. [00:49:36] David: Yeah. And even within those, it comes back to like, okay, is it now the same problem where we're loading in a bunch of JavaScript or downloading a bunch of JavaScript and content remotely and stuff?
And like, you'll see this when you install a mobile app, and sometimes the binaries are huge, right? Sometimes they're really small, and then you load it up and it's downloading like several gigs of data and stuff, right? And those are completely different patterns. And even within those like subsets, I'm sure the implementations are wildly different, right? And so, you know, I, that may not be the same as like the runtime kind of changing, but I remember there was this, uh, this must be a decade ago. I, I used, I still am a gamer. Um, early in my career I worked a lot with like games like World of Warcraft and stuff, and I remember when games started launching progressive loading, where it's like you could download a small chunk of the game and actually start playing, and maybe the textures were lower, uh, like resolution and everything was lower fidelity, and, and you could only go so far until the game fully installed. But like, imagine like if you're like focused on performance or something like that, measuring it there is completely different than measuring it once, say, everything's installed, you know? And so I think those often become very complex use cases. And I think that used to be like an extreme edge case, that was like such a, a hyper-specific optimization for like World of Warcraft, which is like one of the biggest games of all time, that it made sense, you know, okay, whatever. They can build their own custom tooling and figure it out from there. And now we've taken that degree of complexity and tried to apply it to everything in the world. And it's like uh-oh, like nobody has the teams or the, the, the talent or the, the experience to necessarily debug a lot of these complicated problems. Just like Sentry, like.
If something's wrong in the React internals, it's like somebody might be able to figure it out, but it's gonna take us so much time to figure out what's going on, versus, oh, we're rendering some HTML. Cool. We understand how it works. It's, it's a known, known problem. We can debug it. Like there's nothing to even debug most of the time. Right. And so, I, I don't know, I think the industry has to get to a place where you can reason about the software, where you have the calculator, right. And you don't have to figure out how the calculator works. You just can trust that it's gonna work for you. How Sentry's stack has become more complex over time [00:51:35] Jeremy: So kind of shifting over a little bit to Sentry's internals. You, you said that Sentry started in, was it 2008 you said? [00:51:47] David: Uh, the open source project was in 2008. Yeah. [00:51:50] Jeremy: The stack that's used in Sentry has evolved. Like I remember that there was a period where I think you could run it with a pretty minimal stack, like I think it may have even supported SQLite. [00:52:02] David: Yeah. [00:52:03] Jeremy: And so it was something that people could run pretty easily on their own. But things have, have obviously changed a lot. And so I, I wonder if you could speak to sort of the evolution of that process. Like when do you decide like, Hey, this thing that I built in 2008 is, you know, not gonna cut it, and I really need to re-architect what this system is. [00:52:25] David: Yeah, so I don't know if that's actually the reality of why things have changed, that it's like, oh, this doesn't work anymore. We've definitely introduced complexity in the sense of like, probably the biggest shift for Sentry was like, it used to be everything was in a SQL database, and everything was kind of optional. I think half that was maintainable because it was mostly built by me.
I had the experience to figure it out and duct tape the right things. Um, so that was one thing. And I think eventually, you know, that doesn't scale as you're trying to do more and build more into the product. So there's some complexity there. But for the most part you can, it can still

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for June 13th, 2023 - Episode 198


Play Episode Listen Later Jun 13, 2023 43:38


2023-06-13 Weekly News - Episode 198
Watch the video version on YouTube at https://youtube.com/live/r1L8Aec5-mk?feature=share

Hosts:
Gavin Pickin - Senior Developer at Ortus Solutions
Grant Copley - Senior Developer at Ortus Solutions

Thanks to our Sponsor - Ortus Solutions
The makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there. A few ways to say thanks back to Ortus Solutions:
Like and subscribe to our videos on YouTube.
Help Ortus reach for the stars - star and fork our repos. Star all of your GitHub box dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github
Subscribe to our podcast on your podcast apps and leave us a review.
Sign up for a free or paid account on CFCasts, which is releasing new content every week.
BOXLife store: https://www.ortussolutions.com/about-us/shop
Buy Ortus's books: 102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips), and Learn Modern ColdFusion (CFML) in 100+ Minutes - free online at https://modern-cfml.ortusbooks.com/ or buy an ebook or paper copy at https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes
Patreon Support: We have 40 patrons: https://www.patreon.com/ortussolutions

News and Announcements
Ortus Training - ColdBox Zero to Hero: October 4th and 5th. Venue confirmation in progress - will be less than 2 miles from the Mirage. Registration will be open soon!
CF Camp Pre-Conference Workshop Discount: We can offer a 30% discount by using the code "OrtusPre30". Thank you for your ongoing support! https://www.eventbrite.com/e/cfcamp-pre-conference-workshops-by-ortus-solutions-tickets-641489421127

New Releases and Updates
ColdBox 6.9.0 Released: We are excited to announce the release of ColdBox 6.9.0 LTS, packed with new features, improvements, and bug fixes. In this version, we focused on enhancing the debugging capabilities, improving the ScheduledTasks module, and fixing an important issue related to RestHandler. Let's dive into the details of these updates. https://www.ortussolutions.com/blog/coldbox-690-released
Lucee 6 Beta 2: Following a long last few weeks of final development, testing and bug fixing, the Lucee team is really proud to present Lucee 6 BETA 2. https://dev.lucee.org/t/lucee-6-0-451-beta-2/12673
Lucee 6.0 Launch at @cf_camp

On part en prod
#4 - Mathias Bouvant - Ubisoft - Human management in the service of agility


Play Episode Listen Later Nov 11, 2022 114:38


For this 4th episode, I meet Mathias Bouvant, manager of a team of Ubisoft developers in Bordeaux.

1) In this episode, he talks about his career path:
- His start in computing with a ZX80 PC
- The specifics of the video game development sector inside a large company
- His team's mission: building microservices for Ubisoft games
- The culture of agility that drives him day to day and makes projects run more smoothly
- His role as manager of a technical team
So if you follow the podcast, it obviously talks tech. But we take a closer look at managing IT teams, Scrum, and the gaming sector.

2) Scrum, a vast and fascinating subject:
- How can the Scrum method really succeed (without wearing teams out)?
- Mathias's take: "Scrum has to be based on mutual support. If you start your sprint and we meet again in 2 weeks to look at what you've done, it's probably not going to work."
- He sees a real time saving on projects thanks to shorter cycles, and therefore iterative processes that keep getting more efficient.

3) How does his department work?
- Days start with the daily stand-up (15 minutes max)
- Code review: one review for every task
- Coding in pairs or even threes ("pair programming"): pairing an expert with someone less experienced is the best way to grow skills (and improve code quality along the way).

4) And his job as a manager?
- Very humble, Mathias has a deeply human view of management.
- He co-builds and challenges the development practices in place with his teams.
- He onboards newcomers and draws on their previous experience to feed the methodologies in place in his department (especially the agile mindset).
- One-on-ones: essential to limit frustration and help people progress.
- A problem with a team member? Ask questions and spot the points of friction. And above all: learn to give positive feedback so you don't crush the person! On the contrary: help them improve.

5) And where does code fit in all this?
- Clearly, coding is not what takes the most time! Code is only about 15 to 20% of the job...
- And the rest? Tests, tests and more tests! Code review is the longest task. So the watchword is: patience!

Thanks again Mathias for your time. Here we go with this new episode! Happy listening, and see you in 15 days!

P.S.: to learn even more about agility and Scrum, you can listen to episode 2 with Jean-Pierre Lambert, creator of the YouTube channel "Scrum Life".

▬▬▬▬▬▬▬▬▬▬

Support this great podcast dedicated to tech:
- Subscribe
- Leave a review and 5 ⭐ - Thank you!
- Sign up on On part en prod so you never miss an episode

Things mentioned in this episode:
- ZX80 PC
- Ubisoft
- Alcatel
- Pair/peer programming
- Shadow (a video game streaming box)
- Agile, Scrum
- Miro, Retromat, Google Stadia, Jira, Elasticsearch, Kibana, Grafana, Splunk, voice chat
- Video games mentioned: Assassin's Creed, Street Fighter 2, Rainbow Six, Just Dance
- The book "La règle ? Pas de règles !: Netflix et la culture de la réinvention" by Reed Hastings
- The book "Dream Team: Les meilleurs secrets des managers pour recruter et fidéliser votre équipe idéale" by Ludovic Girodon
- The series "Drôle" by Fanny Herrero: https://www.captainwatch.com/serie/157726/drole
- Deluxe: a band from Aix-en-Provence
- Ghost: a band

To follow Mathias:
- Mathias's LinkedIn: https://www.linkedin.com/in/mathias-bouvant-8346a6132/

▬▬▬▬▬▬▬▬▬▬

Audio post-production: Guillaume Lefebvre
Music by MADiRFAN from Pixabay

Screaming in the Cloud
ChaosSearch and the Evolving World of Data Analytics with Thomas Hazel


Play Episode Listen Later Oct 4, 2022 35:21


About Thomas
Thomas Hazel is Founder, CTO, and Chief Scientist of ChaosSearch. He is a serial entrepreneur at the forefront of communication, virtualization, and database technology and the inventor of ChaosSearch's patented IP. Thomas has also patented several other technologies in the areas of distributed algorithms, virtualization, and database science. He holds a Bachelor of Science in Computer Science from the University of New Hampshire, where he is a Hall of Fame Alumni Inductee, and founded both student and professional chapters of the Association for Computing Machinery (ACM).
Links Referenced: ChaosSearch: https://www.chaossearch.io/ Twitter: https://twitter.com/ChaosSearch Facebook: https://www.facebook.com/CHAOSSEARCH/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire drill at 4 in the morning. Software problems should drive innovation and collaboration, NOT stress, sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production but hide that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig.
That's snark.cloud/appconfig.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted episode is brought to us by our returning sponsor and friend, ChaosSearch. And once again, the fine folks at ChaosSearch have seen fit to basically subject their CTO and Founder, Thomas Hazel, to my slings and arrows. Thomas, thank you for joining me. It feels like it's been a hot minute since we last caught up.
Thomas: Yeah, Corey. Great to be on the program again. I think it's been almost a year. So, I look forward to these. They're fun, they're interesting, and you know, always a good time.
Corey: It's always fun to just take a look at companies' web pages in the Wayback Machine, archive.org, where you can see snapshots of them at various points in time. Usually, it feels like this is either used for long-gone things people want to remember from the internet of yesteryear, or alternately to deliver sick burns by retorting a "This you?" when someone winds up making an unpopular statement. One of the approaches I like to use it for, which is significantly less nefarious—usually—is looking back in time at companies' websites, just to see how the positioning of the product evolves over time. And ChaosSearch has had an interesting evolution in that direction. But before we get into that, assuming that there might actually be people listening who do not know the intimate details of exactly what it is you folks do: what is ChaosSearch, and what might you folks do?
Thomas: Yeah, well said, and I look forward to [laugh] doing the Wayback Time because some of our ideas, way back when, seemed crazy, but now they make a lot of sense. So, what ChaosSearch is all about is transforming customers' cloud object stores like Amazon S3 into an analytical database that supports search and SQL-type use cases. Now, where's that apply?
In log analytics, observability, security, security data lakes, operational data, particularly at scale, where you just stream your data into your data lake, connect our service, our SaaS service, to that lake, and automagically we index it and provide well-known APIs like Elasticsearch and integrate with Kibana or Grafana, and SQL APIs, something like, say, a Superset or Tableau or Looker, into your data. So, you stream it in and you get analytics out. And the key thing is the time, cost, and complexity that, as we all know, operational data at scale, like terabytes a day and up, causes challenges, and we all know how much it costs.
Corey: They certainly do. One of the things that I found interesting is that, as I've mentioned before, when I do consulting work at The Duckbill Group, we have absolutely no partners in the entire space. That includes AWS, incidentally. But it was easy in the beginning because I was well aware of what you folks were up to, and it was great when there was a use case that matched: you're spending an awful lot of money on Elasticsearch; consider perhaps migrating some of that—if it makes sense—to ChaosSearch. Ironically, when you started sponsoring some of my nonsense, that conversation got slightly trickier where I had to disclose, yeah, our media arm does have sponsorships going on with them, but that has no bearing on what I'm saying. And if they take their sponsorships away—please don't—then we would still be recommending them because it's the right answer, and it's what we would use if we were in your position. We receive no kickbacks or partner deal or any sort of reseller arrangement because it just clouds the whole conflict of interest perception. But you folks have been fantastic for a long time in a bunch of different ways.
Thomas: Well, you know, I would say that what you thought made a lot of sense made a lot of sense to us as well. So, the ChaosSearch idea just makes sense.
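For readers, the "well-known APIs" Thomas mentions are the standard Elasticsearch query DSL: a request that works against an Elastic cluster keeps the same shape against any Elasticsearch-compatible endpoint. A minimal sketch (the endpoint and index names are hypothetical, and the HTTP call itself is left out so the sketch runs without a live cluster):

```python
import json

# Hypothetical values for illustration only; the request body is plain
# Elasticsearch query DSL, which is the point: tooling that already emits
# this shape does not need to change.
ENDPOINT = "https://example.chaossearch.invalid/elastic"
INDEX = "app-logs"

def error_search(minutes_back: int = 60) -> dict:
    """Build a standard Elasticsearch bool query: errors in the last N minutes."""
    return {
        "query": {
            "bool": {
                "must": [{"match": {"level": "error"}}],
                "filter": [{"range": {"@timestamp": {"gte": f"now-{minutes_back}m"}}}],
            }
        },
        "size": 100,
    }

# A real client would POST json.dumps(error_search()) to
# f"{ENDPOINT}/{INDEX}/_search"; here we just show the body.
print(json.dumps(error_search(), indent=2))
```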
Now, you had to crack some code, solve some problems, invent some technology, and create some new architecture, but the idea that Elasticsearch is a useful solution with all the tooling, the visualization, the wonderful community around it, was a good place to start. But here's the problem: setting it up, scaling it out, keeping it up when things are happening, things going bump in the night. All those are real challenges, and one of them was just the storing of the data. Well, what if you could make S3 the back-end store? One hundred percent; no SSDs or HDDs. Makes a lot of sense. And then support the APIs that your tooling uses. So, it just made a lot of sense on what we were trying to do, just no one thought of it. Now, if you think about the Northstar you were talking about, you know, five, six years ago, when I said, transforming cloud storage into an analytical database for search and SQL, people thought that was crazy and mad. Well, now everyone's using cloud storage, everyone's using S3 as a data lake. That's not in question anymore. But it was a question five, six, you know, years ago. So, when we met up, you're like, "Well, that makes sense." It always made sense, but people either didn't think it was possible, or were worried, you know, I'll just try to set up an Elastic cluster and deal with it. Because that's what happens when you particularly deal with large-scale implementations. So, you know, to us, we would love the Elastic API, the tooling around it, but what we all know is the cost, the time, the complexity to manage it, to scale it out; you just almost want to pull your hair out. And so, that's where we come in: don't change what you do, just change how you do it.
Corey: Every once in a while, I'll talk to a client who's running an Amazon Elasticsearch cluster, and they have nothing but good things to say about it. Which, awesome.
On the one hand, part of me wishes that I had some of their secrets, but often what's happened is that they have this down to a science, they have a data lifecycle that's clearly defined and implemented, the cluster is relatively static, so resizes aren't really a thing, and it just works for their use cases. And in those scenarios, like, "Do you care about the bill?" "Not overly. We don't have to think about it." Great. Then why change? If there's no pain, you're not going to sell someone something, especially when, we're talking, this tends to be relatively smaller-scale as well. It's okay, great, they're spending $5,000 a month on it. It doesn't necessarily justify the engineering effort to move off. Now, when you start looking at this, and, "Huh, that's a quarter million bucks a month we're spending on this nonsense, and it goes down all the time," yeah, that's when it starts to be one of those logical areas to start picking apart and diving into. What's also muddied the waters since the last time we really went in-depth on any of this is that it used to be we would be talking about it exactly like we are right now, about how it's Elasticsearch-compatible. Technically, these days, we probably should be saying it is OpenSearch-compatible because of the trademark issues between Elastic and AWS and the schism of the OpenSearch fork of the Elasticsearch project. And now it feels like when you start putting random words in front of the word search, ChaosSearch fits right in. It feels like your star is rising.
Thomas: Yeah, no, well said. I appreciate that. You know, it's funny, when Elastic changed their license, we all didn't know what was going to happen. We knew something was going to happen, but we didn't know what. And Amazon, I say ironically, or, more importantly, decided they'll take up the mantle of keeping an open, free solution.
Now, obviously, they recommend running that in their cloud. Fair enough.
But I would say we don't hear as much Elastic replacement as we hear OpenSearch replacement with our solution, because of all the benefits that we talked about. The trigger points for when folks have an issue with the OpenSearch or Elastic stack are: it got too expensive, or it was changing so much and it was falling over, or the complexity of the schema changing, or all of the above. The pipelines were complex, particularly at scale. That's both for Elasticsearch as well as OpenSearch. And so, to us, we want either one to win, but we want to be the replacement because, you know, at scale is where we shine. But we have seen a real trend where we see less Elasticsearch and more OpenSearch, because the community is worried about the rules that were changed, right? You see it day in, day out, where you have a community that was built around open and fair and free, and because of business models not working or the big bad so-and-so taking advantage of it, there's a license change. And that's a trust change. And to us, we're following the OpenSearch path because it's still open. The 600-pound gorilla or 900-pound gorilla of Amazon. But they really held the mantle, saying, "We're going to stay open, we assume for as long as we know, and we'll follow that path. But again, at that scale, the time, the costs, we're here to help solve those problems." Again, whether it's on Amazon or, you know, Google, et cetera.
Corey: I want to go back to what I mentioned at the start of this with the Wayback Machine and looking at how things wound up unfolding in the fullness of time. The first time that it snapshotted your site was way back in the year 2018, which—
Thomas: Nice.
[laugh].
Corey: Some of us may remember, and at that point, like, I wasn't doing any work with you, and later in time I would make fun of you folks for this, but back then your brand name was in all caps, so I would periodically say things like this episode is sponsored by our friends at [loudly] CHAOSSEARCH.
Thomas: [laugh].
Corey: And once you stopped capitalizing it and that had faded from the common awareness, it just started to look like I had the inability to control the volume of my own voice. Which, fair, but generally not mid-sentence. So, I remember those early days, but the positioning of it was, "The future of log management and analytics," back in 2018. Skipping forward a year later, you changed this because apparently in 2019, the future was already here. And you were talking about, "Log search analytics, purpose-built for Amazon S3. Store everything, ask anything, all on your Amazon S3." Which is awesome. You were still—unfortunately—going by the all-caps thing, but by 2020, that wound up changing somewhat significantly. You were at that point positioning it as, "The data platform for scalable log analytics." Okay, it's clearly heading in a log direction, and that made a whole bunch of sense. And now today, you are, "The data lake platform for analytics at scale." So, good for you, first off. You found a voice?
Thomas: [laugh]. Well, you know, it's funny, as a product-minded person—I'll take my marketing hat off—we've been building the same solution with the same value points and benefits as we mentioned earlier, but the market resonates with different terminology. When we said something like, "Transforming your cloud object storage like S3 into an analytical database," people were just, like, blown away. Is that even possible? Right? And so, that got some eyes.
Corey: Oh, anything is a database if you hold it wrong. Absolutely.
Thomas: [laugh]. Yeah, yeah. And then, as you say, log analytics really resonated for a few years.
Data platform, you know, is broader because we do broader things. And now we see, over the last few years, observability, right? How do you fit in the observability viewpoint, the stack where log analytics is one aspect of it? Some of our customers use Grafana on us for that lens, and then for the analysis, alerting, dashboarding. You can say that Kibana covers the hunting aspect, the log aspects. So, you know, to us, we're going to put a message out there that resonates with what we're hearing from our customers. For instance, we hear things like, "I need a security data lake. I need that. I need to stream all my data. I need to have all the data, because what happens today, I now need to know a week, two weeks, 90 days out." We constantly hear, "I need at least 90 days of forensics on that data." And it happens time and time again. We hear in the observability stack where, "Hey, I love Datadog, but I can't afford it more than a week or two." Well, that's where we come in. And we either replace Datadog for the use cases that we support, or we're auxiliary to it. Sometimes they have an existing Grafana implementation, and then they store data in us for the long tail. That could be the scenario. So, to us, the message is around what resonates with our customers, but in the end, it's operational data. Whether you want to call it observability, log analytics, security analytics, or the security data lake, to us, it's just access to your data, all your data, all the time, and supporting the APIs and the tooling that you're using. And so, to me, it's the same product, but the market changes with messaging and requirements. And this is why we always felt that having a search and SQL platform is so key, because what you'll see in Elastic or OpenSearch is, "Well, I only support the Elastic API. I can't do correlations. I can't do this. I can't do that. I'm going to move it over to, say, maybe Athena, but not so much.
Maybe a Snowflake or something else."
Corey: "Well, Thomas, it's very simple. Once you learn our own purpose-built, domain-specific language, specifically for our product, well, why are you still sitting here? Go learn that thing." People aren't going to do that.
Thomas: And that's what we hear. It was funny, I won't say what the company was, a big banking company that we're talking to, and we hear time and time again, "I only want to do it via the Elastic tooling," or, "I only want to do it via the BI tooling." I hear it time and time again. Both of these people are in the same company.
Corey: And that's legitimate as well, because there's a bunch of pre-existing processes pointing at things, and we're not going to change 200 different applications and their data model just because you want to replace a back-end system. I also want to correct myself. I was one tab behind. This year's branding is slightly different: "Search and analyze unlimited log data in your cloud object storage." Which is, I really like the evolution on this.
Thomas: Yeah, yeah. And I love it. And what was interesting is the moving, the setting up, the doubling of your costs. Let's say you have—I mean, we deal with some big customers that have petabytes of data—doubling your petabytes means, if your Elastic environment is costing you tens of millions and then you put it into Snowflake, that's also going to be tens of millions. And with a solution like ours, you have really cost-effective storage, right? Your cloud storage: it's secure, it's reliable, it's elastic, and you attach Chaos to get the well-known APIs that your well-known tooling can analyze. So, to us, our evolution has really been being that end viewpoint we started with early, where the search and SQL is here today—and you know, in the future, we'll be coming out with more ML-type tooling—but we have two sides: we have the operational, security, observability. And a lot of the business side wants access to that data as well.
Maybe it's app data that they need to do analysis on for their shopping cart website, for instance.
Corey: The thing that I find curious is, the entire space has been iterating forward on trying to define observability, generally, as whatever people are already trying to sell, in many cases. And that has seemed to be a bit of a stumbling block for a lot of folks. I figured this out somewhat recently because I've built the free-for-everyone-to-use lasttweetinaws.com, Twitter threading client. That's deployed to 20 different AWS regions because the idea is that it should be snappy for people, no matter where they happen to be on the planet, and I use it for conferences when I travel, so great, let's get ahead of it. But that also means I've got 20 different sources of logs. And given that it's an omnibus Lambda function, it's very hard to correlate that to users, or user sessions, or even figure out where it's going. The problem I've had is, "Oh, well, this seems like something I could instrument to spray logs somewhere pretty easily, but I don't want to instrument it for 15 different observability vendors. Why don't I just use otel—or OpenTelemetry—and then tell that to throw whatever I care about to various vendors and do a bit of a bake-off?" The problem, of course, is that OpenTelemetry and Lambda seem to be pulling in absolutely the wrong directions. A lot.
Thomas: So, we see the same trend of otel coming out, and you know, this is another API that I'm sure we're going to go all-in on because it's getting more and more talked about. I won't say it's the standard, but I think it's trending that way, to all your points about needing to normalize a process. But as you mentioned, we also need to correlate across the data. And this is where, you know, there are times where search and hunting and alerting is awesome and wonderful and solves all your needs, and sometimes you need correlation.
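Correlation across log sources is, mechanically, a SQL join once both sides are queryable. A toy sketch using Python's built-in sqlite3 as a stand-in for any SQL-speaking log store (the tables and rows are invented for illustration):

```python
import sqlite3

# Two invented log tables, correlated on a shared request id. In practice the
# tables would be whatever your SQL-speaking log store exposes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE app_logs (request_id TEXT, level TEXT, message TEXT);
    CREATE TABLE audit_logs (request_id TEXT, user TEXT, action TEXT);
    INSERT INTO app_logs VALUES
        ('r-1', 'error', 'timeout talking to payments'),
        ('r-2', 'info',  'checkout complete');
    INSERT INTO audit_logs VALUES
        ('r-1', 'alice', 'POST /checkout'),
        ('r-2', 'bob',   'POST /checkout');
""")

# Which users hit errors? One join instead of a denormalization pipeline.
rows = conn.execute("""
    SELECT a.user, l.message
    FROM audit_logs AS a
    JOIN app_logs  AS l ON l.request_id = a.request_id
    WHERE l.level = 'error'
""").fetchall()
print(rows)  # [('alice', 'timeout talking to payments')]
```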
Imagine trying to denormalize all those logs, set up a pipeline, put it into some database, or just do a SELECT *, you know, join this to that to that, and get your answers. And so, I think OpenTelemetry and SQL and search all need to be played into one solution, or at least one capability, because if you're not doing that, you're creating some hodgepodge pipeline to move it around and ultimately get your questions answered. And if it takes weeks—maybe even months, depending on the scale—you may sometimes choose not to do it.
Corey: One other aspect that has always annoyed me about more or less every analytics company out there—and you folks are no exception to this—is the idea of charging per gigabyte ingested, because that inherently sets up a weird dichotomy of, well, this is costing a lot, so I should strive to log less. And that is sort of the exact opposite, not just of the direction you folks want customers to go in, but also of where customers themselves should be going. Where you diverge from an awful lot of those other companies, because of the nature of how you work, is that you don't charge them again for retention. And the idea that anything stored in ChaosSearch lives in your own S3 buckets, where you can set your own lifecycle policies and do whatever you want to do with them, is a phenomenal benefit, just because I've always had a dim view of short-lived retention periods around logs, especially around things like audit logs. And these days, I would consider getting rid of audit logging data and application logging data—especially if there's a correlation story—any sooner than three years feels like borderline malpractice.
And one of the key premises that if you don't have all the data, you're at risk, particularly in security—I mean, even audits. I mean, so many times our customers ask us, you know, “Hey, what was this going on? What was that go on?” And because we can so cost-effectively monitor our own service, we can provide that information for them. And we hear this time and time again.And retention is not a very sexy aspect, but it's so crucial. Anytime you look in problems with X solution or Y solution, it's the cost of the data. And this is something that we wanted to address, officially. And why do we make it so cost-effective and free after you ingest it was because we were using cloud storage. And it was just a great place to land the data cost-effective, securely.Now, with that said, there are two types of companies I've seen. Everybody needs at least 90 days. I see time and time again. Sure, maybe daily, in a weeks, they do a lot of their operation, but 90 days is where it lands. But there's also a bunch of companies that need it for years, for compliance, for audit reasons.And imagine trying to rehydrate, trying to rebuild—we have one customer—again I won't say who—has two petabytes of data that they rehydrate when they need it. And they say it's a nightmare. And it's growing. What if you just had it always alive, always accessible? Now, as we move from search to SQL, there are use cases where in the log world, they just want to pay upfront, fixed fee, this many dollars per terabyte, but as we get into the more ad hoc side of it, more and more folks are asking for, “Can I pay per query?”And so, you'll see coming out soon, about scenarios where we have a different pricing model. For logs, typically, you want to pay very consistent, you know, predetermined cost structure, but in the case of more security data lakes, where you want to go in the past and not really pay for something until you use it, that's going to be an option as well coming out soon. 
So, I would say you need both in the pricing models, but you need the data to have either side, right?
Corey: This episode is sponsored in part by our friends at ChaosSearch. You could run Elasticsearch or Elastic Cloud—or OpenSearch, as they're calling it now—or a self-hosted ELK stack. But why? ChaosSearch gives you the same API you've come to know and tolerate, along with unlimited data retention and no data movement. Just throw your data into S3 and proceed from there as you would expect. This is great for IT operations folks, for app performance monitoring, cybersecurity. If you're using Elasticsearch, consider not running Elasticsearch. They're also available now in the AWS marketplace, if you'd prefer not to go direct and have half of whatever you pay them count towards your EDP commitment. Discover what companies like Equifax, Armor Security, and Blackboard already have. To learn more, visit chaossearch.io and tell them I sent you just so you can see them facepalm, yet again.
Corey: You'd like to hope. I mean, you could always theoretically wind up just pulling what Ubiquiti apparently did—where this came out in an indictment that was unsealed against an insider—but apparently one of their employees wound up attempting to extort them—which, again, that's not their fault, to be clear—but what came out was that this person then wound up setting the CloudTrail audit log retention to one day, so there were no logs available. And then as a customer, I got an email from them saying there was no evidence that any customer data had been accessed. I mean, yeah, if you want, like, the world's most horrifyingly devilish best practice, go ahead and set your log retention to nothing, and then you too can confidently state that you have no evidence of anything untoward happening. Contrast this with what AWS did when there was a vulnerability reported in AWS Glue.
Their analysis of it stated explicitly, "We have looked at our audit logs going back to the launch of the service and have conclusively proven that the only time this has ever happened was by the security researcher who reported the vulnerability to us, in their own account." Yeah, one of those statements breeds an awful lot of confidence. The other one makes me think that you're basically being run by clowns.
Thomas: You know what? CloudTrail is such a crucial—particularly on Amazon, right—crucial service because of that. We see it time and time again. And the challenge of CloudTrail is that storing it for a long period of time is costly, and the messiness, the JSON complexity: every company struggles with it. And this is how uniquely we represent information: we can model it in all its permutations. But the key thing is we can store it forever, or you can store it forever. And time and time again, CloudTrail is a key aspect to correlate—to your question—correlate this happened to that. Or do an audit: two years ago, this happened. And I've got to tell you, to all our listeners out there: please store your CloudTrail data—ideally in ChaosSearch—because you're going to need it. Everyone always needs that. And I know it's hard. CloudTrail data is messy, nested JSON data that can explode; I get it. You know, there are tricks to do it manually, although quite painful. But CloudTrail—every one of our customers is indexing CloudTrail with us because of stories like that, as well as the correlation across what maybe their application log data is saying.
Corey: I really have never regretted having extra logs lying around, especially with, to be very direct, the almost ridiculously inexpensive storage classes that S3 offers, especially since you can wind up having some of the offline retrieval stuff as part of a lifecycle policy now with intelligent tiering.
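The lifecycle-policy approach Corey mentions is a single rule attached to the bucket, and the storage-class math is easy to sanity-check. A sketch (the rule dict matches the shape boto3's `put_bucket_lifecycle_configuration` expects, with an invented prefix, and is shown without calling AWS; prices are assumed list prices per GB-month and may be out of date):

```python
# Lifecycle rule sketch: move objects under logs/ to Glacier Deep Archive
# after 90 days. This dict is the shape boto3's
# s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=...)
# expects; the prefix is invented for illustration.
lifecycle = {
    "Rules": [{
        "ID": "logs-to-deep-archive",
        "Filter": {"Prefix": "logs/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 90, "StorageClass": "DEEP_ARCHIVE"}],
    }]
}

# Rough monthly cost per petabyte under assumed USD-per-GB-month list prices
# (check current S3 pricing; these are illustrative only).
PRICES = {"STANDARD": 0.023, "DEEP_ARCHIVE": 0.00099}
GB_PER_PB = 1024 * 1024

for storage_class, price in PRICES.items():
    print(storage_class, round(GB_PER_PB * price))
```

Under those assumed prices, a petabyte in Deep Archive works out to roughly a thousand dollars a month, versus tens of thousands in Standard.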
I'm a big believer in—again—Glacier Deep Archive, at the cost of $1,000 a month per petabyte, with, admittedly, up to 12 hours of retrieval latency. But still, for audit logs and stuff like that, why would I ever want to delete things ever again?
Thomas: You're exactly right. And we have a bunch of customers that do exactly that. And we automate the entire process with you. Obviously, it's your S3 account, but we can manage across those tiers. And it gets to a point where, why wouldn't you? It's so cost-effective. And the moments where you don't have that information, you're at risk, whether it's internal audits, or you're providing a service for somebody; it's critical data. With CloudTrail, it's critical data. And if you're not storing it and if you're not making it accessible through some tool like an Elastic API or Chaos, it's not worth it. I think, to your point about your story, it's epically not worth it.
Corey: It's really not. It's one of those areas where that is not a place to overly cost optimize. This is—I mean, we talked earlier about my business and perceptions of conflict of interest. There's a reason that I only ever charge fixed-fee and not a percentage of savings or whatnot, because, at some point, I'll be placed in a position of having to say nonsense like, "Do you really need all of these backups?" That doesn't make sense at that point. I do point out things like, you have hourly disk snapshots of your entire web fleet, which have no irreplaceable data on them, dating back five years. Maybe cleaning some of that up might be the right answer. The happy answer is somewhere in between those two, and it's a business decision around exactly where that line lies. But I'm a believer in never regretting having kept logs almost into perpetuity. Until and unless I start getting more or less pillaged by some particularly rapacious vendor that's, oh yeah, we're going to charge you not just for ingest, but also for retention.
And for how long you want to keep it, we're going to treat it like we're carving it into platinum tablets. No. Stop that.
Thomas: [laugh]. Well, you know, it's funny: when we first came out, we were hearing stories that vendors were telling customers why they didn't need their data, to your point, like, "Oh, you don't need that," or, "Don't worry about that." And time and time again, they said, "Well, turns out we did need that." You know, "Oh, don't index all your data because you just know what you know." And the problem is that life doesn't work out that way; business doesn't work out that way. And now what I see in the market is everyone's got tiering scenarios, but accessing that data takes some time. And these are all workarounds and bandaids to what, fundamentally, is: if you design an architecture and a solution in such a way, maybe it's just always hot; maybe it's just always available. Now, we talked about tiering off to something very, very cheap; then it's virtually free. But you know, those solutions, whether it's UltraWarm or this tiering that takes hours to rehydrate—hours—no one wants to live in that world, right? They just want to say, "Hey, on this date in this year, what was happening? Let me go look, and I want to do it now." And it has to be part of the exact same system that I was using already. I don't have to call up IT to say, "Hey, can you rehydrate this?" Or, "Can I go back to the archive and look at it?" Although, I guess, we're talking about archiving with your website viewing from days of old; I think that's kind of funny. I should do that more often myself.
Corey: I really wish that more companies would put themselves in the customers' shoes. And for what it's worth, periodically, I've spoken to a number of very happy ChaosSearch customers. I haven't spoken to any angry ones yet, which tells me you're either terrific at crisis comms, or the product itself functions as intended.
So, either way, excellent job. Now, which team of yours is doing that excellent job, of course, is going to depend on which one of those outcomes it is. But I'm pretty good at ferreting out stories on those things.

Thomas: Well, you know, it's funny, being a company that's driven by customer ask, it's so easy to build what the customer wants. And so, we really take every input of what the customer needs and wants—now, there are cases where we replace Splunk. They're the Cadillac, they have all the bells and whistles, and there are times where we'll say, "Listen, that's not what we're going to do. We're going to solve these problems in this vector." But they always keep on asking, right? You know, "I want this, I want that."

But most of the feedback we get is exactly what we should be building. People need their answers and how they get them. It's really helped us grow as a company, grow as a product. And I will say, ever since we went live now many, many years ago, all of our roadmap—other than our North Star of transforming cloud storage into a search and SQL big data analytics database—has been customer-driven, market- and customer-driven: what our customers are asking for, whether it's observability and integrating with Grafana and Kibana or, you know, security data lakes. It's just a huge theme that we're going to make sure that we provide a solution that meets those needs.

So, I love when customers ask for stuff because the product just gets better. I mean, yeah, sometimes you have to have a thick skin, like, "Why don't you have this?" Or, "Why don't you have that?" Or we have customers—and not to complain about customers; I love our customers—but they sometimes do crazy things that we have to help them un-crazy-ify. [laugh]. I'll leave it at that. But customers do silly things and you have to help them out. I hope they remember that, so when they ask for a feature that maybe takes a month to make available, they're patient with us.

Corey: We sure can hope.
I really want to thank you for taking so much time to once again suffer all of my criticisms, slings and arrows, blithe market observations, et cetera, et cetera. If people want to learn more, where's the best place to find you?

Thomas: Well, of course, chaossearch.io. There's tons of material about what we do, use cases, case studies; we just published a big case study with Equifax recently. We're in Gartner and a whole bunch of Hype Cycles that you can pull down to see how we fit in the market.

Reach out to us. You can set up a trial, kick the tires, again, on your cloud storage like S3. And we're ChaosSearch on Twitter, we have a Facebook, we have all the classic social media. But our website is really where all the good content is, whether you want to learn about the architecture and how we've done it, and use cases; people who want to say, "Hey, I have a problem. How do you solve it? How do I learn more?"

Corey: And we will, of course, put links to that in the show notes. For my own purposes, you could also just search for the term ChaosSearch in your email inbox and find one of their sponsored ads in my newsletter and click that link, but that's a little self-serving. I'm kidding. I'm kidding. There's no need to do that. That is not how we ever evaluate these things. But it is funny to tell that story. Thomas, thank you so much for your time. As always, it's appreciated.

Thomas: Corey Quinn, I truly enjoyed this time. And I look forward to the upcoming re:Invent. I'm assuming it's going to be live like last year, and this is where we have a lot of fun with the community.

Corey: Oh, I have no doubt that we're about to go through that particular path very soon. Thank you. It's been an absolute pleasure.

Thomas: Thank you.

Corey: Thomas Hazel, CTO and Founder of ChaosSearch. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that I will then set to have a retention period of one day, and then go on to claim that I have received no negative feedback.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

Streaming Audio: a Confluent podcast about Apache Kafka
Real-Time Stream Processing, Monitoring, and Analytics With Apache Kafka

Streaming Audio: a Confluent podcast about Apache Kafka

Play Episode Listen Later Sep 15, 2022 34:07 Transcription Available


Processing real-time event streams enables countless use cases big and small. With a day job designing and building highly available distributed data systems, Simon Aubury (Principal Data Engineer, Thoughtworks) believes stream-processing thinking can be applied to any stream of events. In this episode, Simon shares his Confluent Hackathon '22 winning project—a wildlife monitoring system to observe population trends over time using a Raspberry Pi, along with Apache Kafka®, Kafka Connect, ksqlDB, TensorFlow Lite, and Kibana. He used the system to count animals in his Australian backyard and perform trend analysis on the results. Simon also shares ideas on how you can use these same technologies to help with other real-world challenges.

Open-source object detection models for TensorFlow, which appropriately are collected into "model zoos," meant that Simon didn't have to provide his own object identification as part of the project, which would have made it untenable. Instead, he was able to use the open-source models, which are essentially neural nets pretrained on relevant data sets—in his case, backyard animals.

Simon's system, which consists of around 200 lines of code, employs a Kafka producer running a while loop, which connects to a camera feed using a Python library. For each frame brought down, object masking is applied in order to crop and reduce pixel density, and then the frame is compared against the models mentioned above. A Python dictionary containing probable found objects is sent to a Kafka broker for processing; the images themselves aren't sent. (Note that Simon's system is also capable of alerting if a specific, rare animal is detected.) On the broker, Simon uses ksqlDB and windowing to smooth the data in case the frames were inconsistent for some reason (it may look back over thirty seconds, for example, and find the highest number of animals per type).
Finally, the data is sent to a Kibana dashboard for analysis through a Kafka Connect sink connector. Simon's system is an extremely low-cost system that can simulate the behaviors of more expensive, proprietary systems, and the concepts can easily be applied to many other use cases. For example, you could use it to estimate traffic at a shopping mall to gauge optimal opening hours, or to monitor the queue at a coffee shop, counting both queued patrons as well as impatient patrons who decide to leave because the queue is too long.

EPISODE LINKS
- Real-Time Wildlife Monitoring with Apache Kafka
- Wildlife Monitoring Github
- ksqlDB Fundamentals: How Apache Kafka, SQL, and ksqlDB Work Together
- Event-Driven Architecture - Common Mistakes and Valuable Lessons
- Watch the video version of this podcast
- Kris Jenkins' Twitter
- Join the Confluent Community
- Learn more on Confluent Developer
- Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
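The thirty-second smoothing window Simon implements in ksqlDB can be sketched in pure Python. The record shape below (a timestamp plus a dict of detected animal counts) is an assumption for illustration, not taken from his actual code, which does this server-side with a windowed MAX.

```python
from collections import defaultdict

def smooth_counts(detections, window_seconds=30):
    """Group per-frame detection records into tumbling windows and keep,
    per window, the highest count seen for each animal type, mimicking
    the ksqlDB windowed aggregation described in the episode."""
    windows = defaultdict(lambda: defaultdict(int))
    for record in detections:
        # Align the frame's timestamp to the start of its tumbling window.
        window_start = record["ts"] - (record["ts"] % window_seconds)
        for animal, count in record["counts"].items():
            if count > windows[window_start][animal]:
                windows[window_start][animal] = count
    return {start: dict(counts) for start, counts in sorted(windows.items())}

# Hypothetical per-frame records, as a Kafka consumer might see them.
frames = [
    {"ts": 0, "counts": {"kookaburra": 1}},
    {"ts": 12, "counts": {"kookaburra": 2, "possum": 1}},
    {"ts": 29, "counts": {"kookaburra": 1}},
    {"ts": 35, "counts": {"possum": 2}},
]
print(smooth_counts(frames))
# {0: {'kookaburra': 2, 'possum': 1}, 30: {'possum': 2}}
```

Taking the per-window maximum is what irons out frames where the detector briefly misses an animal, which is exactly the inconsistency the episode mentions.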

The Kim Doyal Show
Build in Public & Grow with Twitter: Interview with Kevon Cheung FTH 098

The Kim Doyal Show

Play Episode Listen Later Aug 8, 2022 58:37


Kim Doyal 0:01
Welcome to F the Hustle. I'm your host, Kim Doyal. You want a life that is meaningful and exciting. In this podcast, we're going to talk about launching and growing an online business that fits your lifestyle. F the Hustle is all about doing good work, building real relationships, and most importantly, creating a business that supports how you want to live your life. You don't have to sacrifice the quality of your life today to create something that sets your soul on fire. And yes, that includes making a lot of money. So we'll be talking about selling, charging what you're worth, and how earning more means helping more people. My goal is to help you find freedom and create a business on your terms. Hey, what's going on everybody? Welcome back to another episode of F the Hustle with Kim Doyal. I am your host, Kim Doyal. I'm really excited today because, I swear to God, I feel like my guest is a good friend, and we've known each other like two months or something. I feel like it's been a long time coming, but it hasn't; we met a few months ago. My guest is Kevon Cheung. Did I say your name correctly?

Unknown Speaker 1:04
That's correct. Very good.

Kim Doyal 1:06
Okay, I was like, you know, it's funny, I tend to do this. And I'm like, Kim, you need to clarify this before you actually get on the interview. But anyway, Kevon and I connected through Twitter. And I just kind of fell in love with his content and what he was doing. I signed up for his free email course, which he's going to talk about, along with everything else he's doing. And one of the best things, I just love this, is in his follow-up sequence he said, hit reply and tell me. He said, I reply to every email. And he did. And I just thought, this is friggin' brilliant. I shared what he was doing. It was just a real fun engagement. And so, Kevon, thank you for being here today.

Unknown Speaker 1:45
Yeah, thank you, Kim, for having me here.
Seriously, I reply to 100% of my emails. But sometimes I'm, like, seven days late, 14 days late. Like today, I was replying to emails 14 days late. But late is better than never showing up, right? So that's my way of doing things.

Kim Doyal 2:04
Oh, absolutely. And you know, it's funny. I obviously love email. I do so much with email. It's still kind of my preferred choice of communication. But I like to get into conversations with people. I think it's fantastic. So, all right. We're going to talk about everything. I love starting with the backstory. And you do this full time now; you're a full-time creator. And we should clarify for people that our time zones are a little bit off. It's eight o'clock in Costa Rica. Where are you, and what time is it for you right now?

Unknown Speaker 2:33
Well, I am based in Hong Kong; it's 10pm over here. But if you ask me, I am living in my computer right now, because most of my friends are actually online. I just feel more connected to people like you, who are doing similar things, where we're passionate about what we do. And it's hard to find that locally, honestly.

Kim Doyal 2:56
Oh, you know, it's crazy. I'm from Northern California, San Francisco Bay Area, and I was out in the suburbs. So it felt very difficult for a long time. Like, nobody gets what I do. Nobody understands.

Unknown Speaker 3:09
I guess I feel the same way.

Kim Doyal 3:12
Yeah, absolutely. And I'm a big believer that online friends are friends. So, how long? I'd love to hear your backstory. Like I said, what got you into doing this? You know, for a lot of people, maybe it's just a desire to quit a job, whatever. But how did you start your online journey? What were you doing before?

Unknown Speaker 3:30
So you know, the kind of life-changing point for that was 20 months ago. I felt like a nobody. And I will tell you why. Because I have been in startups all my career for nine

ITOps, DevOps, AIOps - All Things Ops
Ep 7 - When solving the issue becomes an issue - with Elastic's Philipp Krenn

ITOps, DevOps, AIOps - All Things Ops

Play Episode Listen Later Jul 12, 2022 48:49


Some products solve a huge issue for their users. But sometimes, it is this big innovation that stands in the way of users adopting newer features. The product falls victim to its own success. Philipp Krenn, EMEA Team Lead at Elastic, shares how they dealt with it, and talks with Elias about Elastic's experience.

What's in it for you:
1. Why innovative products can fall victim to their own success
2. What changing licensing on Elastic and abandoning open source changed in Elastic's community
3. What the drivers are behind adoption of cloud delivery for Elastic
4. Philipp's thoughts on the benefits and trade-offs of observability players expanding into the security domain

About Philipp:
Philipp lives to demo interesting technology. Having worked as a web, infrastructure, and database engineer for over ten years, Philipp is now a developer advocate and EMEA team lead at Elastic — the company behind the Elastic Stack consisting of Elasticsearch, Kibana, Beats, and Logstash. Based in Vienna, Austria, he is constantly traveling Europe and beyond to speak and discuss open source software, search, databases, infrastructure, and security.

Find Philipp on LinkedIn: https://www.linkedin.com/in/philippkrenn/
Find him on Twitter: @xeraa
Find him on GitHub: https://github.com/xeraa

About Elastic:
From the early days of Elasticsearch to how the ELK Stack came to be, a period of awesome (but chaotic) development, the introduction of the Elastic Stack, and a new era of search-based solutions for enterprise search, observability, and security. There's a lot of goodness to unpack around Elastic.

Website: https://www.elastic.co/
Industry: Analytics, Cloud Computing, Open Source, SaaS, Search Engine, Software
Company size: 1001-5000
Headquarters: San Francisco Bay Area, Silicon Valley, West Coast
Founded: 2012

About the host Elias:
Elias is Director of International and Indirect Business at tribe29.
He comes from a strategy consulting background, but has been an entrepreneur for the better part of the last 10 years. In his spare time, he likes to do triathlons. Get in touch with Elias via LinkedIn or email elias.voelker@tribe29.com

Podcast Music:
Music by Ströme, used by permission. 'Panta Rhei' written by Mario Schoenhofer. (c)+(p) 2022, Compost Medien GmbH & Co KG
www.stroeme.com
https://compost-rec.com/

OpenObservability Talks
OpenSearch 2.0 and beyond with Eli - OpenObservability Talks E2E11

OpenObservability Talks

Play Episode Listen Later Apr 28, 2022 61:12


OpenSearch is a community-driven, open-source search and analytics suite derived from Apache 2.0 licensed Elasticsearch 7.10.2 & Kibana 7.10.2. The OpenSearch project started just over a year ago and is now the open-source alternative to ELK, which is no longer open source. The team has spent much of the last year getting the project going, but there was innovation as well. We will cover and discuss what OpenSearch has accomplished, but more importantly what's coming next, including a big 2.0 release. We are joined in this episode by Eli Fisher, who is the product lead at AWS, working on the OpenSearch project. He'll dive into recent launches, including several observability features, and innovations planned for 2.0 and beyond.    The podcast episodes are available for listening on your favorite podcast app and on this YouTube channel.   We live-stream the episodes, and you're welcome to join the stream here on YouTube Live or at https://www.twitch.tv/openobservability​.   

Azure Friday (HD) - Channel 9
Add rich search experiences to your applications in Azure with Elastic

Azure Friday (HD) - Channel 9

Play Episode Listen Later Feb 15, 2022


Isaac Levin from Elastic joins Scott Hanselman to discuss Elastic Cloud on Azure. Elastic Cloud is an Elasticsearch and Kibana managed service, with solutions for enterprise search, observability, and security. Running Elastic on Azure enables you to take data from any source, reliably and securely, in any format, then search, analyze, and visualize that data in real time. Elastic on Azure users experience frictionless integration directly within the Azure portal, allowing for faster time to market. With deployment models to meet your unique use case, you'll gain the speed, scale, and relevance you need to react quickly to support your rapidly evolving business needs.

Chapters
00:00 - Introduction
01:04 - Getting started with Elasticsearch
04:05 - Enterprise search
05:10 - App Search: Engines
06:06 - App Search: Analytics
06:58 - App Search: Web crawler
08:16 - App Search: Search UI
10:17 - App Search: Relevance tuning
12:13 - App Search: Synonyms
14:56 - App Search: Curations
17:15 - Wrap-up

Recommended resources
Elastic on Azure
Elastic Enterprise Search
Elastic Search UI
Create a free account

Connect
Scott Hanselman | Twitter: @shanselman
Isaac Levin | Twitter: @isaacrlevin
Elastic | Twitter: @elastic
Azure Friday | Twitter: @azurefriday

Azure Friday (Audio) - Channel 9
Add rich search experiences to your applications in Azure with Elastic

Azure Friday (Audio) - Channel 9

Play Episode Listen Later Feb 15, 2022


Isaac Levin from Elastic joins Scott Hanselman to discuss Elastic Cloud on Azure. Elastic Cloud is an Elasticsearch and Kibana managed service, with solutions for enterprise search, observability, and security. Running Elastic on Azure enables you to take data from any source, reliably and securely, in any format, then search, analyze, and visualize that data in real time. Elastic on Azure users experience frictionless integration directly within the Azure portal, allowing for faster time to market. With deployment models to meet your unique use case, you'll gain the speed, scale, and relevance you need to react quickly to support your rapidly evolving business needs.

Chapters
00:00 - Introduction
01:04 - Getting started with Elasticsearch
04:05 - Enterprise search
05:10 - App Search: Engines
06:06 - App Search: Analytics
06:58 - App Search: Web crawler
08:16 - App Search: Search UI
10:17 - App Search: Relevance tuning
12:13 - App Search: Synonyms
14:56 - App Search: Curations
17:15 - Wrap-up

Recommended resources
Elastic on Azure
Elastic Enterprise Search
Elastic Search UI
Create a free account

Connect
Scott Hanselman | Twitter: @shanselman
Isaac Levin | Twitter: @isaacrlevin
Elastic | Twitter: @elastic
Azure Friday | Twitter: @azurefriday

The Azure Podcast
Episode 409 - Azure Service Connector

The Azure Podcast

Play Episode Listen Later Jan 27, 2022


Xin Shi, an Azure PM focused on the Developer Experience, tells us about the Service Connector service in Azure, which makes it easy for developers to ensure apps have all the right connectivity and security in place to access their Azure resources.

Media file: https://azpodcast.blob.core.windows.net/episodes/Episode409.mp3
YouTube: https://youtu.be/odJubQN6SJ8
Resources: https://aka.ms/service-connector

Other updates:
http://azure.microsoft.com/en-us/updates/public-preview-azure-static-web-apps-enterprisegrade-edge/
General availability: Ultra disks support on AKS | Azure updates | Microsoft Azure
Public Preview: Managed Certificate support for Azure API Management | Azure updates | Microsoft Azure
Azure DDoS Protection—2021 Q3 and Q4 DDoS attack trends: https://azure.microsoft.com/en-us/blog/azure-ddos-protection-2021-q3-and-q4-ddos-attack-trends/
Rightsize to maximize your cloud investment with Microsoft Azure: https://azure.microsoft.com/en-us/blog/rightsize-to-maximize-your-cloud-investment-with-microsoft-azure/
7 reasons to attend Azure Open Source Day: https://azure.microsoft.com/en-us/blog/7-reasons-to-attend-azure-open-source-day/
Generally available: Kibana dashboards and visualizations on top of Azure Data Explorer | Azure updates | Microsoft Azure
Public preview: Support for managed identity in Azure Cache for Redis | Azure updates | Microsoft Azure

The PeopleSoft Administrator Podcast
#300 - Auto-Applying PRPs

The PeopleSoft Administrator Podcast

Play Episode Listen Later Aug 20, 2021 36:52


This week on the podcast, Dan talks about email authenticity and his experience with auto-applying PRPs in FS Image 40, and Kyle shares ways to find hidden errors with Kibana.

Show Notes
Email Authenticity @ 2:45
https://www.alexblackie.com/articles/email-authenticity-dkim-spf-dmarc/
https://simonandrews.ca/articles/how-to-set-up-spf-dkim-dmarc
Kibana and researching errors @ 13:00
Auto-Apply PRPs @ 21:15

Linux Action News
Linux Action News 198

Linux Action News

Play Episode Listen Later Jul 18, 2021 23:17


Steam Deck looks impressive; we cover the details you care about and one aspect that concerns us. Plus, how Microsoft just gave a boost to the Linux Desktop and more.

OpenObservability Talks
OpenSearch: The Open Source Successor of Elasticsearch? - OpenObservability Talks S1E12

OpenObservability Talks

Play Episode Listen Later May 27, 2021 61:19


The OpenSearch project was born out of the passion for Elasticsearch and Kibana and the desire to keep them open source in the face of Elastic's decision to close-source them. After a couple of months of hard work led by AWS, the Beta release was announced earlier this month under the Apache 2.0 license. On this episode of OpenObservability Talks I hosted Kyle Davis, Senior Developer Advocate for OpenSearch at AWS. We talked about how OpenSearch came to be, what it took to fork Elasticsearch and Kibana, what the engineers discovered when they dug into the code, what's planned ahead, and much more.

About Kyle Davis: While being a relative newcomer to Amazon, Kyle has a long history with software development and databases. When not working, Kyle enjoys 3D printing and getting his hands dirty in his Edmonton, Alberta-based home garden.

The episode was live-streamed on 27 May 2021 and the video is available at https://youtube.com/live/UDvWdTeH5V4

Resources:
https://github.com/opensearch-project
Beta announcement
Roadmap
Put the OPEN in Observability: Elasticsearch and Kibana relicensing and community chat - OpenObservability Talks S1E08

Socials:
Twitter: https://twitter.com/OpenObserv
YouTube: https://www.youtube.com/@openobservabilitytalks

OpenObservability Talks
Put the OPEN in Observability: Elasticsearch and Kibana relicensing and community chat - OpenObservability Talks S1E8

OpenObservability Talks

Play Episode Listen Later Jan 28, 2021 37:57


The eighth of our OpenObservability Talks has Tomer Levy, CEO & Founder of Logz.io. The community is in turmoil around Elastic's announced plan to take Elasticsearch and Kibana off open source. In this episode, both Dotan and Mike have the pleasure of hosting Tomer where we discuss the recent news of Elastic moving Elasticsearch and Kibana to a dual non-OSS license - SSPL and Elastic License - and the implications that have on the open source community around it, including plans to fork Elasticsearch and Kibana, AWS announcement and more. We also talk about what Logz.io hopes to do, and how it wants the OSS to be better than ever. Tomer Levy is co-founder and CEO of Logz.io. Before founding Logz.io, Tomer was the co-founder and CTO of Intigua, and prior to that he managed the Intrusion Prevention System at CheckPoint. Tomer has an M.B.A. from Tel Aviv University and a B.S. in computer science and is an enthusiastic kitesurfer. The live streaming of the OpenObservability Talks is on the last Thursday of each month, and you can join us on Twitch or YouTube Live. Socials: Website: https://openobservability.io/ Twitter: https://twitter.com/OpenObserv Twitch: https://www.twitch.tv/openobservability YouTube: https://www.youtube.com/channel/UCLKOtaBdQAJVRJqhJDuOlPg

Linux Action News
Linux Action News 173

Linux Action News

Play Episode Listen Later Jan 24, 2021 33:58


Why we don't think Red Hat's expanded developer program is enough, our reaction to Ubuntu sticking with an older Gnome release, and a tiny delightful surprise.

The Hoot from Humio
The Hoot - Episode 38 - Humio at Lunar with Kasper Nissen

The Hoot from Humio

Play Episode Listen Later Nov 18, 2020 14:04


In this week's podcast we have a conversation with Kasper Nissen, Site Reliability Engineer at Lunar, about his experience with the new Humio Operator for Kubernetes.

Lunar is a Nordic bank with more than 200,000 users in Denmark, Sweden, and Norway. Lunar seeks to change banking for the better so that its users can control their spending, save smarter and make their money grow. Born in the cloud, Lunar uses technology to react swiftly to user needs and expectations. Previously on The Hoot, Kasper introduced us to Lunar's cloud-native environment, and what it took to make the environment at this innovative fintech startup reliable and secure. The platform is built entirely as a cloud-native app hosted in AWS. Lunar uses Humio to achieve observability into what is happening in all parts of the environment, so they log everything they can from the cloud.

Currently, Kasper is in the process of centralizing log management on a cluster in Lunar's Kubernetes environment. He's using the new Humio Operator to simplify the process of creating and running Humio in Kubernetes.

“Running Humio with the Operator is so much easier because it minimizes the operational overhead of running Humio in Kubernetes. The Operator also provides us with a distributed setup out of the box, which is awesome, especially now that we can push the burden of managing Kafka and Zookeeper, which are notoriously difficult systems to run, to the cloud provider.”
Kasper Nissen, SRE at Lunar

Listen to our conversation with Kasper to learn:
- How Humio addresses the challenge of volumes being tied to Availability Zones in AWS
- How the Humio Operator simplifies the deployment and management of Humio in Kubernetes
- How Lunar uses Humio and Git as a single source of truth for all of its environments
- How Humio helps Lunar optimize their cloud storage

Show notes:
Listen to episode 32, when Kasper introduced us to Lunar's cloud-native environment.
Read about Lunar's log management journey, which took them from an Elasticsearch and Kibana setup to Humio.  Learn more about the Humio Operator for running Humio on Kubernetes.  Watch an on-demand webinar to learn more about the Humio Operator from one of the engineers who helped build it!

The PeopleSoft Administrator Podcast
#245 - Bring Your PET to Work

The PeopleSoft Administrator Podcast

Play Episode Listen Later Jul 10, 2020 40:22


This week on the podcast, Jim Marion joins us to talk about PTF and the changes to Selenium, Kibana related content, and (re)introduces Pluggable Encryption Technology. If you want to learn more about PeopleTools, Fluid, Integrations and so much more, Jim offers the best PeopleSoft training out there. Show Notes Sasank on building Related Content for Kibana @ 1:30 PTF Changes in 8.57.x and 8.58 @ 13:45 Pluggable Encryption Technology @ 22:00 Jim Marion on using PET Greg Kelly on using PET

The PeopleSoft Administrator Podcast
#240 - PeopleSoft's New Best Friend

The PeopleSoft Administrator Podcast

Play Episode Listen Later Jun 5, 2020 26:58


This week on the podcast, Kyle and Dan talk all about Kibana and how it integrates with PeopleSoft. There is also a video of this week's podcast that you can view here. Show Notes Kibana is super cool! @ 3:00 Kibana Architecture @ 8:00 Walkthrough of Kibana and PeopleSoft @ 15:00

The PeopleSoft Administrator Podcast
#217 - PeopleTools 8.58 is Here!

The PeopleSoft Administrator Podcast

Play Episode Listen Later Dec 27, 2019 34:21


This week on the podcast we talk about PeopleTools 8.58 and some of the new features we are most excited about. Dan also shares his experience with using Cloud Manager to auto-upgrade an environment to 8.58. Show Notes PeopleTools 8.58 is Out @ 1:30 PeopleTools 8.58 Video Features Overview Automatic PeopleTools Upgrade @ 4:30 New User Interface @ 6:45 Config/Customization Improvements @ 12:30 Reporting and Kibana! @ 18:45 Health Center and Logstash @ 22:00 Machine Learning Framework @ 30:00

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Apple Security Updates Details Released: https://support.apple.com/en-us/HT201222
Untitled Goose Game Deserialization: https://pulsesecurity.co.nz/advisories/untitled-goose-game-deserialization
Insecure Pagers Leak Medical Data: https://techcrunch.com/2019/10/30/nhs-pagers-medical-health-data/
Kibana Vulnerability: https://research.securitum.com/prototype-pollution-rce-kibana-cve-2019-7609/

The PeopleSoft Administrator Podcast
#206 - Infrastructure DPKs

The PeopleSoft Administrator Podcast

Play Episode Listen Later Oct 11, 2019 37:05


This week on the podcast, Kyle and Dan discuss how to use the Update Manager Dashboard for improving Event Mapping lifecycle management, the upcoming Infrastructure DPKs, and integrating Kibana into more of PeopleSoft. Show Notes ANU Attack Debrief @ 2:30 Event Mapping Lifecycle Management with PUM @ 9:15 Idea Space - PeopleTools Idea Space - Lifecycle Management Infrastructure DPKs @ 16:00 Kibana on 8.58! @ 26:45 Wayne Fuller on Elasticsearch

The PeopleSoft Administrator Podcast
#204 - Open World 2019 Recap Part 1

The PeopleSoft Administrator Podcast

Play Episode Listen Later Sep 27, 2019 31:59


This week on the podcast, Graham Smith, Sasank, Vemana and Jim Marion join Dan at Oracle Open World 2019 to recap the big PeopleSoft announcements. The group discusses the upcoming UI changes in PeopleTools 8.58, the upcoming Kibana visualizations capabilities for application data, and a discussion about search security. Show Notes 8.58 UI Changes @ 3:00 Kibana Visualizations @ 10:00 Search Security @ 18:00 Chatbots and demos that go wrong @ 27:00

The PeopleSoft Administrator Podcast
#195 - Fluid and Federation w/ Matthew Haavisto

The PeopleSoft Administrator Podcast

Play Episode Listen Later Jul 26, 2019 41:00


This week on the podcast, Matthew Haavisto joins us to talk about Fluid, Interaction Hub, Federation and planning for new PeopleTools releases. PeopleTools Release Planning @ 3:30 PeopleTools Idea Space Fluid @ 8:00 Interaction Hub @ 14:00 Innovator Awards Page Federation @ 25:00 Thresholds @ 27:30 Elasticsearch & Kibana @ 33:00 Chatbots @ 36:30

The Elasticast
Episode 8: NS1 with Christian Saide and Devin Bernosky

The Elasticast

Play Episode Listen Later Nov 7, 2018 35:39


Christian Saide and Devin Bernosky of NS1 join Aaron and Mike to talk about what NS1 does and how they leverage the Elastic stack to provide data-driven DNS. The latest news regarding Elastic going public and changes to Kibana are mentioned and we answer the question: 'What is an availability zone in the context of Elastic Cloud?'

The Elasticast
Episode 4: Kibana Canvas with Rashid Khan

The Elasticast

Play Episode Listen Later Sep 12, 2018 39:57


Rashid Khan joins Mike and Aaron to discuss the Canvas project in Kibana--a composable, extendable, creative space for live data. Aaron and Mike answer "What is the difference between Beats and Logstash".

The PeopleSoft Administrator Podcast
#125 - Push Notifications and Phire

The PeopleSoft Administrator Podcast

Play Episode Listen Later Mar 23, 2018 35:20


This week on the podcast, Kyle talks about Push Notifications he built for Phire and how you could extend Phire. Dan explains the pt_password DPK Puppet Type, and Kyle discusses a leak of 2 billion passwords. Show Notes 2 Billion Passwords @ 1:00 pt_password Puppet Type @ 5:30 Kibana and PeopleSoft @ 12:15 Push Notifications and Phire @ 24:30 Andy Dorfman's Push Notifications Write-up Writing Push Notifications


The PeopleSoft Administrator Podcast

This week on the podcast, Dan and Kyle talk about using web traffic data to analyze user activity, new information on Jolt Failover, and how we generate and distribute compare reports. Then, they discuss Critical Patch Updates and how they affect PeopleSoft Administrators. Show Notes Building a Billion User Load Balancer @ 1:30 X-Forwarded-For and Kibana @ 6:00 Alliance 2017 Presentation Collaborate 2017 Presentation Windows Command Line Tip and Windows 10 @ 14:45 start . Jolt Failover Update @ 20:45 Response Compression and Servlet Filters @ 27:45 LLE Warnings in Tuxedo Logs @ 30:30 How Do You Do it? Compare Reports @ 31:30 Define Compare Location in Config Manager CPU Patching and Testing @ 40:00 Comparing Download Hashes Recommended Patch Advisor Java MOS Homepage CPU Patching with the DPK? Not yet.
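The X-Forwarded-For segment above hinges on one detail: behind a load balancer, the web server sees the balancer's address, and the real client IP has to be recovered from the X-Forwarded-For header chain before the traffic data is useful in Kibana. A minimal sketch of that extraction (the header value and helper name are illustrative, not from the episode):

```python
def client_ip(xff: str) -> str:
    """Return the originating client IP from an X-Forwarded-For value.

    The header is a comma-separated chain: client, proxy1, proxy2, ...
    so the left-most entry is the original client.
    """
    return xff.split(",")[0].strip()

print(client_ip("203.0.113.7, 10.0.0.2, 10.0.0.3"))  # → 203.0.113.7
```

Note that the left-most entry is only trustworthy when the header is set by your own load balancer; clients can forge the header on the way in.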

BSD Now
131: BSD behind the chalkboard

BSD Now

Play Episode Listen Later Mar 2, 2016 101:09


This week on the show, we have an interview with Jamie. This episode was brought to you by

Headlines

BSDCan 2016 List of Talks (http://www.bsdcan.org/2016/list-of-talks.txt)
We are all looking forward to BSDCan. Make sure you arrive in time for the Goat BoF, the evening of Tuesday June 7th at the Royal Oak, just up the street from the university residence. There will also be a ZFS BoF during lunch on one of the conference days; be sure to grab your lunch and bring it to the BoF room. Also, don't forget to get signed up for the various DevSummits taking place at BSDCan.
***

What does Load Average really mean (https://utcc.utoronto.ca/~cks/space/blog/unix/ManyLoadAveragesOfUnix)
Chris Siebenmann, a sysadmin at the University of Toronto, compares what "load average" means on different unix systems, including Solaris/IllumOS, FreeBSD, NetBSD, OpenBSD, and Linux. It seems that no two OSes use the same definition, so comparing load averages across them is impossible. On FreeBSD, where I/O does not affect load average, you can divide the load average by the number of CPU cores to compare machines with different core counts.
***

GPL violations related to combining ZFS and Linux (http://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/)
As we mentioned in last week's episode, Ubuntu was preparing to release its next version with native ZFS support. As expected, the Software Freedom Conservancy has issued a statement detailing the legal argument for why it believes this is a violation of the GPL license on the Linux kernel. It's a pretty long and thorough article, but we wanted to bring you the summary and encourage you to read the rest, since it's good to be knowledgeable about the various open-source projects and their license conditions.
"We are sympathetic to Canonical's frustration in this desire to easily support more features for their users. However, as set out below, we have concluded that their distribution of zfs.ko violates the GPL. We have written this statement to answer, from the point of view of many key Linux copyright holders, the community questions that we've seen on this matter. Specifically, we provide our detailed analysis of the incompatibility between CDDLv1 and GPLv2 — and its potential impact on the trajectory of free software development — below. However, our conclusion is simple: Conservancy and the Linux copyright holders in the GPL Compliance Project for Linux Developers believe that distribution of ZFS binaries is a GPL violation and infringes Linux's copyright. We are also concerned that it may infringe Oracle's copyrights in ZFS. As such, we again ask Oracle to respect community norms against license proliferation and simply relicense its copyrights in ZFS under a GPLv2-compatible license."
The Software Freedom Law Center's take on the issue (https://softwarefreedom.org/resources/2016/linux-kernel-cddl.html)
Linux SCSI subsystem maintainer James Bottomley asks "where is the harm" (http://blog.hansenpartnership.com/are-gplv2-and-cddl-incompatible/)
FreeBSD and ZFS (http://freebsdfoundation.blogspot.ca/2016/02/freebsd-and-zfs.html)
***

DragonFly i915 reaches Linux 4.2 (https://www.phoronix.com/scan.php?page=news_item&px=DragonFlyBSD-i915-4.2)
The port of the Intel i915 DRM/KMS Linux driver to DragonFlyBSD has been updated to match Linux kernel 4.2, with various improvements and better support for new hardware. One big difference is that DragonFlyBSD will not require the binary firmware blob that Linux does. François Tigeot explains: "starting from Linux 4.2, a separate firmware blob is required to save and restore the state of display engines in some low-power modes. These low-power modes have been forcibly disabled in the DragonFly version of this driver in order to keep it blob-free." Obviously this has some disadvantage, but as those modes were never available on DragonFlyBSD before, users are not likely to miss them.
***

Interview - Jamie McParland - mcparlandj@newberg.k12.or.us (mailto:mcparlandj@newberg.k12.or.us) / @nsdjamie (https://twitter.com/nsdjamie)
FreeBSD behind the chalkboard
***

iXsystems
My New IXSystems Mail Server (https://www.reddit.com/r/LinuxActionShow/comments/48c9nt/my_new_ixsystems_mail_server/)

News Roundup

Installing ELK on FreeBSD, Tutorial Part 1 (https://blog.gufi.org/2016/02/15/elk-first-part/)
Are you an ELK user, or interested in becoming one? If so, Gruppo Utenti has a nice blog post / tutorial on how to get started with it on FreeBSD. Maybe you haven't heard of ELK; it's not the ELK in ports. In this case he is referring to the "ElasticSearch/Logstash/Kibana" stack. Getting started is relatively simple: first we install a few ports/packages: textproc/elasticsearch, sysutils/logstash, textproc/kibana43 and www/nginx. After enabling the various services for those (hint: sysrc may be easier), he takes us through the configuration of ElasticSearch and Logstash. For the most part they are fairly straightforward, but you can always copy and paste his example config files as a template.
Follow up to Installing ELK on FreeBSD (https://blog.gufi.org/2016/02/23/elk-second-part/)
Jumping directly into the next blog entry, he takes us through the "K" part of ELK: setting up Kibana and exposing it publicly via nginx. At this point most of the CLI work is finished, and we get a great walkthrough of the Kibana configuration via its UI. We are still awaiting the final entry in the series, where the setup of ElastAlert will be detailed, and we will bring it to your attention when it lands.
***

From 1989: An Empirical Study of the Reliability of Unix Utilities (http://ftp.cs.wisc.edu/paradyn/technical_papers/fuzz.pdf)
A paper from 1989 on the results of fuzz testing various unix utilities across a range of available unix operating systems. Very interesting results; it is interesting to look back at the time before the start of the modern BSD projects. New problems are still being found in utilities using similar testing methodologies, like afl (American Fuzzy Lop).
***

Google Summer of Code
Both FreeBSD (https://summerofcode.withgoogle.com/organizations/4892834293350400/) and NetBSD (https://summerofcode.withgoogle.com/organizations/6246531984261120/) are running 2016 Google Summer of Code projects. Students can start submitting proposals on March 14th. In the meantime, if you have any ideas, please post them to the Summer of Code Ideas Page (https://wiki.freebsd.org/SummerOfCodeIdeas) on the FreeBSD wiki. Students can start looking at the list now and try to find mentors to get a jump start on their projects.
***

High Availability Sync for ipfw3 in DragonFly (http://lists.dragonflybsd.org/pipermail/commits/2016-February/459424.html)
Similar to pfsync, this new protocol allows firewall dynamic rules (state) to be synchronized between two firewalls working together in HA with CARP. It does not yet sync NAT state; it seems libalias will need some modernization first. Apparently it will be relatively easy to port to FreeBSD. This is one of the only features ipfw lacks when compared to pf.
***

Beastie Bits

FreeBSD 10.3-BETA3 Now Available (https://lists.freebsd.org/pipermail/freebsd-stable/2016-February/084238.html)
LibreSSL isn't affected by the OpenSSL DROWN attack (http://undeadly.org/cgi?action=article&sid=20160301141941&mode=expanded)
NetBSD machines at the Open Source Conference 2016 in Tokyo (http://mail-index.netbsd.org/netbsd-advocacy/2016/02/29/msg000703.html)
OpenBSD removes Linux Emulation (https://marc.info/?l=openbsd-ports-cvs&m=145650279825695&w=2)
Time is an illusion - George Neville-Neil (https://queue.acm.org/detail.cfm?id=2878574)
OpenSSH 7.2 Released (http://www.openssh.com/txt/release-7.2)

Feedback/Questions

Shane - IPSEC (http://slexy.org/view/s2qCKWWKv0)
Darrall - 14TB Zpool (http://slexy.org/view/s20CP3ty5P)
Pedja - ZFS setup (http://slexy.org/view/s2qp7K9KBG)
***
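The per-core normalization from the "What does Load Average really mean" segment can be sketched in Python. This is a hypothetical helper, not from the linked article, and the segment's caveat applies: definitions differ per OS (FreeBSD's excludes I/O wait, for instance), so the comparison is only meaningful between machines running the same OS.

```python
import os

def normalized_load(loadavg=None, cores=None):
    """Return the 1/5/15-minute load averages divided by the core count,
    so machines with different core counts can be compared."""
    if loadavg is None:
        loadavg = os.getloadavg()   # (1min, 5min, 15min)
    if cores is None:
        cores = os.cpu_count() or 1
    return tuple(round(val / cores, 2) for val in loadavg)

# A load average of 8.0 on a 4-core box is 2.0 per core, i.e. roughly
# twice as many runnable threads as the machine has cores.
print(normalized_load((8.0, 4.0, 2.0), cores=4))  # → (2.0, 1.0, 0.5)
```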

BSD Now
59: BSDって聞いたことある? (Have you heard of BSD?)

BSD Now

Play Episode Listen Later Oct 15, 2014 80:07


This week on the show we'll be talking with Hiroki Sato about the status of BSD in Japan. We also get to hear about how he got on the core team, and we just might find out why NetBSD is so popular over there! Answers to all your emails, the latest news, and even a brand new segment, on BSD Now - the place to B.. SD. This episode was brought to you by

Headlines

BSD talks at XDC 2014 (https://www.youtube.com/channel/UCXlH5v1PkEhjzLFTUTm_U7g/videos)
This year's Xorg conference featured a few BSD-related talks.
Matthieu Herrb, Status of the OpenBSD graphics stack (https://www.youtube.com/watch?v=KopgD4nTtnA): Matthieu's talk details what's been done recently in Xenocara and the OpenBSD kernel for graphics (slides here (http://www.openbsd.org/papers/xdc2014-xenocara.pdf)).
Jean-Sébastien Pédron, The status of the graphics stack on FreeBSD (https://www.youtube.com/watch?v=POmxFleN3Bc): His presentation gives a history of major changes and outlines the current overall status of graphics in FreeBSD (slides here (http://www.x.org/wiki/Events/XDC2014/XDC2014PedronFreeBSD/XDC-2014_FreeBSD.pdf)).
Francois Tigeot, Porting DRM/KMS drivers to DragonFlyBSD (https://www.youtube.com/watch?v=NdM7_yPGFDk): Francois' talk tells the story of how he ported some of the DRM and KMS kernel drivers to DragonFly (slides here (http://www.x.org/wiki/Events/XDC2014/XDC2014TigeotDragonFlyBSD/XDC-2014_Porting_kms_drivers_to_DragonFly.pdf)).
***

FreeBSD Quarterly Status Report (https://www.freebsd.org/news/status/report-2014-07-2014-09.html)
The FreeBSD project has a report of their activities between July and September of this year. Lots of ARM work has been done, and a goal for 11.0 is tier-one support for the platform. The report includes updates from the cluster admin team, release team, ports team, core team and much more, but we've already covered most of the items on the show. If you're interested in seeing what the FreeBSD community has been up to lately, check the full report - it's huge.
***

Monitoring pfSense logs using ELK (http://elijahpaul.co.uk/monitoring-pfsense-2-1-logs-using-elk-logstash-kibana-elasticsearch/)
If you're one of those people who loves the cool graphs and charts that pfSense can produce, this is the post for you. ELK (ElasticSearch, Logstash, Kibana) is a group of tools that let you collect, store, search and (most importantly) visualize logs. It works with lots of different things that output logs, which can be sent to one central server for displaying. This post shows you how to set up pfSense to do remote logging to ELK and get some pretty awesome graphs.
***

Some updates to IPFW (https://svnweb.freebsd.org/base?view=revision&revision=272840)
Even though PF gets a lot of attention, a lot of FreeBSD people still love IPFW. While mostly a dormant section of the source tree, some updates were recently committed to -CURRENT. The commit lists the user-visible changes, performance changes, ABI changes and internal changes. It should be merged back to -STABLE after a month or so of testing, and will probably end up in 10.2-RELEASE. Also check this blog post (http://blog.cochard.me/2014/10/ipfw-improvement-on-freebsd-current.html) for some more information and fancy graphs.
***

Interview - Hiroki Sato (佐藤広生) - hrs@freebsd.org (mailto:hrs@freebsd.org) / @hiroki_sato (https://twitter.com/hiroki_sato)
BSD in Japan, technology conferences, various topics

News Roundup

pfSense on Hyper-V (https://virtual-ops.de/?p=600)
In case you didn't know, the latest pfSense snapshots support running on Hyper-V. Unfortunately, the current stable release is based on an old, unsupported FreeBSD 8.x base, so you have to use the snapshots for now. The author of the post tells about his experience running pfSense and gives lots of links to read if you're interested in doing the same. He also praises pfSense above other Linux-based solutions for its IPv6 support and high-quality code.
***

OpenBSD as a daily driver (https://www.reddit.com/r/openbsd/comments/2isz24/openbsd_as_a_daily_driver/)
A curious Reddit user posts to ask the community about using OpenBSD as an everyday desktop OS. The overall consensus is that it works great for that, stays out of your way and is quite reliable. Caveats include there being no Adobe Flash support (though others consider this a blessing..) and it requiring a more hands-on approach to updating. If you're considering running OpenBSD as a "daily driver," check all the comments for more information and tips.
***

Getting PF log statistics (https://secure.ciscodude.net/2014/10/09/firewall-log-stats/)
The author of this post runs an OpenBSD box in front of all his VMs at his colocation, and details his experiences with firewall logs. He usually investigates any IPs of interest with whois, nslookup, etc. - but this gets repetitive quickly, so he sets out to find the best way to gather firewall log statistics. After coming across a perl script (http://www.pantz.org/software/pf/pantzpfblockstats.html) to do this, he edited it a bit and is now a happy, lazy admin once again. You can try out his updated PF script here (https://github.com/tbaschak/Pantz-PFlog-Stats).
***

FlashRD 1.7 released (http://www.nmedia.net/flashrd/)
In case anyone's not familiar, flashrd is a tool to create OpenBSD images for embedded hardware devices, executing from a virtualized environment. This new version is based on (the currently unreleased) OpenBSD 5.6, and automatically adapts to the number of CPUs you have for building. It also includes fixes for 4k drives and lots of various other improvements. If you're interested in learning more, take a look at some of the slides and audio from the main developer on the website.
***

Feedback/Questions

Antonio writes in (http://slexy.org/view/s20XvSa4h0)
Don writes in (http://slexy.org/view/s20lGUXW3d)
Andriy writes in (http://slexy.org/view/s2al5DFIO7)
Richard writes in (http://slexy.org/view/s203QoFuWs)
Robert writes in (http://slexy.org/view/s29WIplL6k)
***

Mailing List Gold

Subtle trolling (https://marc.info/?l=openbsd-cvs&m=141271076115386&w=2)
Old bugs with old fixes (https://marc.info/?l=openbsd-cvs&m=141275713329601&w=2)
A pig reinstall (https://lists.freebsd.org/pipermail/freebsd-ports/2014-October/095906.html)
Strange DOS-like environment (https://lists.freebsd.org/pipermail/freebsd-doc/2014-October/024408.html)
***
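The tallying idea behind the "Getting PF log statistics" segment (count which source IPs your firewall blocks most often, instead of eyeballing the log) can be sketched in Python. The log format below is an assumed, simplified tcpdump-style rendering of pflog output for illustration; it is not the actual input format of the linked perl script.

```python
from collections import Counter
import re

# Assumed sample of tcpdump-style pflog text (for illustration only).
SAMPLE = """\
Oct 09 12:00:01 rule 3/0(match): block in on em0: 198.51.100.9.4444 > 192.0.2.1.22
Oct 09 12:00:05 rule 3/0(match): block in on em0: 198.51.100.9.4445 > 192.0.2.1.22
Oct 09 12:00:09 rule 7/0(match): block in on em0: 203.0.113.40.80 > 192.0.2.1.443
"""

def top_blocked(log_text, n=5):
    """Count blocked source IPs and return the n most frequent."""
    ips = Counter()
    for line in log_text.splitlines():
        m = re.search(r"block in on \S+: (\d+\.\d+\.\d+\.\d+)\.\d+ >", line)
        if m:
            ips[m.group(1)] += 1
    return ips.most_common(n)

print(top_blocked(SAMPLE))  # → [('198.51.100.9', 2), ('203.0.113.40', 1)]
```

From here, feeding the counts to whois lookups or a dashboard is the repetitive part the original script automates.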