Inbal Shani is the chief product officer at GitHub, where she leads core product management, along with product strategy, marketing, open source, and communities, including the development of GitHub Copilot. Prior to joining GitHub, she led engineering and product teams at Amazon and Microsoft.

In today's conversation, we discuss:
• What Inbal believes is overhyped and underhyped in the rapidly changing field of AI
• How AI-driven code generation is changing software development
• Her take on whether AI will replace developers
• How software development looks in 3 to 5 years
• How product teams operate at GitHub
• GitHub's Next team, and other ways the company fosters a culture of innovation
• The success metrics and philosophy behind GitHub's Copilot
—
Brought to you by Jira Product Discovery—Atlassian's new prioritization and roadmapping tool built for product teams | Sanity—The most customizable content layer to power your growth engine | HelpBar by Chameleon—The free in-app universal search solution built for SaaS
—
Find the transcript for this episode and all past episodes at: https://www.lennyspodcast.com/episodes/. Today's transcript will be live by 8 a.m. PT.
—
Where to find Inbal Shani:
• LinkedIn: https://www.linkedin.com/in/inbalshani/
—
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
—
In this episode, we cover:
(00:00) Inbal's background
(04:17) Why generative AI is not going to replace developers in the near future
(05:54) Why AI-driven testing is underhyped
(07:48) What the next 3 to 5 years will look like
(10:13) Stats around the use of GitHub Copilot
(12:07) How Copilot enables engineers to work more efficiently
(13:38) Common mistakes when adopting AI into your workflows
(16:42) How GitHub operationalizes "dogfooding"
(18:46) The philosophy behind Copilot
(20:24) Copilot's success metrics
(24:54) How Copilot encourages collaboration
(26:37) What we lose when AI writes code for us
(29:35) A retrospective on the generative AI space
(30:47) Inbal's thoughts on the future of AI
(32:35) How to make space for innovative product ideas
(34:37) How GitHub stays on the cutting edge of innovation
(36:44) The GitHub Next team
(39:20) Advice for early product managers
(42:17) Inbal's "biggest learning" from her career
(45:34) Inbal's closing thoughts
(46:19) Lightning round
—
Referenced:
• How to measure and improve developer productivity | Nicole Forsgren (Microsoft Research, GitHub, Google): https://www.lennyspodcast.com/how-to-measure-and-improve-developer-productivity-nicole-forsgren-microsoft-research-github-goo/
• DORA: https://dora.dev/
• The role of AI in product development | Ryan J. Salva (VP of Product at GitHub, Copilot): https://www.lennyspodcast.com/the-role-of-ai-in-new-product-development-ryan-j-salva-vp-of-product-at-github-copilot/
• GitHub Universe 2023 day 2 keynote: The productivity platform for all developers: https://www.youtube.com/watch?v=h_o9kFPVeiw
• Satya Nadella on LinkedIn: https://www.linkedin.com/in/satyanadella/
• TomTom: https://www.tomtom.com/
• Failing Forward: Turning Mistakes into Stepping Stones for Success: https://www.amazon.com/Failing-Forward-Turning-Mistakes-Stepping/dp/0785288570/
• Good to Great: Why Some Companies Make the Leap and Others Don't: https://www.amazon.com/Good-Great-Some-Companies-Others/dp/0066620996
• Turning the Flywheel: A Monograph to Accompany Good to Great: https://www.amazon.com/Turning-Flywheel-Monograph-Accompany-Great/dp/0062933795
• Dare to Lead Like a Girl: How to Survive and Thrive in the Corporate Jungle: https://www.amazon.com/Dare-Lead-Like-Girl-Corporate/dp/1538163527
• All the Light We Cannot See on Netflix: https://www.netflix.com/title/81083008
• The Wheel of Time on Amazon Prime: https://www.amazon.com/Wheel-Time-Season-1/dp/B09F59CZ7R
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email firstname.lastname@example.org.
—
Lenny may be an investor in the companies discussed.

Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Catch us at Modular's ModCon next week with Chris Lattner, and join our community!

Due to Bryan's very wide-ranging experience in data science and AI across Blue Bottle (!), Stitch Fix, Weights & Biases, and now Hex Magic, this episode can be considered a two-parter.

Notebooks = Chat++

We've talked a lot about AI UX (in our meetups, writeups, and guest posts), and today we're excited to dive into a new old player in AI interfaces: notebooks! Depending on your background, you either Don't Like or you Like notebooks — they are the most popular example of Knuth's Literate Programming concept: basically a collection of cells, where each cell can execute code, display its output, and share its state with all the other cells in a notebook. Cells can also simply be Markdown, to add commentary to the analysis. Notebooks have a long history, but most recently became popular as IPython evolved into Project Jupyter, and a wave of notebook-based startups from Observable to Deepnote and Databricks sprang up for the modern data stack.

The first wave of AI applications has been very chat focused (ChatGPT, Character.ai, Perplexity, etc.). Chat as a user interface has a few shortcomings, the major one being the inability to edit previous messages. We enjoyed Bryan's takes on why notebooks feel like "Chat++" and how they are building Hex Magic:

* Atomic actions vs. stream of consciousness: in a chat interface, you make corrections by adding more messages to a conversation (i.e. "Can you try again by doing X instead?" or "I actually meant XYZ"). The context can easily get messy and confusing for models (and humans!) to follow. Notebooks' cell structure, on the other hand, allows users to go back to any previous cell and make edits without having to add new ones at the bottom.

* "Airlocks" for repeatability: one of the ideas they came up with at Hex is "airlocks", a collection of cells that depend on each other and keep each other in sync. If you have a task like "Create a summary of my customers' recent purchases", there are many sub-tasks to be done (look up the data, sum the amounts, write the text, etc.). Each sub-task gets its own cell, and the airlock keeps them all in sync together.

* Technical + non-technical users: previously you had to use Python / R / Julia to write notebook code, but with models like GPT-4, natural language is usually enough. Hex is also working on lowering the barrier of entry for non-technical users into notebooks, similar to how Code Interpreter is doing the same in ChatGPT.

Obviously notebooks aren't new for developers (OpenAI Cookbooks are a good example), but they haven't had much adoption in less technical spheres. Some of the shortcomings of chat UIs, plus LLMs lowering the barrier of entry to creating code cells, might make them a much more popular UX going forward.

RAG = RecSys!

We also talked about the LLMOps landscape and why it's an "iron mine" rather than a "gold rush":

I'll shamelessly steal [this] from a friend, Adam Azzam from Prefect. He says that [LLMOps] is more of like an iron mine than a gold mine in the sense of there is a lot of work to extract this precious, precious resource. Don't expect to just go down to the stream and do a little panning. There's a lot of work to be done. And frankly, the steps to go from this resource to something valuable is significant.

Some of my favorite takeaways:

* RAG as RecSys for LLMs: at its core, the goal of a RAG pipeline is finding the most relevant documents for a task. This isn't very different from traditional recommendation systems that surface things for users. How can we apply old lessons to this new problem? Bryan cites fellow AIE Summit speaker and Latent Space Paper Club host Eugene Yan in decomposing the retrieval problem into retrieval, filtering, and scoring/ranking/ordering. As AI Engineers increasingly find that long context has tradeoffs, they will also have to relearn the age-old lesson that vector search is NOT all you need, and that a good "systems, not models" approach is essential to scalable, debuggable RAG. Good thing Bryan has just written the first O'Reilly book about modern RecSys, eh?

* Narrowing down evaluation: while "hallucination" is an easy term to throw around, the reality is more nuanced. A lot of the time, model errors can be automatically fixed: is this JSON valid? If not, why? Is it just missing a closing brace? These smaller issues can be checked and fixed before returning the response to the user, which is easier than fixing the model.

* Fine-tuning isn't all you need: when they first started building Magic, one of the discussions was around fine-tuning a model. In our episode with Jeremy Howard we talked about how fine-tuning leads to loss of capabilities as well. In notebooks, you are often dealing with domain-specific data (i.e. purchases, orders, wardrobe composition, household items, etc.); the fact that the model understands that "items" are probably part of an "order" is really helpful. They found that GPT-4 and 3.5-turbo were everything they needed to ship a great product, rather than having to fine-tune on notebooks specifically.

Definitely recommend listening to this one if you are interested in getting a better understanding of how to think about AI, data, and how we can apply traditional machine learning lessons to large language models.

The AI Pivot

For more Bryan, don't miss his fireside chat at the AI Engineer Summit.

Show Notes
* Hex Magic
* Bryan's new book: Building Recommendation Systems in Python and JAX
* Bryan's whitepaper about MLOps
* "Kitbashing in ML", slides from his talk on building on top of foundation models
* "Bayesian Statistics The Fun Way" by Will Kurt
* Bryan's Twitter
* "Berkeley man determined to walk every street in his city"
* People:
  * Adam Azzam
  * Graham Neubig
  * Eugene Yan
  * Even Oldridge

Timestamps
* [00:00:00] Bryan's background
* [00:02:34] Overview of Hex and the Magic product
* [00:05:57] How Magic handles the complex notebook format to integrate cleanly with Hex
* [00:08:37] Discussion of whether to build vs buy models - why Hex uses GPT-4 vs fine-tuning
* [00:13:06] UX design for Magic with Hex's notebook format (aka "Chat++")
* [00:18:37] Expanding notebooks to less technical users
* [00:23:46] The "Memex" as an exciting underexplored area - personal knowledge graph and memory augmentation
* [00:27:02] What makes for good LLMOps vs MLOps
* [00:34:53] Building rigorous evaluators for Magic and best practices
* [00:36:52] Different types of metrics for LLM evaluation beyond just end-task accuracy
* [00:39:19] Evaluation strategy when you don't own the core model that's being evaluated
* [00:41:49] All the places you can make improvements outside of retraining the core LLM
* [00:45:00] Lightning Round

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO-in-Residence of Decibel Partners, and today I'm joined by Bryan Bischof. [00:00:15]Bryan: Hey, nice to meet you.
[00:00:17]Alessio: So Bryan has one of the most thorough and impressive backgrounds we had on the show so far. Lead software engineer at Blue Bottle Coffee, which if you live in San Francisco, you know a lot about. And maybe you'll tell us 30 seconds on what that actually means. You worked as a data scientist at Stitch Fix, which used to be one of the premier data science teams out there. [00:00:38]Bryan: It used to be. Ouch. [00:00:39]Alessio: Well, no, no. Well, you left, you know, so how good can it still be? Then head of data science at Weights and Biases. You're also a professor at Rutgers and you're just wrapping up a new O'Reilly book as well. So a lot, a lot going on. Yeah. [00:00:52]Bryan: And currently head of AI at Hex. [00:00:54]Alessio: Let's do the Blue Bottle thing because I definitely want to hear what's the, what's that like? [00:00:58]Bryan: So I was leading data at Blue Bottle. I was the first data hire. I came in to kind of get the data warehouse in order and then see what we could build on top of it. But ultimately I mostly focused on demand forecasting, a little bit of recsys, a little bit of sort of like website optimization and analytics. But ultimately anything that you could imagine sort of like a retail company needing to do with their data, we had to do. I sort of like led that team, hired a few people, expanded it out. One interesting thing was I was part of the Nestlé acquisition. So there was a period of time where we were sort of preparing for that and didn't know, which was a really interesting dynamic. Being acquired is a very not necessarily fun experience for the data team. [00:01:37]Alessio: I build a lot of internal tools for sourcing at the firm and we have a small VC and data community of like other people doing it. And I feel like if you had a data feed into like the Blue Bottle in South Park, the Blue Bottle at the HanaHaus in Palo Alto, you can get a lot of secondhand information on the state of VC funding. [00:01:54]Bryan: Oh yeah. I feel like the real source of alpha is just bugging a Blue Bottle. [00:01:58]Alessio: Exactly. And what's your latest book about? [00:02:02]Bryan: I just wrapped up a book with a coauthor, Hector Yee, called Building Production Recommendation Systems. I'll give you the rest of the title because it's fun. It's in Python and JAX. And so for those of you that are like eagerly awaiting the first O'Reilly book that focuses on JAX, here you go. [00:02:17]Alessio: Awesome. And we'll chat about that later on. But let's maybe talk about Hex and Magic before. I've known Hex for a while, I've used it as a notebook provider and you've been working on a lot of amazing AI-enabled experiences. So maybe run us through that. [00:02:34]Bryan: So I too, before I sort of like joined Hex, saw it as this like really incredible notebook platform, sort of a great place to do data science workflows, quite complicated, quite ad hoc interactive ones. And before I joined, I thought it was the best place to do data science workflows. And so when I heard about the possibility of building AI tools on top of that platform, that seemed like a huge opportunity. In particular, I lead the product called Magic. Magic is really like a suite of sort of capabilities as opposed to its own independent product. What I mean by that is they are sort of AI enhancements to the existing product. And that's a really important difference from sort of building something totally new that just uses AI.
It's really important to us to enhance the already incredible platform with AI capabilities. So these are things like the sort of obvious like co-pilot-esque vibes, but also more interesting and dynamic ways of integrating AI into the product. And ultimately the goal is just to make people even more effective with the platform. [00:03:38]Alessio: How do you think about the evolution of the product and the AI component? You know, even if you think about 10 months ago, some of these models were not really good on very math based tasks. Now they're getting a lot better. I'm guessing a lot of your workloads and use cases is data analysis and whatnot. [00:03:53]Bryan: When I joined, it was pre 4 and it was pre the sort of like new chat API and all that. But when I joined, it was already clear that GPT was pretty good at writing code. And so when I joined, they had already executed on the vision of what if we allowed the user to ask a natural language prompt to an AI and have the AI assist them with writing code. So what that looked like when I first joined was it had some capability of writing SQL and it had some capability of writing Python and it had the ability to explain and describe code that was already written. Those very, what feel like now primitive capabilities, believe it or not, were already quite cool. It's easy to look back and think, oh, it's like kind of like Stone Age in these timelines. But to be clear, when you're building on such an incredible platform, adding a little bit of these capabilities feels really effective. And so almost immediately I started noticing how it affected my own workflow because ultimately as sort of like an engineering lead and a lot of my responsibility is to be doing analytics to make data driven decisions about what products we build. And so I'm actually using Hex quite a bit in the process of like iterating on our product. When I'm using Hex to do that, I'm using Magic all the time. And even in those early days, the amount that it sped me up, that it enabled me to very quickly like execute was really impressive. And so even though the models weren't that good at certain things back then, that capability was not to be underestimated. But to your point, the models have evolved between 3.5 Turbo and 4. We've actually seen quite a big enhancement in the kinds of tasks that we can ask Magic and even more so with things like function calling and understanding a little bit more of the landscape of agent workflows, we've been able to really accelerate. [00:05:57]Alessio: You know, I tried using some of the early models in notebooks and it actually didn't like the IPyNB formatting, kind of like a JSON plus XML plus all these weird things. How have you kind of tackled that? Do you have some magic behind the scenes to make it easier for models? Like, are you still using completely off the shelf models? Do you have some proprietary ones? [00:06:19]Bryan: We are using at the moment in production 3.5 Turbo and GPT-4. I would say for a large number of our applications, GPT-4 is pretty much required. To your question about, does it understand the structure of the notebook? And does it understand all of this somewhat complicated wrappers around the content that you want to show? We do our very best to abstract that away from the model and make sure that the model doesn't have to think about what the cell wrapper code looks like. Or for our Magic charts, it doesn't have to speak the language of Vega. 
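(To ground what Bryan just described, here is a minimal, hypothetical sketch of the pattern: the model sees only the task and the schema and returns bare SQL, and ordinary post-processing does the cell-wrapping. All names are invented for illustration; this is not Hex's actual code.)

```python
# Hypothetical sketch, not Hex's real internals: keep the model speaking
# plain SQL, and let our own post-processing speak the notebook format.

def build_prompt(task: str, schema: str) -> str:
    # The model sees only the analytics task and the warehouse schema,
    # never the cell wrapper format or the charting language.
    return (
        f"Given this schema:\n{schema}\n\n"
        f"Write one SQL query for this task:\n{task}\n"
        "Return only SQL, no explanation."
    )

def wrap_as_sql_cell(sql: str, cell_id: str) -> dict:
    # Post-processing owns the wrapper, so the model never has to learn it.
    return {"id": cell_id, "type": "sql", "source": sql.strip()}

# Usage, assuming some `llm` completion function:
# sql = llm(build_prompt("weekly active users by month", schema))
# cell = wrap_as_sql_cell(sql, cell_id="cell_1")
```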
These are things that we put a lot of work into on the engineering side, true to the AI engineer profile. This is the AI engineering work to get all of that out of the way so that the model can speak in the languages that it's best at. The model is quite good at SQL. So let's ensure that it's speaking the language of SQL and that we are doing the engineering work to get the output of that model, the generations, into our notebook format. So too for other cell types that we support, including charts, and just in general, understanding the flow of different cells, understanding what a notebook is, all of that is hard work that we've done to ensure that the model doesn't have to learn anything like that. I remember early on, people asked the question, are you going to fine-tune a model to understand Hex cells? And almost immediately, my answer was no. No we're not. Using fine-tuned models in 2022, I was already aware that there are some limitations of that approach and frankly, even using GPT-3 and GPT-2 back in the day at Stitch Fix, I had already seen a lot of instances where putting more effort into pre- and post-processing can avoid some of these larger lifts. [00:08:14]Alessio: You mentioned Stitch Fix and GPT-2. How has the balance between build versus buy, so to speak, evolved? So GPT-2 was a model that was not super advanced, so for a lot of use cases it was worth building your own thing. With GPT-4 and the likes, is there a reason to still build your own models for a lot of this stuff? Or should most people be fine-tuning? How do you think about that? [00:08:37]Bryan: Sometimes people ask, why are you using GPT-4 and why aren't you going down the avenue of fine-tuning today? I can get into fine-tuning specifically, but I do want to talk a little bit about the good old days of GPT-2. Shout out to Reza. Reza introduced me to GPT-2. I still remember him explaining the difference between general transformers and GPT. I remember one of the tasks that we wanted to solve with transformer-based generative models at Stitch Fix was writing descriptions of clothing. You might think, ooh, that's a multi-modal problem. The answer is, not necessarily. We actually have a lot of features about the clothes that are almost already enough to generate some reasonable text. I remember at that time, that was one of the first applications that we had considered. There was a really great team of NLP scientists at Stitch Fix who worked on a lot of applications like this. I still remember being exposed to the GPT endpoint back in the days of 2. If I'm not mistaken, and feel free to fact check this, I'm pretty sure Stitch Fix was the first OpenAI customer, like, their first true enterprise application. Long story short, I ultimately think that depending on your task, using the most cutting-edge general model has some advantages. If those are advantages that you can reap, then go for it. So at Hex, why GPT-4? Why do we need such a general model for writing code, writing SQL, doing data analysis? Shouldn't a fine-tuned model just on Kaggle notebooks be good enough? I'd argue no. And ultimately, because we don't have one specific sphere of data that we need to write great data analysis workbooks for, we actually want to provide a platform for anyone to do data analysis about their business. To do that, you actually need to entertain an extremely general universe of concepts. So as an example, if you work at Hex and you want to do data analysis, our projects are called Hexes. That's relatively straightforward to teach it.
There's a concept of a notebook. These are data science notebooks, and you want to ask analytics questions about notebooks. Maybe if you trained on notebooks, you could answer those questions, but let's come back to Blue Bottle. If I'm at Blue Bottle and I have data science work to do, I have to ask it questions about coffee. I have to ask it questions about pastries, doing demand forecasting. And so very quickly, you can see that just by serving just those two customers, a model purely fine-tuned on like Kaggle competitions may not actually fit the bill. And so the more and more that you want to build a platform that is sufficiently general for your customer base, the more I think that these large general models really pack a lot of additional opportunity in. [00:11:21]Alessio: With a lot of our companies, we talked about stuff that you used to have to extract features for, now you have out of the box. So say you're a travel company, you want to do a query, like show me all the hotels and places that are warm during spring break. It would be just literally like impossible to do before these models, you know? But now the model knows, okay, spring break is like usually these dates and like these locations are usually warm. So you get so much out of it for free. And in terms of Magic integrating into Hex, I think AI UX is one of our favorite topics and how do you actually make that seamless. In traditional code editors, the line of code is like kind of the atomic unit, and in Hex, you have the code, but then you have the cell also. [00:12:04]Bryan: I think the first time I saw Copilot and really like fell in love with Copilot, I thought finally, fancy auto-complete. And that felt so good. It felt so elegant. It felt so right-sized for the task. But as a data scientist, a lot of the work that you do previous to the ML engineering part of the house, you're working in these cells and these cells are atomic. They're expressing one idea. And so ultimately, if you want to make the transition from something like VS Code, where you've got like a large amount of code and there's a large amount of files and they kind of need to have awareness of one another, and that's a long story and we can talk about that. But in this atomic, somewhat linear flow through the notebook, what you ultimately want to do is you want to reason with the agent at the level of these individual thoughts, these atomic ideas. Usually it's good practice in, say, a Jupyter notebook to not let your cells get too big. If your cell doesn't fit on one page, that's like kind of a code smell, like why is it so damn big? What are you doing in this cell? That also lends some hints as to what the UI should feel like. I want to ask questions about this one atomic thing. So you ask the agent, take this data frame and strip out this prefix from all the strings in this column. That's an atomic task. It's probably about two lines of pandas. I can write it, but it's actually very natural to ask Magic to do that for me. And what I promise you is that it is faster to ask Magic to do that for me. At this point, that kind of code, I never write. And so then you ask the next question, which is what should the UI be to do chains, to do multiple cells that work together? Because ultimately a notebook is a chain of cells and actually it's a first-class citizen for Hex. So we have a DAG and the DAG is the execution DAG for the individual cells. This is one of the reasons that Hex is reactive and kind of dynamic in that way.
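(A toy illustration of the cell-DAG idea, not Hex's real engine: each cell declares which cells it reads from, and editing one cell re-runs everything downstream in topological order. The cell names are invented.)

```python
from graphlib import TopologicalSorter

# Each cell lists the cells whose outputs it reads (its predecessors).
cells = {
    "load_orders":  [],
    "sum_by_month": ["load_orders"],
    "plot_trend":   ["sum_by_month"],
}

def downstream(changed: str) -> list[str]:
    # Return the changed cell plus everything that transitively depends
    # on it, in a valid execution order.
    order = list(TopologicalSorter(cells).static_order())
    dirty = {changed}
    for cell in order:
        if cell == changed or any(dep in dirty for dep in cells[cell]):
            dirty.add(cell)
    return [c for c in order if c in dirty]

print(downstream("load_orders"))  # ['load_orders', 'sum_by_month', 'plot_trend']
```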
And so the very next question is, what is the sort of like AI UI for these collections of cells? And back in June and July, we thought really hard about what does it feel like to ask Magic a question and get a short chain of cells back that execute on that task. And so we've thought a lot about sort of like how that breaks down into individual atomic units and how those are tied together. We introduced something which is kind of an internal name, but it's called the airlock. And the airlock is exactly a sequence of cells that refer to one another, understand one another, use things that are happening in other cells. And it gives you a chance to sort of preview what Magic has generated for you. Then you can accept or reject as an entire group. And that's one of the reasons we call it an airlock, because at any time you can sort of eject the airlock and see it in the space. But to come back to your question about how the AI UX fits into this notebook, ultimately a notebook is very conversational in its structure. I've got a series of thoughts that I'm going to express as a series of cells. And sometimes if I'm a kind data scientist, I'll put some text in between them too, explaining what on earth I'm doing. And that feels, in my opinion, and I think this is quite shared amongst Hexans, that feels like a really nice refinement of the chat UI. I've been saying for several months now, like, please stop building chat UIs. There is some irony because I think what the notebook allows is like chat plus plus. [00:15:36]Alessio: Yeah, I think the first wave of everything was like chat with X. So it was like chat with your data, chat with your documents and all of this. But people want to code, you know, at the end of the day. And I think that goes into the end user. I think most people that use notebooks are software engineers and data scientists. I think the cool thing about these models is like people that are not traditionally technical can do a lot of very advanced things. And that's why people like Code Interpreter and ChatGPT. How do you think about the evolution of that persona? Do you see a lot of non-technical people also now coming to Hex to like collaborate with like their technical folks? [00:16:13]Bryan: Yeah, I would say there might even be more enthusiasm than we're prepared for. We're obviously like very excited to bring what we call the like low-floor user into this world and give more people the opportunity to self-serve on their data. We wanted to start by focusing on users who are already familiar with Hex and really make Magic fantastic for them. One of the sort of like internal, I would say almost North Stars, is our team's charter to make Hex feel more magical. That is true for all of our users, but that's easiest to do on users that are already able to use Hex in a great way. What we're hearing from some customers in particular is sort of like, I'm excited for some of my less technical stakeholders to get in there and start asking questions. And so that raises a lot of really deep questions. If you immediately enable self-service for data, which is almost like a joke over the last like maybe like eight years, if you immediately enabled self-service, what challenges does that bring with it? What risks does that bring with it? And so it has given us the opportunity to think about things like governance and to think about things like alignment with the data team and making sure that the data team has clear visibility into what the self-service looks like.
Having been leading a data team, trying to provide answers for stakeholders and hearing that they really want to self-serve, a question that we often found ourselves asking is, what is the easiest way that we can keep them on the rails? What is the easiest way that we can set up the data warehouse and set up our tools such that they can ask and answer their own questions without coming away with like false answers? Because that is such a priority for data teams, it becomes an important focus of my team, which is, okay, Magic may be an enabler. And if it is, what do we also have to respect? We recently introduced the data manager and the data manager is an auxiliary sort of like tool on the Hex platform to allow people to write more like relevant metadata about their data warehouse to make sure that Magic has access to the best information. And there are some things coming to kind of even further that story around governance and understanding. [00:18:37]Alessio: You know, you mentioned self-serve data. And how it was like a joke, you know, the whole rush to the modern data stack was something to behold. Do you think AI is like in a similar space where it's like a bit of a gold rush? [00:18:51]Bryan: I have like sort of two comments here. One I'll shamelessly steal from a friend, Adam Azzam from Prefect. He says that this is more of like an iron mine than a gold mine in the sense of there is a lot of work to extract this precious, precious resource. And that's the first one is I think, don't expect to just go down to the stream and do a little panning. There's a lot of work to be done. And frankly, the steps to go from this like gold to, or this resource to something valuable is significant. I think people have gotten a little carried away with the old maxim of like, don't go pan for gold, sell pickaxes and shovels. It's a much stronger business model. At this point, I feel like I look around and I see more pickaxe salesmen and shovel salesmen than I do prospectors. And that scares me a little bit. There's a metagame where people are starting to think about how they can build tools for people building tools for AI. And that starts to give me a little bit of like pause in terms of like, how confident are we that we can even extract this resource into something valuable? I got a text message from a VC earlier today, and I won't name the VC or the fund, but the question was, what are some medium or large size companies that have integrated AI into their platform in a way that you're really impressed by? And I looked at the text message for a few minutes and I was finding myself thinking and thinking, and I responded, maybe only Copilot. It's been a couple hours now, and I don't think I've thought of another one. And I think that's where I reflect again on this, like iron versus gold. If it was really gold, I feel like I'd be more blown away by other AI integrations. And I'm not yet. [00:20:40]Alessio: I feel like all the people finding gold are the ones building things that traditionally we didn't focus on. So, like, Midjourney. I talked to a company yesterday, which I'm not going to name, but they do agents for some use case, let's call it. They are 11 months old. They're making like 8 million a month in revenue, but in a space that you wouldn't even think about selling to. If you were like a shovel builder, you wouldn't even go sell to those people. And swyx talks about this a bunch, about like actually trying to go application first for some things.
Let's actually see what people want to use and what works. What do you think are the most maybe underexplored areas in AI? Is there anything that you wish people were actually trying to shovel? [00:21:23]Bryan: I've been saying for a couple of months now, if I had unlimited resources and I was just sort of like truly like, you know, on my own building whatever I wanted, I think the thing that I'd be most excited about is building sort of like the personal Memex. The Memex is something that I've wanted since I was a kid. And are you familiar with the Memex? It's the memory extender. And it's this idea that sort of like human memory is quite weak. And so if we can extend that, then that's a big opportunity. So I think one of the things that I've always found to be one of the limiting cases here is access. How do you access that data? Even if you did build that data like out, how would you quickly access it? And one of the things I think there's a constellation of technologies that have come together in the last couple of years that now make this quite feasible. One, information retrieval has really improved and we have a lot more simple systems for getting started with information retrieval. Two, natural language is ultimately the interface that you'd really like these systems to work on, both in terms of sort of like structuring the data and preparing the data, but also on the retrieval side. So what keys off the query for retrieval, probably ultimately natural language. And third, if you really want to go into like the purely futuristic aspect of this, it is latent voice to text. And that is also something that has quite recently become possible. I did talk to a company recently called Gather, which seems to have some cool ideas in this direction, but I haven't seen yet what I, what I really want, which is I want something that is sort of like every time I listen to a podcast or I watch a movie or I read a book, it sort of like has a great vector index built on top of all that information that's contained within. And then when I'm having my next conversation and I can't quite remember the name of this person who did this amazing thing, for example, if we're talking about the Memex, it'd be really nice to have Vannevar Bush like pop up on my, you know, on my Memex display, because I always forget Vannevar Bush's name. This is one time that I didn't, but I often do. This is something that I think is only recently enabled and maybe we're still five years out before it can be good, but I think it's one of the most exciting projects that has become possible in the last three years that I think generally wasn't possible before. [00:23:46]Alessio: Would you wear one of those AI pendants that record everything? [00:23:50]Bryan: I think I'm just going to do it because I just like support the idea. I'm also admittedly someone who, when Google Glass first came out, thought that seems awesome. I know that there's like a lot of like challenges about the privacy aspect of it, but it is something that I did feel was like a disappointment to lose some of that technology. Fun fact, one of the early Google Glass developers was this MIT computer scientist who basically built the first wearable computer while he was at MIT. And he like took notes about all of his conversations in real time on his wearable and then he would have real time access to them. Ended up being kind of a scandal because he wanted to use a computer during his defense and they like tried to prevent him from doing it.
So pretty interesting story. [00:24:35]Alessio: I don't know, but the future is going to be weird. I can tell you that much. Talking about pickaxes, what do you think about the pickaxes that people built before? Like all the whole MLOps space, which has its own like startup graveyard in there. How are those products evolving? You know, you were at Weights and Biases before, which is now doing a big AI push as well. [00:24:57]Bryan: If you really want to like sort of like rub my face in it, you can go look at my white paper on MLOps from 2022. It's interesting. I don't think there's many things in that that I would these days think are like wrong or even sort of like naive. But what I would say is there are both a lot of analogies between MLOps and LLMOps, but there are also a lot of like key differences. So like leading an engineering team at the moment, I think a lot more about good engineering practices than I do about good ML practices. That being said, it's been very convenient to be able to see around corners in a few of the like ML places. One of the first things I did at Hex was work on evals. This was in February. I hadn't yet been overwhelmed by people talking about evals until about May. And the reason that I was able to be a couple of months early on that is because I've been building evals for ML systems for years. I don't know how else to build an ML system other than start with the evals. I teach my students at Rutgers like objective framing is one of the most important steps in starting a new data science project. If you can't clearly state what your objective function is and you can't clearly state how that relates to the problem framing, you've got no hope. And I think that is a very shared reality with LLM applications. Coming back to one thing you mentioned from earlier about sort of like the applications of these LLMs. To that end, I think what pickaxes I think are still very valuable is understanding systems that are inherently less predictable, that are inherently sort of experimental. On my engineering team, we have an experimentalist. So one of the AI engineers, his focus is experiments. That's something that you wouldn't normally expect to see on an engineering team. But it's important on an AI engineering team to have one person whose entire focus is just experimenting, trying, okay, this is a hypothesis that we have about how the model will behave. Or this is a hypothesis we have about how we can improve the model's performance on this. And then going in, running experiments, augmenting our evals to test it, et cetera. What I really respect are pickaxes that recognize the hybrid nature of the sort of engineering tasks. They are ultimately engineering tasks with a flavor of ML. And so when systems respect that, I tend to have a very high opinion. One thing that I was very, very aligned with Weights and Biases on is sort of composability. These systems like ML systems need to be extremely composable to make them much more iterative. If you don't build these systems in composable ways, then your integration hell is just magnified. When you're trying to iterate as fast as people need to be iterating these days, I think integration hell is a tax not worth paying. [00:27:51]Alessio: Let's talk about some of the LLM-native pickaxes, so to speak. So RAG is one. One thing is doing RAG on text data. One thing is doing RAG on tabular data. We're releasing tomorrow our episode with Cube, the semantic layer company. Curious to hear your thoughts on it.
How are you doing RAG, pros, cons? [00:28:11]Bryan: It became pretty obvious to me almost immediately that RAG was going to be important. Because ultimately, you never expect your model to have access to all of the things necessary to respond to a user's request. So as an example, Magic users would like to write SQL that's relevant to their business. And it's important then to have the right data objects that they need to query. We can't expect any LLM to understand our users' data warehouse topology. So what we can expect is that we can build a RAG system that is data warehouse aware, data topology aware, and use that to provide really great information to the model. If you ask the model, how are my customers trending over time? And you ask it to write SQL to do that. What is it going to do? Well, ultimately, it's going to hallucinate the structure of that data warehouse that it needs to write a general query. Most likely what it's going to do is it's going to look in its sort of memory of Stack Overflow responses to customer queries, and it's going to say, oh, it's probably a customers table and we're in the age of dbt, so it might even be called, you know, dim_customers or something like that. And what's interesting is, and I encourage you to try, ChatGPT will do an okay job of like hallucinating up some tables. It might even hallucinate up some columns. But what it won't do is it won't understand the joins in that data warehouse that it needs, and it won't understand the data caveats or the sort of where clauses that need to be there. And so how do you get it to understand those things? Well, this is textbook RAG. This is the exact kind of thing that you expect RAG to be good at augmenting. But I think where people who have done a lot of thinking about RAG for the document case, they think of it as chunking and sort of like the MapReduce and the sort of like these approaches. But I think people haven't followed this train of thought quite far enough yet. Jerry Liu was on the show and he talked a little bit about thinking of this as like information retrieval. And I would push that even further. And I would say that ultimately RAG is just RecSys for LLMs. As I kind of already mentioned, I'm a little bit recommendation systems heavy. And so from the beginning, RAG has always felt like RecSys to me. It has always felt like you're building a recommendation system. And what are you trying to recommend? The best possible resources for the LLM to execute on a task. And so most of my approach to RAG and the way that we've improved Magic via retrieval is by building a recommendation system. [00:30:49]Alessio: It's funny, as you mentioned that you spent three years writing the book, the O'Reilly book. Things must have changed as you wrote the book. I don't want to bring out any nightmares from there, but what are the tips for people who want to stay on top of this stuff? Do you have any other favorite newsletters, like Twitter accounts that you follow, communities you spend time in? [00:31:10]Bryan: I am sort of an aggressive reader of technical books. I think I'm almost never disappointed by time that I've invested in reading technical manuscripts. I find that most people write O'Reilly or similar books because they've sort of got this itch that they need to scratch, which is that I have some ideas, I have some understanding that were hard won, I need to tell other people. And there's something that, from my experience, correlates between that itch and sort of like useful information.
As an example, one of the people on my team, his name is Will Kurt, he wrote a book called Bayesian Statistics the Fun Way. I knew some Bayesian statistics, but I read his book anyway. And the reason was because I was like, if someone feels motivated to write a book called Bayesian Statistics the Fun Way, they've got something to say about Bayesian statistics. I learned so much from that book. That book is like technically like targeted at someone with less knowledge and experience than me. And boy, did it humble me about my understanding of Bayesian statistics. And so I think this is a very boring answer, but ultimately like I read a lot of books and I think that they're a really valuable way to learn these things. I also regrettably still read a lot of Twitter. There is plenty of noise in that signal, but ultimately it is still usually like one of the first directions to get sort of an instinct for what's valuable. The other comment that I want to make is we are in this age of sort of like arXiv is becoming more of like an ad platform. I think that's a little challenging right now to kind of use it the way that I used to use it, which is for like higher signal. I've chatted a lot with a CMU professor, Graham Neubig, and he's been doing LLM evaluation and LLM enhancements for about five years. And no, I didn't misspeak. And I think talking to him has provided me a lot of like directionality for more believable sources. Trying to cut through the hype. I know that there's a lot of other things that I could mention in terms of like just channels, but ultimately right now I think there's almost an abundance of channels and I'm a little bit more keen on high signal. [00:33:18]Alessio: The other side of it is like, I see so many people say, Oh, I just wrote a paper on X and it's like an article. And I'm like, an article is not a paper, but it's just funny how I know we were kind of chatting before about terms being reinvented and like people that are not from this space kind of getting into AI engineering now. [00:33:36]Bryan: I also don't want to be gatekeepy. Actually I used to say a lot to people, don't be shy about putting your ideas down on paper. I think it's okay to just like kind of go for it. And I, I myself have something on arXiv that is like comically naive. It's intentionally naive. Right now I'm less concerned by more naive approaches to things than I am by the purely like advertising approach to sort of writing these short notes and articles. I think blogging still has a good place. And I remember getting feedback during my PhD thesis that like my thesis sounded more like a long blog post. And I now feel like that curmudgeonly professor who's also like, yeah, maybe just keep this to the blogs. That's funny. Alessio: Uh, yeah, I think one of the things that swyx said when he was opening the AI Engineer Summit a couple of weeks ago was like, look, most people here don't know much about the space because it's so new and like being open and welcoming. I think it's one of the goals. And that's why we try and keep every episode at a level that it's like, you know, the experts can understand and learn something, but also the novices can kind of like follow along. You mentioned evals before. I think that's one of the hottest topics obviously out there right now. What are evals? How do we know if they work? Yeah. What are some of the fun learnings from building them into Hex?
[00:34:53]Bryan: I said something at the AI Engineer Summit that I think a few people have already called out, which is like, if you can't get your evals to be sort of like objective, then you're not trying hard enough. I stand by that statement. I'm not going to, I'm not going to walk it back. I know that that doesn't feel super good because people, people want to think that like their unique snowflake of a problem is too nuanced. But I think this is actually one area where, you know, in this dichotomy of like, who can do AI engineering? And the answer is kind of everybody. Software engineering can become AI engineering and ML engineering can become AI engineering. One thing where I think the more data-science-minded folk have an advantage here is we've gotten more practice in taking very vague notions and trying to put a like objective function around that. And so ultimately I would just encourage everybody who wants to build evals, just work incredibly hard on codifying what is good and bad in terms of these objective metrics. As far as like how you go about turning those into evals, I think it's kind of like sweat equity. Unfortunately, I told the CEO of Gantry several months ago, I think it's been like six months now, that I was sort of like looking at every single internal Hex request to Magic by hand with my eyes and sort of like thinking, how can I turn this into an eval? Is there a way that I can take this real request during this dogfooding, not-very-developed stage? How can I make that into an evaluation? That was a lot of sweat equity that I put in a lot of like boring evenings, but I do think ultimately it gave me a lot of understanding for the way that the model was misbehaving. Another thing is how can you start to understand these misbehaviors as like auxiliary evaluation metrics? So there's not just one evaluation that you want to do for every request. It's easy to say like, did this work? Did this not work? Did the response satisfy the task? But there's a lot of other metrics that you can pull off these questions. And so like, let me give you an example. If it writes SQL that doesn't reference a table in the database that it's supposed to be querying against, we would think of that as a hallucination. You could separately consider, is it a hallucination, as a valuable metric. You could separately consider, does it get the right answer? The right answer is this sort of like all-in-one-shot evaluation that I think people jump to. But these intermediary steps are really important. I remember hearing that GitHub had thousands of lines of post-processing code around Copilot to make sure that their responses were sort of correct or in the right place. And that kind of sort of defensive programming against bad responses is the kind of thing that you can build by looking at many different types of evaluation metrics. Because you can say like, oh, you know, the Copilot completion here is mostly right, but it doesn't close the brace. Well, that's the thing you can check for. Or, oh, this completion is quite good, but it defines a variable that was like already defined in the file. Like that's going to have a problem. That's an evaluation that you could check separately. And so this is where I think it's easy to convince yourself that all that matters is does it get the right answer? But the more that you think about production use cases of these things, the more you find a lot of this kind of stuff.
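(A sketch of what those auxiliary checks can look like in practice. These are hypothetical examples, not Hex's or GitHub's actual post-processing.)

```python
import re

def tables_exist(sql: str, known_tables: set[str]) -> bool:
    # Hallucination check: does the SQL only reference tables that are
    # actually in the warehouse?
    referenced = set(re.findall(r"(?:FROM|JOIN)\s+(\w+)", sql, re.IGNORECASE))
    return referenced <= known_tables

def braces_balanced(code: str) -> bool:
    # Fixable-error check: an unclosed brace is easy to detect (and often
    # easy to repair) before the response ever reaches the user.
    depth = 0
    for ch in code:
        depth += {"{": 1, "}": -1}.get(ch, 0)
        if depth < 0:
            return False
    return depth == 0

sql = "SELECT * FROM dim_customers JOIN orders ON o.id = c.id"
print(tables_exist(sql, {"orders", "payments"}))  # False: dim_customers is made up
```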
One simple example is like sometimes the model names the output of a cell, a variable that's already in scope. Okay. Like we can just detect that and like we can just fix that. And this is the kind of thing that like evaluations over time, and as you build these evaluations over time, you really can expand the robustness in which you trust these models. And for a company like Hex, where we need to put this stuff in GA, we can't just sort of like get to demo stage or even like private beta stage. We're really hunting GA on all of these capabilities. "Did it get the right answer" on some cases is not good enough. [00:38:57]Alessio: I think the follow-up question to that is, in your past roles, you owned the model that you're evaluating against. Here you don't actually have control into how the model evolves. How do you think about whether the model will just need to improve or we'll use another model, versus we can build kind of like engineering post-processing on top of it? How do you make the choice? [00:39:19]Bryan: So I want to say two things here. One, like Jerry Liu talked a little bit about in his episode, he talked a little bit about sort of like you don't always want to retrain the weights to serve certain use cases. RAG is another tool that you can use to kind of like soft-tune. I think that's right. And I want to go back to my favorite analogy here, which is like recommendation systems. When you build a recommendation system, you build the objective function. You think about like what kind of recs you want to provide, what kind of features you're allowed to use, et cetera, et cetera. But there's always another step. There's this really wonderful collection of blog posts from Eugene Yan, and then ultimately like Even Oldridge kind of like iterated on that for the Merlin project, where there's this multi-stage recommender. And the multi-stage recommender says the first step is to do great retrieval. Once you've done great retrieval, you then need to do great ranking. Once you've done great ranking, you need to then do a good job serving. And so what's the analogy here? RAG is retrieval. You can build different embedding models to encode different features in your latent space to ensure that your ranking model has the best opportunity. Now you might say, oh, well, my ranking model is something that I've got a lot of capability to adjust. I've got full access to my ranking model. I'm going to retrain it. And that's great. And you should. And over time you will. But there's one more step and that's downstream and that's the serving. Serving often sounds like I just show the s**t to the user, but ultimately serving is things like, did I provide diverse recommendations? Going back to Stitch Fix days, I can't just recommend them five shirts of the same silhouette and cut. I need to serve them a diversity of recommendations. Have I respected their requirements? They clicked on something that got them to this place. Are the recommendations relevant to that query? Are there any hard rules? Do we maybe not have this in stock? These are all things that you put downstream. And so much like the recommendations use case, there's a lot of knobs to pull outside of retraining the model. And even in recommendation systems, when do you retrain your model for ranking? Not nearly as much as you do other s**t. And even this like embedding model, you might fiddle with more often than the true ranking model.
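(The multi-stage framing as a sketch: retrieval gathers candidates, ranking orders them, and serving applies the downstream rules. The function bodies are stand-ins; the seams between the stages are the point.)

```python
def retrieve(query: str, index, k: int = 50) -> list[dict]:
    # Candidate generation: vector or keyword search over the corpus.
    # `index` is assumed to expose a search(query, k) method.
    return index.search(query, k)

def rank(query: str, candidates: list[dict]) -> list[dict]:
    # Could be a cross-encoder, a retrained ranker, or a heuristic;
    # here we assume each candidate carries a precomputed "score".
    return sorted(candidates, key=lambda d: d["score"], reverse=True)

def serve(ranked: list[dict], n: int = 5) -> list[dict]:
    # Serving-time rules: enforce diversity across sources, cap at n.
    out, seen_sources = [], set()
    for doc in ranked:
        if doc["source"] not in seen_sources:
            out.append(doc)
            seen_sources.add(doc["source"])
        if len(out) == n:
            break
    return out

# docs = serve(rank(query, retrieve(query, index)))
```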
And so I think the only piece of the puzzle that you don't have access to in the LLM case is that sort of like middle step. That's okay. We've got plenty of other work to do. So right now I feel pretty enabled. [00:41:56]Alessio: That's great. You obviously wrote a book on RecSys. What are some of the key concepts that maybe people that don't have a data science background, ML background should keep in mind as they work in this area? [00:42:07]Bryan: It's easy to first think these models are stochastic. They're unpredictable. Oh, well, what are we going to do? I think of this almost like gaseous type question of like, if you've got this entropy, where can you put the entropy? Where can you let it be entropic and where can you constrain it? And so what I want to say here is think about the cases where you need it to be really tightly constrained. So why are people so excited about function calling? Because function calling feels like a way to constrict it. Where can you let it be more gaseous? Well, maybe in the way that it talks about what it wants to do. Maybe for planning, if you're building agents and you want to do sort of something chain of thoughty. Well, that's a place where the entropy can happily live. When you're building applications of these models, I think it's really important as part of the problem framing to be super clear upfront. These are the things that can be entropic. These are the things that cannot be. These are the things that need to be super rigid and really, really aligned to a particular schema. We've had a lot of success in making specific the parts that need to be precise and tightly schemified, and that has really paid dividends. And so other analogies from data science that I think are very valuable is there's the sort of like human in the loop analogy, which has been around for quite a while. And I have gone on record a couple of times saying that like, I don't really love human in the loop. One of the things that I think we can learn from human in the loop is that the user is the best judge of what is good. And the user is pretty motivated to sort of like interact and give you kind of like additional nudges in the direction that you want. I think what I'd like to flip though, is instead of human in the loop, I'd like it to be AI in the loop. I'd rather center the user. I'd rather keep the user as the like core item at the center of this universe. And the AI is a tool. By switching that analogy a little bit, what it allows you to do is think about where are the places in which the user can reach for this as a tool, execute some task with this tool, and then go back to doing their workflow. It still gets this back and forth between things that computers are good at and things that humans are good at, which has been valuable in the human loop paradigm. But it allows us to be a little bit more, I would say, like the designers talk about like user-centered. And I think that's really powerful for AI applications. And it's one of the things that I've been trying really hard with Magic to make that feel like the workflow as the AI is right there. It's right where you're doing your work. It's ready for you anytime you need it. But ultimately you're in charge at all times and your workflow is what we care the most about. [00:44:56]Alessio: Awesome. Let's jump into lightning round. What's something that is not on your LinkedIn that you're passionate about or, you know, what's something you would give a TED talk on that is not work related? 
[00:45:05]Bryan: So I walk a lot. [00:45:07]Bryan: I have walked every road in Berkeley. And I mean like every part of every road even, not just like the binary question of, have you been on this road? I have this little app that I use called Wanderer, which just lets me like kind of keep track of everywhere I've been. And so I'm like a little bit obsessed. My wife would say a lot a bit obsessed with like what I call new roads. I'm actually more motivated by trails even than roads, but like I'm a maximalist. So kind of like everything and anything. Yeah. Believe it or not, I was even like in the like local Berkeley paper just talking about walking every road. So yeah, that's something that I'm like surprisingly passionate about. [00:45:45]Alessio: Is there a most underrated road in Berkeley? [00:45:49]Bryan: What I would say is like underrated is Kensington. So Kensington is like a little town just a teeny bit north of Berkeley, but still in the Berkeley hills. And Kensington is so quirky and beautiful. And it's a really like, you know, don't sleep on Kensington. That being said, one of my original motivations for doing all this walking was people always tell me like, Berkeley's so quirky. And I was like, how quirky is Berkeley? Turns out, it's quite, quite quirky. It's also hard to say quirky and Berkeley in the same sentence, I've learned as of now. [00:46:20]Alessio: That's a, that's a good podcast warmup for our next guests. All right. The actual lightning round. So we usually have three questions: acceleration, exploration, then a takeaway. Acceleration: what's, what's something that's already here today that you thought would take much longer to arrive in AI and machine learning? [00:46:39]Bryan: So I invited the CEO of Hugging Face to my seminar when I worked at Stitch Fix and his talk at the time, honestly, like really annoyed me. The talk was titled like something to the effect of like LLMs are going to be the like technology advancement of the next decade. It's on YouTube. You can find it. I don't remember exactly the title, but regardless, it was something like LLMs for the next decade. And I was like, okay, they're like one modality of model, like whatever. His talk was fine. Like, I don't think it was like particularly amazing or particularly poor, but what I will say is damn, he was right. Like I, I don't think I quite was on board during that talk where I was like, ah, maybe, you know, like there's a lot of other modalities that are like moving pretty quick. I thought things like RL were going to be the like real like breakout success. And there's a little pun with Atari and breakout there, but yeah, like I, man, I was sleeping on LLMs and I feel a little embarrassed. I, yeah. [00:47:44]Alessio: Yeah. No, I mean, that's a good point. It's like sometimes the, we just had Jeremy Howard on the podcast and he was saying when he was talking about fine-tuning, everybody thought it was dumb, you know, and then later people realize, and there's something to be said about messaging, especially like in technical audiences where there's kind of like the metagame, you know, which is like, oh, these are like the cool ideas people are exploring. I don't know where I want to align myself yet, you know, or whatnot. So it's cool. Exploration: so it's kind of like the opposite of that. You mentioned RL, right? That's something that was kind of like up and up and up. And then now people are like, oh, I don't know.
Are there any other areas if you weren't working on, on Magic that you want to go work on? [00:48:25]Bryan: Well, I did mention that, like, I think this like Memex product is just like incredibly exciting to me. And I think it's really opportunistic. I think it's very, very feasible, but I would maybe even extend that a little bit, which is I don't see enough people getting really enthusiastic about hardware with advanced AI built in. You're hearing whisperings of it here and there, pun on the Whisper, but like you're starting to see people putting Whisper into pieces of hardware and making that really powerful. I joked with, I can't think of her name. Oh, Sasha, who I know is a friend of the pod. Like I joked with Sasha that I wanted to make the Big Mouth Billy Bass as a Babel fish, because at this point it's pretty easy to connect that up to Whisper and talk to it in one language and have it talk in the other language. And I was like, this is the kind of s**t I want people building is like silly integrations between hardware and these new capabilities. And as much as I'm starting to hear whisperings here and there, it's not enough. I think I want to see more people going down this track because I think ultimately like these things need to be in our like physical space. And even though the margins are good on software, I want to see more like integration into my daily life. Awesome. [00:49:47]Alessio: And then, yeah, a takeaway, what's one message or idea you want everyone to remember and think about? [00:49:54]Bryan: Even though earlier I was talking about sort of like, maybe like not reinventing things and being respectful of the sort of like ML and data science, like ideas. I do want to say that I think everybody should be experimenting with these tools as much as they possibly can. I've heard a lot of professors, frankly, express concern about their students using GPT to do their homework. And I took a completely opposite approach, which is in the first 15 minutes of the first class of my semester this year, I brought up GPT on screen and we talked about what GPT was good at. And we talked about like how the students can sort of like use it. I showed them an example of it doing data analysis work quite well. And then I showed them an example of it doing quite poorly. I think however much you're integrating with these tools or interacting with these tools, and this audience is probably going to be pretty high on that distribution. I would really encourage you to sort of like push this into the other people in your life. My wife is very technical. She's a product manager and she's using ChatGPT almost every day for communication or for understanding concepts that are like outside of her sphere of excellence. And recently my mom and my sister have been sort of like onboarded onto the ChatGPT train. And so ultimately I just, I think that like it is our duty to help other people see like how much of a paradigm shift this is. We should really be preparing people for what life is going to be like when these are everywhere. [00:51:25]Alessio: Awesome. Thank you so much for coming on, Bryan. This was fun. [00:51:29]Bryan: Yeah. Thanks for having me. And use Hex Magic. [00:51:31] Get full access to Latent Space at www.latent.space/subscribe
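A minimal sketch of the "constrain the entropy" idea Bryan describes above: leave the model's free-form planning text unconstrained, but validate the final action against a rigid schema. This is illustrative Python with invented field names, not Hex's actual implementation.

```python
import json

# The schema below is an invented example, not from any particular library:
# only the final "action" must be tightly schemified; planning stays free-form.
REQUIRED_FIELDS = {"action": str, "query": str}

def parse_action(llm_output: str) -> dict:
    """Reject any model output that is not tightly schemified."""
    data = json.loads(llm_output)  # raises a ValueError subclass on invalid JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"{field!r} missing or not a {expected_type.__name__}")
    return data

# The planning step can stay "gaseous"; only the action is constrained.
plan = "I should search the user's order history before answering."
action = parse_action('{"action": "search_orders", "query": "user 42"}')
print(plan)
print(action)
```

The design choice mirrors the function-calling appeal Bryan mentions: the entropy lives in the prose, while anything the application executes must pass the rigid check.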
This week Steve goes through his data migration story at his house. What things should you consider before moving large datasets around, and what things need to be taken into account for a solid backup plan? -- During The Show -- 01:52 Home Automation Leak Detection - Jeremy You can't really Using cameras 08:06 mmWave sensor update/comparison Seeed Studio mmWave Sensor (https://wiki.seeedstudio.com/mmwave_human_detection_kit/) Space for other sensors Way better than a PIR sensor Aqara Water Sensor (https://cloudfree.shop/product/aqara-water-sensor/) 11:19 Point of sale gear? - Charlie Odoo (https://github.com/odoo/odoo) Open Source POS (https://github.com/opensourcepos/opensourcepos) UniCenta (https://unicenta.com/) Squirrel Systems (https://www.squirrelsystems.com/squirrel-pos-for-hotels) 13:28 Succession Planning - David Password dump Bitwarden Network diagram with pictures Good documentation Techy friends Dave Ramsey - Legacy box Legacy Folder Data, external drives 23:23 Odoo for Accounting and Bookkeeping - Tiny Looks like a solid platform Expensive Self hosting not really an option Accounting solid but very basic no payroll Not fully open source 25:51 Backups? - Mike Copying the file MIGHT be ok if file system has bit rot protection works till it doesn't Better to use database tools External drives 3.5 StarTech Enclosure (https://www.amazon.com/StarTech-com-10Gbps-Enclosure-SATA-Drives/dp/B00XLAZEFC) Pelican 1120 Case 2.5 Cable Matters Enclosure (https://www.amazon.com/Cable-Matters-Aluminum-External-Enclosure/dp/B07CQD6M5B) Steve's M.2 Enclosure (https://www.amazon.com/gp/product/B09T97Z7DM) ASUS ROG M.2 Enclosure (https://www.amazon.com/ASUS-ROG-Arion-Aluminum-Enclosure/dp/B07ZKB4SLK) 37:57 News Wire OpenZFS 2.2.1 - Phoronix (https://www.phoronix.com/news/OpenZFS-2.2.1-Released) Weston 13.0 - Freedesktop.org (https://lists.freedesktop.org/archives/wayland-devel/2023-November/043326.html) OpenSSL 3.2 - GitHub (https://github.com/openssl/openssl/blob/openssl-3.2.0/NEWS.md) PipeWire 1.0 - Phoronix (https://www.phoronix.com/news/PipeWire-1.0-Released) LibreOffice 7.6.3 On Android - Document Foundation (https://blog.documentfoundation.org/blog/2023/11/23/libreoffice-763-and-android-viewer-app/) Wine 8.21 - Gaming On Linux (https://www.gamingonlinux.com/2023/11/wine-821-brings-high-dpi-scaling-and-initial-vulkan-support-for-wayland/) Studio One 6.5 - Presonus Software (https://www.presonussoftware.com/en_US/blog/studio-one-6-5-for-linux) PeerTube v6 - Frama Blog (https://framablog.org/2023/11/28/peertube-v6-is-out-and-powered-by-your-ideas/) Proxmox 8.1 - Proxmox (https://www.proxmox.com/en/about/press-releases/proxmox-virtual-environment-8-1) OpenMandriva LX 5.0 - Beta News (https://betanews.com/2023/11/25/openmandriva-lx-50-linux-download/) Nitrux 3.2.0 - NXOS.org (https://nxos.org/changelog/release-announcement-nitrux-3-2-0/) Ultramarine Linux 39 - Fyra Labs (https://blog.fyralabs.com/ultramarine-39-released/) Linux 6.6 tagged LTS - Security Boulevard (https://securityboulevard.com/2023/11/linux-6-6-is-now-officially-an-lts-release/) Linux Runs 20% Faster on Ryzen 7995WX - Toms Hardware (https://www.tomshardware.com/news/ubuntu-runs-20-faster-than-windows-11-on-amd-threadripper-pro-7995wx) MicroCloud - Infoq (https://www.infoq.com/news/2023/11/canonical-microcloud-open-source/) GIMP Team Targeting May 2024 - Librearts.org (https://librearts.org/2023/11/gimp-3-0-roadmap/) X11 Being Removed from RHEL 10 - Red Hat (https://www.redhat.com/en/blog/rhel-10-plans-wayland-and-xorg-server)
Functional Source License - The Register (https://www.theregister.com/2023/11/24/opinion_column/) Kinsing Malware - Hack Read (https://www.hackread.com/kinsing-crypto-malware-linux-apache-activemq-flaw/) SysJoker Malware - Cyber Security News (https://cybersecuritynews.com/sysjoker-malware-attacking-windows-linux-and-mac-users-abusing-onedrive/) Looney Tunables - Security Affairs (https://securityaffairs.com/154573/security/cisa-known-exploited-vulnerabilities-catalog-looney-tunables.html) Open Source Tesla - The Verge (https://www.theverge.com/2023/11/23/23973701/tesla-roadster-is-now-fully-open-source) AMD GPU & RISC-V - Toms Hardware (https://www.tomshardware.com/pc-components/gpus/amds-fastest-gaming-gpu-now-works-with-risc-v-cpus-amd-radeon-rx-7900-xtx-open-source-linux-drivers-available) Real AI - Mark Tech Post (https://www.marktechpost.com/2023/11/23/real-ai-wins-project-to-build-europes-open-source-large-language-model/) Synthetic Machine Learning Data - SD Times (https://sdtimes.com/data/capital-one-open-sources-new-project-for-generating-synthetic-data/) Uploading Minds - Crypto Slate (https://cryptoslate.com/buterin-sees-benefit-of-uploading-minds-and-need-for-open-source-innovation-in-ai/) AI Linux Optimization - Toms Hardware (https://www.tomshardware.com/news/chinese-company-uses-ai-to-optimize-linux-kernel) 41:11 Nativefier Makes native Linux app out of web pages Saves credentials and session Mind Drip One (http://docs.minddripone.com/how-to/install-use-nativefier/) Nativefier GUI GitHub (https://github.com/mattruzzi/nativefier-gui) 45:44 Data Migration Good to rotate drives Disk burn in (bunch of rsync) Rsync 26 hours rsync will preserve hard links with the right flags software raid is more portable nuke & pave 2 vdevs, 3 drives per vdev can only lose one drive ZFS send/receive is much faster and better IDrive (https://www.idrive.com/) Kopia (https://kopia.io/) SpiderOak One Plan for your target rsync commands (see the sketch after these notes) a: Archive mode, which preserves permissions, ownership, and timestamps. v: Verbose mode, which prints out detailed information about the transfer. H: Preserve hard links. P: Keep partially transferred files and show progress (equivalent to --partial --progress); permissions are already preserved by archive mode. Dumping a database is intensive Proxmox gets in the way doesn't gain Steve anything Special snowflake Custom UI Good for multi node No updates KVM works the same everywhere Cockpit GUI Will eventually replace virt-manager -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/365) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)
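For listeners planning a similar migration, here is a small sketch of the rsync invocation discussed above, wrapped in Python. It assumes rsync is installed and on the PATH, and the paths are placeholders. Flag meanings follow the rsync man page: -a archive, -v verbose, -H preserve hard links, -P keep partial transfers and show progress.

```python
import shlex
import subprocess

def migrate(source: str, dest: str, dry_run: bool = True) -> None:
    """Run the rsync command from the episode notes; paths are placeholders."""
    cmd = ["rsync", "-avHP"]
    if dry_run:
        cmd.append("--dry-run")  # rehearse before moving real data
    cmd += [source, dest]
    print("running:", shlex.join(cmd))
    subprocess.run(cmd, check=True)  # raises if rsync exits nonzero

# A trailing slash on the source copies its contents, not the directory itself.
migrate("/tank/photos/", "/backup/photos/")
```

As the notes say, a plain file copy is not a database backup: for databases, dump with the database's own tools first, then rsync the dump.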
The hype behind AI is real, and those that dawdle are already at risk of being behind. Way behind. So Omni Talk is pleased to announce that Romeo Bolibol, the Managing Partner & Head of Sales for Strategic Retail & Consumer Goods at Microsoft, and Crys Kelch, the Enterprise Solutions Sales Leader for the Americas at GitHub, joined Chris Walton and Anne Mezzenga for the latest installment of their Omni Talk Ask An Expert Series to explain why timing is of the essence when it comes to AI. Chris and Anne went deep with Romeo and Crys on: - Why the hype around AI might not be big enough - The common fears over AI that retail executives are sharing in the boardroom - The extent of the investment in AI needed across one's full tech stack - Real-life examples of next-generation AI already in use at leading retailers - And, perhaps most importantly, how the right approach to AI will create efficiencies that stack on top of each other so that one plus one will equal three or even four or five in your organization *Sponsored Content*
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! My Intuitive Bayes Online Courses. 1:1 Mentorship with me. Getting Daniel Lee on the show is a real treat — with 20 years of experience in numeric computation; 10 years creating and working with Stan; 5 years working on pharma-related models, you can ask him virtually anything. And that I did… From joint models for estimating oncology treatment efficacy to PK/PD models; from data fusion for U.S. Navy applications to baseball and football analytics, as well as common misconceptions or challenges in the Bayesian world — our conversation spans a wide range of topics that I'm sure you'll appreciate! Daniel studied Mathematics at MIT and Statistics at Cambridge University, and, when he's not in front of his computer, is a savvy basketball player and… a hip hop DJ — you actually have his SoundCloud profile in the show notes if you're curious! Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas and Luke Gorrie. Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;) Links from the show: Daniel on LinkedIn: https://www.linkedin.com/in/syclik/ Daniel on Twitter: https://twitter.com/djsyclik Daniel on GitHub: https://github.com/syclik Daniel's DJ profile:
Joël recaps his time at RubyConf! He shares insights from his talk about different aspects of time in software development, emphasizing the interaction with the audience and the importance of post-talk discussions. Stephanie talks about wrapping up a long-term client project, the benefits of change and variety in consulting, and maintaining a balance between project engagement and avoiding burnout. They also discuss strategies for maintaining work-life balance, such as physical separation and device management, particularly in a remote work environment. RubyConf (https://rubyconf.org/) Joël's talk slides (https://speakerdeck.com/joelq/which-time-is-it) Flaky test summary slide (https://speakerdeck.com/aridlehoover/the-secret-ingredient-how-to-understand-and-resolve-just-about-any-flaky-test?slide=170) Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: Well, as of this recording, I have just gotten back from spending the week in San Diego for RubyConf. STEPHANIE: Yay, so fun. JOËL: It's always so much fun to connect with the community over there, talk to other people from different companies who work in Ruby, to be inspired by the talks. This year, I was speaking, so I gave a talk on time and how it's not a single thing but multiple different quantities. In particular, I distinguish between a moment in time, like a point; a duration, an amount of time; and then a time of day, which is time unconnected to a particular day, and how those all connect together in the software that we write. STEPHANIE: Awesome. How did it go? How was it received? JOËL: It was very well received. I got a lot of people come up to me afterwards and make a variety of time puns, which those are so easy to make. I had to hold myself back not to put too many in the talk itself. I think I kept it pretty clean. There were definitely a couple of time puns in the description of the talk, though. STEPHANIE: Yeah, absolutely. You have to keep some in there. But I hear you that you don't want it to become too punny [laughs]. What I really love about conferences, and we've talked a little bit about this before, is the, you know, like, engagement and being able to connect with people. And you give a talk, but then that ends up leading to a lot of, like, discussions about it and related topics afterwards in the hallway or sitting together over a meal. JOËL: I like to, in my talks, give little kind of hooks for people who want to have those conversations in the hallway. You know, sometimes it's intimidating to just go up to a speaker and be like, oh, I want to, like, dig into their talk a little bit. But I don't have anything to say other than just, like, "I liked your talk." So, if there's any sort of side trails I had to cut for the talk, I might give a shout-out to it and say, "Hey, if you want to learn more about this aspect, come talk to me afterwards." So, one thing that I put in this particular talk was like, "Hey, we're looking at these different graphical ways to think about time. These are similar to but not the same as thinking of time as a one-dimensional vector and applying vector math to it, which is a whole other side topic.
If you want to nerd out about that, come find me in the hallway afterwards, and I'd love to go deeper on it." And yeah, some people did. STEPHANIE: That's really smart. I like that a lot. You're inviting more conversation about it, which I know, like, you also really enjoy just, like, taking it further or, like, caring about other people's experiences or their thoughts about vector math [laughs]. JOËL: I think it serves two purposes, right? It allows people to connect with me as a speaker. And it also allows me to feel better about pruning certain parts of my talk and saying, look, this didn't make sense to keep in the talk, but it's cool material. I'd love to have a continuing conversation about this. So, here's a path we could have taken. I'm choosing not to, as a speaker, but if you want to take that branch with me, let's have that afterwards in the hallway. STEPHANIE: Yeah. Or even as, like, new content for yourself or for someone else to take with them if they want to explore that further because, you know, there's always something more to explore [chuckles]. JOËL: I've absolutely done that with past talks. I've taken a thing I had to prune and turned it into a blog post. A recent example of that was when I gave a talk at RailsConf Portland, which I guess is not so recent. I was talking about ways to deal with a test suite that's making too many database requests. And talking about how sometimes misusing let in your RSpec tests can lead to more database requests than you expect. And I had a whole section about how to better understand what database requests will actually be made by a series of let expressions and dealing with the eager versus lazy and all of that. I had to cut it. But I was then able to make a blog post about it and then talk about this really cool technique involving dependency graphs. And that was really fun. So, that was a thing where I was able to say, look, here's some content that didn't make it into the talk because I needed to focus on other things. But as its own little, like, side piece of content, it absolutely works, and here's a blog post. STEPHANIE: Yeah. And then I think it turned into a Bike Shed episode, too [laughs]. JOËL: I think it did, yes. I think, in many ways, creativity begets creativity. It's hard to get started writing or producing content or whatever, but once you do, every idea you have kind of spawns new ideas. And then, pretty soon, you have a backlog that you can't go through. STEPHANIE: That's awesome. Any other highlights from the conference you want to shout out? JOËL: I'd love to give a shout-out to a couple of talks that I went to, Aji Slater's talk on the Enigma machine as a German code machine from World War II and how we can sort of implement our own in Ruby and an exploration of object-oriented programming was fantastic. Aji is just a masterful storyteller. So, that was really great. And then Alan Ridlehoover's talk on dealing with flaky tests that one, I think, was particularly useful because I think it's one of the talks that is going to be immediately relevant on Monday morning for, like, every developer that was in that room and is going back to their regular day job. And they can immediately use all of those principles that Alan talked about to deal with the flaky tests in their test suite. And there's, in particular, at the end of his presentation, Alan has this summary slide. 
He kind of broke down flakiness across three different categories and then talked about different strategies for identifying and then fixing tests that were flaky because of those reasons. And he has this table where he sort of summarizes basically the entire talk. And I feel like that's the kind of thing that I'm going to save as a cheat sheet. And that can be, like, I'm going to link to this and share it all over because it's really useful. Alan has already put his slides up online. It's all linked to that particular slide in the show notes because I think that all of you would benefit from seeing that. The talks themselves are recorded, but they're not going to be out for a couple of weeks. I'm sure when they do, we're going to go through and watch some and probably comment on some of the talks as well. So, Stephanie, what is new in your world? STEPHANIE: Yeah. So, I'm celebrating wrapping up a client project after a nine-month engagement. JOËL: Whoa, that's a pretty long project. STEPHANIE: Yeah, that's definitely on the longer side for thoughtbot. And I'm, I don't know, just, like, feeling really excited for a change, feeling really, you know, proud of kind of, like, all of the work that we had done. You know, we had been working with this client for a long time and had been, you know, continuing to deliver value to them to want to keep working with us for that long. But I'm, yeah, just looking forward to a refresh. And I think that's one of my favorite things about consulting is that, you know, you can inject something new into your work life at a kind of regular cadence. And, at least for me, that's really important in reducing or, like, preventing the burnout. So, this time around, I kind of started to notice, and other people, too, like my manager, that I was maybe losing a bit of steam on this client project because I had been working on it for so long. And part of, you know, what success at thoughtbot means is that, like, we as employees are also feeling fulfilled, right? And, you know, what are the different ways that we can try to make sure that that remains the case? And kind of rotating folks on different projects and kind of making sure that things do feel fresh and exciting is really important. And so, I feel very grateful that other people were able to point that out for me, too, when I wasn't even fully realizing it. You know, I had people checking in on me and being like, "Hey, like, you've been on this for a while now. Kind of what I've been hearing is that, like, maybe you do need something new." I'm just excited to get that change. JOËL: How do you find the balance between sort of feeling fulfilled and maybe, you know, finding that point where maybe you're feeling you're running out of steam–versus, you know, some projects are really complex, take a while to ramp up; you want to feel productive; you want to feel like you have contributed in a significant way to a project? How do you navigate that balance? STEPHANIE: Yeah. So, the flip side is, like, I also don't think I would enjoy having to be changing projects all the time like every couple of months. That maybe is a little too much for me because I do like to...on our team, Boost, we embed on our team. We get to know our teammates. We are, like, building relationships with them, and supporting them, and teaching them. And all of that is really also fulfilling for me, but you can't really do that as much if you're on more shorter-term engagements. 
And then all of that, like, becomes worthwhile once you're kind of in that, like, maybe four or five six month period where you're like, you've finally gotten your groove. And you're like, I'm contributing. I know how this team works. I can start to see patterns or, like, maybe opportunities or gaps. And that is all really cool, and I think also another part of what I really like about being on Boost. But yeah, I think what I...that losing steam feeling, I started to identify, like, I didn't have as much energy or excitement to push forward change. When you kind of get a little bit too comfortable or start to get that feeling of, well, these things are the way they are [laughs], -- JOËL: Right. Right. STEPHANIE: I've now identified that that is kind of, like, a signal, right? JOËL: Maybe time for a new project. STEPHANIE: Right. Like starting to feel a little bit less motivated or, like, less excited to push myself and push the team a little bit in areas that it needs to be pushed. And so, that might be a good time for someone else at thoughtbot to, like, rotate in or maybe kind of close the chapter on what we've been able to do for a client. JOËL: It's hard to be at 100% all the time and sort of always have that motivation to push things to the max, and yeah, variety definitely helps with that. How do you feel about finding signals that maybe you need a break, maybe not from the project but just in general? The idea of taking PTO or having kind of a rest day. STEPHANIE: Oh yeah. I, this year, have tried out taking time off but not going anywhere just, like, being at home but being on vacation. And that was really great because then it was kind of, like, less about, like, oh, I want to take this trip in this time of year to this place and more like, oh, I need some rest or, like, I just need a little break. And that can be at home, right? Maybe during the day, I'm able to do stuff that I keep putting off or trying out new things that I just can't seem to find the time to do [chuckles] during my normal work schedule. So, that has been fun. JOËL: I think, yeah, sometimes, for me, I will sort of hit that moment where I feel like I don't have the ability to give 100%. And sometimes that can be a signal to be like, hey, have you taken any time off recently? Maybe you should schedule something. Because being able to refresh, even short-term, can sort of give an extra boost of energy in a way where...maybe it's not time for a rotation yet, but just taking a little bit of a break in there can sort of, I guess, extend the time where I feel like I'm contributing at the level that I want to be. STEPHANIE: Yeah. And I actually want to point out that a lot of that can also be, like, investing in your life outside of work, too, so that you can come to work with a different approach. I've mentioned the month that I spent in the Hudson Valley in New York and, like, when I was there, I felt, like, so different. I was, you know, just, like, so much more excited about all the, like, novel things that I was experiencing that I could show up to work and be like, oh yeah, like, I'm feeling good today. So, I have all this, you know, energy to bring to the tasks that I have at work. And yeah, so even though it wasn't necessarily time off, it was investing in other things in my life that then brought that refresh at work, even though nothing at work really changed [laughs]. 
JOËL: I think there's something to be said for the sort of energy boost you get from novelty and change, and some of that you get it from maybe rotating to a different project. But like you were saying, you can change your environment, and that can happen as well. And, you know, sometimes it's going halfway across the country to live in a place for a month. I sometimes do that in a smaller way by saying, oh, I'm going to work this morning from a coffee shop or something like that. And just say, look, by changing the environment, I can maybe get some focus or some energy that I wouldn't have if I were just doing same old, same old. STEPHANIE: Yeah, that's a good point. So, one particularly surprising refresh that I experienced in offboarding from my client work is coming back to my thoughtbot, like, internal company laptop, which had been sitting gathering dust [laughs] a little bit because I had a client-issued laptop that I was working in most of the time. And yeah, I didn't realize how different it would feel. I had, you know, gotten everything set up on my, you know, my thoughtbot computer just the way that I liked it, stuff that I'd never kind of bothered to set up on my other client-issued laptop. And then I came back to it, and then it ended up being a little bit surprising. I was like, oh, the icons are smaller on this [laughs] computer than the other computer. But it definitely did feel like returning to home, I think, instead of, like, being a guest in someone else's house that you haven't quite, like, put all your clothes in the closet or in the drawers. You're still maybe, like, living out of a suitcase a little bit [laughs]. So yeah, I was kind of very excited to be in my own space on my computer again. JOËL: I love the metaphor of coming home, and yeah, being in your own space, sleeping in your own bed. There's definitely some of that that I feel, I think, when I come back to my thoughtbot laptop as well. Do you feel like you get a different sense of connection with the rest of our thoughtbot colleagues when you're working on the thoughtbot-issued laptop versus a client-issued one? STEPHANIE: Yeah. Even though on my client-issued computer I had the thoughtbot Slack, like, open on there so I could be checking in, I wasn't necessarily in, like, other thoughtbot digital spaces as much, right? So, our, like, project management tools and our, like, internal company web app, those were things that I was on less of naturally because, like, the majority of my work was client work, and I was all in their digital spaces. But coming back and checking in on, like, all the GitHub discussions that have been happening while I haven't had enough time to catch up on them, just realizing that things were happening [laughs] even when I was doing something else, that is both cool and also like, oh wow, like, kind of sad that I [chuckles] missed out on some of this as it was going on. JOËL: That's pretty similar to my experience. For me, it almost feels a little bit like the difference between back when we used to be in person because thoughtbot is now fully remote. I would go, usually, depending on the client, maybe a couple of days a week working from their offices if they had an office. Versus some clients, they would come to our office, and we would work all week out of the thoughtbot offices, particularly if it was like a startup founder or something, and they might not already have office space. 
And that difference and feeling the connection that I would have from the rest of the thoughtbot team if I were, let's say, four days a week out of a client office versus two or four days a week out of the thoughtbot office feels kind of similar to what it's like working on a client-issued laptop versus on a thoughtbot-issued one. STEPHANIE: Another thing that I guess I forgot about or, like, wasn't expecting to do was all the cleanup, just the updating of things on my laptop as I kind of had it been sitting. And it reminded me to, I guess, extend that, like, coming home metaphor a little bit more. In the game Animal Crossing, if you haven't played the game in a while because it tracks, like, real-time, so it knows if you haven't, you know, played the game in a few months, when you wake up in your home, there's a bunch of cockroaches running around [laughs], and you have to go and chase and, like, squash them to clean it up. JOËL: Oh no. STEPHANIE: And it kind of felt like that opening my computer. I was like, oh, like, my, like, you know, OS is out of date. My browsers are out of date. I decided to get an internal company project running in my local development again, and I had to update so many things, you know, like, install the new Ruby version that the app had, you know, been upgraded to and upgrade, like, OpenSSL and all of that stuff on my machine to, yeah, get the app running again. And like I mentioned earlier, just the idea of like, oh yeah, this has evolved and changed, like, without me [laughs] was just, you know, interesting to see. And catching myself up to speed on that was not trivial work. So yeah, like, all that maintenance stuff still got to do it. It's, like, the digital cleanup, right? JOËL: Exactly. So, you mentioned that on the client machine, you still had the thoughtbot Slack. So, you were able to keep up at least some messages there on one device. I'm curious about the experience, maybe going the other way. How much does thoughtbot stuff bleed into your personal devices, if at all? STEPHANIE: Barely. I am very strict about that, I think. I used to have Slack on my phone, I don't know, just, like, in an earlier time in my career. But now I have it a rule to keep it off. I think the only thing that I have is my calendar, so no email either. Like, that is something that I, like, don't like to check on my personal time. Yeah, so it really just is calendar just in case I'm, like, out in the morning and need to be, like, oh, when is my first meeting? But [laughs] I will say that the one kind of silly thing is that I also refuse to sign into my Google account for work. So, I just have the calendar, like, added to my personal calendar but all the events are private. So, I can't actually see what the events are [laughs]. I just know that I have something going on at, like, 10:00 a.m. So, I got to make sure I'm back home by then [laughs], which is not so ideal. But at the risk of being signed in and having other things bleed into my personal devices, I'm just living with that for now [laughs]. JOËL: What I'm hearing is that I could put some mystery events on your calendar, and you would have a fun surprise in the morning because you wouldn't know what it is. STEPHANIE: Yeah, that is true [laughs]. If you put, like, a meeting at, like, 8:00 a.m., [laughs] then I'm like, oh no, what's this? And then I arrive, and it's just, like [laughs], a fun prank meeting. So, you know, you were talking about how you were at the conference this week. 
And I'm wondering, how connected were you to work life? JOËL: Uh, not very. I tried to be very present in the moment at the conference. So, I'm, you know, connected to all the other thoughtboters who were there and connecting with the attendees. I do have Slack on my phone, so if I do need to check it for something. There was a little bit of communication that was going on for different things regarding the conference, so I did check in for that. But otherwise, I tried to really stay focused on the in-person things that are happening. I'm not doing any client work during those days that I'm at RubyConf, and so I don't need to deal with anything there. I had my thoughtbot laptop with me because that's what I used to give my presentation. But once the presentation was done, I closed that laptop and didn't open it again, and, honestly, that felt kind of good. STEPHANIE: Yeah, that is really nice. I'm the same way, where I try to be pretty connected at conferences, and, like, I will actually redownload Slack sometimes just for, like, coordinating purposes with other folks who are there. But I think I make it pretty clear that I'm, like, away. You know, like, I'm not actually...like, even though I'm on work time, I'm not doing any other work besides just being present there. JOËL: So, you mentioned the idea of work time. Do you have, like, a pretty strict boundary between personal time and work time and, like, try not to allow either to bleed into each other? STEPHANIE: Yeah. I can't remember if I've mentioned this on the show. I think I have, but I'm going to again because one of my favorite things that I picked up from The Bike Shed back when Chris Toomey and Steph Viccari were hosting the show is Chris had, like, a little ritual that he would do every day to signal that he was done with work. He would close his laptop and say, "Schedule shutdown complete," I think. And I've started adopting it because then it helps me be like, I'm not going to reopen my laptop after this because I have said the words. And even if I think of something that I maybe need to add to my to-do list, I will, instead of opening my computer and adding to my, like, whatever digital to-do list, I will, like, write it down on a piece of paper instead for the sake of, you know, not risking getting sucked back into, you know, whatever might be going on after the time that I've, like, decided that I need to be done. JOËL: So, you have a very strict divisioning between work time and personal time. STEPHANIE: Yeah, I would say so. I think it's important for me because even when I take time off, you know, sometimes folks might work a half day or something, right? I really struggle with having even a half day feel like, once I'm done with work, having that feel like okay, like, now I'm back in my personal time. I'd much prefer not working the entire day at all because that is kind of the only way that I can feel like I've totally reclaimed that time. Otherwise, it's like, once I start thinking about work stuff, it's like I need a mental boundary, right? Because if I'm thinking about a work problem, or, like, an interaction or, like, just anything, it's frustrating because it doesn't feel like time in my own brain [laughs] is my own. What do work and personal time boundaries look like for you? JOËL: I think it's evolved over time. Device usage is definitely a little bit more blurry for me. 
One thing that I have started doing since we've gone fully remote as the pandemic has been winding down and, you know, you can do things, but we're still working from home, is that more days than not, I work from home during the day, and then I leave my home during the evening. I do a variety of social activities. And because I like to be sort of present in the moment, that means that by being physically gone, I have totally disconnected because I'm not checking emails or anything like that. Even though I do have thoughtbot email on my phone, Gmail allows me to like log into my personal account and my thoughtbot account. I have to, like, switch between the two accounts, and so, that's, like, more work than I would want. I don't have any notifications come in for the thoughtbot account. So, unless I'm, like, really wanting to see if a particular email I'm waiting for has come in, I don't even look at it, ever. It's mostly just there in case I need to see something. And then, by being focused in the moment doing social things with other people, I don't find too much of a temptation to, like, let work life bleed into personal life. So, there's a bit of a physical disconnect that ends up happening by moving out of the space I work in into leaving my home. STEPHANIE: Yeah. And I'm sure it's different for everyone. As you were saying that, I was reminded of a funny meme that I saw a long time ago. I don't think I could find it if I tried to search for it. But basically, it's this guy who is, you know, sitting on one side of the couch, clearly working. And he's kind of hunched over and, like, typing and looking very serious. And then he, like, closes his laptop, moves over, like, just slides to the other side of the couch, opens his laptop. And then you see him, like, lay back, like, legs up on the coffee table. And it's, like, work computer, personal computer, but it's the same computer [laughs]. It's just the, like, how you've decided like, oh, it's time for, you know, legs up, Netflix watching [laughs]. JOËL: Yeah. Yeah. I'm curious: do you use your thoughtbot computer for any personal things? Or is it just you shut that down; you do the closing ritual, and then you do things on a separate device? STEPHANIE: Yeah, I do things on a separate device. I think the only thing there might be some overlap for are, like, career-related extracurriculars or just, like, development stuff that I'm interested in doing, like, separate from what I am paid to do. But that, you know, kind of overlaps a little bit because of, like, the tools and the stuff I have installed on my computer. And, you know, with our investment time, too, that ends up having a bit of a crossover. JOËL: I think I'm similar in that I'll tend to do development things on my thoughtbot machine, even though they're not necessarily thoughtbot-related, although they could be things that might slot into something like investment time. STEPHANIE: Yeah, yeah. And it's because you have all your stuff set up for it. Like, you're not [laughs] trying to install the latest Ruby version on two different machines, probably [laughs]. JOËL: Yeah. Also, my personal device is a Windows machine. And I've not wanted to bother learning how to set that up or use the Windows Subsystem for Linux or any of those tools, which, you know, may be good professional learning activities. But that's not where I've decided to invest my time. STEPHANIE: That makes sense. 
I had an interesting conversation with someone else today, actually, about devices because I had mentioned that, you know, sometimes I still need to incorporate my personal devices into work stuff, especially, like, two-factor authentication. And specifically on my last client project...I have a very old iPhone [laughs]. I need to start out by saying it's an iPhone 8 that I've had for, like, six or seven years. And so, it's old. Like, one time I went to the Apple store, and I was like, "Oh, I'm looking for a screen protector for this." And they're like, "Oh, it's an iPhone 8. Yikes." [laughs] This was, you know, like, not too long ago [laughs]. And the multi-factor authentication policy for my client was that, you know, we had to use this specific app. And it also had, like, security checks. Like, there's a security policy that it needed to be updated to the latest iOS. So, even if I personally didn't want to update my iOS [laughs], I felt compelled to because, otherwise, I would be locked out of the things that I needed to do at work [laughs]. JOËL: Yeah, that can be a challenge sometimes when you're adding work things to personal devices, maybe not because it's convenient and you want to, but because you don't have a choice for things like two-factor auth. STEPHANIE: Yeah, yeah. And then the person I was talking to actually suggested something I hadn't even thought about, which is like, "Oh, you know, if you really can't make it work, then, like, consider having that company issue another device for you to do the things that they're, like, requiring of you." And I hadn't even thought of that, so... And I'm not quite at the point where I'm like, everything has to be, like, completely separate [laughs], including two-factor auth. But, I don't know, something to consider, like, maybe that might be a place I get to if I'm feeling like I really want to keep those boundaries strict. JOËL: And I think it's interesting because, you know, when you think of the kind of work that we do, it's like, oh, we work with computers, but there are so many subfields within it. And device management and, just maybe, corporate IT, in general, is a whole subfield that is separate and almost a little bit alien. Two, I feel like me, as a software developer, I'm just aware of a little bit...like, I've read a couple of articles around...and this was, you know, years ago when the trend was starting called Bring Your Own Device. So, people who want to say, "Hey, I want to use my phone. I want to have my work email on my phone." But then does that mean that potentially you're leaking company memos and things? So, how do you secure that kind of thing? And everything that IT had to think through in order to allow that, the pros and cons. So, I think we're just kind of, as users of that system, touching the surface of it. But there's a lot of thought and discussion that, as an industry, the kind of corporate IT folks have gone through to struggle with how to balance a lot of those things. STEPHANIE: Yeah, yeah. I bet there's a lot of complexity or nuance there. I mean, we're just talking about, like, ways that we do or don't mix work and personal life. And for that kind of work, you know, that's, like, the job is to think really thoroughly about how people use their devices and what should and shouldn't be permissible. 
The last thing that I wanted to kind of ask about in terms of device management or, like, work and personal intermixing is the idea of being on call and your device being a way for work to reach you and that being a requirement, right? I feel very lucky to obviously not really be in that position. As consultants, like, we're not usually so embedded into a team that we're then brought into, like, an on-call rotation, and I think that's good for me. Like, I don't think that that is something I'd be interested in doing anytime soon. Do you have any experience with that? JOËL: I have not been on a project where I've had to be on call, and I think that's generally true for most of us at thoughtbot who are doing software development. I know those who are doing more kind of platformy SRE-type things are on call. And, in fact, we have specifically hired people in different regions around the world so that we can provide 24-hour coverage for that kind of thing. STEPHANIE: Yeah. And I imagine kind of like what we're talking about with work device management looks even different for that kind of role, where maybe you do need a lot more access to things, like, wherever you might be. JOËL: And maybe the answer there is you get issued a work-specific device and a work phone or something like that, or an old-school work pager. STEPHANIE: [laughs] JOËL: PagerDuty is not just a metaphoric thing. Back in the day, they used actual pagers. STEPHANIE: Yeah, that would be very funny. JOËL: So yeah, I can't speak to it from personal experience, but I could imagine that maybe some of the dynamics there might be a little bit different. And, you know, for some people, maybe it's fine to just have an app on your phone that pings you when something happens, and you have to be on call. And you're able to be present while waiting, like, in case you get pinged, but also let it go while you're on call. I can imagine that's, like, a really weird kind of, like, shadow, like, working, not working experience that I can't really speak to because I have not been in that position. STEPHANIE: Yeah. As you were saying that, I also had the thought that, like, our ability to step away from work and our devices is also very much dependent on, like, a company culture and those types of factors, right? Where, you know, it is okay for me to not be able to look at that stuff and just come back to it Monday morning, and I am very grateful [laughs] for that. Because I recognize that, like, not everyone is in that position where there might be a lot more pressure or urgency to be on top of that. But right now, for this time in my life, like, that's kind of how I like to work. JOËL: I think it kind of sits at the intersection of a few different things, right? There's sort of where you are personally. It might be a combination, like, personality and maybe, like, mental health, things like that, how you respond to how sharp or blurry those lines between work and personal life can be. Like you said, it's also an element of company culture. If there's a company culture that's really pushing to get into your personal life, maybe you need firmer boundaries. And then, finally, what we spent most of this episode talking about: technical solutions, whether that's, like, physically separating everything such that there are two devices. And you close down your laptop, and you're done for the day. And whether or not you allow any apps on your personal phone to carry with you after you leave for the day. 
So, I think at the intersection of those three is sort of how you're going to experience that, and every person is going to be a little bit different. Because those three...I guess I'm thinking of a Venn diagram. Those three circles are going to be different for everyone. STEPHANIE: Yeah, that makes complete sense. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at email@example.com via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: firstname.lastname@example.org with any questions.
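Joël's three time quantities from the top of the Bike Shed episode above map naturally onto three distinct types. Here is a short illustrative sketch using Python's standard library; his talk is Ruby-flavored, so treat this as an analogy rather than his actual code.

```python
from datetime import datetime, timedelta, time, timezone

# A moment: a single point on the timeline (timezone-aware).
talk_start = datetime(2023, 11, 13, 9, 30, tzinfo=timezone.utc)

# A duration: an amount of time, anchored to no particular moment.
talk_length = timedelta(minutes=30)

# A time of day: recurs every day, attached to no particular date.
standup = time(9, 30)

# Moments and durations combine directly...
talk_end = talk_start + talk_length

# ...but a time of day needs a date before it becomes a moment.
next_standup = datetime.combine(
    talk_start.date() + timedelta(days=1), standup, tzinfo=timezone.utc
)
print(talk_end, next_standup)
```

Keeping the three as separate types makes it harder to confuse, say, a recurring time of day with a specific moment, which is the distinction the talk draws.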
In the latest episode, we delve into the rapidly evolving AI ecosystem and its implications for us as Elixir developers, highlighting the potential hazards of relying on proprietary services like OpenAI and the benefits of self-hosted, open-source AI models. We touch on the Elixir LangChain library, how Elixir's position of running our own AI models strengthens us, and the governance and financial risks of depending on a single AI provider. Tune in for why these topics matter and how they shape the future of development in the context of Elixir, plus the holiday season's impact on our show schedule, and more! Show Notes online - http://podcast.thinkingelixir.com/179 (http://podcast.thinkingelixir.com/179) Elixir Community News - https://twitter.com/chris_mccord/status/1724861258548052109 (https://twitter.com/chris_mccord/status/1724861258548052109?utm_source=thinkingelixir&utm_medium=shownotes) – Chris McCord teased a new visual on Twitter resembling a colorful flame logo with the text "Soon™", with more details to come. - https://hauleth.dev/post/who-watches-watchmen-ii/ (https://hauleth.dev/post/who-watches-watchmen-ii/?utm_source=thinkingelixir&utm_medium=shownotes) – Hauleth's blog post explores creating an Elixir service supervised by SystemD, building on his series about managing BEAM applications. - https://www.elixirstreams.com/tips/how-page-title-is-updated (https://www.elixirstreams.com/tips/how-page-title-is-updated?utm_source=thinkingelixir&utm_medium=shownotes) – German Velasco explains how the page_title is updated in Phoenix LiveView with a tip and video demonstration. - https://dockyard.com/blog/2023/11/08/three-years-of-nx-growing-the-machine-learning-ecosystem (https://dockyard.com/blog/2023/11/08/three-years-of-nx-growing-the-machine-learning-ecosystem?utm_source=thinkingelixir&utm_medium=shownotes) – Sean Moriarity discusses the past three years and the future of the Elixir Machine Learning Ecosystem and Nx in a blog post on Dockyard. - https://twitter.com/TheErlef/status/1726654135750066390 (https://twitter.com/TheErlef/status/1726654135750066390?utm_source=thinkingelixir&utm_medium=shownotes) – Announcement of the 3rd edition of a BEAM-focused devroom at the 2024 FOSDEM conference, set to take place in Brussels. - https://beam-fosdem.dev/ (https://beam-fosdem.dev/?utm_source=thinkingelixir&utm_medium=shownotes) – FOSDEM's BEAM devroom, an event for the Elixir community and enthusiasts, provides details about the upcoming sidetrack. - https://www.youtube.com/playlist?list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY (https://www.youtube.com/playlist?list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY?utm_source=thinkingelixir&utm_medium=shownotes) – The official playlist of ElixirConf US videos, with several more sessions expected to be added. - https://www.youtube.com/watch?v=nw-030FD0Qc&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=46 (https://www.youtube.com/watch?v=nw-030FD0Qc&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=46?utm_source=thinkingelixir&utm_medium=shownotes) – ElixirConf US video of Rafal Studnicki discussing keeping real-time auctions running during rollouts. - https://www.youtube.com/watch?v=P44hFAhKPao&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=47 (https://www.youtube.com/watch?v=P44hFAhKPao&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=47?utm_source=thinkingelixir&utm_medium=shownotes) – Tyler Young's ElixirConf US presentation on migrating data without downtime.
- https://www.youtube.com/watch?v=4XaB4XWg-Qg&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=48 (https://www.youtube.com/watch?v=4XaB4XWg-Qg&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=48?utm_source=thinkingelixir&utm_medium=shownotes) – Michał Śledź's session at ElixirConf US on rewriting Pion in Elixir. - https://www.youtube.com/watch?v=E9pZP5jUYZg&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=49 (https://www.youtube.com/watch?v=E9pZP5jUYZg&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=49?utm_source=thinkingelixir&utm_medium=shownotes) – Andrew Berrien introduces ECSx and discusses a new approach to game development in Elixir at ElixirConf US. - https://www.youtube.com/watch?v=F42B6AZ879Q&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=50 (https://www.youtube.com/watch?v=F42B6AZ879Q&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=50?utm_source=thinkingelixir&utm_medium=shownotes) – Geoffrey Lessel's introduction to Vox, a static site generator for Elixir enthusiasts, at ElixirConf US. - https://adventofcode.com/ (https://adventofcode.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Advent of Code is approaching, presenting new coding challenges starting December 1st with a new rule against using AI for leaderboard rankings. - https://twitter.com/ljgago/status/1724917401462997413 (https://twitter.com/ljgago/status/1724917401462997413?utm_source=thinkingelixir&utm_medium=shownotes) – Leonardo Gago tweets about his kino_aoc smart cell to assist with Advent of Code puzzles in Livebook. - https://github.com/ljgago/kino_aoc (https://github.com/ljgago/kino_aoc?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub repository for KinoAoc, a Livebook smart cell created by Leonardo Gago for solving Advent of Code puzzles. Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at email@example.com (mailto:firstname.lastname@example.org) Discussion Resources - The discussion explores the AI ecosystem's influence on Elixir developers, addressing risks and dependencies unrelated to Elixir itself. - Concerns are raised about the dangers of building on top of OpenAI and the risk of service outages, as experienced with an AI fitness trainer. - Open-source AI models are discussed as viable alternatives that offer the possibility of self-hosting and independence from proprietary systems. - Mention of the Elixir LangChain library signifies an interest in being able to seamlessly switch AI models without altering application code (see the sketch after these notes). - The discussion covers the risks of government regulation, policy changes, financial and governance uncertainties, and how they could affect dependencies on single AI providers. - An industry desire for regulatory measures is expressed, aiming to build a legal buffer that could protect from competition. - The conversation questions the broader implications of reliance on AI, including why the topic is intriguing and why self-hosted, open-source models are crucial. - Elixir is argued to have a strong position for running self-managed AI models, highlighting the alignment with open-source philosophies. - Looking to the future, Elixir is well positioned to run these self-hosted, open-source models. - A final note touches on the holiday season's effect on the podcast's show schedule with potential changes or pauses in the regular programming.
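The model-swapping idea behind the Elixir LangChain discussion comes down to programming against an interface rather than a specific provider. A rough sketch in Python follows; the class and function names are invented for illustration and are not the LangChain API.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Any backend that can turn a prompt into a completion."""
    def generate(self, prompt: str) -> str: ...

class HostedModel:
    """Stand-in for a proprietary hosted API (hypothetical)."""
    def generate(self, prompt: str) -> str:
        return f"[hosted] {prompt}"

class LocalModel:
    """Stand-in for a self-hosted open-source model (hypothetical)."""
    def generate(self, prompt: str) -> str:
        return f"[local] {prompt}"

def summarize(model: ChatModel, text: str) -> str:
    # Application code depends only on the interface, so the provider
    # can be swapped without touching this function.
    return model.generate(f"Summarize: {text}")

print(summarize(HostedModel(), "release notes"))
print(summarize(LocalModel(), "release notes"))
```

That indirection is what limits the governance and outage risks discussed in the episode: if a provider changes terms or goes down, only the adapter changes, not the application.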
Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - email@example.com (mailto:firstname.lastname@example.org) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @email@example.com (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @firstname.lastname@example.org (https://genserver.social/dbern) - Cade Ward - @cadebward (https://twitter.com/cadebward) - Cade Ward on Fediverse - @email@example.com (https://genserver.social/cadebward)
Rachel Dines, Head of Product and Technical Marketing at Chronosphere, joins Corey on Screaming in the Cloud to discuss why creating a cloud-native observability strategy is so critical, and the challenges that come with both defining and accomplishing that strategy to fit your current and future observability needs. Rachel explains how Chronosphere is taking an open-source approach to observability, and why it's more important than ever to acknowledge that the stakes and costs are much higher when it comes to observability in the cloud. About Rachel: Rachel leads product and technical marketing for Chronosphere. Previously, Rachel wore lots of marketing hats at CloudHealth (acquired by VMware), and before that, she led product marketing for cloud-integrated storage at NetApp. She also spent many years as an analyst at Forrester Research. Outside of work, Rachel tries to keep up with her young son and hyperactive dog, and when she has time, enjoys crafting and eating out at local restaurants in Boston, where she's based. Links Referenced: Chronosphere: https://chronosphere.io/ LinkedIn: https://www.linkedin.com/in/rdines/ Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's featured guest episode is brought to us by our friends at Chronosphere, and they have also brought us Rachel Dines, their Head of Product and Solutions Marketing. Rachel, great to talk to you again. Rachel: Hi, Corey. Yeah, great to talk to you, too. Corey: Watching your trajectory has been really interesting, just because starting off, when we first started, I guess, learning who each other were, you were working at CloudHealth, which has since become VMware. And I was trying to figure out, huh, the cloud runs on money. How about that? It feels like it was a thousand years ago, but neither one of us is quite that old. Rachel: It does feel like several lifetimes ago. You were just this snarky guy with a few followers on Twitter, and I was trying to figure out what you were doing mucking around with my customers [laugh]. Then [laugh] we kind of both figured out what we're doing, right? Corey: So, speaking of that iterative process, today, you are at Chronosphere, which is an observability company. We would have called it a monitoring company five years ago, but now that's become an insult after the observability war dust has settled. So, I want to talk to you about something that I've been kicking around for a while because I feel like there's a gap somewhere. Let's say that I build a crappy web app—because all of my web apps inherently are crappy—and it makes money through some mystical form of alchemy. And I have a bunch of users, and I eventually realize, huh, I should probably have a better observability story than waiting for the phone to ring and a customer telling me it's broken. So, I start instrumenting various aspects of it that seem to make sense. Maybe I go too low level, like looking at all the disks on every server to tell me if they're getting full or not, like they're ancient servers. Maybe I just have a Pingdom equivalent of is the website up enough to respond to a packet?
And as I wind up experiencing different failure modes and getting yelled at by different constituencies—in my own career trajectory, my own boss—you start instrumenting for all those different kinds of breakages, you start aggregating the logs somewhere, and the volume gets bigger and bigger with time. But it feels like it's sort of a reactive process as you stumble through that entire environment. And I know it's not just me because I've seen this unfold in similar ways in a bunch of different companies. It feels to me, very strongly, like it is something that happens to you, rather than something you set about from day one with a strategy in mind. What's your take on an effective way to think about strategy when it comes to observability? Rachel: You just nailed it. That's exactly the kind of progression that we so often see. And that's what I really was excited to talk with you about today— Corey: Oh, thank God. I was worried for a minute there that you'd be like, “What the hell are you talking about? Are you just, like, some sort of crap engineer?” And, “Yes, but it's mean of you to say it.” But yeah, what I'm trying to figure out is there some magic that I just was never connecting? Because it always feels like you're in trouble because the site's always broken, and oh, like, if the disk fills up, yeah, oh, now we're going to start monitoring to make sure the disk doesn't fill up. Then you wind up getting barraged with alerts, and no one wins, and it's an uncomfortable period of time. Rachel: Uncomfortable period of time. That is one very polite way to put it. I mean, I will say, it is very rare to find a company that actually sits down and thinks, “This is our observability strategy. This is what we want to get out of observability.” Like, you can think about a strategy in, like, the old-school sense, and you know, as an industry analyst, I'm going to have to go back to, like, my roots at Forrester, with thinking about, like, the people, and the process, and the technology. But really, the bigger component here is, like, what's the business impact? What do you want to get out of your observability platform? What are you trying to achieve? And a lot of the time, people have thought, “Oh, observability strategy. Great, I'm just going to buy a tool. That's it. Like, that's my strategy.” And I hate to break it to you, but buying tools is not a strategy. I'm not going to say, like, buy this tool. I'm not even going to say, “Buy Chronosphere.” That's not a strategy. Well, you should buy Chronosphere. But that's not a strategy. Corey: Of course. I'm going to throw the money by the wheelbarrow at various observability vendors, and hope it solves my problem. But if that solved the problem—I've got to be direct—I've never spoken to those customers. Rachel: Exactly. I mean, that's why this space is such a great one to come in and be very disruptive in. And I think, back in the days when we were running in data centers, maybe even before virtual machines, you could probably get away with not having a monitoring strategy—I'm not going to call it observability; that's not what we called it back then—you could get away with not having a strategy because what was the worst that was going to happen, right? It wasn't like there was a finite amount that your monitoring bill could be, there was a finite amount that your customer impact could be. Like, you're playing the penny slots, right? We're not on the penny slots anymore.
We're at the $50 craps table, and it's Las Vegas, and if you lose the game, you're going to have to run down the street without your shirt. Like, the game and the stakes have changed, and we're still pretending like we're playing penny slots, and we're not anymore. Corey: That's a good way of framing it. I mean, I still remember some of my biggest observability challenges were building highly available rsyslog clusters so that you could bounce a member and not lose any log data because some of that was transactionally important. And we've gone beyond that to a stupendous degree, but it still feels like you don't wind up building this into the application from day one. More's the pity because if you did, and did that intelligently, that opens up a whole world of possibilities. I dream of that changing where one day, whenever you start to build an app, oh, and we just push the button and automatically instrument with OTel, so you instrument the thing once everywhere it makes sense to do it, and then you can do your vendor selection and make what you said were decisions later in time. But these days, we're not there. Rachel: Well, I mean, and there's also the question of just the legacy environment and the tech debt. Even if you wanted to—actually, I was having a beer yesterday with a friend who's a VP of Engineering, and he's got his new environment that they're building with observability instrumented from the start. How beautiful. They've got OTel, they're going to have tracing. And then he's got his legacy environment, which is a hot mess. So, you know, there's always going to be this bridge of the old and the new. But this is where it comes back to no matter where you're at, you can stop and think, like, “What are we doing and why?” What is the cost of this? And not just cost in dollars, which I know you and I could talk about very deeply for a long period of time, but, like, the opportunity costs. Developers are working on stuff when they could be working on something that's more valuable. Or, like, the cost of making people work round the clock, trying to troubleshoot issues when there could be an easier way. So, I think it's like stepping back and thinking about cost in terms of dollars and cents, time, opportunity, and then also impact, and starting to make some decisions about what you're going to do in the future that's different. Once again, you might be stuck with some legacy stuff that you can't really change that much, but [laugh] you got to be realistic about where you're at. Corey: I think that that is a… it's a hard lesson, to be very direct, in that companies need to learn it the hard way, for better or worse. Honestly, this is one of the things that I always noticed in startup land, where you had a whole bunch of, frankly, relatively early-career engineers in their early-20s, if not younger. But then the ops person was always significantly older because the thing you actually want to hear from your ops person, regardless of how you slice it, is, “Oh, yeah, I've seen this kind of problem before. Here's how we fixed it.” Or even better, “Here's the thing we're doing, and I know how that's going to become a problem.
Let's fix it before it does.” It's the, “What are you buying by bringing that person in?” “Experience, mostly.” Rachel: Yeah, that's an interesting point you make, and it kind of leads me down this little bit of a side note, but a really interesting antipattern that I've been seeing in a lot of companies is that more seasoned ops person, they're the one who everyone calls when something goes wrong. Like, they're the one who, like, “Oh, my God, I don't know how to fix it. This is a big hairy problem,” I call that one ops person, or I call that very experienced person. That experienced person then becomes this huge bottleneck into solving problems that people don't really—they might even be the only one who knows how to use the observability tool. So, if we can't find a way to democratize our observability tooling a little bit more, so that, like, just day-to-day engineers, like, more junior engineers, newer ones, people who are still ramping, can actually use the tool and be successful, we're going to have a big problem when these ops people walk out the door, maybe they retire, maybe they just get sick of it. We have these massive bottlenecks in organizations, whether it's ops or DevOps or whatever, that I see often exacerbated by observability tools. Just a side note. Corey: Yeah. On some level, it feels like a lot of these things can be fixed with tooling. And I'm not going to say that tools aren't important. You ever tried to implement observability by hand? It doesn't work. There have to be computers somewhere in the loop, if nothing else. And then it just seems to devolve into a giant swamp of different companies, doing different things, taking different approaches. And, on some level, whenever you read the marketing or hear the stories any of these companies tell, you also have to normalize it, translating from whatever marketing language they've got into something that comports with the reality of your own environment, and seeing if they align. And that feels like it is so much easier said than done. Rachel: This is a noisy space, that is for sure. And you know, I think we could go out to ten people right now and ask those ten people to define observability, and we would come back with ten different definitions. And then if you throw a marketing person in the mix, right—guilty as charged, and I know you're a marketing person, too, Corey, so you got to take some of the blame—it gets mucky, right? But like I said a minute ago, the answer is not tools. Tools can be part of the strategy, but if you're just thinking, “I'm going to buy a tool and that's going to solve my problem,” you're going to end up like this company I was talking to recently that has 25 different observability tools. And not only do they have 25 different observability tools, what's worse is they have 25 different definitions for their SLOs and 25 different names for the same metric. And to be honest, it's just a mess. I'm not saying, like, go be Draconian and, you know, tell all the engineers, like, “You can only use this tool [unintelligible 00:10:34] use that tool,” you got to figure out this kind of balance of, like, hands-on, hands-off, you know? How much do you centralize, how much do you push and standardize? Otherwise, you end up with just a huge mess. Corey: On some level, it feels like it was easier back in the days of building it yourself with Nagios because there's only one answer, and it sucks, unless you want to start going down the world of HP OpenView. Which step one: hire a 50-person team to manage OpenView.
Okay, that's not going to solve my problem either. So, let's get a little more specific. How does Chronosphere approach this? Because historically, when I've spoken to folks at Chronosphere, there isn't that much of a day one story, in that, “I'm going to build a crappy web app. Let's instrument it for Chronosphere.” There's a certain, “You must be at least this tall to ride,” implicit expectation built into the product just based upon its origins. And I'm not saying that doesn't make sense, but it also means there's really no such thing as a greenfield build out for you either. Rachel: Well, yes and no. I mean, I think there's no green fields out there because everyone's doing something for observability, or monitoring, or whatever you want to call it, right? Whether they've got Nagios, whether they've got the Dog, whether they've got something else in there, they have some way of introspecting their systems, right? So, one of the things that Chronosphere is built on, that I actually think is part of something—a way you might think about building out an observability strategy as well—is this concept of control and open-source compatibility. So, we can only collect data via open-source standards. You have to send this data via Prometheus, via OpenTelemetry; it could be older standards, like, you know, statsd, Graphite, but we don't have any proprietary instrumentation. And if I was making a recommendation to somebody building out their observability strategy right now, I would say open, open, open, all day long because that gives you a huge amount of flexibility in the future. Because guess what? You know, you might put together an observability strategy that seems like it makes sense for right now—actually, I was talking to a B2B SaaS company that told me that they made a choice a couple of years ago on an observability tool. It seemed like the right choice at the time. They were growing so fast, they very quickly realized it was a terrible choice. But now, it's going to be really hard for them to migrate because it's all based on proprietary standards. Now, of course, a few years ago, they didn't have the luxury of OpenTelemetry and all of these, but now that we have this, we can use these to kind of future-proof our mistakes. So, that's one big area that, once again, is both my recommendation and happens to be our approach at Chronosphere. [A minimal code sketch of this instrument-once, open-standards idea follows the transcript.] Corey: I think that that's a fair way of viewing it. It's a constant challenge, too, just because increasingly—you mentioned the Dog earlier, for example—I will say that for years, I have been asked whether or not at The Duckbill Group, we look at Azure bills or GCP bills. Nope, we are pure AWS. Recently, we started to hear that same inquiry specifically around Datadog, to the point where it has become a board-level concern at very large companies. And that is a challenge, on some level. I don't deviate from my typical path of I fix AWS bills, and that's enough impossible problems for one lifetime, but there is a strong sense of you want to record as much as possible for a variety of excellent reasons, but there's an implicit cost to doing that, and in many cases, the cost of observability becomes a massive contributor to the overall cost. Netflix has said in talks before that they're effectively an observability company that also happens to stream movies, just because it takes so much effort, engineering, and raw computing resources in order to get that data and do something actionable with it.
It's a hard problem. Rachel: It's a huge problem, and it's a big part of why I work at Chronosphere, to be honest. Because when I was—you know, towards the tail end at my previous company in cloud cost management, I had a lot of customers coming to me saying, “Hey, when are you going to tackle our Dog or our New Relic or whatever?” Similar to the experience you're having now, Corey, this was happening to me three, four years ago. And I noticed that there is definitely a correlation between people who are having these really big challenges with their observability bills and people that were adopting, like, Kubernetes, and microservices and cloud-native. And it was around that time that I met the Chronosphere team, and that's exactly what we do, right? We focus on observability for these cloud-native environments where observability data just goes, like, wild. We see 10X, 20X as much observability data, and that's what's driving up these costs. And yeah, it is becoming a board-level concern. I mean, and coming back to the concept of strategy, like, if observability is the second or third most expensive item in your engineering bill—like, obviously, cloud infrastructure, number one—number two and number three is probably observability. How can you not have a strategy for that? How can this be something the board asks you about, and you're like, “What are we trying to get out of this? What's our purpose?” “Uhhhh… troubleshooting?” Corey: Right, because it turns into business metrics as well. It's not just about is the site up or not. There's a—like, one of the things that always drove me nuts not just in the observability space, but even in cloud costing is where, okay, your costs have gone up this week so you get a frowny face, or it's in red, like traffic light coloring. Cool, but for a lot of architectures and a lot of customers, that's because you're doing a lot more volume. That translates directly into increased revenues, increased things you care about. You don't have the position or the context to say, “That's good,” or, “That's bad.” It simply is. And you can start deriving business insight from that. And I think that is the real observability story that I think has largely gone untold at tech conferences, at least. Rachel: It's so right. I mean, spending more on something is not inherently bad if you're getting more value out of it. And it's definitely a challenge on the cloud cost management side. “My costs are going up, but my revenue is going up a lot faster, so I'm okay.” And I think some of the plays, like, you know, we put observability in this box of, like, it's for low-level troubleshooting, but really, if you step back and think about it, there's a lot of larger, bigger picture initiatives that observability can contribute to in an org, like digital transformation. I know that's a buzzword, but, like, that is a legit thing that a lot of CTOs are out there thinking about. Like, how do we, you know, get out of the tech debt world, and how do we get into cloud-native? Maybe it's developer efficiency. God, there's a lot of people talking about developer efficiency. Last week at KubeCon, that was one of the big, big topics. I mean, and yeah, what [laugh] what about cost savings? To me, we've put observability in a smaller box, and it needs to bust out. And I see this also in our customer base, you know? Customers like DoorDash use observability, not just to look at their infrastructure and their applications, but also look at their business.
At any given minute, they know how many Dashers are on the road, how many orders are being placed, cut by geos, down to the—actually down to the second, and they can use that to make decisions. Corey: This is one of those things that I always found a little strange coming from the world of running systems in large [unintelligible 00:17:28] environments to fixing AWS bills. There's nothing that even resembles a fast, reactive response in the world of AWS billing. You wind up with a runaway bill, they're going to resolve that over a period of weeks, on Seattle business hours. If you wind up spinning something up that creates a whole bunch of very expensive drivers behind your bill, it's going to take three days, in most cases, before that starts showing up anywhere that you can reasonably expect to get at it. The idea of near real time is a lie unless you want to start instrumenting everything that you're doing to trap the calls and then run cost extrapolation from there. That's hard to do. Observability is a very different story, where latencies start to matter, where being able to get leading indicators of certain events—be they technical or business—starts to be very important. But it seems like it's so hard to wind up getting there from where most people are. Because I know we like to talk dismissively about the past, but let's face it, conference-ware is the stuff we're the proudest of. The reality is the burning dumpster of regret in our data centers that still also drives giant piles of revenue, so you can't turn it off, nor would you want to, but you feel bad about it as a result. It just feels like it's such a big leap. Rachel: It is a big leap. And I think the very first step I would say is trying to get to this point of clarity and being honest with yourself about where you're at and where you want to be. And sometimes not making a choice is a choice, right, as well. So, sticking with the status quo is making a choice. And so, like, as we get into things like the holiday season right now, and I know there's going to be people that are on-call 24/7 during the holidays, potentially, to keep something that's just duct-taped together barely up and running, I'm making a choice; you're making a choice to do that. So, I think the first step is the kind of… at least acknowledging where you're at, where you want to be, and if you're not going to make a change, just understanding the cost and being realistic about it. Corey: Yeah, being realistic, I think, is one of the hardest challenges because it's easy to wind up going for the aspirational story of, “In the future when everything's great.” Like, “Okay, cool. I appreciate the need to plant that flag on the hill somewhere. What's the next step? What can we get done by the end of this week that materially improves us from where we started the week?” And I think that with the aspirational conference-ware stories, it's hard to break that down into things that are actionable, that don't feel like they're going to be an interminable slog across your entire existing environment. Rachel: No, I get it. And for things like, you know, instrumenting and adding tracing and adding OTel, a lot of the time, the return that you get on that investment is… it's not quite like, “I put a dollar in, I get a dollar out,” I mean, something like tracing, you can't get to 60% instrumentation and get 60% of the value. You need to be able to get to, like, 80, 90%, and then you'll get a huge amount of value.
So, it's sort of like you're trudging up this hill, you're charging up this hill, and then finally you get to the plateau, and it's beautiful. But that hill is steep, and it's long, and it's not pretty. And I don't know what to say other than there's a plateau near the top. And those companies that do this well really get a ton of value out of it. And that's the dream, that we want to help customers get up that hill. But yeah, I'm not going to lie, the hill can be steep. Corey: One thing that I find interesting is there's almost a bimodal distribution in companies that I talk to. On the one side, you have companies like, I don't know, a Chronosphere is a good example of this. Presumably you have a cloud bill somewhere and the majority of your cloud spend will be on what amounts to a single application, probably in your case called, I don't know, Chronosphere. It shares the name of the company. The other side of that distribution is the large enterprise conglomerates where they're spending, I don't know, $400 million a year on cloud, but their largest workload is 3 million bucks, and it's just a very long tail of a whole bunch of different workloads, applications, teams, et cetera. So, what I'm curious about from the Chronosphere perspective—or the product you have, not the ‘you' in this metaphor, which gets confusing—is, it feels easier to instrument a Chronosphere-like company that has a primary workload that is the massive driver of most things and get that instrumented and start getting an observability story around that than it does to try and go to a giant company and, “Okay, 1500 teams need to all implement this thing that are all going in different directions.” How do you see it playing out among your customer base, if that bimodal distribution holds up in your world? Rachel: It does and it doesn't. So, first of all, for a lot of our customers, we often start with metrics. And starting with metrics means Prometheus. And Prometheus has hundreds of exporters. It is basically built into Kubernetes. So, if you're running Kubernetes, getting Prometheus metrics out is actually not a very big lift. So, we find that we start with Prometheus, we start with getting metrics in, and we can get a lot—I mean, we have a lot of customers that use us just for metrics, and they get a massive amount of value. But then once they're ready, they can start instrumenting for OTel and start getting traces in as well. And yeah, in large organizations, it does tend to be one team, one application, one service, one department that kind of goes at it and gets all that instrumented. But I've even seen very large organizations, when they get their act together and decide, like, “No, we're doing this,” they can get OTel instrumented fairly quickly. So, I guess it's, like, a lining up. It's more of a people issue than a technical issue a lot of the time. Like, getting everyone lined up and making sure that, like, yes, we all agree. We're on board. We're going to do this. But it's usually, like, start small, and it doesn't have to be all or nothing. We also just recently added the ability to ingest events, which is actually a really beautiful thing, and it's very, very straightforward. It basically just—we connect to your existing other DevOps tools, so whether it's, like, a Buildkite, or a GitHub, or, like, a LaunchDarkly, and then anytime something happens in one of those tools, that gets registered as an event in Chronosphere. And then we overlay those events over your alerts.
So, when an alert fires, the first thing I do is go look at the alert page, and it says, “Hey, someone did a deploy five minutes ago,” or, “There was a feature flag flipped three minutes ago,” and I've solved the problem right then. I don't think of this as—there's not an all-or-nothing nature to any of this stuff. Yes, tracing is a little bit of a—you know, like I said, it's one of those things where you have to make a lot of investment before you get a big reward, but that's not the case in all areas of observability. Corey: Yeah. I would agree. Do you find that there's a significant easy, early win when customers start adopting Chronosphere? Because one of the problems that I've found, especially with things that are holistic, and as you talk about tracing, well, you need to get to a certain point of coverage before you see value. But human psychology being what it is, you kind of want to be able to demonstrate, oh, see, the Meantime To Dopamine needs to come down, to borrow an old phrase. Do you find that there are some easy wins that start to help people see the light? Because otherwise, it just feels like a whole bunch of work for no discernible benefit to them. Rachel: Yeah, at least for the Chronosphere customer base, one of the areas where we're seeing a lot of traction this year is in optimizing the costs, like, coming back to the cost story of their overall observability bill. So, we have this concept of the control plane in our product where all the data that we ingest hits the control plane. At that point, that customer can look at the data, analyze it, and decide this is useful, this is not useful. And actually, not just decide that, but we show them what's useful, what's not useful. What's being used, what's high cardinality and high cost but maybe no one's touched it. And then we can make decisions around aggregating it, dropping it, combining it, doing all sorts of fancy things, changing the—you know, downsampling it. On the trace side, we can do it both head-based and tail-based. On the metrics side, it's as it hits the control plane and then streams out. And then they only pay for the data that we store. So typically, customers come on board and immediately reduce their observability dataset by 60%. Like, that's just straight up, that's the average. And we've seen some customers get really aggressive, get up to, like, in the 90s, where they realize we're only using 10% of this data. Let's get rid of the rest of it. We're not going to pay for it. So, paying a lot less helps in a lot of ways. It also helps companies get more coverage of their observability. It also helps customers get more coverage of their overall stack. So, I was talking recently with an autonomous vehicle driving company that recently came to us from the Dog, and they had made some really tough choices and were no longer monitoring their pre-prod environments at all because they just couldn't afford to do it anymore. It's like, well, now they can, and we're still saving them money. Corey: I think that there's also the downstream effect of the money saving to that, for example, I don't fix observability bills directly. But, “Huh, why is your CloudWatch bill through the roof?” Or data egress charges in some cases? It's, oh, because your observability vendor is pounding the crap out of those endpoints and pulling all your log data across the internet, et cetera.
And that tends to mean, oh, yeah, it's not just the first-order effect; it's the second and third and fourth-order effects this winds up having. It becomes almost a holistic challenge. I think that trying to put observability in its own bucket, on some level—when you're looking at it from a cost perspective—starts to be, I guess, a structure that makes less and less sense in the fullness of time. Rachel: Yeah, I would agree with that. I think that just looking at the bill from your vendor is one very small piece of the overall cost you're incurring. I mean, all of the things you mentioned, the egress, the CloudWatch, the other services, it's impacting. What about the people? Corey: Yeah, it sure is great that your team works for free. Rachel: [laugh]. Exactly, right? I know, and it makes me think a little bit about that viral story about that particular company with a certain vendor that had a $65 million per year observability bill. And that impacted not just them, but, like, it showed up in both vendors' financial filings. Like, how did you get there? How did you get to that point? And I think this all comes back to the value in the ROI equation. Yes, we can all sit in our armchairs and be like, “Well, that was dumb,” but I know there are very smart people out there that just got into a bad situation by kicking the can down the road on not thinking about the strategy. Corey: Absolutely. I really want to thank you for taking the time to speak with me about, I guess, the bigger picture questions rather than the nuts and bolts of a product. I like understanding the overall view that drives a lot of these things. I don't feel I get to have enough of those conversations some weeks, so thank you for humoring me. If people want to learn more, where's the best place for them to go? Rachel: So, they should definitely check out the Chronosphere website. Brand new beautiful spankin' new website: chronosphere.io. And you can also find me on LinkedIn. I'm not really on the Twitters so much anymore, but I'd love to chat with you on LinkedIn and hear what you have to say. Corey: And we will, of course, put links to all of that in the [show notes 00:28:26]. Thank you so much for taking the time to speak with me. It's appreciated. Rachel: Thank you, Corey. Always fun. Corey: Rachel Dines, Head of Product and Solutions Marketing at Chronosphere. This has been a featured guest episode brought to us by our friends at Chronosphere, and I'm Corey Quinn. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry and insulting comment that I will one day read once I've finished building my highly available rsyslog system to consume it with. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.
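A minimal sketch of the “open, open, open” instrumentation idea discussed above, in Elixir for consistency with the other sketches in these notes, assuming the `opentelemetry_api`, `opentelemetry`, and `opentelemetry_exporter` Hex packages; the span and attribute names are illustrative, not from the episode:

```elixir
defmodule Checkout do
  # The OpenTelemetry API is all the application ever touches; nothing here
  # names a vendor. Spans flow to whatever exporter the runtime is configured
  # with, e.g. in config/runtime.exs:
  #
  #   config :opentelemetry, traces_exporter: :otlp
  #
  require OpenTelemetry.Tracer, as: Tracer

  def place_order(items) do
    Tracer.with_span "checkout.place_order" do
      # Attributes become queryable dimensions in any OTLP-compatible backend.
      Tracer.set_attributes([{"order.item_count", length(items)}])
      # ... business logic would run here ...
      :ok
    end
  end
end
```

Because the exporter is chosen in configuration, moving from a console exporter in development to an OTLP endpoint at whichever backend you pick later is a config change rather than a code change, which is the future-proofing argument made in the conversation.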
We break down everything you need to know from .NET Conf 2023 and the .NET 8 release! Follow Us Frank: Twitter, Blog, GitHub James: Twitter, Blog, GitHub Merge Conflict: Twitter, Facebook, Website, Chat on Discord Music: Amethyst Seer - Citrine by Adventureface ⭐⭐ Review Us (https://itunes.apple.com/us/podcast/merge-conflict/id1133064277?mt=2&ls=1) ⭐⭐ Machine transcription available on http://mergeconflict.fm
Shortcuts aren't just a little AppleScript goodness that can be used to make droplets. Although you could do that. Shortcuts can be used to perform fairly complex atomic operations. In this session we'll look at some of the work Damian Cavanagh has posted to his GitHub, and the crazy lengths he went to in order to have Shortcuts that work with device management options. Hosts: Tom Bridge - @firstname.lastname@example.org Marcus Ransom - @marcusransom Guests: Damian Cavanagh - LinkedIn Links: ShortcutsForJamfPro Canvas podcast - Workflow Series Automators podcast - Shortcuts utility apps Actions app GitHub repo Sponsors: Kandji Kolide Watchman Monitoring If you're interested in sponsoring the Mac Admins Podcast, please email email@example.com for more information. Get the latest about the Mac Admins Podcast, follow us on Twitter! We're @MacAdmPodcast! The Mac Admins Podcast has launched a Patreon Campaign! Our named patrons this month include Weldon Dodd, Damien Barrett, Justin Holt, Chad Swarthout, William Smith, Stephen Weinstein, Seb Nash, Dan McLaughlin, Joe Sfarra, Nate Cinal, Jon Brown, Dan Barker, Tim Perfitt, Ashley MacKinlay, Tobias Linder Philippe Daoust, AJ Potrebka, Adam Burg, & Hamlin Krewson
ANTIC Episode 103 - POKEY The Atari Guru In this episode of ANTIC The Atari 8-Bit Computer Podcast…results of the ABBUC 2023 Software Contest, upcoming 2024 shows (and an Atari Party still to come in 2023), and Kay has a great time testing out POKEY The Atari Guru… READY! Recurring Links Floppy Days Podcast AtariArchives.org AtariMagazines.com Kay's Book “Terrible Nerd” New Atari books scans at archive.org ANTIC feedback at AtariAge Atari interview discussion thread on AtariAge Interview index: here ANTIC Facebook Page AHCS Eaten By a Grue Next Without For Links for Items Mentioned in Show: What we've been up to QSO Today Episode 475 Kay Savetz - https://archive.org/details/qso-today-475 The 1056 Board -- Atari 400 1056k Memory Upgrade - https://www.ebay.com/itm/275939475102 News Version 6.4.0 of the RECOIL (Retro Computer Image Library) graphics browser - http://recoil.sourceforge.net POKEY The Atari Guru by Andy Diller - https://chat.openai.com/g/g-t3fdHnbbK-pokey-the-atari-guru Atari 8-bit clubs - https://www.virtualatari.org/user-groups.html ABACUS - Atari Bay Area Computer User Society -- https://www.abacusonline.org/ ABBUC - Atari Bit Byter User Club -- https://www.abbuc.de/ ABBUC HAR - Hanover ABBUC Regional Group -- https://www.bertelmann.org/har/ ABBUC RAF - Frankfurt ABBUC Regional Group -- https://www.abbuc-raf.de/ ABBUC RENO - ABBUC Regional Group North -- https://andymanone.dyndns.org/reno/ ACEC - Atari Computer Enthusiasts of Columbus -- https://acec.atari.org/ AKPV - Atariklub Prostejov -- https://atariklub.prostejov.cz/ CCC - Chestnut Computer Club (UK) -- https://homepage.ntlworld.com/derryck123/ HA - HispAtari - The Spanish-language portal for Atari users -- https://hispatari.mforos.com/ Rose City Atari Club http://www.rosecityatariclub.com/ SAK - The Swedish Atari Club (Svenska Atariklubben) -- https://www.sak.nu/ SCAT - Suburban Chicago ATarians -- https://www.scatarians.org/ SPACE - Saint Paul Atari Computer Enthusiasts -- https://space.atari.org/ Terry Stewart's new video on the Atari 800 - https://x.com/classiccomputNZ/status/1723091786619658314 ABBUC software contest 2023 - All entries in ABBUC Special Magazine 53 and on the associated disk: ABBUC official results - https://abbuc.de/2023/10/ergebnis-abbuc-software-wettbewerb-2023-result-abbuc-software-contest-2023/ Thread at AtariAge - https://forums.atariage.com/topic/350228-abbuc-software-contest-2023/ Zero Page Homebrew, Part 1 - https://www.youtube.com/watch?v=nW9UAk2Zsfc Zero Page Homebrew, Part 2 - https://www.youtube.com/watch?v=XKL0rPzVvLY Videos of the top 5 ABBUC winners: Time Wizard by Krzysztof "Amarok" Piotrowski - video by Saberman RetroNews Night Knight by Lars Langhans - video by Saberman RetroNews Zauberball by Olivier Cyranka - video by Saberman RetroNews Double by PopMilo - video by Saberman RetroNews Der Schranker 3 by Janko Grewe - video by Saberman RetroNews Atari WozMon by freetz - https://forums.atariage.com/topic/352757-atari-wozmon-for-abbuc-software-contest-2023/ Zylon Defenders, an iOS game tribute to Star Raiders, the 1979 Atari 8-bit computers' killer app Apple Store - https://apps.apple.com/us/app/zylon-defenders/id1191618583 GitHub - https://github.com/jglasse/ZylonDefenders New Atari podcast from Wade (Inverse ATASCII) called The ANTIC Files - https://inverseatascii.info/ Atari 8-Bit/5200 Homebrew Games Released/Completed/WIP in 2023 - Zero Page Homebrew - https://forums.atariage.com/topic/345916-atari-8-bit5200-homebrew-games-releasedcompletedwip-in-2023/ Celebrating the Iconic Atari Logo - AUSRETROGAMER - 
https://ausretrogamer.com/celebrating-the-iconic-atari-logo/ Upcoming Shows Atari Party 2023! - Saturday, December 2, 2023, 1pm to 4pm - Quakertown Train Station, Quakertown, PA - https://quakertowntrainstation.org - organized by Peter Fletcher Vintage Computer Festival SoCal - February 17-18, 2024 - Hotel Fera Events Center, Orange, CA - vcfsocal.com Interim Computer Festival SPRING - March 23rd and 24th, 2024 - Intraspace, Seattle, WA - https://sdf.org/icf/ Midwest Gaming Classic - April 5-7 - Wisconsin Center, Milwaukee, WI - https://www.midwestgamingclassic.com/ VCF East - April 5-7, 2024 - Wall, NJ - http://www.vcfed.org Indy Classic Computer and Video Game Expo - April 13-14 - Crowne Plaza Airport Hotel, Indianapolis, IN - https://indyclassic.org/ VCF Southwest - June 14-16, 2024 - Davidson-Gundy Alumni Center at UT Dallas - https://www.vcfsw.org/ Boatfest Retro Computer Expo - June 14-16 - Hurricane, WV - http://boatfest.info Southern Fried Gaming Expo and VCF Southeast - July 19-21, 2024 - Atlanta, GA - https://gameatl.com/ Fujiama - July 23-28 - Lengenfeld, Germany - http://atarixle.ddns.net/fuji/2024/ Silly Venture SE (Summer Edition) - Aug. 15-28 - Gdansk, Poland - https://www.demoparty.net/silly-venture/silly-venture-2024-se Portland Retro Gaming Expo - September 27-29, 2024 - Oregon Convention Center, Portland, OR - https://retrogamingexpo.com/ Silly Venture WE (Winter Edition) - Dec. 5-8 - Gdansk, Poland - https://www.demoparty.net/silly-venture/silly-venture-2024-we YouTube Videos A8PicoCart flash-cart ATARI 8-bit XL/XE/XEGS - hichal - https://www.youtube.com/watch?v=1zp8EeBGnIk How Do You Make Cartridges for the Gorgeous Atari 800? YARC Yet Another Retro Channel - https://www.youtube.com/watch?v=KjSOJErwtvQ New Atari solderless HDMI mod - LumaCode GTIA-digitizer - TheRetroChannel - https://www.youtube.com/watch?v=prNM38bC1zM 8x Cross Platform Games (Atari 8-Bit) from Fabrizio Caruso - ZeroPage Homebrew - https://www.youtube.com/watch?v=UbWK5rz24OQ New at Archive.org Portland Retro Gaming Expo 2023 Program https://archive.org/details/6502-assembly-language-programming-for-apple-commodore-and-atari-computers-christopher-lampton-1985/page/n11/mode/2up Atari Explorer espana - https://archive.org/details/@bultro?query=atari SBACE Gazette 1989-01 SBACE Gazette 1989-03 Pikes Peek Poke Atari Computer Enthusiasts newsletter 1989-07 Pikes Peek Poke Atari Computer Enthusiasts newsletter 1989-05 Pikes Peek Poke Atari Computer Enthusiasts newsletter 1989-11 Pikes Peek Poke Atari Computer Enthusiasts newsletter 1989-09 New at Github https://github.com/pvbestinfoo/Atari_8-Bit_Rom_Image_File_Explorer https://github.com/fredlcore/AtariWozMon Feedback Michael Mulhern scanning of “Your Computer” - https://archive.org/details/@jongleur? Ellen Jameson comment at Bob Evans, Capital Children's Museum administrator — Atari computer lab — interview Bill Hogue, Miner 2049er — interview https://www.youtube.com/watch?v=593uvgt4USo&lc=UgyxxFpZPUlLLvimQ7x4AaABAg
FULL SHOW NOTES https://podcast.nz365guy.com/502 What happens when a tech enthusiast from Ecuador finds his calling in the vibrant city of Amsterdam? Join us as we chat with Wilmer Alcivar, an MVP from the Netherlands who is passionate about everything CRM, social media integration, and adding value to CRM projects. From his early days in Ecuador to his current involvement in the Dynamics sales model and his shift towards the Power Platform, Wilmer's journey is not only intriguing but also inspiring. Picture this – a laid-back conversation about tech, culture, and personal anecdotes, all while navigating through the maze of Microsoft Business Applications. Learn from Wilmer as he regales us with his experience with model-driven apps, canvas apps, and custom pages using the Creator Kit. He also takes us through his pioneering work with Power Pages, Power Automate Desktop, and the challenges he sees in the Power Platform. From his initial rejection to finally becoming an MVP, Wilmer's candid recount of his journey offers valuable insights. This episode promises a unique blend of personal wisdom and professional insight, all delivered with a dash of Wilmer's signature charm. OTHER RESOURCES: Microsoft MVP YouTube Series - How to Become a Microsoft MVP 90-Day Mentoring Challenge - https://ako.nz365guy.com/ Power 365 Academy - https://www.power365academy.com/ GitHub - https://github.com/walcivar AgileXRM - The integrated BPM for Microsoft Power Platform Support the show If you want to get in touch with me, you can message me here on LinkedIn. Thanks for listening
Saron chats with Laura Thorson, Program Manager at GitHub. Laura talks about how she was always interested in singing, dancing and music growing up which led her to UCLA on a scholarship to play the oboe. She tells us about her experience at UCLA and her decision to go to a coding bootcamp after graduation as opposed to searching for a job with her English Lit degree. Laura then describes the jobs she landed after bootcamp at Salesforce, Twitter, Meta and now GitHub and how LinkedIn played a huge role in helping her land these opportunities. Show Links Code Comments (sponsor) IRL (sponsor) STK AdTech Laura's GitHub Laura's Twitter
This week Simon Quigley, the release manager for Lubuntu, joins Ask Noah to talk about the 24.04 release! We give you some gift ideas for the geek in your life, and of course we answer your questions. -- During The Show -- 00:55 Gratitude Gratitude is good for your health Reduces stress Can be measured in as small as a few weeks Noah's thankful for Friends willing to help For profit companies contributing to open source Developers who donate their time As we get older, gratefulness grows Steve's thankful for Ability to learn from open source code Hardware that "just won't die" 09:00 Geek Gift Recommendations MokerLink Switch (https://www.amazon.com/MokerLink-Managed-Ethernet-Auto-Negotiation-Bandwidth/dp/B0C53H61LN) Seagate 20TB HDD (Amazon) (https://www.amazon.com/Seagate-ST18000NM000J-Internal-Surveillance-Supported/dp/B09MWKXR2T) Seagate 20TB (Newegg) (https://www.newegg.com/seagate-exos-x20-st20000nm007d-20tb/p/N82E16822185011?Item=N82E16822185011) Seagate IronWolf BackBlaze Drive Review (https://www.backblaze.com/blog/backblaze-drive-stats-for-q3-2023/) Thank You Home Assistant Shelly Devices (https://www.shelly.com/en-us/products/shop#unfiltered) 19:11 Nothing/SunBird App ARS Technica (https://arstechnica.com/gadgets/2023/11/nothings-imessage-app-was-a-security-catastrophe-taken-down-in-24-hours/) Company claimed to have "hacked" iMessage Many blogs and sites cried foul Got it pulled from app stores Dumpster fire Reuploaded the app under the SunBird name Service must be auditable Plenty of options for E2EE Lowest common denominator Beeper iMessage solution E2EE is probably good for the world 25:13 Lubuntu Release Simon Quigley - Release Manager for Lubuntu Calamares System Installer Lightweight & full install Optional programs Why the Calamares Installer? Customize Menu Welcome Screen? Installer and updates How the installer implements Snaps How did you land on these applications Element (Snap) virt-manager Thunderbird Krita (Snap) What would you tell the next generation? What did your start look like? 
42:53 News Wire Oracle Linux 9.3 - Oracle (https://blogs.oracle.com/linux/post/oracle-linux-9-update-3) Rocky Linux 9.3 - 9 to 5 Linux (https://9to5linux.com/rocky-linux-9-3-brings-back-cloud-and-container-images-for-powerpc-64-bit) EndeavourOS Adopts KDE - Debugpoint News (https://debugpointnews.com/endeavouros-galileo/) Wireshark 4.2.0 - Help Net Security (https://www.helpnetsecurity.com/2023/11/17/wireshark-4-2-0-open-source-packet-analysis/) Handbrake 1.7 - GitHub (https://github.com/HandBrake/HandBrake/releases/tag/1.7.0) Calibre 7.0 - Calibre (https://calibre-ebook.com/whats-new) Distrobox - GitHub (https://github.com/89luca89/distrobox/releases/tag/1.6.0) Firefox 120.0 - Mozilla (https://www.mozilla.org/en-US/firefox/120.0/releasenotes/) Olimex Drone Swarm - Hackster.io (https://www.hackster.io/news/olimex-shows-off-an-open-hardware-linux-based-autonomous-drone-swarm-88ed4bfbd390) Collabora NVK - Collabora (https://www.collabora.com/news-and-blog/news-and-events/nvk-reaches-vulkan-conformance.html) TikTok Edge Accelerator - The News Stack (https://thenewstack.io/tiktok-to-open-source-cloud-neutralizing-edge-accelerator/) TETRA Going Open Source - Bank Info Security (https://www.bankinfosecurity.com/european-telecom-body-to-open-source-radio-encryption-system-a-23599) IPStorm Shut Down - PC Mag (https://www.pcmag.com/news/fbi-shuts-down-ipstorm-malware-that-targeted-windows-mac-linux) Open Se Cura - Mark Tech Post (https://www.marktechpost.com/2023/11/17/meet-googles-project-open-se-cura-an-open-source-framework-to-accelerate-the-development-of-secure-scalable-transparent-and-efficient-ai-systems/) Kyutai - Tech Crunch (https://techcrunch.com/2023/11/17/kyutai-is-an-french-ai-research-lab-with-a-330-million-budget-that-will-make-everything-open-source/) 45:00 NAT Reflection - Sebastian All the same thing NAT Reflection Most straightforward Least "hacky" Local DNS Caching issues 2 sources of truth Put the server on a different subnet Netgate NAT Reflection Doc (https://docs.netgate.com/pfsense/en/latest/nat/reflection.html) 52:00 Hikvision Follow Up - William Thank You Glen! IE Tab Extension requires internet connection Sold Hikvision Bought a used Axis camera on eBay - Just worked! -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode, check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/364) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed) Special Guest: Simon Quigley.
Don't miss the latest Azure for Sports Podcast episode, where we have a special guest from GitHub, Elizabeth Barrord. She will show us how GitHub Copilot can make our lives easier as developers by helping us write better code faster and with fewer errors. Watch us put GitHub Copilot to the test in a LIVE DEMO and see how it can help you modernize your applications! To learn more about GitHub Copilot, the fantastic tool we used in the demo, visit: https://github.com/features/copilot
Sam and Greg are back at OpenAI - here's my message to them Subscribe to the Multimodal Podcast! Spotify - https://open.spotify.com/show/7qrWSE7ZxFXYe8uoH8NIFV Apple Podcasts - https://podcasts.apple.com/us/podcast/multimodal-by-bakz-t-future/id1564576820 Google Podcasts - https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2Jha3p0ZnV0dXJlL2ZlZWQueG1s Stitcher - https://www.stitcher.com/show/multimodal-by-bakz-t-future Other Podcast Apps (RSS Link) - https://feed.podbean.com/bakztfuture/feed.xml Connect with me: YouTube - https://www.youtube.com/bakztfuture Substack Newsletter - https://bakztfuture.substack.com Twitter - https://www.twitter.com/bakztfuture Instagram - https://www.instagram.com/bakztfuture GitHub - https://www.github.com/bakztfuture
As the industry continues to change, keeping a pulse on it becomes even more difficult. Luckily, the team over at Common Room partnered with a number of DevRel professionals (some of whom are on this call!) to create a survey focused on DevRel compensation as well as roles and responsibilities, business impact, success metrics, and personal wellbeing. In this episode, we'll talk about the results of the survey and learn what we can do to push the industry forward. 2023 Common Room Survey Results Overview (https://www.commonroom.io/blog/2023-developer-relations-compensation-and-culture-report-overview/) Checkouts Rebecca Marshburn * The Color of Law: A Forgotten History of How our Government Segregated America (https://www.amazon.com/Color-Law-Forgotten-Government-Segregated/dp/1631494538?&linkCode=sl1&tag=persea-20&linkId=4c15702a44d9c65837b15d8b7d3976e7&language=en_US&ref_=as_li_ss_tl) by Richard Rothstein * Common Room DevRel playbooks (https://www.commonroom.io/playbooks/?filters=community/developer-relations) - Like ‘How to automate your GitHub triage processes' and ‘Easily meet your support SLAs': * re:Invent - come join our DevRel brunch! Invite here (https://commonroombrunchinvegas.splashthat.com/) - open to all! PJ Hagerty * New Wu-Tang Single (https://combine.fm/spotify/track/3vTSGUgTcDJHO47dlkzDcU) * Boygenius - The Rest (https://open.spotify.com/album/1n0esOkFQdL74PwMwTVgtz?si=QHLrBqmESzWpmFdqBOLSyg) * Southern Rites - a Photo documentary (https://www.eastman.org/doc-southernrites) * Ocean at the End of the Lane (https://www.amazon.com/Ocean-End-Lane-Novel/dp/0063070707) by Neil Gaiman Jason Hand * Re:invent Sessions (https://dtdg.co/3slq5ks) Mary Thengvall * Demon Copperhead (https://amzn.to/3tImsVQ) by Barbara Kingsolver * P!nk - new(ish) album (https://pink.lnk.to/TRUSTFALL) + concert experience Enjoy the podcast? Please take a few moments to leave us a review on iTunes (https://itunes.apple.com/us/podcast/community-pulse/id1218368182?mt=2) and follow us on Spotify (https://open.spotify.com/show/3I7g5WfMSgpWu38zZMjet?si=565TMb81SaWwrJYbAIeOxQ), or leave a review on one of the other many podcasting sites that we're on! Your support means a lot to us and helps us continue to produce episodes every month. Like all things Community, this too takes a village. Episode artwork by UX Indonesia (https://unsplash.com/@uxindo?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash) on Unsplash (https://unsplash.com/@uxindo?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash) Special Guest: Rebecca Marshburn.
Jono Bacon's passion for building communities has been a driving force in a career that has taken him from Canonical to GitHub to founding the Community Leadership Core community accelerator. In this episode, Jono shares his definition of community, how a community can create a movement, and the differences between the two. We also get a bit of insight into how he developed his passion for building communities and why he continues helping companies build and support theirs through the Community Leadership Core. When Jono speaks about communities he is involved with, he uses “we” instead of “I” to describe their achievements, so I had him dig into that a bit more as we explored the power dynamics that have a huge influence on the success of a community or movement. Highlights: I introduce Jono, who is the founder of Community Leadership Core (0:28) Jono shares more about his passion for building communities and why he started Community Leadership Core (0:51) Jono goes into his background, discovering how Linux was created, and finding connection to others through open source (2:47) Jono reflects on his time at Canonical and what he learned (10:46) How Jono defines and thinks about “community” (13:10) The difference between building a community and creating a movement (15:50) Using “we” vs “I” in communities to encourage collaboration (18:05) Where Jono sees companies missing the mark in community building (20:03) Jono explains what delivery looks like in the context of community (22:31) Jono shares examples of successful communities (27:05) Communities Jono enjoys participating in (28:44) How to start a community from scratch (31:22) A quick summary of the Community Leadership Core (32:40) Links: Jono LinkedIn: https://www.linkedin.com/in/jonobacon Twitter: https://twitter.com/jonobacon Company: communityleadershipcore.com
A new version of the Steam Deck looks to be a nice improvement, Amazon's new Linux-based OS is probably bad news for Fire TV hackers, great news for GNOME, Signal tells us how expensive it is to run its service, GitHub goes all in on Copilot, our speculation about the OpenAI drama, and a mini...
In episode 65 of o11ycast, Jess and Martin speak with Sophie DeBenedetto of GitHub. This talk explores observability at GitHub, the value of tracing, the BEAM ecosystem, the Elixir language, and insights on leveraging observability at scale.
CHAOSScast – Episode 74

On this episode, our host Georg Link kicks off the discussion, introducing a stellar lineup of panelists: Sean Goggins, Yehui Wang, Mike Nolan, and Cali Dolfi. Today's topics are the CHAOSS software projects Augur and GrimoireLab and the different applications built on top of them. The panelists discuss the projects they are involved in, such as the Augur project, OSS Compass, and Project Aspen's 8Knot. Then we delve into Mystic, a prototype aiming to transform how academic contributions are recognized and valued. The discussion dives deep into the role of CHAOSS software in open source and community health, the Augur and GrimoireLab projects, ecosystem-level analysis, and data visualization. Press download now to hear more!

[00:00:58] The panelists each introduce themselves.
[00:03:03] Georg explains the origins of CHAOSS software, particularly Augur and GrimoireLab, and their development. He dives into GrimoireLab's focus on data quality and flexibility, and its identity management tool, Sorting Hat.
[00:05:55] Sean details Augur's inception, its focus on a relational database, and its capabilities in data collection and validation. Georg and Sean recall Augur's early days, focusing on GitHub archive data, and its evolution into a comprehensive system.
[00:09:28] Yehui discusses OSS Compass, its goals, the integration of metrics models, and the choice of GrimoireLab as a backend. He elaborates on OSS Compass's ease of use and the adoption of new data sources like Gitee.
[00:14:16] Mike asks how the vast number of repositories on Gitee is handled, and Yehui explains using a message bus, RabbitMQ, for both data handling and parallel processing (a minimal sketch of this fan-out pattern follows this episode's links). Sean clarifies that Gitee is a Git platform similar to GitHub and GitLab, and OSS Compass is the metrics and modeling tool.
[00:15:29] Cali asks about the visualization tool used, and Yehui mentions moving away from Kibana to front-end technologies and libraries like Apache ECharts, an Apache open source project, for creating visualizations.
[00:16:29] Cali describes 8Knot, built under Project Aspen in Plotly Dash and Repel, focusing on mapping open source ecosystems using Augur data. She emphasizes the data science approach to analyzing open source communities and the templated nature of 8Knot, which lets data scientists create visualizations easily.
[00:20:19] Sean comments on how easy Plotly Dash makes adding new visualizations to 8Knot (see the Dash sketch after this episode's links). Cali adds that new visualizations can be easily made and that 8Knot is connected to a maintained Augur database but can also be forked for specific community and company needs.
[00:23:42] Georg underlines the importance of ecosystem-level analysis, especially for software supply chain security. Cali shares the goals of analyzing ecosystems to understand relationships between projects, influenced by Red Hat's interest in investing in interconnected communities.
[00:26:30] The conversation shifts to Mystic, and Mike describes it as a prototype integrating both GrimoireLab and Augur, with the goal of better integrating those projects through development.
[00:27:30] Mike outlines Mystic's goal to serve as a front end to data collection systems, with a specific focus on the academic community's contributions to technology research. He envisions Mystic as a tool for academics to measure the community health and impact of their projects, aiding in tenure and promotion cases.
[00:30:52] Yehui asks about the integration of GrimoireLab and Augur within Mystic and the selection of components for the solution. Mike explains the early stages of integration and the plan to combine data collection services from GrimoireLab into Augur to support undergraduate student development.
[00:32:30] Mike details research on Mystic, including interviews with faculty from various departments to understand their digital collaboration and artifact creation. He aims to develop generalized models of collaboration applicable to multiple data sources, allowing systems like Mystic to support diverse academic disciplines.

Value Adds (Picks) of the week:
[00:36:26] Georg's pick is focusing on the slogan “One day at a time.”
[00:37:12] Cali's pick is doing a Friendsgiving this week.
[00:38:08] Sean's pick is the launch of the '80s TV show ‘Moonlighting.'
[00:38:49] Yehui's pick is riding his bike to work, which is peaceful for him.
[00:39:52] Mike's pick is attending The Turing Way Book Dash.

Panelists: Georg Link, Sean Goggins, Michael Nolan, Cali Dolfi, and Yehui Wang

Links:
CHAOSS (https://chaoss.community/)
CHAOSS Project X/Twitter (https://twitter.com/chaossproj?lang=en)
CHAOSScast Podcast (https://podcast.chaoss.community/)
firstname.lastname@example.org (mailto:email@example.com)
Ford Foundation (https://www.fordfoundation.org/)
Georg Link Website (https://georg.link/)
Sean Goggins Website (https://www.seangoggins.net/)
Mike Nolan LinkedIn (https://www.linkedin.com/in/mikenolansoftware/?originalSubdomain=uk)
Cali Dolfi LinkedIn (https://www.linkedin.com/in/calidolfi/)
Yehui Wang GitHub (https://github.com/eyehwan)
Augur (https://github.com/chaoss/augur)
GrimoireLab (https://chaoss.github.io/grimoirelab/)
Perceval-GitHub (https://github.com/chaoss/grimoirelab-perceval)
Gitee (https://gitee.com/)
RabbitMQ (https://www.rabbitmq.com/)
OSS Compass-GitHub (https://github.com/oss-compass)
Kibana (https://www.elastic.co/kibana)
Apache ECharts (https://echarts.apache.org/en/index.html)
8Knot (https://eightknot.osci.io/)
Building an open source community health analytics platform (Mystic) (https://opensource.com/article/21/9/openrit-mystic)
The Turing Way Book Dashes (https://the-turing-way.netlify.app/community-handbook/bookdash.html)

Special Guests: Cali Dolfi, Mike Nolan, and Yehui Wang.
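For listeners curious how a message bus enables the parallel ingestion Yehui describes at [00:14:16], here is a minimal, hypothetical Python sketch using the pika RabbitMQ client. It illustrates the general fan-out pattern only; the queue and function names are invented, and this is not OSS Compass's actual code.

# Hypothetical sketch: fan repository-collection jobs out over RabbitMQ
# so many workers can process them in parallel. Names are invented.
import json
import pika  # the standard Python RabbitMQ client

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="repo_jobs", durable=True)

def enqueue_repo(repo_url: str) -> None:
    # Producer: publish one repository URL per message.
    channel.basic_publish(
        exchange="",
        routing_key="repo_jobs",
        body=json.dumps({"repo": repo_url}),
        properties=pika.BasicProperties(delivery_mode=2),  # survive broker restarts
    )

def on_job(ch, method, properties, body):
    # Worker: collect data for one repository, then acknowledge the job.
    job = json.loads(body)
    print("collecting", job["repo"])  # stand-in for the real collection step
    ch.basic_ack(delivery_tag=method.delivery_tag)

enqueue_repo("https://gitee.com/example/repo")  # example job
channel.basic_qos(prefetch_count=1)  # hand each idle worker one job at a time
channel.basic_consume(queue="repo_jobs", on_message_callback=on_job)
# channel.start_consuming()  # run this line in each worker process

Because each message is one repository, scaling ingestion is just a matter of starting more worker processes against the same queue.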
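Likewise, the templated Plotly Dash approach behind 8Knot ([00:16:29] and [00:20:19]) can be illustrated by how little code a basic Dash visualization needs. This is a generic sketch with invented data, not 8Knot's actual template; a real app would query a maintained Augur database instead of a hard-coded DataFrame.

# Minimal Plotly Dash app: roughly the shape of work needed to add a
# new visualization. Data and titles are invented for illustration.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Stand-in for a query against an Augur database.
df = pd.DataFrame({
    "month": ["2023-08", "2023-09", "2023-10", "2023-11"],
    "commits": [120, 98, 143, 110],
})
fig = px.line(df, x="month", y="commits", title="Commits per month")

app = Dash(__name__)
app.layout = html.Div([html.H2("Community activity"), dcc.Graph(figure=fig)])

if __name__ == "__main__":
    app.run(debug=True)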