Podcasts about SQL

A language for managing and querying data in relational databases

  • 1,162 PODCASTS
  • 3,846 EPISODES
  • 41m AVG DURATION
  • 1 DAILY NEW EPISODE
  • Nov 29, 2023 LATEST

POPULARITY

[Popularity trend chart, 2016–2023]

Best podcasts about SQL

Show all podcasts related to SQL

Latest podcast episodes about SQL

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Catch us at Modular's ModCon next week with Chris Lattner, and join our community!

Due to Bryan's very wide-ranging experience in data science and AI across Blue Bottle (!), Stitch Fix, Weights & Biases, and now Hex Magic, this episode can be considered a two-parter.

Notebooks = Chat++

We've talked a lot about AI UX (in our meetups, writeups, and guest posts), and today we're excited to dive into a new old player in AI interfaces: notebooks! Depending on your background, you either Don't Like or you Like notebooks — they are the most popular example of Knuth's Literate Programming concept: basically a collection of cells, where each cell can execute code, display it, and share its state with all the other cells in a notebook. Cells can also simply be Markdown to add commentary to the analysis. Notebooks have a long history but most recently became popular as IPython evolved into Project Jupyter, and a wave of notebook-based startups from Observable to DeepNote and Databricks sprung up for the modern data stack.

The first wave of AI applications has been very chat focused (ChatGPT, Character.ai, Perplexity, etc.). Chat as a user interface has a few shortcomings, the major one being the inability to edit previous messages. We enjoyed Bryan's takes on why notebooks feel like “Chat++” and how they are building Hex Magic:

* Atomic actions vs stream of consciousness: in a chat interface, you make corrections by adding more messages to a conversation (i.e. “Can you try again by doing X instead?” or “I actually meant XYZ”). The context can easily get messy and confusing for models (and humans!) to follow. Notebooks' cell structure, on the other hand, allows users to go back to any previous cell and make edits without having to add new ones at the bottom.

* “Airlocks” for repeatability: one of the ideas they came up with at Hex is “airlocks”, a collection of cells that depend on each other and keep each other in sync. If you have a task like “Create a summary of my customers' recent purchases”, there are many sub-tasks to be done (look up the data, sum the amounts, write the text, etc.). Each sub-task will be in its own cell, and the airlock will keep them all in sync together.

* Technical + non-technical users: previously you had to use Python / R / Julia to write notebook code, but with models like GPT-4, natural language is usually enough. Hex is also working on lowering the barrier of entry for non-technical users into notebooks, similar to how Code Interpreter is doing the same in ChatGPT. Obviously notebooks aren't new for developers (OpenAI Cookbooks are a good example), but they haven't had much adoption in less technical spheres. Some of the shortcomings of chat UIs, plus LLMs lowering the barrier of entry to creating code cells, might make them a much more popular UX going forward.

RAG = RecSys!

We also talked about the LLMOps landscape and why it's an “iron mine” rather than a “gold rush”:

I'll shamelessly steal [this] from a friend, Adam Azzam from Prefect. He says that [LLMOps] is more of like an iron mine than a gold mine in the sense of there is a lot of work to extract this precious, precious resource. Don't expect to just go down to the stream and do a little panning. There's a lot of work to be done. And frankly, the steps to go from this resource to something valuable is significant.

Some of my favorite takeaways:

* RAG as RecSys for LLMs: at its core, the goal of a RAG pipeline is finding the most relevant documents based on a task.
This isn't very different from traditional recommendation system products that surface things for users. How can we apply old lessons to this new problem? Bryan cites fellow AIE Summit speaker and Latent Space Paper Club host Eugene Yan in decomposing the retrieval problem into retrieval, filtering, and scoring/ranking/ordering: as AI Engineers increasingly find that long context has tradeoffs, they will also have to relearn the age-old lesson that vector search is NOT all you need, and that a systems-not-models approach is essential to scalable, debuggable RAG. Good thing Bryan has just written the first O'Reilly book about modern RecSys, eh?

* Narrowing down evaluation: while “hallucination” is an easy term to throw around, the reality is more nuanced. A lot of the time, model errors can be automatically fixed: is this JSON valid? If not, why? Is it just missing a closing brace? These smaller issues can be checked and fixed before returning the response to the user, which is easier than fixing the model.

* Fine-tuning isn't all you need: when they first started building Magic, one of the discussions was around fine-tuning a model. In our episode with Jeremy Howard we talked about how fine-tuning leads to loss of capabilities as well. In notebooks, you are often dealing with domain-specific data (i.e. purchases, orders, wardrobe composition, household items, etc.); the fact that the model understands that “items” are probably part of an “order” is really helpful. They have found that GPT-4 + 3.5-turbo were everything they needed to ship a great product, rather than having to fine-tune on notebooks specifically.

Definitely recommend listening to this one if you are interested in getting a better understanding of how to think about AI, data, and how we can use traditional machine learning lessons in large language models.

The AI Pivot

For more Bryan, don't miss his fireside chat at the AI Engineer Summit:

Show Notes

* Hex Magic
* Bryan's new book: Building Recommendation Systems in Python and JAX
* Bryan's whitepaper about MLOps
* “Kitbashing in ML”, slides from his talk on building on top of foundation models
* “Bayesian Statistics The Fun Way” by Will Kurt
* Bryan's Twitter
* “Berkeley man determined to walk every street in his city”
* People:
  * Adam Azzam
  * Graham Neubig
  * Eugene Yan
  * Even Oldridge

Timestamps

* [00:00:00] Bryan's background
* [00:02:34] Overview of Hex and the Magic product
* [00:05:57] How Magic handles the complex notebook format to integrate cleanly with Hex
* [00:08:37] Discussion of whether to build vs buy models - why Hex uses GPT-4 vs fine-tuning
* [00:13:06] UX design for Magic with Hex's notebook format (aka “Chat++”)
* [00:18:37] Expanding notebooks to less technical users
* [00:23:46] The "Memex" as an exciting underexplored area - personal knowledge graph and memory augmentation
* [00:27:02] What makes for good LLMOps vs MLOps
* [00:34:53] Building rigorous evaluators for Magic and best practices
* [00:36:52] Different types of metrics for LLM evaluation beyond just end task accuracy
* [00:39:19] Evaluation strategy when you don't own the core model that's being evaluated
* [00:41:49] All the places you can make improvements outside of retraining the core LLM
* [00:45:00] Lightning Round

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO-in-Residence of Decibel Partners, and today I'm joined by Bryan Bischof. [00:00:15]

Bryan: Hey, nice to meet you.
[00:00:17]Alessio: So Bryan has one of the most thorough and impressive backgrounds we had on the show so far. Lead software engineer at Blue Bottle Coffee, which if you live in San Francisco, you know a lot about. And maybe you'll tell us 30 seconds on what that actually means. You worked as a data scientist at Stitch Fix, which used to be one of the premier data science teams out there. [00:00:38]Bryan: It used to be. Ouch. [00:00:39]Alessio: Well, no, no. Well, you left, you know, so how good can it still be? Then head of data science at Weights and Biases. You're also a professor at Rutgers and you're just wrapping up a new O'Reilly book as well. So a lot, a lot going on. Yeah. [00:00:52]Bryan: And currently head of AI at Hex. [00:00:54]Alessio: Let's do the Blue Bottle thing because I definitely want to hear what's the, what's that like? [00:00:58]Bryan: So I was leading data at Blue Bottle. I was the first data hire. I came in to kind of get the data warehouse in order and then see what we could build on top of it. But ultimately I mostly focused on demand forecasting, a little bit of recsys, a little bit of sort of like website optimization and analytics. But ultimately anything that you could imagine sort of like a retail company needing to do with their data, we had to do. I sort of like led that team, hired a few people, expanded it out. One interesting thing was I was part of the Nestle acquisition. So there was a period of time where we were sort of preparing for that and didn't know, which was a really interesting dynamic. Being acquired is a very not necessarily fun experience for the data team. [00:01:37]Alessio: I build a lot of internal tools for sourcing at the firm and we have a small VCs and data community of like other people doing it. And I feel like if you had a data feed into like the Blue Bottle in South Park, the Blue Bottle at the Hanahaus in Palo Alto, you can get a lot of secondhand information on the state of VC funding. [00:01:54]Bryan: Oh yeah. I feel like the real source of alpha is just bugging a Blue Bottle. [00:01:58]Alessio: Exactly. And what's your latest book about? [00:02:02]Bryan: I just wrapped up a book with a coauthor Hector Yee called Building Production Recommendation Systems. I'll give you the rest of the title because it's fun. It's in Python and JAX. And so for those of you that are like eagerly awaiting the first O'Reilly book that focuses on JAX, here you go. [00:02:17]Alessio: Awesome. And we'll chat about that later on. But let's maybe talk about Hex and Magic before. I've known Hex for a while, I've used it as a notebook provider and you've been working on a lot of amazing AI enabled experiences. So maybe run us through that. [00:02:34]Bryan: So I too, before I sort of like joined Hex, saw it as this like really incredible notebook platform, sort of a great place to do data science workflows, quite complicated, quite ad hoc interactive ones. And before I joined, I thought it was the best place to do data science workflows. And so when I heard about the possibility of building AI tools on top of that platform, that seemed like a huge opportunity. In particular, I lead the product called Magic. Magic is really like a suite of sort of capabilities as opposed to its own independent product. What I mean by that is they are sort of AI enhancements to the existing product. And that's a really important difference from sort of building something totally new that just uses AI. 
It's really important to us to enhance the already incredible platform with AI capabilities. So these are things like the sort of obvious like co-pilot-esque vibes, but also more interesting and dynamic ways of integrating AI into the product. And ultimately the goal is just to make people even more effective with the platform. [00:03:38]Alessio: How do you think about the evolution of the product and the AI component? You know, even if you think about 10 months ago, some of these models were not really good on very math based tasks. Now they're getting a lot better. I'm guessing a lot of your workloads and use cases is data analysis and whatnot. [00:03:53]Bryan: When I joined, it was pre 4 and it was pre the sort of like new chat API and all that. But when I joined, it was already clear that GPT was pretty good at writing code. And so when I joined, they had already executed on the vision of what if we allowed the user to ask a natural language prompt to an AI and have the AI assist them with writing code. So what that looked like when I first joined was it had some capability of writing SQL and it had some capability of writing Python and it had the ability to explain and describe code that was already written. Those very, what feel like now primitive capabilities, believe it or not, were already quite cool. It's easy to look back and think, oh, it's like kind of like Stone Age in these timelines. But to be clear, when you're building on such an incredible platform, adding a little bit of these capabilities feels really effective. And so almost immediately I started noticing how it affected my own workflow because ultimately as sort of like an engineering lead and a lot of my responsibility is to be doing analytics to make data driven decisions about what products we build. And so I'm actually using Hex quite a bit in the process of like iterating on our product. When I'm using Hex to do that, I'm using Magic all the time. And even in those early days, the amount that it sped me up, that it enabled me to very quickly like execute was really impressive. And so even though the models weren't that good at certain things back then, that capability was not to be underestimated. But to your point, the models have evolved between 3.5 Turbo and 4. We've actually seen quite a big enhancement in the kinds of tasks that we can ask Magic and even more so with things like function calling and understanding a little bit more of the landscape of agent workflows, we've been able to really accelerate. [00:05:57]Alessio: You know, I tried using some of the early models in notebooks and it actually didn't like the IPyNB formatting, kind of like a JSON plus XML plus all these weird things. How have you kind of tackled that? Do you have some magic behind the scenes to make it easier for models? Like, are you still using completely off the shelf models? Do you have some proprietary ones? [00:06:19]Bryan: We are using at the moment in production 3.5 Turbo and GPT-4. I would say for a large number of our applications, GPT-4 is pretty much required. To your question about, does it understand the structure of the notebook? And does it understand all of this somewhat complicated wrappers around the content that you want to show? We do our very best to abstract that away from the model and make sure that the model doesn't have to think about what the cell wrapper code looks like. Or for our Magic charts, it doesn't have to speak the language of Vega. 
These are things that we put a lot of work in on the engineering side, to the AI engineer profile. This is the AI engineering work to get all of that out of the way so that the model can speak in the languages that it's best at. The model is quite good at SQL. So let's ensure that it's speaking the language of SQL and that we are doing the engineering work to get the output of that model, the generations, into our notebook format. So too for other cell types that we support, including charts, and just in general, understanding the flow of different cells, understanding what a notebook is, all of that is hard work that we've done to ensure that the model doesn't have to learn anything like that. I remember early on, people asked the question, are you going to fine tune a model to understand Hex cells? And almost immediately, my answer was no. No we're not. Using fine-tuned models in 2022, I was already aware that there are some limitations of that approach and frankly, even using GPT-3 and GPT-2 back in the day in Stitch Fix, I had already seen a lot of instances where putting more effort into pre- and post-processing can avoid some of these larger lifts. [00:08:14]Alessio: You mentioned Stitch Fix and GPT-2. How has the balance between build versus buy, so to speak, evolved? So GPT-2 was a model that was not super advanced, so for a lot of use cases it was worth building your own thing. Is with GPT-4 and the likes, is there a reason to still build your own models for a lot of this stuff? Or should most people be fine-tuning? How do you think about that? [00:08:37]Bryan: Sometimes people ask, why are you using GPT-4 and why aren't you going down the avenue of fine-tuning today? I can get into fine-tuning specifically, but I do want to talk a little bit about the good old days of GPT-2. Shout out to Reza. Reza introduced me to GPT-2. I still remember him explaining the difference between general transformers and GPT. I remember one of the tasks that we wanted to solve with transformer-based generative models at Stitch Fix were writing descriptions of clothing. You might think, ooh, that's a multi-modal problem. The answer is, not necessarily. We actually have a lot of features about the clothes that are almost already enough to generate some reasonable text. I remember at that time, that was one of the first applications that we had considered. There was a really great team of NLP scientists at Stitch Fix who worked on a lot of applications like this. I still remember being exposed to the GPT endpoint back in the days of 2. If I'm not mistaken, and feel free to fact check this, I'm pretty sure Stitch Fix was the first OpenAI customer, unlike their true enterprise application. Long story short, I ultimately think that depending on your task, using the most cutting-edge general model has some advantages. If those are advantages that you can reap, then go for it. So at Hex, why GPT-4? Why do we need such a general model for writing code, writing SQL, doing data analysis? Shouldn't a fine-tuned model just on Kaggle notebooks be good enough? I'd argue no. And ultimately, because we don't have one specific sphere of data that we need to write great data analysis workbooks for, we actually want to provide a platform for anyone to do data analysis about their business. To do that, you actually need to entertain an extremely general universe of concepts. So as an example, if you work at Hex and you want to do data analysis, our projects are called Hexes. That's relatively straightforward to teach it. 
There's a concept of a notebook. These are data science notebooks, and you want to ask analytics questions about notebooks. Maybe if you trained on notebooks, you could answer those questions, but let's come back to Blue Bottle. If I'm at Blue Bottle and I have data science work to do, I have to ask it questions about coffee. I have to ask it questions about pastries, doing demand forecasting. And so very quickly, you can see that just by serving just those two customers, a model purely fine-tuned on like Kaggle competitions may not actually fit the bill. And so the more and more that you want to build a platform that is sufficiently general for your customer base, the more I think that these large general models really pack a lot of additional opportunity in. [00:11:21]Alessio: With a lot of our companies, we talked about stuff that you used to have to extract features for, now you have out of the box. So say you're a travel company, you want to do a query, like show me all the hotels and places that are warm during spring break. It would be just literally like impossible to do before these models, you know? But now the model knows, okay, spring break is like usually these dates and like these locations are usually warm. So you get so much out of it for free. And in terms of Magic integrating into Hex, I think AI UX is one of our favorite topics and how do you actually make that seamless. In traditional code editors, the line of code is like kind of the atomic unit and HEX, you have the code, but then you have the cell also. [00:12:04]Bryan: I think the first time I saw Copilot and really like fell in love with Copilot, I thought finally, fancy auto-complete. And that felt so good. It felt so elegant. It felt so right sized for the task. But as a data scientist, a lot of the work that you do previous to the ML engineering part of the house, you're working in these cells and these cells are atomic. They're expressing one idea. And so ultimately, if you want to make the transition from something like this code, where you've got like a large amount of code and there's a large amount of files and they kind of need to have awareness of one another, and that's a long story and we can talk about that. But in this atomic, somewhat linear flow through the notebook, what you ultimately want to do is you want to reason with the agent at the level of these individual thoughts, these atomic ideas. Usually it's good practice in say Jupyter notebook to not let your cells get too big. If your cell doesn't fit on one page, that's like kind of a code smell, like why is it so damn big? What are you doing in this cell? That also lends some hints as to what the UI should feel like. I want to ask questions about this one atomic thing. So you ask the agent, take this data frame and strip out this prefix from all the strings in this column. That's an atomic task. It's probably about two lines of pandas. I can write it, but it's actually very natural to ask magic to do that for me. And what I promise you is that it is faster to ask magic to do that for me. At this point, that kind of code, I never write. And so then you ask the next question, which is what should the UI be to do chains, to do multiple cells that work together? Because ultimately a notebook is a chain of cells and actually it's a first class citizen for Hex. So we have a DAG and the DAG is the execution DAG for the individual cells. This is one of the reasons that Hex is reactive and kind of dynamic in that way. 
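To make the execution-DAG idea concrete, here is a minimal sketch of reactive re-execution in Python. It is an illustration of the general pattern only, not Hex's implementation; the edges mapping (each cell to the cells that read its outputs) is a hypothetical stand-in.

```python
def downstream_order(edges: dict[str, set[str]], changed: str) -> list[str]:
    """Given cell-dependency edges (cell -> cells that read its outputs),
    return the cells downstream of `changed` in a valid re-execution order."""
    # Collect everything reachable from the edited cell.
    affected, stack = set(), [changed]
    while stack:
        node = stack.pop()
        for child in edges.get(node, ()):
            if child not in affected:
                affected.add(child)
                stack.append(child)
    # Topologically order just the affected subgraph (Kahn's algorithm).
    indeg = {n: 0 for n in affected}
    for n in affected:
        for child in edges.get(n, ()):
            if child in affected:
                indeg[child] += 1
    ready = [n for n, d in indeg.items() if d == 0]
    order = []
    while ready:
        n = ready.pop()
        order.append(n)
        for child in edges.get(n, ()):
            if child in affected:
                indeg[child] -= 1
                if indeg[child] == 0:
                    ready.append(child)
    return order

# Hypothetical example: editing cell "a" re-runs only its dependents, in order.
edges = {"a": {"b"}, "b": {"c"}, "d": {"c"}}
print(downstream_order(edges, "a"))  # ['b', 'c']
```

Editing one cell re-runs only its downstream dependents, in dependency order, which is what makes a notebook feel reactive rather than strictly top-to-bottom.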
And so the very next question is, what is the sort of like AI UI for these collections of cells? And back in June and July, we thought really hard about what does it feel like to ask magic a question and get a short chain of cells back that execute on that task. And so we've thought a lot about sort of like how that breaks down into individual atomic units and how those are tied together. We introduced something which is kind of an internal name, but it's called the airlock. And the airlock is exactly a sequence of cells that refer to one another, understand one another, use things that are happening in other cells. And it gives you a chance to sort of preview what magic has generated for you. Then you can accept or reject as an entire group. And that's one of the reasons we call it an airlock, because at any time you can sort of eject the airlock and see it in the space. But to come back to your question about how the AI UX fits into this notebook, ultimately a notebook is very conversational in its structure. I've got a series of thoughts that I'm going to express as a series of cells. And sometimes if I'm a kind data scientist, I'll put some text in between them too, explaining what on earth I'm doing. And that feels, in my opinion, and I think this is quite shared amongst Hexons, that feels like a really nice refinement of the chat UI. I've been saying for several months now, like, please stop building chat UIs. There is some irony because I think what the notebook allows is like chat plus plus. [00:15:36]Alessio: Yeah, I think the first wave of everything was like chat with X. So it was like chat with your data, chat with your documents and all of this. But people want to code, you know, at the end of the day. And I think that goes into the end user. I think most people that use notebooks are software engineers and data scientists. I think the cool thing about these models is like people that are not traditionally technical can do a lot of very advanced things. And that's why people like Code Interpreter and ChatGPT. How do you think about the evolution of that persona? Do you see a lot of non-technical people also now coming to Hex to like collaborate with like their technical folks? [00:16:13]Bryan: Yeah, I would say there might even be more enthusiasm than we're prepared for. We're obviously like very excited to bring what we call the like low floor user into this world and give more people the opportunity to self-serve on their data. We wanted to start by focusing on users who are already familiar with Hex and really make magic fantastic for them. One of the sort of like internal, I would say almost North Stars is our team's charter is to make Hex feel more magical. That is true for all of our users, but that's easiest to do on users that are already able to use Hex in a great way. What we're hearing from some customers in particular is sort of like, I'm excited for some of my less technical stakeholders to get in there and start asking questions. And so that raises a lot of really deep questions. If you immediately enable self-service for data, which is almost like a joke over the last like maybe like eight years, if you immediately enabled self-service, what challenges does that bring with it? What risks does that bring with it? And so it has given us the opportunity to think about things like governance and to think about things like alignment with the data team and making sure that the data team has clear visibility into what the self-service looks like.
Having been leading a data team, trying to provide answers for stakeholders and hearing that they really want to self-serve, a question that we often found ourselves asking is, what is the easiest way that we can keep them on the rails? What is the easiest way that we can set up the data warehouse and set up our tools such that they can ask and answer their own questions without coming away with like false answers? Because that is such a priority for data teams, it becomes an important focus of my team, which is, okay, magic may be an enabler. And if it is, what do we also have to respect? We recently introduced the data manager and the data manager is an auxiliary sort of like tool on the Hex platform to allow people to write more like relevant metadata about their data warehouse to make sure that magic has access to the best information. And there are some things coming to kind of even further that story around governance and understanding. [00:18:37]Alessio: You know, you mentioned self-serve data. And while it was like a joke, you know, the whole rush to the modern data stack was something to behold. Do you think AI is like in a similar space where it's like a bit of a gold rush? [00:18:51]Bryan: I have like sort of two comments here. One I'll shamelessly steal from a friend, Adam Azzam from Prefect. He says that this is more of like an iron mine than a gold mine in the sense of there is a lot of work to extract this precious, precious resource. And that's the first one is I think, don't expect to just go down to the stream and do a little panning. There's a lot of work to be done. And frankly, the steps to go from this like gold to, or this resource to something valuable is significant. I think people have gotten a little carried away with the old maxim of like, don't go pan for gold, sell pickaxes and shovels. It's a much stronger business model. At this point, I feel like I look around and I see more pickaxe salesmen and shovel salesmen than I do prospectors. And that scares me a little bit. There's a metagame where people are starting to think about how they can build tools for people building tools for AI. And that starts to give me a little bit of like pause in terms of like, how confident are we that we can even extract this resource into something valuable? I got a text message from a VC earlier today, and I won't name the VC or the fund, but the question was, what are some medium or large size companies that have integrated AI into their platform in a way that you're really impressed by? And I looked at the text message for a few minutes and I was finding myself thinking and thinking, and I responded, maybe only Copilot. It's been a couple hours now, and I don't think I've thought of another one. And I think that's where I reflect again on this, like iron versus gold. If it was really gold, I feel like I'd be more blown away by other AI integrations. And I'm not yet. [00:20:40]Alessio: I feel like all the people finding gold are the ones building things that traditionally we didn't focus on. So like Midjourney. I talked to a company yesterday, which I'm not going to name, but they do agents for some use case, let's call it. They are 11 months old. They're making like 8 million a month in revenue, but in a space that you wouldn't even think about selling to. If you were like a shovel builder, you wouldn't even go sell to those people. And Swyx talks about this a bunch, about like actually trying to go application first for some things.
Let's actually see what people want to use and what works. What do you think are the most maybe underexplored areas in AI? Is there anything that you wish people were actually trying to shovel? [00:21:23]Bryan: I've been saying for a couple of months now, if I had unlimited resources and I was just sort of like truly like, you know, on my own building whatever I wanted, I think the thing that I'd be most excited about is building sort of like the personal Memex. The Memex is something that I've wanted since I was a kid. And are you familiar with the Memex? It's the memory extender. And it's this idea that sort of like human memory is quite weak. And so if we can extend that, then that's a big opportunity. So I think one of the things that I've always found to be one of the limiting cases here is access. How do you access that data? Even if you did build that data like out, how would you quickly access it? And one of the things I think there's a constellation of technologies that have come together in the last couple of years that now make this quite feasible. Like information retrieval has really improved and we have a lot more simple systems for getting started with information retrieval to natural language is ultimately the interface that you'd really like these systems to work on, both in terms of sort of like structuring the data and preparing the data, but also on the retrieval side. So what keys off the query for retrieval, probably ultimately natural language. And third, if you really want to go into like the purely futuristic aspect of this, it is latent voice to text. And that is also something that has quite recently become possible. I did talk to a company recently called gather, which seems to have some cool ideas in this direction, but I haven't seen yet what I, what I really want, which is I want something that is sort of like every time I listen to a podcast or I watch a movie or I read a book, it sort of like has a great vector index built on top of all that information that's contained within. And then when I'm having my next conversation and I can't quite remember the name of this person who did this amazing thing, for example, if we're talking about the Memex, it'd be really nice to have Vannevar Bush like pop up on my, you know, on my Memex display, because I always forget Vannevar Bush's name. This is one time that I didn't, but I often do. This is something that I think is only recently enabled and maybe we're still five years out before it can be good, but I think it's one of the most exciting projects that has become possible in the last three years that I think generally wasn't possible before. [00:23:46]Alessio: Would you wear one of those AI pendants that record everything? [00:23:50]Bryan: I think I'm just going to do it because I just like support the idea. I'm also admittedly someone who, when Google Glass first came out, thought that seems awesome. I know that there's like a lot of like challenges about the privacy aspect of it, but it is something that I did feel was like a disappointment to lose some of that technology. Fun fact, one of the early Google Glass developers was this MIT computer scientist who basically built the first wearable computer while he was at MIT. And he like took notes about all of his conversations in real time on his wearable and then he would have real time access to them. Ended up being kind of a scandal because he wanted to use a computer during his defense and they like tried to prevent him from doing it. 
So pretty interesting story. [00:24:35]Alessio: I don't know but the future is going to be weird. I can tell you that much. Talking about pickaxes, what do you think about the pickaxes that people built before? Like all the whole MLOps space, which has its own like startup graveyard in there. How are those products evolving? You know, you were at Weights and Biases before, which is now doing a big AI push as well. [00:24:57]Bryan: If you really want to like sort of like rub my face in it, you can go look at my white paper on MLOps from 2022. It's interesting. I don't think there's many things in that that I would these days think are like wrong or even sort of like naive. But what I would say is there are both a lot of analogies between MLOps and LLMOps, but there are also a lot of like key differences. So like leading an engineering team at the moment, I think a lot more about good engineering practices than I do about good ML practices. That being said, it's been very convenient to be able to see around corners in a few of the like ML places. One of the first things I did at Hex was work on evals. This was in February. I hadn't yet been overwhelmed by people talking about evals until about May. And the reason that I was able to be a couple of months early on that is because I've been building evals for ML systems for years. I don't know how else to build an ML system other than start with the evals. I teach my students at Rutgers like objective framing is one of the most important steps in starting a new data science project. If you can't clearly state what your objective function is and you can't clearly state how that relates to the problem framing, you've got no hope. And I think that is a very shared reality with LLM applications. Coming back to one thing you mentioned from earlier about sort of like the applications of these LLMs. To that end, I think what pickaxes I think are still very valuable is understanding systems that are inherently less predictable, that are inherently sort of experimental. On my engineering team, we have an experimentalist. So one of the AI engineers, his focus is experiments. That's something that you wouldn't normally expect to see on an engineering team. But it's important on an AI engineering team to have one person whose entire focus is just experimenting, trying, okay, this is a hypothesis that we have about how the model will behave. Or this is a hypothesis we have about how we can improve the model's performance on this. And then going in, running experiments, augmenting our evals to test it, et cetera. What I really respect are pickaxes that recognize the hybrid nature of the sort of engineering tasks. They are ultimately engineering tasks with a flavor of ML. And so when systems respect that, I tend to have a very high opinion. One thing that I was very, very aligned with Weights and Biases on is sort of composability. These systems like ML systems need to be extremely composable to make them much more iterative. If you don't build these systems in composable ways, then your integration hell is just magnified. When you're trying to iterate as fast as people need to be iterating these days, I think integration hell is a tax not worth paying. [00:27:51]Alessio: Let's talk about some of the LLM native pickaxes, so to speak. So RAG is one. One thing is doing RAG on text data. One thing is doing RAG on tabular data. We're releasing tomorrow our episode with Cube, the semantic layer company. Curious to hear your thoughts on it.
How are you doing RAG, pros, cons? [00:28:11]Bryan: It became pretty obvious to me almost immediately that RAG was going to be important. Because ultimately, you never expect your model to have access to all of the things necessary to respond to a user's request. So as an example, Magic users would like to write SQL that's relevant to their business. And it's important then to have the right data objects that they need to query. We can't expect any LLM to understand our user's data warehouse topology. So what we can expect is that we can build a RAG system that is data warehouse aware, data topology aware, and use that to provide really great information to the model. If you ask the model, how are my customers trending over time? And you ask it to write SQL to do that. What is it going to do? Well, ultimately, it's going to hallucinate the structure of that data warehouse that it needs to write a general query. Most likely what it's going to do is it's going to look in its sort of memory of Stack Overflow responses to customer queries, and it's going to say, oh, it's probably a customers table, and we're in the age of dbt, so it might even be called, you know, dim_customers or something like that. And what's interesting is, and I encourage you to try, ChatGPT will do an okay job of like hallucinating up some tables. It might even hallucinate up some columns. But what it won't do is it won't understand the joins in that data warehouse that it needs, and it won't understand the data caveats or the sort of where clauses that need to be there. And so how do you get it to understand those things? Well, this is textbook RAG. This is the exact kind of thing that you expect RAG to be good at augmenting. But I think where people who have done a lot of thinking about RAG for the document case, they think of it as chunking and sort of like the MapReduce and the sort of like these approaches. But I think people haven't followed this train of thought quite far enough yet. Jerry Liu was on the show and he talked a little bit about thinking of this as like information retrieval. And I would push that even further. And I would say that ultimately RAG is just RecSys for LLM. As I kind of already mentioned, I'm a little bit recommendation systems heavy. And so from the beginning, RAG has always felt like RecSys to me. It has always felt like you're building a recommendation system. And what are you trying to recommend? The best possible resources for the LLM to execute on a task. And so most of my approach to RAG and the way that we've improved magic via retrieval is by building a recommendation system. [00:30:49]Alessio: It's funny, as you mentioned that you spent three years writing the book, the O'Reilly book. Things must have changed as you wrote the book. I don't want to bring out any nightmares from there, but what are the tips for people who want to stay on top of this stuff? Do you have any other favorite newsletters, like Twitter accounts that you follow, communities you spend time in? [00:31:10]Bryan: I am sort of an aggressive reader of technical books. I think I'm almost never disappointed by time that I've invested in reading technical manuscripts. I find that most people write O'Reilly or similar books because they've sort of got this itch that they need to scratch, which is that I have some ideas, I have some understanding that was hard won, I need to tell other people. And there's something that, from my experience, correlates between that itch and sort of like useful information.
As an example, one of the people on my team, his name is Will Kurt, he wrote a book, Bayesian Statistics the Fun Way. I knew some Bayesian statistics, but I read his book anyway. And the reason was because I was like, if someone feels motivated to write a book called Bayesian Statistics the Fun Way, they've got something to say about Bayesian statistics. I learned so much from that book. That book is like technically like targeted at someone with less knowledge and experience than me. And boy, did it humble me about my understanding of Bayesian statistics. And so I think this is a very boring answer, but ultimately like I read a lot of books and I think that they're a really valuable way to learn these things. I also regrettably still read a lot of Twitter. There is plenty of noise in that signal, but ultimately it is still usually like one of the first directions to get sort of an instinct for what's valuable. The other comment that I want to make is we are in this age of sort of like arXiv is becoming more of like an ad platform. I think that's a little challenging right now to kind of use it the way that I used to use it, which is for like higher signal. I've chatted a lot with a CMU professor, Graham Neubig, and he's been doing LLM evaluation and LLM enhancements for about five years and know that I didn't misspeak. And I think talking to him has provided me a lot of like directionality for more believable sources. Trying to cut through the hype. I know that there's a lot of other things that I could mention in terms of like just channels, but ultimately right now I think there's almost an abundance of channels and I'm a little bit more keen on high signal. [00:33:18]Alessio: The other side of it is like, I see so many people say, Oh, I just wrote a paper on X and it's like an article. And I'm like, an article is not a paper, but it's just funny how I know we were kind of chatting before about terms being reinvented and like people that are not from this space kind of getting into AI engineering now. [00:33:36]Bryan: I also don't want to be gatekeepy. Actually I used to say a lot to people, don't be shy about putting your ideas down on paper. I think it's okay to just like kind of go for it. And I, I myself have something on arXiv that is like comically naive. It's intentionally naive. Right now I'm less concerned by more naive approaches to things than I am by the purely like advertising approach to sort of writing these short notes and articles. I think blogging still has a good place. And I remember getting feedback during my PhD thesis that like my thesis sounded more like a long blog post. And I now feel like that curmudgeonly professor who's also like, yeah, maybe just keep this to the blogs. That's funny. Alessio: Uh, yeah, I think one of the things that Swyx said when he was opening the AI engineer summit a couple of weeks ago was like, look, most people here don't know much about the space because it's so new and like being open and welcoming. I think it's one of the goals. And that's why we try and keep every episode at a level that it's like, you know, the experts can understand and learn something, but also the novices can kind of like follow along. You mentioned evals before. I think that's one of the hottest topics obviously out there right now. What are evals? How do we know if they work? Yeah. What are some of the fun learnings from building them into Hex?
[00:34:53]Bryan: I said something at the AI engineer summit that I think a few people have already called out, which is like, if you can't get your evals to be sort of like objective, then you're not trying hard enough. I stand by that statement. I'm not going to, I'm not going to walk it back. I know that that doesn't feel super good because people, people want to think that like their unique snowflake of a problem is too nuanced. But I think this is actually one area where, you know, in this dichotomy of like, who can do AI engineering? And the answer is kind of everybody. Software engineering can become AI engineering and ML engineering can become AI engineering. One thing that I think the more data science minded folk have an advantage here is we've gotten more practice in taking very vague notions and trying to put a like objective function around that. And so ultimately I would just encourage everybody who wants to build evals, just work incredibly hard on codifying what is good and bad in terms of these objective metrics. As far as like how you go about turning those into evals, I think it's kind of like sweat equity. Unfortunately, I told the CEO of gantry several months ago, I think it's been like six months now that I was sort of like looking at every single internal Hex request to magic by hand with my eyes and sort of like thinking, how can I turn this into an eval? Is there a way that I can take this real request during this dog foodie, not very developed stage? How can I make that into an evaluation? That was a lot of sweat equity that I put in a lot of like boring evenings, but I do think ultimately it gave me a lot of understanding for the way that the model was misbehaving. Another thing is how can you start to understand these misbehaviors as like auxiliary evaluation metrics? So there's not just one evaluation that you want to do for every request. It's easy to say like, did this work? Did this not work? Did the response satisfy the task? But there's a lot of other metrics that you can pull off these questions. And so like, let me give you an example. If it writes SQL that doesn't reference a table in the database that it's supposed to be querying against, we would think of that as a hallucination. You could separately consider, is it a hallucination as a valuable metric? You could separately consider, does it get the right answer? The right answer is this sort of like all in one shot, like evaluation that I think people jump to. But these intermediary steps are really important. I remember hearing that GitHub had thousands of lines of post-processing code around Copilot to make sure that their responses were sort of correct or in the right place. And that kind of sort of defensive programming against bad responses is the kind of thing that you can build by looking at many different types of evaluation metrics. Because you can say like, oh, you know, the Copilot completion here is mostly right, but it doesn't close the brace. Well, that's the thing you can check for. Or, oh, this completion is quite good, but it defines a variable that was like already defined in the file. Like that's going to have a problem. That's an evaluation that you could check separately. And so this is where I think it's easy to convince yourself that all that matters is does it get the right answer? But the more that you think about production use cases of these things, the more you find a lot of this kind of stuff. 
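A minimal sketch of what such auxiliary checks could look like in Python; the function names, the crude regex standing in for a real SQL parser, and the example response are illustrative assumptions, not Hex's evaluation code:

```python
import json
import re

def check_json_valid(text: str) -> bool:
    """Auxiliary metric: is the generated output parseable JSON at all?"""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

def check_balanced_braces(text: str) -> bool:
    """Auxiliary metric: catches the 'missing closing brace' class of error."""
    depth = 0
    for ch in text:
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

def check_sql_tables(sql: str, known_tables: set[str]) -> bool:
    """Auxiliary metric: does the SQL only reference tables that exist?
    A crude regex stands in for a real SQL parser here."""
    referenced = re.findall(r"\b(?:from|join)\s+([\w.]+)", sql, flags=re.IGNORECASE)
    return all(t.split(".")[-1] in known_tables for t in referenced)

# Each generation is scored on several metrics, not just "was it right":
response = "SELECT * FROM dim_customers JOIN orders ON ..."
metrics = {
    "sql_tables_exist": check_sql_tables(response, {"customers", "orders"}),
    "braces_balanced": check_balanced_braces(response),
}
print(metrics)  # {'sql_tables_exist': False, 'braces_balanced': True}
```

Each check becomes its own metric, so a response can be "mostly right but references a hallucinated table" rather than simply wrong.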
One simple example is like sometimes the model names the output of a cell, a variable that's already in scope. Okay. Like we can just detect that and like we can just fix that. And this is the kind of thing that like evaluations over time and as you build these evaluations over time, you really can expand the robustness in which you trust these models. And for a company like Hex, who we need to put this stuff in GA, we can't just sort of like get to demo stage or even like private beta stage. We're really hunting GA on all of these capabilities. "Did it get the right answer?" on some cases is not good enough. [00:38:57]Alessio: I think the follow up question to that is in your past roles, you own the model that you're evaluating against. Here you don't actually have control into how the model evolves. How do you think about the model will just need to improve or we'll use another model versus like we can build kind of like engineering post-processing on top of it. How do you make the choice? [00:39:19]Bryan: So I want to say two things here. One like Jerry Liu talked a little bit about in his episode, he talked a little bit about sort of like you don't always want to retrain the weights to serve certain use cases. RAG is another tool that you can use to kind of like soft tune. I think that's right. And I want to go back to my favorite analogy here, which is like recommendation systems. When you build a recommendation system, you build the objective function. You think about like what kind of recs you want to provide, what kind of features you're allowed to use, et cetera, et cetera. But there's always another step. There's this really wonderful collection of blog posts from Eugene Yan and then ultimately like Even Oldridge kind of like iterated on that for the Merlin project where there's this multi-stage recommender. And the multi-stage recommender says the first step is to do great retrieval. Once you've done great retrieval, you then need to do great ranking. Once you've done great ranking, you need to then do a good job serving. And so what's the analogy here? RAG is retrieval. You can build different embedding models to encode different features in your latent space to ensure that your ranking model has the best opportunity. Now you might say, oh, well, my ranking model is something that I've got a lot of capability to adjust. I've got full access to my ranking model. I'm going to retrain it. And that's great. And you should. And over time you will. But there's one more step and that's downstream and that's the serving. Serving often sounds like I just show the s**t to the user, but ultimately serving is things like, did I provide diverse recommendations? Going back to Stitch Fix days, I can't just recommend them five shirts of the same silhouette and cut. I need to serve them a diversity of recommendations. Have I respected their requirements? They clicked on something that got them to this place. Are the recommendations relevant to that query? Are there any hard rules? Do we maybe not have this in stock? These are all things that you put downstream. And so much like the recommendations use case, there's a lot of knobs to pull outside of retraining the model. And even in recommendation systems, when do you retrain your model for ranking? Not nearly as much as you do other s**t. And even this like embedding model, you might fiddle with more often than the true ranking model.
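For readers who want the analogy spelled out, here is a minimal sketch of that multi-stage shape applied to RAG, assuming hypothetical embed and rank helpers: retrieval casts a wide, cheap net, ranking reorders the shortlist, and serving applies hard rules such as source diversity.

```python
from typing import Callable

def multi_stage_rag(
    query: str,
    corpus: list[dict],                     # each doc: {"text", "embedding", "source"}
    embed: Callable[[str], list[float]],    # hypothetical embedding function
    rank: Callable[[str, dict], float],     # hypothetical ranking model
    k_retrieve: int = 100,
    k_serve: int = 5,
) -> list[dict]:
    """Retrieval -> ranking -> serving, mirroring a multi-stage recommender."""
    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb or 1.0)

    # Stage 1: cheap, high-recall retrieval (cosine similarity here).
    q = embed(query)
    candidates = sorted(
        corpus, key=lambda d: cosine(q, d["embedding"]), reverse=True
    )[:k_retrieve]

    # Stage 2: more expensive ranking over the shortlist.
    candidates.sort(key=lambda d: rank(query, d), reverse=True)

    # Stage 3: serving rules: hard filters and a crude diversity constraint,
    # analogous to "don't recommend five shirts of the same cut".
    served, seen_sources = [], set()
    for doc in candidates:
        if doc["source"] in seen_sources:
            continue
        seen_sources.add(doc["source"])
        served.append(doc)
        if len(served) == k_serve:
            break
    return served
```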
And so I think the only piece of the puzzle that you don't have access to in the LLM case is that sort of like middle step. That's okay. We've got plenty of other work to do. So right now I feel pretty enabled. [00:41:56]Alessio: That's great. You obviously wrote a book on RecSys. What are some of the key concepts that maybe people that don't have a data science background, ML background should keep in mind as they work in this area? [00:42:07]Bryan: It's easy to first think these models are stochastic. They're unpredictable. Oh, well, what are we going to do? I think of this almost like gaseous type question of like, if you've got this entropy, where can you put the entropy? Where can you let it be entropic and where can you constrain it? And so what I want to say here is think about the cases where you need it to be really tightly constrained. So why are people so excited about function calling? Because function calling feels like a way to constrict it. Where can you let it be more gaseous? Well, maybe in the way that it talks about what it wants to do. Maybe for planning, if you're building agents and you want to do sort of something chain of thoughty. Well, that's a place where the entropy can happily live. When you're building applications of these models, I think it's really important as part of the problem framing to be super clear upfront. These are the things that can be entropic. These are the things that cannot be. These are the things that need to be super rigid and really, really aligned to a particular schema. We've had a lot of success in making specific the parts that need to be precise and tightly schemified, and that has really paid dividends. And so other analogies from data science that I think are very valuable is there's the sort of like human in the loop analogy, which has been around for quite a while. And I have gone on record a couple of times saying that like, I don't really love human in the loop. One of the things that I think we can learn from human in the loop is that the user is the best judge of what is good. And the user is pretty motivated to sort of like interact and give you kind of like additional nudges in the direction that you want. I think what I'd like to flip though, is instead of human in the loop, I'd like it to be AI in the loop. I'd rather center the user. I'd rather keep the user as the like core item at the center of this universe. And the AI is a tool. By switching that analogy a little bit, what it allows you to do is think about where are the places in which the user can reach for this as a tool, execute some task with this tool, and then go back to doing their workflow. It still gets this back and forth between things that computers are good at and things that humans are good at, which has been valuable in the human loop paradigm. But it allows us to be a little bit more, I would say, like the designers talk about like user-centered. And I think that's really powerful for AI applications. And it's one of the things that I've been trying really hard with Magic to make that feel like the workflow as the AI is right there. It's right where you're doing your work. It's ready for you anytime you need it. But ultimately you're in charge at all times and your workflow is what we care the most about. [00:44:56]Alessio: Awesome. Let's jump into lightning round. What's something that is not on your LinkedIn that you're passionate about or, you know, what's something you would give a TED talk on that is not work related? 
[00:45:05]Bryan: So I walk a lot. [00:45:07]Bryan: I have walked every road in Berkeley. And I mean like every part of every road even, not just like the binary question of, have you been on this road? I have this little app that I use called Wanderer, which just lets me like kind of keep track of everywhere I've been. And so I'm like a little bit obsessed. My wife would say a lot a bit obsessed with like what I call new roads. I'm actually more motivated by trails even than roads, but like I'm a maximalist. So kind of like everything and anything. Yeah. Believe it or not, I was even like in the like local Berkeley paper just talking about walking every road. So yeah, that's something that I'm like surprisingly passionate about. [00:45:45]Alessio: Is there a most underrated road in Berkeley? [00:45:49]Bryan: What I would say is like underrated is Kensington. So Kensington is like a little town just a teeny bit north of Berkeley, but still in the Berkeley hills. And Kensington is so quirky and beautiful. And it's a really like, you know, don't sleep on Kensington. That being said, one of my original motivations for doing all this walking was people always tell me like, Berkeley's so quirky. And I was like, how quirky is Berkeley? Turns out, it's quite, quite quirky. It's also hard to say quirky and Berkeley in the same sentence I've learned as of now. [00:46:20]Alessio: That's a, that's a good podcast warmup for our next guests. All right. The actual lightning round. So we usually have three questions: acceleration, exploration, then a takeaway. Acceleration: what's, what's something that's already here today that you thought would take much longer to arrive in AI and machine learning? [00:46:39]Bryan: So I invited the CEO of Hugging Face to my seminar when I worked at Stitch Fix and his talk at the time, honestly, like really annoyed me. The talk was titled like something to the effect of like LLMs are going to be the like technology advancement of the next decade. It's on YouTube. You can find it. I don't remember exactly the title, but regardless, it was something like LLMs for the next decade. And I was like, okay, they're like one modality of model, like whatever. His talk was fine. Like, I don't think it was like particularly amazing or particularly poor, but what I will say is damn, he was right. Like I, I don't think I quite was on board during that talk where I was like, ah, maybe, you know, like there's a lot of other modalities that are like moving pretty quick. I thought things like RL were going to be the like real like breakout success. And there's a little pun with Atari and breakout there, but yeah, like I, man, I was sleeping on LLMs and I feel a little embarrassed. I, yeah. [00:47:44]Alessio: Yeah. No, I mean, that's a good point. It's like sometimes the, we just had Jeremy Howard on the podcast and he was saying when he was talking about fine tuning, everybody thought it was dumb, you know, and then later people realize, and there's something to be said about messaging, especially like in technical audiences where there's kind of like the metagame, you know, which is like, oh, these are like the cool ideas people are exploring. I don't know where I want to align myself yet, you know, or whatnot. So it's cool. Exploration: so it's kind of like the opposite of that. You mentioned RL, right? That's something that was kind of like up and up and up. And then now it's people are like, oh, I don't know.
Are there any other areas if you weren't working on, on magic that you want to go work on? [00:48:25]Bryan: Well, I did mention that, like, I think this like Memex product is just like incredibly exciting to me. And I think it's really opportunistic. I think it's very, very feasible, but I would maybe even extend that a little bit, which is I don't see enough people getting really enthusiastic about hardware with advanced AI built in. You're hearing whisperings of it here and there, pun on the Whisper, but like you're starting to see people putting Whisper into pieces of hardware and making that really powerful. I joked with, I can't think of her name. Oh, Sasha, who I know is a friend of the pod. Like I joked with Sasha that I wanted to make the Big Mouth Billy Bass as a Babel fish, because at this point it's pretty easy to connect that up to Whisper and talk to it in one language and have it talk in the other language. And I was like, this is the kind of s**t I want people building is like silly integrations between hardware and these new capabilities. And as much as I'm starting to hear whisperings here and there, it's not enough. I think I want to see more people going down this track because I think ultimately like these things need to be in our like physical space. And even though the margins are good on software, I want to see more like integration into my daily life. Awesome. [00:49:47]Alessio: And then, yeah, a takeaway, what's one message or idea you want everyone to remember and think about? [00:49:54]Bryan: Even though earlier I was talking about sort of like, maybe like not reinventing things and being respectful of the sort of like ML and data science, like ideas. I do want to say that I think everybody should be experimenting with these tools as much as they possibly can. I've heard a lot of professors, frankly, express concern about their students using GPT to do their homework. And I took a completely opposite approach, which is in the first 15 minutes of the first class of my semester this year, I brought up GPT on screen and we talked about what GPT was good at. And we talked about like how the students can sort of like use it. I showed them an example of it doing data analysis work quite well. And then I showed them an example of it doing quite poorly. I think however much you're integrating with these tools or interacting with these tools, and this audience is probably going to be pretty high on that distribution. I would really encourage you to sort of like push this into the other people in your life. My wife is very technical. She's a product manager and she's using ChatGPT almost every day for communication or for understanding concepts that are like outside of her sphere of excellence. And recently my mom and my sister have been sort of like onboarded onto the ChatGPT train. And so ultimately I just, I think that like it is our duty to help other people see like how much of a paradigm shift this is. We should really be preparing people for what life is going to be like when these are everywhere. [00:51:25]Alessio: Awesome. Thank you so much for coming on, Bryan. This was fun. [00:51:29]Bryan: Yeah. Thanks for having me. And use Hex Magic. [00:51:31] Get full access to Latent Space at www.latent.space/subscribe

BIFocal - Clarifying Business Intelligence
Episode 269 - Power BI November 2023 Feature Summary part 2


Nov 29, 2023 · 37:49


This is episode 269 recorded on November 22nd, 2023 where John & Jason talk about part 2 of the Power BI November 2023 Feature Summary, including the new Explore feature, OneLake integration for Import-mode semantic models, and stored credentials for Direct Lake semantic models.

Explicit Measures Podcast
271: Microsoft Fabric &.... Tableau?


Nov 28, 2023 · 54:01


Taking a different angle on another great article by Kurt Buhler, the guys dive into more than just the ability to use OneLake with Tableau and other products. How do Semantic Link and the ability to spin up a SQL endpoint WITHIN POWER BI change the game? Link to article: https://data-goblins.com/power-bi/connect-tableau-to-power-bi Get in touch: Send in your questions or topics you want us to discuss by tweeting to @PowerBITips with the hashtag #empMailbag or submit on the PowerBI.tips Podcast Page. Visit PowerBI.tips: https://powerbi.tips/ Watch the episodes live every Tuesday and Thursday morning at 7:30am CST on YouTube: https://www.youtube.com/powerbitips Subscribe on Spotify: https://open.spotify.com/show/230fp78XmHHRXTiYICRLVv Subscribe on Apple: https://podcasts.apple.com/us/podcast/explicit-measures-podcast/id1568944083 Check Out Community Jam: https://jam.powerbi.tips Follow Mike: https://www.linkedin.com/in/michaelcarlo/ Follow Seth: https://www.linkedin.com/in/seth-bauer/ Follow Tommy: https://www.linkedin.com/in/tommypuglia/
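For readers wondering what that looks like in practice: a Fabric SQL analytics endpoint speaks the SQL Server wire protocol, so tools like Tableau can connect with a standard SQL Server driver and issue plain T-SQL. A minimal hedged sketch follows; the table and column names are invented for illustration, not taken from the article:

```sql
-- Hypothetical T-SQL against a Fabric SQL analytics endpoint; any tool
-- that speaks the SQL Server protocol (Tableau included) could run it.
SELECT
    c.Region,
    SUM(s.SalesAmount) AS TotalSales
FROM dbo.FactSales AS s
JOIN dbo.DimCustomer AS c
    ON s.CustomerKey = c.CustomerKey
GROUP BY c.Region
ORDER BY TotalSales DESC;
```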

Screaming in the Cloud
Use Cases for Couchbase's New Columnar Data Stores with Jeff Morris


Nov 27, 2023 · 30:22


Jeff Morris, VP of Product & Solutions Marketing at Couchbase, joins Corey on Screaming in the Cloud to discuss Couchbase's new columnar data store functionality, specific use cases for columnar data stores, and why AI gets better when it communicates with a cleaner pool of data. Jeff shares how more responsive databases could allow businesses like Domino's and United Airlines to create hyper-personalized experiences for their customers. Jeff dives into the linked future of AI and data, and Corey learns about Couchbase's plans for the re:Invent conference. If you're attending re:Invent, you can visit Couchbase at booth 1095.
About Jeff
Jeff Morris is VP Product & Solutions Marketing at Couchbase (NASDAQ: BASE), a cloud database platform company that 30% of the Fortune 100 depend on.
Links Referenced:
Couchbase: https://www.couchbase.com/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted guest episode of Screaming in the Cloud is brought to us by our friends at Couchbase. Also brought to us by Couchbase is today's victim, for lack of a better term. Jeff Morris is their VP of Product and Solutions Marketing. Jeff, thank you for joining me. Jeff: Thanks for having me, Corey, even though I guess I paid for it. Corey: Exactly. It's always great to say thank you when people give you things. I learned this from a very early age, and the only people who didn't were rude children and turned into worse adults. Jeff: Exactly. Corey: So, you are effectively announcing something new today, and I always get worried when a database company says that because sometimes it's a license change that is going to upset people, sometimes it's dyed so deep in the wool of generative AI that, "Oh, we're now supporting vectors or whatnot." Well, most of us don't know what that means. Jeff: Right. Corey: Fortunately, I don't believe that's what you're doing today. What have you got for us? Jeff: So, you're right. It's—well, what I'm doing is, we're announcing new stuff inside of Couchbase and helping Couchbase expand its market footprint, but we're not really moving away from our sweet spot, either, right? We like building—or being the database platform underneath applications. So, push us on the operational side of the operational versus analytic, kind of, database divide. But we are announcing a columnar data store inside of the Couchbase platform so that we can build bigger, better, stronger analytic functionality to feed the applications that we're supporting with our customers. Corey: Now, I feel like I should ask a question around what a columnar data store is because my first encounter with the term was when I had a very early client for AWS bill optimization when I was doing this independently, and I was asking them the… polite question of, "Why do you have 283 billion objects in a single S3 bucket? That is atypical and kind of terrifying." And their answer was, "Oh, we built our own columnar data store on top of S3. This might not have been the best approach." It's like, "I'm going to stop you there. 
With no further information, I can almost guarantee you that it was not." But what is a columnar data store? Jeff: Well, let's start with this: everybody loves more data and everybody loves to count more things, right? But a columnar data store allows you to expedite the kind of question that you ask of the data itself by not having to look at every single row of the data while you go through it. You can say, if you know you're only looking for data that's inside of California, you just look at the column value of "find me everything in California" and then pick all of those records to analyze. So, it gives you a faster way to go through the data while you're trying to gather it up and perform aggregations against it. Corey: It seems like it's one of those, "Well, that doesn't sound hard," type of things, when you're thinking about it the way that I do, in terms of a database being more or less a medium to large size Excel spreadsheet. But I have it on good faith from all the customer environments I've worked with that, no, no, there are data stores that span even larger than that, which is, you know, one of those sad realities of the world. And everything at scale begins to be a heck of a lot harder. I've seen some of the value that this stuff offers and I can definitely understand a few different workloads in which case that's going to be super handy. What are you targeting specifically? Or is this one of those areas where you're going to learn from your customers? Jeff: Well, we've had analytic functionality inside the platform. It's just that, at the size and scale customers actually wanted to roam through the data, we weren't supporting that all that much. So, we'll expand that particular footprint, it'll give us better integration capabilities with external systems, or better access to things in your bucket. But the use case problem is, I think, going to be driven by what new modern application requirements are going to be. You're going to need, we call it hyper-personalization because we tend to cater to B2C-style applications, things with a lot of account profiles built into them. So, you look at an account profile, and you're like, "Oh, well Jeff likes blue, so sell him blue stuff." And that's great current-level personalization, but with a new analytic engine against this, you can maybe start aggregating all the inventory information that you might have of all the blue stuff that you want to sell me and do that in real-time, so I'm getting better recommendations, better offers as I'm shopping on your site or looking at my phone and, you know, looking for the next thing I want to buy. Corey: I'm sure there's massive amounts of work that goes into these hyper-personalization stories. The problem is that the only time they really rise to our notice is when they fail hilariously. Like, you just bought a TV, would you like to buy another? Now statistically, you are likelier to buy a second TV right after you buy one, but for someone who just, "Well, I'm replacing my living room TV after ten years," it feels ridiculous. Or when you buy a whole bunch of nails and they don't suggest, "Would you like to also perhaps buy a hammer?" It's one of those areas where it just seems like a human putting thought into this could make some sense. But I've seen some of the stuff that can come out of systems like this and it can be incredible. 
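To ground the California example Jeff sketches above, here is a rough SQL sketch of that query shape; the table and column names are invented for illustration. The columnar engine's advantage is that it reads only the columns the query touches rather than every full row:

```sql
-- Illustrative only: a columnar store can answer this by scanning just
-- the `state` and `order_total` columns, skipping the rest of each row.
SELECT
    COUNT(*)         AS ca_orders,
    SUM(order_total) AS ca_revenue
FROM orders
WHERE state = 'CA';
```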
I also personally tend to bias towards use cases that are less, here's how to convince you to buy more things, and start aiming in a bunch of other different directions where it starts meeting emerging use cases or changing situations rapidly, more rapidly than a human can in some cases. The world has, for better or worse, gotten an awful lot faster over the last few decades. Jeff: Yeah. And think of it in terms of how responsive can I be at any given moment. And so, let's pick on one of the more recent interesting failures that has popped up. I'm a Giants fan, San Francisco Giants fan, so I'll pick on the Dodgers. The Dodgers during the baseball playoffs, Clayton Kershaw—three-time MVP, Cy Young Award winner, great, great pitcher—had a first-inning meltdown of colossal magnitude: gave up 11 runs in the first inning to the Diamondbacks. Well, my customer Domino's Pizza could end up—well, let's shift the focus of our marketing. We—you know, the Dodgers are the best team in baseball this year in the National League—let's focus our attention there, but with that meltdown, let's pivot to Arizona and focus on our market in Phoenix. And they could do that within minutes or seconds, even, with the kinds of capabilities that we're coming up with here so that they can make better offers to that new environment and also do the decision intelligence behind it. Like, do I have enough dough to make a bigger offer in that big market? Do I have enough drivers or do I have to go and spin out and get one of the other food delivery folks—UberEats, or something like that—to jump on board with me and partner up on this kind of system? It's that responsiveness in real, real-time, right, that's always been kind of the conundrum between applications and analytics. You get an analytic insight, but it takes you an hour or a day to incorporate that into what the application is doing. This is intended to make all of that stuff go faster. And of course, when we start to talk about things in AI, right, AI is going to expect real-time responsiveness as best you can make it. Corey: I figure we have to talk about AI. That is a technology that has sprung to the absolute peak of the hype curve over the past year. OpenAI released Chat-Gippity, either late last year or early this year, and suddenly every company seems to be falling all over itself to rebrand itself as an AI company, where, "We've been working on this for decades," they say, right before they announce something that very clearly was crash-developed in six months. And every company is trying to drape themselves in the mantle of AI. And I don't want to sound like I'm a doubter here. I'm like most fans; I see an awful lot of value here. But I am curious to get your take on what do you think is real and what do you think is not in the current hype environment. Jeff: So yeah, I love that. I think there are a number of things that are, you know, real. It's not going away. It is going to continue to evolve and get better and better and better. One of my analyst friends came up with the notion that the exercise of generative AI, it's imprecise, so it gives you similarity things, and that's actually an improvement, in many cases, over the precision of a database. Databases, a transaction either works or it doesn't. 
It has failover or it doesn't, when— Corey: It's ideally deterministic when you ask it a question— Jeff: Yes. Corey: —the same question a second time, assuming it's not time-bound— Jeff: Gives you the right answer. Corey: Yeah, the sa—or at least the same answer. Jeff: The same answer. And your gen AI may not. So, that's a part of the oddity of the hype. But then it also helps me kind of feed our storyline of: if you're going to try and make gen AI closer and more accurate, you need a clean pool of data that you're dealing with. Even though you've got probably—your previous design was such that you would use a relational database for transactions, a document database for your user profiles, and you'd probably attach your website to a caching database because you needed speed and a lot of concurrency. Well, now you've got three different databases there that you're operating. And if you're feeding data from each of those databases back to AI, one of them might be wrong or one of them might confuse the AI, yet how are you going to know? The complexity level is going to become, like, exponential. So, our premise is, because we're a multi-modal database that incorporates in-memory speed and documents and search and transactions and the like, if you start with a cleaner pool of data, you'll have less complexity that you're offering to your AI system and therefore you can steer it into becoming more accurate in its response. And then, of course, all the data that we're dealing with is on mobile, right? Data is created there for, let's say, your account profile, and then it's also consumed there because that's what people are using as their application interface of choice. So, you also want to have mobile interactivity and synchronization and local storage, kind of, capabilities built in there. So, those are kind of, you know, a couple of the principles that we're looking at: JSON is going to be a great format for it regardless of what happens; complexity is kind of the enemy of AI, so you don't want to go there; and mobility is going to be an absolute requirement. And then related to this particular announcement, large-scale aggregation is going to be a requirement to help feed the application. There's always going to be some other bigger calculation that you're going to want to do relatively in real time and feed it back to your users or the AI system that's helping them out. Corey: I think that that is a much more nuanced use case than a lot of the stuff that's grabbing customers' attention, where you effectively have the Chat-Gippity story of it being an incredible parrot. Where I have run into trouble with the generative story has been people putting the thing that the robot that's magic and from the future has come up with off the cuff and just hurling that out into the universe under their own name without any human review, and that's fine sometimes, sure, but it does get it hilariously wrong at some points. And the idea of sending something out under my name that has not been at least reviewed by me, if not actually authored by me, is abhorrent. 
I mean, I review even the transactional, "Yes, you have successfully subscribed," or, "Sorry to see you go," email confirmations on stuff because there's an implicit, "Hugs and puppies, love Corey," at the end of everything that goes out under my name. Jeff: Right. Corey: But I've gotten a barrage of terrible sales emails and companies that are trying to put the cart before the horse, where either the "support rep," quote-unquote, that I'm speaking to in the chat is an AI system or else needs immediate medical attention because there's something going on that needs assistance. Jeff: Yeah, they just don't understand. Corey: Right. And most big enterprise stories that I've heard so far that have come to light have been around the form of, "We get to fire most of our customer service staff," an outcome that basically no one sensible wants. That is less compelling than a lot of the individualized consumer use cases. I love asking it, "Here's a blog post I wrote. Give me ten title options." And I'll usually take one of them—one of them is usually not half bad and then I can modify it slightly. Jeff: And you'll change four words in it. Yeah. Corey: Yeah, exactly. That's a bit of a different use case. Jeff: It's been interesting—even as we've all become familiar, or at least junior prompt engineers, right—your return is only going to be as good as the information you feed the AI system, so you're going to want to refine that kind of conversation. Now, we're not trying to end up replacing the content that gets produced or the writing of all kinds of prose, other than we do have a code generator that works inside of our environment called Capella iQ that talks to ChatGPT. But we try and put guardrails on that too, right, to always make sure that it's talking in terms of the context of Couchbase rather than, "Where's Taylor Swift this week," which I don't want it to answer because I don't want to spend GPT money to answer that question for you. Corey: And it might not know the right answer, but it might very well spit out something that sounds plausible. Jeff: Exactly. But I think the kinds of applications that we're steering ourselves toward can be helped along by the gen AI systems, but I don't expect all my customers are going to be writing automatic blog post generation kinds of applications. I think what we're ultimately trying to do is facilitate interactions in a way that we haven't dreamt of yet, right? One of them might be: if I've opted into loyalty programs, like my United account and my American Express account— Corey: That feels very targeted at my lifestyle as well, so please, continue. Jeff: Exactly, right? And so, what I really want the system to do is for Amex to reward me when I hit 1K status on United while I'm on the flight and, you know, have the flight attendant come up and be like, "Hey, you did it. Here's a free upgrade from American Express"—that would be hyper-personalization, because you booked your plane ticket with it, but they also happen to know, or they cross-consumed, information that I've opted into. Corey: I've seen them congratulate people for hitting a million miles flown mid-flight, but that's clearly something that they've been tracking and happens a heck of a lot less frequently. This is how you start scaling that experience. Jeff: Yes. But that happened because American Airlines was always watching—because that was an American Airlines ad ages ago, right—but the same principle holds true. 
But I think there's going to be a lot more of these: how much information am I actually allowing to be shared amongst the—call it loyalty programs, but the data sources that I've opted into? And my God, there's hundreds of them that I've personally opted into, whether I like it or not, because everybody needs my email address, kind of like what you were describing earlier. Corey: A point that I have that I think agrees largely with your point is that few things to me are more frustrating than when I'm signing up for, for example, oh, I don't know, an AWS event—gee, I can't imagine there's anything like that going on this week—and I have to fill out an entire form that always asks me the same questions: how big my company is, whether we have multiple workloads on, what industry we're in. And no matter what I put into that, first, it never remembers me for the next time, which is frustrating in its own right, but two, no matter what I put in to fill that thing out, the email I get does not change as a result. At one point, I said, all right—I'm picking randomly—"I am a venture capitalist based in Sweden," and I got nothing that is differentiated from the other normal stuff I get tied to my account, because I use a special email address for those things, sometimes just to see what happens. And no, if you're going to make me jump through the hoops to give you the data, at least use it to make my experience better. It feels like I'm asking for the moon here, but I shouldn't be. Jeff: Yes. [we need 00:16:19] to make your experience better and say, you know, "Here's four companies in Malmo that you ought to be talking to. And they happen to be here at the AWS event and you can go find them because their booth is here, here, and here." That kind of immediate responsiveness could be facilitated, and to our point, ought to be facilitated. That's exactly the kind of thing: use the data in real-time. I was talking to somebody else today who was discussing that most data, right, becomes stale and loses its value—like, for 50% of the data, its value goes to zero after about a day. And some of it is stale after about an hour. So, if you can end up closing that responsiveness gap that we were describing—and this is kind of what this columnar service inside of Capella is going to be like—you react in real-time with real-time calculation and real-time look-up and real-time—find out how you might apply that new piece of information right now, and then give it back to the consumer or the user right now. Corey: So, Couchbase takes a few different forms. I should probably explain, at least for those who are not steeped in the world of exotic forms of database; I always like making these conversations more accessible to folks who are not necessarily up to speed. Personally, I tend to misuse anything as a database, if I can hold it just the wrong way. Jeff: The wrong way. I've caught that about you. Corey: Yeah, it's—everything is a database if you hold it wrong. But you folks have a few different options: you have a self-managed commercial offering; you're an open-source project, so I can go ahead and run it on my own infrastructure however I want; and you have Capella, which is Couchbase as a service. And all of those are useful and have their points, and I'm sure I'm missing at least one or two along the way. 
But do you find that the columnar use case is going to disproportionately benefit folks using Capella in ways that the self-hosted version would not be as useful for, or is this functionality already available in other expressions of Couchbase? Jeff: It's not already available in other expressions, although there is analytic functionality in the self-managed version of Couchbase. But, as I mentioned earlier, it's just not as scalable or as real-time as what we have in mind. So, it's going to—yes, it's going to benefit the database-as-a-service deployments of Couchbase available on your favorite three clouds, and still be interoperable with environments that you might self-manage and self-host. So, there could even be use cases where our development team or your development team builds in AWS using the cloud-oriented features, but is still ultimately deploying and hosting and managing a self-managed environment. You could still do all of that. So, there's still a great interplay and interoperability amongst our different deployment options. But the fun part, I think, about this is not only is it going to help the Capella user; there's a lot of other things inside Couchbase that help address the developers' penchant for trading zero cost for degrees of complexity that you're willing to accept, because you want everything to be free and open-source. And Couchbase is my fifth open-source company in my background, so I'm well, well versed in the nuances of what open-source developers are seeking. But what makes Couchbase—you know, its origin story—really cool too, though, is it's the peanut butter and chocolate marriage of memcached and the people behind that, and membase, and CouchDB from [Couch One 00:19:54]. So, I can't think of that many—maybe Red Hat—projects and companies that formed up by merging two complementary open-source projects. So, we took the scale and—
Some are general purpose, some are purpose-built, and if this trend keeps up, in a decade, the DBA role is going to be determining which of its 40 databases is going to be the right fit for a given workload. That seems to be the counter-approach to a general-purpose database that works across the board. Clearly you folks have opinions on this. Where do you land?Jeff: Oh, so absolutely. There's the product that is a suite of capabilities—or that are individual capabilities—and then there's ones that are, in my case, kind of multi-model and do lots of things at once. I think historically, you'll recognize—because this is—let's pick on your phone—the same holds true for, you know, your phone used to be a watch, used to be a Palm Pilot, used to be a StarTAC telephone, and your calendar application, your day planner all at the same time. Well, it's not anymore. Technology converges upon itself; it's kind of a historical truism.And the database technologies are going to end up doing that—or continue to do that, even right now. So, that notion that—it's a ten-year-old notion of use a purpose-built database for that particular workload. Maybe sometimes in extreme cases that is the appropriate thing, but in more cases than not right now, if you need transactions when you need them, that's fine, I can do that. You don't necessarily need Aurora or RDS or Postgres to do that. But when you need search and geolocation, I support that too, so you don't need Elastic. And then when you need caching and everything, you don't need ElastiCache; it's all built-in.So, that multi-model notion of operate on the same pool of data, it's a lot less complex for your developers, they can code faster and better and more cleanly, debugging is significantly easier. As I mentioned, SQL++ is our language. It's basically SQL syntax for JSON. We're a reference implementation of this language, along with—[AsteriskDB 00:23:42] is one of them, and actually, the original author of that language also wrote DynamoDB's PartiQL.So, it's a common language that you wouldn't necessarily imagine, but the ease of entry in all of this, I think, is still going to be a driving goal for people. The old people like me and you are running around worrying about, am I going to get a particular, really specific feature out of the full-text search environment, or the other one that I pick on now is, “Am I going to need a vector database, too?” And the answer to me is no, right? There's going—you know, the database vendors like ourselves—and like Mongo has announced and a whole bunch of other NoSQL vendors—we're going to support that. It's going to be just another mode, and you get better bang for your buck when you've got more modes than a single one at a time.Corey: The consensus opinion that's emerging is very much across the board that vector is a feature, not a database type.Jeff: Not a category, yeah. Me too. And yeah, we're well on board with that notion, as well. And then like I said earlier, the JSON as a vehicle to give you all of that versatility is great, right? You can have vector information inside a JSON document, you can have time series information in the document, you could have graph node locations and ID numbers in a JSON array, so you don't need index-free adjacency or some of the other cleverness that some of my former employers have done. 
It really is all converging upon itself, and hopefully everybody starts to realize that you can clean up and simplify your architectures as you look ahead, so that you do—if you're going to build AI-powered applications—feed it clean data, right? You're going to be better off. Corey: So, this episode is being recorded in advance, thankfully, but it's going to release the first day of re:Invent. What are you folks doing at the show, for those who are either there and, for some reason, listening to a podcast rather than going and getting marketed to by a variety of different pitches that all mention AI, or might even be watching from home and trying to figure out what to make of it? Jeff: Right. So, of course we have a booth, and my notes don't have in front of me what our booth number is, but you'll see it on the signs in the airport. So, we'll have a presence there, we'll have an executive briefing room available, so we can schedule time with anyone who wants to come talk to us. We'll be showing not only the capabilities that we're offering here, we'll show off Capella iQ, our coding assistant, okay—so yeah, we're on the AI hype band—but we'll also be showing things like our mobile sync capability, where my phone and your phone can synchronize data amongst themselves without having to actually have a live connection to the internet. So long as we're on the same network locally within the Venetian's network, we have an app that we have people download from the App Store, and then it's a color synchronization app or picture synchronization app. So, you tap it, and it changes on my screen, and I tap it and it changes on your screen, and we'll have, I don't know, as many people who are around standing there, synchronizing, what, maybe 50 phones at a time. It's actually a pretty slick demonstration of why you might want a database that's not only in the cloud but operates around the cloud, operates mobile-ly, operates—you know, can connect to and disconnect from your networks. It's a pretty neat scenario. So, we'll be showing a bunch of cool technical stuff as well as talking about the things that we're discussing right now. Corey: I will say you're putting an awful lot of faith in connectivity working at re:Invent, be it WiFi or the cellular network. I know that both of those have bitten me in various ways over the years. But I wish you the best on it. I think it's going to be an interesting show based upon everything I've heard in the run-up to it. I'm just glad it's here. Jeff: Now, this is the cool part about what I'm talking about, though. The cool part about what I'm talking about is we can set up our own wireless network in our booth, and we still—you'd have to go to the app store to get this application, but once there, I can have you switch over to my local network and play around on it, and I can sync the stuff right there and have confidence that in my local network that's in my booth, the system's working. 
I think that's going to be ultimately our design there because, oh my gosh, yes, I have a hundred stories about connectivity and someone blowing a demo because they're yanking on a cable behind the pulpit, right? Corey: I always build in a—assuming there's no connectivity, how can I fake my demos? Just because—I've only had to do it once, but you wind up planning in advance when you start doing a talk to a large enough or influential enough audience where you want things to go right. Jeff: There's a delightful acceptance right now of recorded videos and demonstrations that people sort of accept that way because of exactly all this. And I'm sure we'll be showing that in our booth there too. Corey: Given the non-deterministic nature of generative AI, I'm sort of surprised whenever someone hasn't mocked the demo in advance, just because, yeah, it gives the right answer in the rehearsal, but every once in a while, it gets completely unglued. Jeff: Yes, and we see it pretty regularly. So, the emergence of clever and good prompt engineering is going to be a big skill for people. And hopefully, you know, everybody's going to figure out how to pass it along to their peers. Corey: Excellent. We'll put links to all this in the show notes, and I look forward to seeing how well this works out for you. Best of luck at the show and thanks for speaking with me. I appreciate it. Jeff: Yeah, Corey. We appreciate the support, and I think the show is going to be very strong for us as well. And thanks for having me here. Corey: Always a pleasure. Jeff Morris, VP of Product and Solutions Marketing at Couchbase. This episode has been brought to us by our friends at Couchbase. And I'm Cloud Economist Corey Quinn. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment, but if you want to remain happy, I wouldn't ask that podcast platform what database they're using. No one likes the answer to those things. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Data Engineering Podcast
Addressing The Challenges Of Component Integration In Data Platform Architectures


Nov 27, 2023 · 29:42


Summary
Building a data platform that is enjoyable and accessible for all of its end users is a substantial challenge. One of the core complexities that needs to be addressed is the fractal set of integrations that need to be managed across the individual components. In this episode Tobias Macey shares his thoughts on the challenges that he is facing as he prepares to build the next set of architectural layers for his data platform to enable a larger audience to start accessing the data being managed by his team.
Announcements
* Hello and welcome to the Data Engineering Podcast, the show about modern data management.
* Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
* You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It's the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it's real-time dashboarding and analytics, personalization and segmentation, or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface (a minimal sketch of this incremental idea follows these announcements). Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
* Developing event-driven pipelines is going to be a lot easier - Meet Functions! Memphis functions enable developers and data engineers to build an organizational toolbox of functions to process, transform, and enrich ingested events "on the fly" in a serverless manner using AWS Lambda syntax, without boilerplate, orchestration, error handling, and infrastructure in almost any language, including Go, Python, JS, .NET, Java, SQL, and more. Go to dataengineeringpodcast.com/memphis (https://www.dataengineeringpodcast.com/memphis) today to get started!
* Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. 
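As a rough illustration of the incremental model described in the Materialize announcement above: you define a view once in ordinary SQL and the engine keeps its results current as new rows arrive, instead of re-running a batch job. A minimal hedged sketch; the table and column names are invented:

```sql
-- Hypothetical incrementally maintained view: the engine updates this
-- aggregate as new orders stream in, so reads are always up to date.
CREATE MATERIALIZED VIEW revenue_by_region AS
SELECT region, SUM(amount) AS total_revenue
FROM orders
GROUP BY region;
```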
Your host is Tobias Macey and today I'll be sharing an update on my own journey of building a data platform, with a particular focus on the challenges of tool integration and maintaining a single source of truth.
Interview
* Introduction
* How did you get involved in the area of data management?
* Data sharing
* Weight of history
* Existing integrations with dbt
* Switching cost for e.g. SQLMesh
* De facto standard of Airflow
* Single source of truth
* Permissions management across application layers
* Database engine
* Storage layer in a lakehouse
* Presentation/access layer (BI)
* Data flows
* dbt -> table level lineage
* Orchestration engine -> pipeline flows
* Task based vs. asset based
* Metadata platform as the logical place for horizontal view
Contact Info
* LinkedIn (https://linkedin.com/in/tmacey)
* Website (https://www.dataengineeringpodcast.com)
Parting Question
* From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
* Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
* Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
* If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
* To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers.
Links
* Monologue Episode On Data Platform Design (https://www.dataengineeringpodcast.com/data-platform-design-episode-268)
* Monologue Episode On Leaky Abstractions (https://www.dataengineeringpodcast.com/abstractions-and-technical-debt-episode-374)
* Airbyte (https://airbyte.com/) and Podcast Episode (https://www.dataengineeringpodcast.com/airbyte-open-source-data-integration-episode-173/)
* Trino (https://trino.io/)
* Dagster (https://dagster.io/)
* dbt (https://www.getdbt.com/)
* Snowflake (https://www.snowflake.com/en/)
* BigQuery (https://cloud.google.com/bigquery)
* OpenMetadata (https://open-metadata.org/)
* OpenLineage (https://openlineage.io/)
* Data Platform Shadow IT Episode (https://www.dataengineeringpodcast.com/shadow-it-data-analytics-episode-121)
* Preset (https://preset.io/)
* LightDash (https://www.lightdash.com/) and Podcast Episode (https://www.dataengineeringpodcast.com/lightdash-exploratory-business-intelligence-episode-232/)
* SQLMesh (https://sqlmesh.readthedocs.io/) and Podcast Episode (https://www.dataengineeringpodcast.com/sqlmesh-open-source-dataops-episode-380)
* Airflow (https://airflow.apache.org/)
* Spark (https://spark.apache.org/)
* Flink (https://flink.apache.org/)
* Tabular (https://tabular.io/)
* Iceberg (https://iceberg.apache.org/)
* Open Policy Agent (https://www.openpolicyagent.org/)
The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)

Smart Software with SmartLogic
Machine Learning in Elixir vs. Python, SQL, and Matlab with Katelynn Burns & Alexis Carpenter


Nov 23, 2023 · 31:19


In this episode of Elixir Wizards, Katelynn Burns, software engineer at LaunchScout, and Alexis Carpenter, senior data scientist at cars.com, join host Dan Ivovich to discuss machine learning with Elixir, Python, SQL, and MATLAB. They compare notes on available tools, preprocessing, working with pre-trained models, and training models for specific jobs. The discussion inspires collaboration and learning across communities while revealing the foundational aspects of ML, such as understanding data and asking the right questions to solve problems effectively.
Topics discussed:
* Using pre-trained models in Bumblebee for Elixir projects
* Training models using Python and SQL (see the SQL sketch after this list)
* The importance of data preprocessing before building models
* Popular tools used for machine learning in different languages
* Getting started with ML by picking a personal project topic of interest
* Resources for ML aspirants, such as online courses, tutorials, and books
* The potential for Elixir to train more customized models in the future
* Similarities between ML approaches in different languages
* Collaboration opportunities across programming communities
* Choosing the right ML approach for the problem you're trying to solve
* Productionalizing models like fine-tuned LLMs
* The need for hands-on practice for learning ML skills
* Continued maturation of tools like Bumblebee in Elixir
* Katelynn's upcoming CodeBeam talk on advanced motion tracking
Links mentioned in this episode:
* https://launchscout.com/
* https://www.cars.com/
* Genetic Algorithms in Elixir (https://pragprog.com/titles/smgaelixir/genetic-algorithms-in-elixir/) by Sean Moriarity
* Machine Learning in Elixir (https://pragprog.com/titles/smelixir/machine-learning-in-elixir/) by Sean Moriarity
* https://github.com/elixir-nx/bumblebee
* https://github.com/huggingface
* https://www.docker.com/products/docker-hub/
* Programming with MATLAB (https://www.mathworks.com/products/matlab/programming-with-matlab.html)
* https://elixirforum.com/
* https://pypi.org/project/pyspark/
* Machine Learning Course (https://online.stanford.edu/courses/cs229-machine-learning) from Stanford School of Engineering
* Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow (https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/) by Aurélien Géron
* Data Science for Business (https://data-science-for-biz.com/) by Foster Provost & Tom Fawcett
* https://medium.com/@carscomtech
* https://github.com/k-burns
* Code Beam America (https://codebeamamerica.com/), March 2024
Special Guests: Alexis Carpenter and Katelynn Burns.
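As a hedged illustration of the SQL half of the "Training models using Python and SQL" topic above, here is the kind of feature-extraction query that might run before handing data to an ML framework. The schema is invented for this sketch and is not taken from the episode:

```sql
-- Hypothetical preprocessing step: pull clean numeric features from raw
-- listings before training a model in Elixir, Python, or anything else.
SELECT
    listing_id,
    price,
    mileage,
    EXTRACT(YEAR FROM CURRENT_DATE) - model_year AS vehicle_age
FROM listings
WHERE price IS NOT NULL
  AND mileage IS NOT NULL;
```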

BIFocal - Clarifying Business Intelligence
Episode 268 - Power BI November 2023 Feature Summary part 1


Nov 21, 2023 · 36:32


This is episode 268 recorded on November 21st, 2023 where John & Jason talk about the Power BI November 2023 Feature Summary for way too long and have to split the episode into two parts, covering the new button slicer, DAX Query view, the rename of Datasets to Semantic Models, and much more.

The Bike Shed
407: Tech Opinions Online with Edward Loveall


Nov 21, 2023 · 36:44


Stephanie interviews Edward Loveall, a former thoughtbotter, now software developer at Relevant Healthcare. Part of their discussion centers around Edward's blog post on the tech industry's over-reliance on GitHub. He argues for the importance of exploring alternatives to avoid dependency on a single platform and encourages readers to make informed technological choices. The conversation broadens to include how to form opinions on technology, the balance between personal preferences and team decisions, and the importance of empathy and nuance in professional interactions. Both Stephanie and Edward highlight the value of considering various perspectives and tools in software development, advocating for a flexible, open-minded approach to technology and problem-solving in the tech industry. Relevant (https://relevant.healthcare/) Let's make sure Github doesn't become the only option (https://blog.edwardloveall.com/lets-make-sure-github-doesnt-become-the-only-option) And not but (https://blog.edwardloveall.com/and-not-but) Empathy Online (https://thoughtbot.com/blog/empathy-online) Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. And today, I'm joined by a very special guest, a friend of the pod and former thoughtboter, Edward Loveall. EDWARD: Hello, thanks for having me. STEPHANIE: Edward, would you share a little bit about yourself and what you're doing these days? EDWARD: Yes, I am a software developer at a company called Relevant Healthcare. We do a lot of things, but the maybe high-level summary is we take very complicated medical data and help federally-funded health centers actually understand that data and help their population's health, which is really fun and really great. STEPHANIE: Awesome. So, Edward, what is new in your world? EDWARD: Let's see, this weekend...I live in a dense city. I live in Cambridge, Massachusetts, and it's pretty dense there. And a lot of houses are very tightly packed. And delivery drivers struggle to find the numbers on the houses sometimes because A, they're old and B, there are many of them. And so, we put up house numbers because I live in, like, a three-story kind of building, but there are two different addresses in the same three stories, which is very weird. And so [laughs], delivery drivers are like, "Where is number 10 or 15?" or whatever. And so, there's two different numbers. And so, we finally put up numbers after living here for, like, four years [chuckles]. So, now, hopefully, delivery drivers in the holiday busy season will be able to find our house [laughs]. STEPHANIE: That's great. Yeah, I have kind of a similar problem where, a lot of the times, delivery folks will think that my house is the big building next door. And the worst is when, at the building next door, they drop off their packages inside the little, like, entryway that is locked for people who don't live there. And so, I will see my package in the window and, you know, it has my name on it. It has, like, my address on it. And [laughs] some strategies that I've used include leaving a note on the door [laughter] that is, like, "Please redeliver my package over there," and, like, I'll draw an arrow to the direction of my house. Or sometimes I've been that person to just, like, buzz random [laughter] units and just hope that they, like, let me in, and then I'll grab my package. 
And, you know, if I know the neighbors, I'll, like, try to apologize the next time I see them. But sometimes I'll just be like, I just need to get my package [laughs]. EDWARD: You're writing documentation for those people working out in the streets. STEPHANIE: Yeah. But I'm glad you got that sorted. EDWARD: Yeah. What about you? What's new in your world? STEPHANIE: Well, I wanted to talk a little bit about a thing that you and I have been doing lately that I have been enjoying a lot. First of all, are you familiar with the group chat trend these days? Do you know what I'm talking about? EDWARD: No. STEPHANIE: Okay. It's basically this idea that, like, everyone is just connecting with their friends via a group chat now as opposed to social media. But as a person who is not a big group chat person, I can't, like, keep up with [chuckles], like, chatting with multiple people [laughter] at once. I much prefer, like, one-on-one interaction. And, like, a month ago, I asked you if you would be willing to try having a shared note, like, a shared iOS note that we have for items that we want to discuss with each other but, you know, the next time we either talk on the phone or, I don't know, things that are, like, less urgent than a text message would communicate but, like, stuff that we don't want to forget. EDWARD: Yeah. You're, like, putting a little message in my inbox and vice versa. And yeah, we get to just kind of, whenever we want, respond to it, or think about it, or use it as a topic for a conversation later. STEPHANIE: Yeah. And I think it is kind of a playbook from, like, a one-on-one with a manager. I know that that's, like, a strategy that some folks use. But I think it works well in the context of our friendship because it's just gotten, like, richer over time. You know, maybe in the beginning, we're like, oh, like, I don't know, here are some random things that I've thought about. But now we're having, like, whole discussions in the note [laughter]. Like, we will respond to each other, like, with sub-bullets [laughs]. And then we end up not even needing to talk about it on the phone because we've already had a whole conversation about it in the note. EDWARD: Which is good because neither of us are particularly brief when talking on the phone. And [laughs] we only dedicate, like, half an hour every two weeks. It sort of helps clear the decks a little. STEPHANIE: Yeah, yeah. So, that's what I recommend. Try a shared note for [laughs] your next friendship hangout. EDWARD: Yeah, it's great. I heartily recommend it. STEPHANIE: So, one of the things that we end up talking about a lot is various things that we've been reading about tech on the web [laughs]. And we share with each other a lot of, like, blog posts, or articles, various links, and recently, something of yours kind of resurfaced. You wrote a blog post about GitHub a little while ago about how, you know, as an industry, we should make sure that GitHub doesn't become our only option. EDWARD: Yeah, this was a post I wrote, I think, back in May, or at least earlier this year, and it got a bunch of traction. And it's a somewhat, I would say, controversial article or take. GitHub just had their developer conference, and it resurfaced again. And I don't have a habit of writing particularly controversial articles, I don't think. Most of my writing history has been technical posts like tutorials. Like, I wrote a whole tutorial on how to write SQL, or I did write one about how to communicate online. 
But I wasn't, like, so much responding to, like, a particular person's communication or a company's communication. And this is the first big post I've written that has been a lot more very heavily opinionated, very, like, targeted at a particular thing or entity, I guess you'd say. It's been received well, I think, mostly, and I'm proud of it. But it's a different little world for me, and it's a little scary, honestly. STEPHANIE: Yeah, I hear that, having an opinion [laughs], a very strong and maybe, like, a less popular opinion, and publishing that for the world. Could you recap what the thesis of it is for our listeners? EDWARD: Yeah, and I think you did a great job of it, too. I see GitHub or really any singular piece of technology that we have in...I'll say our stack with air quotes, but it's, you know, all the tools that we use and all the things that we use. It's a risk if you only have one of those things, let's say GitHub. Like, if the only way you know how to contribute to a code repository with, you know, 17 people all committing to that repository, if the only way you know how to do that is a pull request and GitHub goes away, and you don't have pull requests anymore, how are you going to contribute to code? It's not that you couldn't figure it out, or there aren't multiple ways or even other pull request equivalents on other sites. But it is a risk to rely on one company to provide all of the things that you potentially need, or even many of the things that you potentially need, without any alternatives. So, I wanted to try to lay out A: those risks, and B: encourage people to try alternatives, to say that GitHub is not necessarily bad, although they may not actually fit what you need for various reasons, or someone else for various reasons. But you should have an alternative in your back pocket so that in case something changes, or you get locked out, or they go away, or they decide to cancel that feature, or any number of other scenarios, you have greatly diminished that risk. So, that's the main thrust of the post. STEPHANIE: Yeah, I really appreciated it because, you know, I think a lot of us probably take GitHub for granted [laughs]. And, you know, every new thing that they kind of add to the platform is like, oh, like, cool, like, I can now do this. In the post, you kind of lay out all of the different features that GitHub has rolled out over the last, you know, couple of years. And when you see it all like that, you know, like, in addition to being, like, a code repository, you now have, like, GitHub Actions for CI/CD, you know, you can deploy static pages with it. It now has, like, an in-browser editor, and then, you know, Copilot, which, like, the more things that they [laughs] roll out, the more it's becoming, like, the one-stop shop, right? That, like, do all of your work here. And I appreciated kind of, like, seeing that and being like, oh, like, is this what I want? EDWARD: Right. Yeah, exactly. Yeah. And you mentioned a bunch. There's also issues and discussions. You mentioned their in-browser editor. But so many people use VS Code, which, while it was technically made by Microsoft, it's based on Electron, which was developed at GitHub. And GitHub even, like, took away their other Electron-based editor, Atom. And then now officially recommends VS Code. And everything from deploying all the way down to, like, thinking about and prioritizing features and editing the code and all of that pretty much could happen on GitHub. 
I think maybe the only thing they don't currently do is host non-static sites, maybe [laughs]. That's maybe about it. And who knows? Maybe they're working on that; as far as I know, they are, so... STEPHANIE: Yeah, absolutely. You also mentioned one thing that I really liked about the content in the post was that you talked about alternatives to GitHub, even, like, alternatives to all of the different features that we mentioned. I guess I'm wondering, like, what were you hoping that a reader from your blog post, like, what they would get out of reading and, like, what they would take away from kind of sharing your opinion? EDWARD: I wanted to try to meet people where I think they might be because I think a lot of people do use GitHub, and they do take it for granted. And they do sort of see it as this thing that they must use, or they want to use even, and that's fine. That's not necessarily a bad thing. I want them to see those alternatives and have at least some idea that there is something else out there, that GitHub doesn't become just not only the default, but, like, the only thing. I mean, to just [chuckles] re-paraphrase the title of the post, I want to make sure GitHub does not become the only option, right? I want people to realize that there are other options out there and be encouraged to try them. And I have found, for me, at least, the better way to do that is not to only focus on, like, hey, don't use GitHub. Like, I hope people did not come away with only that message or even that message at all. But that it is more, hey, maybe try something else out and to encourage you to try something out. I'm going to A: share the risks with you and B: give you some actual things to try. So, I talk about the things I'm using and some other platforms and different paradigms to think about and use. So, I hope they take those. We'll see what happens in the next, you know, months or years. And I'll probably never know if it was actually just from me or from many other conversations, and thoughts, and articles, and all that kind of stuff. But that's what it takes, so... STEPHANIE: Yeah. I think the other fun thing about kind of the, like, meta-conversation we're having about having an opinion and, like, sharing it with the world is that you don't even really say like, "This is better than GitHub," or, like, kind of make a statement about, like, you shouldn't use...you don't even say, "You shouldn't use GitHub," right? The message is, like, here are some options: try it out, and, like, decide for yourself. EDWARD: Yeah, exactly. I want to empower people to do that. I don't think it would have been useful if I'd just go and say, "Hey, don't do this." It's very frustrating to me to see posts that are only negatives. And, honestly, I've probably written those posts, like, I'm not above them necessarily. But I have found that trying to help people do what you want them to do, as silly and maybe obvious as that sounds, is a more effective way to get them to do what you want them to do [laughs], as opposed to say, "Hey, stop doing the thing I don't want you to do," or attack their identity, or their job, or some other aspect of their life. Human behavior does not respond well to that generally, at least in my experience. Like, having your identity tied up in a tool or a platform is, unfortunately, pretty common in, like, a tech space. Like, oh, like, Ruby on Rails is the best piece of software or something like that. And it's like, well, you might like it, and that might be the best thing for you. 
And personally, I really like Ruby on Rails. I think it does a great job of what it does. But as an example, I would not use Ruby on Rails to maybe build an iOS app. I could; I think that's possible, but I don't think that's maybe the best tool for that job. And so, trying to, again, meet people where they are. STEPHANIE: I guess it kind of goes back to what you're saying. It's like, you want to help people do what they are trying to do. EDWARD: Yeah. Maybe there's a little paternalistic thinking, too, of, like, what's good for the industry, even if it feels bad for you right now. I don't love that sort of paternalistic thinking. But if it's a real risk, it seems worth at least addressing or pointing out and letting people make that decision for themselves. STEPHANIE: Yeah, absolutely. I am actually kind of curious about how do you, like, decide something for yourself? You know, like, how do you form your own opinion about technology? I think, yeah, like, a lot of people take GitHub for granted. They use it because that's just what's used, and that may or may not be a good reason for doing so. But that was a position I was in for a long time, right? You know, especially when you're newer to the industry, you're like, oh, well, this is what the company uses, or this is what, like, the industry uses. But, like, how do you start to figure out for yourself, like, do I actually like this? Does this help me meet my goals and needs? Is it doing what I want it to be doing? Do you have any thoughts about that? EDWARD: Yeah. I imagine most people listening to this have tried lots of different pieces of software and found them great, or terrible, or somewhere in between. And I don't think there's necessarily one way to do this. But I think my way has been to try lots of things, unsurprisingly, and evaluate them based on the thing that I'm trying to do. Sometimes I'll go into a new field, or a new area, or a new product, or whatever, and you just sort of use what's there, or what people have told you about, or what you heard about last, and that's fine. That's a great place to start, right? And then you start seeing maybe where it falls down, or where it is frustrating or doesn't quite meet those needs. And it takes a bit of stepping back. Again, I don't think I'm, like, going to blow anyone's mind here by this amazing secretive technique that I have for, like, discovering good software. But it's, like, sitting there and going through this iterative loop of try it, evaluate it. Be honest with, is it meeting or not meeting some particular needs? And then try something else. Or now you have a little more info to arm yourself to get to the next piece that is potentially good. As you go on in your career and you've tried many, many, many pieces of things, you start to see patterns, right? And you know, like, oh, it's not like, oh, this is how I make websites. It's like, ah, I understand that websites are made with a combination of HTML, and CSS, and JavaScript and sometimes use frameworks. And there's a database layer with an ORM. And you start to understand all the different parts. And now that you have those keywords and those pieces a little more under your control or you have more experience with them, you can use all that experience to then seek out particular pieces. I'm looking for an ORM that's built with Rust because that's the thing I need to do it for; that's the platform I need to work with. And I needed to make sure that it supports MySQL and Postgres, right? 
Like, it's a very targeted thing that you wouldn't know when you're starting out. But over years of experience, you understand the difference and the reasons why you might need something like that. And sometimes it's about kind of evaluating options and maybe making little test projects to play around with those things or side projects. That's why something like investment time or 20% time is so helpful and useful for that if you're the kind of person who, you know, enjoys programming on your own in your own free time like I am. And that's also a great time to do it, although it's certainly not required. And so, that's kind of how I go through and evaluate whatever tool it is that I need. For something maybe more professional or higher stakes, there's a little more evaluation upfront, right? You want to make sure you make the right choice before you spend thousands of hours using it and potentially regretting [laughs] it and having to roll it back, causing even more thousands of hours of time. So, there's obviously some scrutiny there. But, again, that also takes experience and understanding the kind of need that you have. So, yeah, it's kind of a trade-off of, like, your time, and your energy, and your experience, and your interest. You will have many different inputs from colleagues, from websites, from posts on the internet, from Twitter, or fediverse-type kind of blogging and everything in between, right? So, you take all that in, and you try a bunch of stuff, and you come out on the other side, and then you do it again. STEPHANIE: Yeah, it sounds like you really like to just experiment, and I think that's really great. And I actually have to say that I am not someone who likes to do that [laughs]. Like, it's not where I focus a lot of my time. And it's why I'm, like, glad I'm friends with you, first of all. EDWARD: [laughs] STEPHANIE: But also, I've realized I'm much more of, like, a gatherer in terms of information and opinions. Like, I like hearing about other people's experience to then, like, help inform an opinion that I might develop myself. And, you know, it's not to say that, like, I am, like, oh yeah, like, so and so said this, and so, therefore, yeah, I completely believe what they have to say. But as someone who does not particularly want to spend a ton of my time trying out things, it is really helpful to know people who do like to do that, know people who I do trust, right? And then kind of like you had mentioned, just, like, having all these different inputs. And one thing that has changed for me with more experience is, previously, a lot of, like, the basis of what I thought was the quote, unquote, "right way" to develop software was, like, asking, like, other people and, you know, their opinions becoming my own. And, you know, at some point, though, that, like, has shifted, right? Where it's like, oh, like, you know, I remember learning this from so and so, and, like, actually, I think I disagree now. Or maybe it's like, I will take one part of it and be like, yeah, I really like test-driven development in this particular way that I have figured out how I do it, but it is different still from, like, who I learned it from. And even though, like, that was kind of what I thought previously as, like, oh yeah, like, this is the way that I've adopted without room for adjustment. I think that has been a growth, I guess, that I can point to and be like, oh yeah, like, I once was in a position where maybe opinions weren't necessarily my own. 
But now I spend a lot more time thinking about, like, oh, like, how do I feel about this? And I think there is, like, some amount of self-reflection required, right? A lot, honestly. Like, you try things, and then you think about, like, did I like that? [laughs] One without the other doesn't necessarily fully informed opinion make. EDWARD: Yeah, absolutely. I mean, I'm really glad you brought up that, like, you've heard an opinion, or a suggestion, or an idea from somebody, and you kind of adopt it as your own for a little bit. I like to think of it as trying on ideas like you try on clothing. Or something like, let me try on this jacket. Does this fit? And maybe you like it a little bit. Or maybe you look ridiculous, and it's [laughs] not quite for you. And you don't feel like it's for you. But you have to try. You have to, like, actually do it. And that is a completely valid way to, like, kick-starting some of those opinions, getting input from friends or colleagues, or just the world around you. And, like, hearing those things and trying them is 100% valid. And I'm glad you mentioned that because if I mentioned it, I think I kind of skipped over it or went through it very quickly. So, absolutely. And you're talking about how you just take, like, one part of it maybe. That nuance, that is, I think, really critical to that whole thought, too. Everything works differently for different people. And every tool is good for other, like, different jobs. Like, it will be like saying a hammer is the best tool, and it's, like, well, it's a good tool for the right thing. But, like, I wouldn't use a hammer to, like, I don't know, level the new house numbers I put on my house, right? But I might use them to, like, hit the nail to get them in. So, it's a silly analogy, but, like, there is always nuance and different ways to apply these different tools and opinions. STEPHANIE: I like that analogy. I think it would be really funny if there was someone out there who claimed that the hammer is the best tool ever invented [laughs]. EDWARD: Oh, I'm sure. I'm sure there is, you know. I'm not going to use a drill to paint my house, though [laughs]. STEPHANIE: That's a fair point, and you don't have to [chuckles]. EDWARD: Thank you [laughs]. STEPHANIE: But, I guess, to extend this thought further, I completely and wholeheartedly agree that, like, yeah, everyone gets to decide for themselves what works for them. But also, we work in relation with others. And I'm very interested in the balance of having your own ideas and opinions about tooling, software practices, like, whatever, and then how to bring that back into, like, working on a team or, like, working with others. EDWARD: Yeah. Well, I don't know if this is exactly what you're asking, but it makes me think of: you've gone off; you've discovered a whole bunch of stuff that you think works really well for you. And then you go to work, or you go to a community that is using a very different way of working, or different tools, or different technologies. That can be a piece of friction sometimes of, like, "Oh my gosh, I love Ruby on Rails. It's the best." And someone else is like, "I really, really don't like Ruby on Rails for reasons XYZ. And we don't use it here." And that can be really tough and, honestly, sometimes even disheartening, depending on how strongly you feel about that tool and how strongly they feel about their tools. 
And as a young developer many years ago, I definitely had a lot more of my identity wrapped up in the tools and technologies that I used. And that has been very useful to try to separate those two. I don't claim to be perfect at it or done with that work yet. But the more I can step away and say, you know, like, this is only a tool. It is not the tool. It is not the best tool. It is a tool that can be very effective at certain things. And I've found, at least right now, the more useful thing is to get to the root of the problem you're trying to solve and make sure you agree with everybody on that premise. So, yes, you may have come from a world where fast iteration and a really fluent language interface like Ruby has and a really fast iteration cycle like Rails has, is, like, the most important need to be solved because other things have been solved. You understand what you're doing for your product, or maybe you need to iterate quickly on that product. You've figured out an audience. You're getting payroll. You're meeting all that as a business. But then you go into a business that's potentially, like, let's say, much less funded. Or they have their market fit, and now they're working on, like, extreme performance optimization, or they're working on getting, like, government compliance, or something like that. And maybe Rails is still great. This is maybe a...the analogy may fall apart here. But let's pretend it isn't for some reason. You have to agree that, hey, like, yes, we've solved problem X that Rails really helps you solve. And now we're moving on to problem Y, and Rails may not help you solve that, or whatever technology you're using may not help you solve that. And I've found it to be much more useful to stop worrying about the means, and the tools, the things in between, and worry about the ends, worry about the goal, worry about the problems you're actually trying to solve. And then you can feel really invested in trying to solve that problem together as a group, as a team, as a community. I've found that to be very helpful. And I would also like to say it is extremely difficult to let some of that stuff go. It takes a lot of work. I see you nodding along. Like, it's really, really hard. And, like I said, I'm not totally done with it either. But that's, I think, it's something I'm really working on now and something I feel really strongly about. STEPHANIE: Yeah. You mentioned the friction of, like, working in an environment where there are different opinions, which is, you know, I don't know, just, like, reality, I guess [laughs]. EDWARD: Human nature. STEPHANIE: Yeah, exactly. And one thing I was thinking about recently was, like, okay, like, so someone else maybe made a decision about using a type of technology or, like, made a decision about architecture before my time or, like, above me, or whatever, right? Like, I wasn't there, and that is okay. But also, like, how do I maintain what I believe in and hold fast to, like, my opinions based on my value system, at least, without complaining? [laughs] Because I've only seen that a little bit before, right? When it just becomes, like, venting, right? It's like, ugh, like, you know, I have seen people who are coming from maybe, like, microservices or more of a JavaScript world, and they're like, ugh, like, what is going on with Rails? Like, this sucks [laughs]. And one thing I've been trying lately is just, like, communicating when I don't agree that something's a great idea. 
But also, like, acknowledging that, like, yeah, but this is how it is for this team, and I'm also not in a position to change it. Or, like, I don't feel so strongly about it that I'm like, "Hey, we should totally rethink using this, like, background job [laughs] platform." But I will be like, "Hey, like, I don't like this particular thing about it. And, you know, maybe here are some things that I did to mitigate whatever thing I'm not super into," or, like, "If I had more time, this is what I would do," and just putting it out there. Sometimes, I don't get, like, engagement on it. But it's a good practice for me to be, like, this is how I can still have opinions about things, even if I'm not, at least in this particular moment, in a position to change anything. EDWARD: It sounds to me like you in, at least at the lowest level, like, you want to be acknowledged, and you want to, like, be heard. You want to be part of a process. And yes, it doesn't always go with Stephanie's initial thought, or even final thought, or Edward's final thought. But it is very helpful to know that you are heard and you are respected. And it isn't someone just, like, completely disregarding any feeling that you have. As much as we like to say programming is this very, like, I don't know, value neutral, zero emotion kind of job, like, there's tons of emotion in this job. We want to do good things for the world. We want our technology to serve the people, ultimately, at least I do, and I know you do. But we sometimes disagree on the way to do that. And so, you want to make sure you're heard. And if you can't get that at work, like, and I know you do this, but I would encourage anyone listening out there to, like, get a buddy that you can vent to or get somebody that you can express, and they will hear you. That is so valuable just as a release, in some ways, to kind of get through what you need to get through sometimes. Because it is a job, and you aren't always the person that's going to make the decisions. And, honestly, like, you do still have one decision left, which is you can go work somewhere else if it really is that bad. And, like, it's useful to know that you are staying where you are because you appreciate the trade-offs that you have: a steady paycheck, or the colleagues that you work with, or whatever. And that's fine. That's an okay trade-off. And at some point, you might want to make a different trade-off, and that's also fine. We're getting real managery and real here. But I think it's useful. Like you said, this can be a very emotional career, and it's worth acknowledging that. STEPHANIE: Yeah, you just, you know, raised a bunch of, like, very excellent points. Yeah, at the end of the day, like, you know, you can do your best to, like, propose changes or, like, introduce new tooling and, like, see how other people feel about it. But, like, yeah, if you fundamentally do not enjoy working with a critical tool that, you know, a lot of the foundation of the work that you're doing day to day is built off of, then maybe there is a place where, like, another company that's using tools that you do feel excited or, like, passionate or, like, are a better alignment with what you hope to be doing. Kind of just going back to that theme that we were talking about earlier, like, everyone gets to decide for themselves, right? Like, the tools to help them do what they want to be doing. EDWARD: And you could even, like, reframe it for yourself, where instead of it being about the tools, maybe it's about the problem. 
Like, you start being more invested in, like, the problem that you're solving and, okay, maybe you don't want to use microservices or whatever, but, like, maybe you can get behind that if you realign yourself. The thing you're trying to solve is not the tool. The thing you're trying to solve is the problem. And that can be a useful, like, way to mitigate that or to, like, help yourself feel okay about the thing, whatever that is. STEPHANIE: Yeah. Now, how do I have this conversation with everyone [laughter] who claims on the internet that X is the solution to all their problems or the silver bullet, [laughs] or whatever? EDWARD: Yeah, that's tough because there are some very strong opinions on the internet, as I'm sure [laughs] you've observed. I don't know if I have the answer [laughs]. Once again, nuance and indecisions. I have been currently approaching it from kind of a meta-perspective of, like, if someone says, "X is the best tool," you know, "A hammer is the best tool," right? I'm not going to go write the post that's like, "No, hammer is, in fact, not the best tool. Don't use hammers." I would maybe instead write a post that's like, "Consider what makes the best tool." I've effectively, like, raised up one level of abstraction from, we're no longer talking about is X, or Y, or Z, the best tool? We're talking about how do we even decide that? How do we even think about that? One post...I'm now just promoting my blog posts, so get ready. But one thing I wrote was this post called And Not But. And I tried to make the case that instead of saying the word but in a sentence, so, like, yeah, yeah, we might want to use hammers, but we have to use drills or whatever. I'm trying to make the case that you can use and instead. So yeah, hammers are really good, and drills are really good in these other scenarios. And trying to get that nuance in there, like, really, really putting that in there and getting people to, like, feel that better, I think, has been really helpful, for me, certainly to get through. And part of the best thing about writing a blog post is just getting your own thoughts...I mean, it's another way to vent, right? It's getting your own thoughts out somewhere. And sometimes people respond to them. You'd be surprised who just reaches out and been like, "Hey, yeah, like, I really appreciated that post. That was really great." You weren't trying to reach that person, but now you have another connection. So, a side benefit for writing blog posts [inaudible 30:17] do it, or just even getting your thoughts out via a podcast, via a video, whatever. So, I've kind of addressed that. I also wrote a post when I worked at thoughtbot called Empathy Online. And that came out of, like, frustration with seeing people being too divisive or, in my opinion, unempathetic or inconsiderate. And instead of, again, trying to just say, "Stop it, don't do that," [laughs] but trying to, like, help use what I have learned when communicating in a medium that is kind of inherently difficult to get across emotion and empathy. And so, again, it's, in some ways, unsatisfying because what you really want to do is go talk to that person that says, "Hammer is the best tool," and say, "No, stop it [laughs]," and, like, slap them on the head or whatever, politely. But I think that probably will not get you very far. 
And so, if your goal, really, is to change the way people think about these things, I find it way more effective to, like, zoom out and talk about that on that sort of more meta-level and that higher level. STEPHANIE: Yeah. I liked how you called it, like, a higher level of abstraction. And, honestly, the other thing I was thinking about as you were talking about the, like, divisiveness that opinions can create, there's also some aspect of it, as a reader, realizing that one person sharing their opinion does not take away your ability to have a differing opinion [laughs]. And sometimes it's tough when someone's like, "Tailwind sucks [laughs], and it is a backward step in, you know, how we write CSS," or whatever. Yes, like, sometimes that can be kind of, like, inflammatory. But if you, like, kind of are translating it or, like, reading between the lines, they're just writing about their perspective from the things that they value. And it is okay for you to value different things and, for that reason, have a different perspective on the same thing. And, I don't know, that has helped me sometimes avoid getting into that, like, headspace of wanting to argue with someone [laughs] on the internet. Or they'll be like, "This is why I am right." [laughs] Now I have to write something and share it on the internet in response [laughs]. EDWARD: There's this idea of the narcissism of minor differences. And I believe the idea is this, like, you know, you're more likely to argue with someone who, like, 90% agrees with you. But you're just, like, quibbling over that last 10%. I mean, one might call it bikeshedding. I don't know if you've heard that phrase. But the thing that I have often found, too, is that, like the GitHub post, I will get people arguing with me, like, there's the kind of stuff I expected, where it's like, "Oh, but GitHub is really good," and XYZ and that's fine. And we can have that conversation. But it's kind of surprising, and I should have expected it, that people will sometimes be like, "Hey, you didn't go far enough. You should tell people to, like, completely delete their GitHub or, like, you know, go protest in the street." And, like, maybe that's true. I'm not saying it is or isn't. But I think one thing I try to think about is, in any post, in any argument you're trying to make to convince someone, like, you're potentially moving someone 1 step forward, even if there's ten steps to go. But they're never going to make those ten steps if they don't make the first one. And so, you can kind of help them get there. And someone else's post can absolutely take them from step 5 to 6 or 6 to 7 or 7 to 8. And you won't accomplish it all at once, and it's kind of a silly thing to try, and your efforts are probably lost [laughs]. Unfortunately, it's a little bit of preaching to the choir because, like, yeah, the people that are going to respond to, like, the extreme, the end are, like, the people that already get it. And the people that you're trying to convince and move along are not going to get that thing. I do want to say that I could see this being perceived as, like, a very privileged position of, like, if there's some, like, genuine atrocity happening in the world, like, it is appropriate to go to extremes many times and sometimes, and that's fine, and people are allowed to be there. I don't want to invalidate that. It's a really tricky balance. And I'm trying to say that if your goal is to vent, that's fine. And if your goal is to move people from step 3 to 4, you have to meet people at step 3.
And all that's valid and okay to try to help people move in that way. But it is very tricky. And I don't want to invalidate someone who's extremely frustrated because they're at step 10, and no one else is seeing the harm that not everybody else being at step 10 is. Like, that's an incredibly reasonable place to be and an okay place to be. STEPHANIE: Yeah, yeah. The other thing you just sparked, for me, is also the, like, power of, yeah, being able to say like, "Yeah, I agree with this 50%, or 60%, or, like, 90%." And also, there's this 10% that I'm like, oh, like, I wish were different, or I wish they'd gone further, or I wish they didn't say that. Or, you know, I just straight up disagree with this step 1 sentence, but the rest of the article, you know, I really related to. And, like, teasing that apart has been very useful for me, right? Because then I'm no longer like being like, oh, was this post good or bad? Do I agree with it or don't agree with it? It's like, there's room for [laughs] all of it. EDWARD: Yeah, that's that nuance that, you know, I liked this post, and I did not agree with these two parts of it, or whatever. It's so useful. STEPHANIE: Well, thanks, Edward, so much for coming on the show and bringing that nuance to this conversation. I feel really excited about kind of what we talked about, and hopefully, it resonates with some of our listeners. EDWARD: Yeah, I hope so too. I hope I can take them from step 2 to step 3 [laughs]. STEPHANIE: On that note, shall we wrap up? EDWARD: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at tbot.io/referral. Or you can email us at referrals@thoughtbot.com with any questions.

Data Engineering Podcast
Unlocking Your dbt Projects With Practical Advice For Practitioners

Data Engineering Podcast

Nov 20, 2023 • 76:04


Summary

The dbt project has become overwhelmingly popular across analytics and data engineering teams. While it is easy to adopt, there are many potential pitfalls. Dustin Dorsey and Cameron Cyr co-authored a practical guide to building your dbt project. In this episode they share their hard-won wisdom about how to build and scale your dbt projects.

Announcements

* Hello and welcome to the Data Engineering Podcast, the show about modern data management
* Data projects are notoriously complex. With multiple stakeholders to manage across varying backgrounds and toolchains even simple reports can become unwieldy to maintain. Miro is your single pane of glass where everyone can discover, track, and collaborate on your organization's data. I especially like the ability to combine your technical diagrams with data documentation and dependency mapping, allowing your data engineers and data consumers to communicate seamlessly about your projects. Find simplicity in your most complex projects with Miro. Your first three Miro boards are free when you sign up today at dataengineeringpodcast.com/miro (https://www.dataengineeringpodcast.com/miro). That's three free boards at dataengineeringpodcast.com/miro (https://www.dataengineeringpodcast.com/miro).
* Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
* You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It's the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it's real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
* Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey and today I'm interviewing Dustin Dorsey and Cameron Cyr about how to design your dbt projects

Interview

* Introduction
* How did you get involved in the area of data management?
* What was your path to adoption of dbt? What did you use prior to its existence? When/why/how did you start using it?
* What are some of the common challenges that teams experience when getting started with dbt?
* How does prior experience in analytics and/or software engineering impact those outcomes?
* You recently wrote a book to give a crash course in best practices for dbt. What motivated you to invest that time and effort?
* What new lessons did you learn about dbt in the process of writing the book?
* The introduction of dbt is largely responsible for catalyzing the growth of "analytics engineering". As practitioners in the space, what do you see as the net result of that trend?
* What are the lessons that we all need to invest in independent of the tool?
* For someone starting a new dbt project today, can you talk through the decisions that will be most critical for ensuring future success?
* As dbt projects scale, what are the elements of technical debt that are most likely to slow down engineers?
* What are the capabilities in the dbt framework that can be used to mitigate the effects of that debt?
* What tools or processes outside of dbt can help alleviate the incidental complexity of a large dbt project?
* What are the most interesting, innovative, or unexpected ways that you have seen dbt used?
* What are the most interesting, unexpected, or challenging lessons that you have learned while working with dbt? (as engineers and/or as authors)
* What is on your personal wish-list for the future of dbt (or its competition)?

Contact Info

* Dustin: LinkedIn (https://www.linkedin.com/in/dustindorsey/)
* Cameron: LinkedIn (https://www.linkedin.com/in/cameron-cyr/)

Parting Question

* From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

* Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
* Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
* If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
* To help other people find the show, please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers

Links

* Biobot Analytics (https://biobot.io/)
* Breezeway (https://www.breezeway.io/)
* dbt (https://www.getdbt.com/) — Podcast Episode (https://www.dataengineeringpodcast.com/dbt-data-analytics-episode-81/)
* Synapse Analytics (https://azure.microsoft.com/en-us/products/synapse-analytics/)
* Snowflake (https://www.snowflake.com/) — Podcast Episode (https://www.dataengineeringpodcast.com/snowflakedb-cloud-data-warehouse-episode-110/)
* Fivetran (https://www.fivetran.com/) — Podcast Episode (https://www.dataengineeringpodcast.com/fivetran-data-replication-episode-93/)
* Analytics Power Hour (https://analyticshour.io/)
* DDL == Data Definition Language (https://en.wikipedia.org/wiki/Data_definition_language)
* DML == Data Manipulation Language (https://en.wikipedia.org/wiki/Data_manipulation_language)
* dbt codegen (https://github.com/dbt-labs/dbt-codegen)
* Unlocking dbt (https://amzn.to/49BhACq) book (affiliate link)
* dbt Mesh (https://www.getdbt.com/product/dbt-mesh)
* dbt Semantic Layer (https://www.getdbt.com/product/semantic-layer)
* GitHub Actions (https://github.com/features/actions)
* Metaplane (https://www.metaplane.dev/) — Podcast Episode (https://www.dataengineeringpodcast.com/metaplane-data-observability-platform-episode-253/)
* DataTune Conference (https://www.datatuneconf.com/)

The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)

BIFocal - Clarifying Business Intelligence
Episode 267 - Microsoft Ignite 2023 - Fabric is GA

BIFocal - Clarifying Business Intelligence

Nov 17, 2023 • 41:04


This is episode 267, recorded on November 17th, 2023, where John & Jason talk about the release to GA of Microsoft Fabric and everything around it: Mirroring in Microsoft Fabric, Microsoft 365 Data in Microsoft Fabric, and the Public Preview of Copilot for Power BI in Microsoft Fabric.

Software Sessions
David Copeland on Medium Sized Decisions (RubyConf 2023)

Software Sessions

Nov 17, 2023 • 48:33


David was the chief software architect and director of engineering at Stitch Fix. He's also the author of a number of books including Sustainable Web Development with Ruby on Rails and most recently Ruby on Rails Background Jobs with Sidekiq. He talks about how he made decisions while working with a medium-sized team (~200 developers) at Stitch Fix. The audio quality for the first 19 minutes is not great but the correct microphones turn on right after that. Recorded at RubyConf 2023 in San Diego. A few topics covered:

* Ruby's origins at Stitch Fix
* Thoughts on Go
* Choosing technology and cloud services
* Moving off Heroku
* Building a platform team
* Where Ruby and Rails fit in today
* The role of books and how different people learn
* Large Language Models' effects on technical content

Related Links

* David's Blog
* Mastodon

Transcript

You can help correct transcripts on GitHub.

Intro [00:00:00] Jeremy: Today I want to share another conversation from RubyConf San Diego. This time it's with David Copeland. He was a chief software architect and director of engineering at Stitch Fix. And at the start of the conversation, you're going to hear about why he decided to write the book, Sustainable Web Development with Ruby on Rails. Unfortunately, you're also going to notice the sound quality isn't too good. We had some technical difficulties. But once you hit the 20 minute mark of the recording, the mics are going to kick in. It's going to sound way better. So I hope you stick with it. Enjoy. Ruby at Stitch Fix [00:00:35] David: Stitch Fix was a Rails shop. I had done a lot of Rails and learned a lot of things that worked and didn't work, at least in that situation. And so I started writing them down and I was like, I should probably make this more than just a document that I keep, you know, privately on my computer. Uh, so that's, you know, kind of, kind of where the genesis of that came from, and I just tried to write everything down: what I thought worked, what didn't work. Uh, if you're in a situation like me, working on a product, with a medium sized, uh, team, then I think the lessons in there will be useful, at least some of them. Um, and I've been trying to keep it up over, over the years. I think the first version came out a couple years ago, so I've been trying to make sure it's always up to date with the latest stuff and, and Rails and based on my experience and all that. [00:01:20] Jeremy: So it's interesting that you mention, medium sized team because, during the, the keynote, just a few moments ago, Matz the creator of Ruby was talking about how like, Oh, Rails is really suitable for this, this one person team, right? Small, small team. And, uh, he was like, you're not Google. So like, don't worry about, right. Can you scale to that level? Yeah. Um, and, and I wonder like when you talk about medium size or medium scale, like what are, what are we talking? [00:01:49] David: I think probably under 200 developers, I would say, because when I left Stitch Fix, it was closing in on that number of developers. And so it becomes, you know, hard to... You can kind of know who everybody is, or at least the names sound familiar of everybody. But beyond that, it's just, it's just really hard. But a lot of it was like, I don't have experience at like a thousand developer company. I have no idea what that's like, but I definitely know that Rails can work for, like... 200-ish people, how you can make it work, basically. Yeah.
[00:02:21] Jeremy: The decision to use Rails, I'm assuming that was made before you joined? [00:02:26] David: Yeah, the, um, the CTO of Stitch Fix, he had come in to clean up a mess made by contractors, as often happens. They had used Django, which is like the Python version of Rails. And he, the CTO, he was more familiar with Rails. So the first two developers he hired, also familiar with Rails. There wasn't a lot to maintain with the Django app, so they were like, let's just start fresh, fresh with Rails. Yeah, but it's funny because a lot of the code in that Rails app was, like, transliterated from Python. So you could, it would, it looked like the strangest Ruby code in the world because it was basically, there was no test. So they were like, let's just write the Ruby version of this Python just so we know it works. But obviously that didn't, didn't last forever, so. [00:03:07] Jeremy: So, so what's an example of a, of a tell? Where you're looking at the code and you're like, oh, this is clearly, it came from Python. [00:03:15] David: You'd see, like, very, very explicit [code], right? Like Python, there's a lot of, like, single line things. Very, like... this sounds like a dig, but it's very simple-looking code. Like, like I don't know Python, but I was able to change this Django app. And I had to, I could look at it and you can figure out immediately how it works. Cause there's not much to it. There's nothing fancy. So, like, this, this Ruby code, there was nothing fancy. You'd be like, well, maybe they should have memoized that, or maybe they should have taken that into another class, or you could have done this with a hash or something like that. So there was, like, none of that. It was just, like, really basic, plain code like you would see in any beginning programming language kind of thing. Which is at least nice. You can understand it. But you probably wouldn't have written it that way at first in Ruby. Thoughts on Go [00:04:05] Jeremy: Yeah, that's, that's interesting because, uh, people sometimes talk about the Go programming language and how it looks, I don't know if simple is the right word, but it's something where you look at the code and even if you don't necessarily understand Go, it's relatively straightforward. Yeah. I wonder what your thoughts are on that being a strength versus that being, like, [00:04:25] David: Yeah, so at Stitch Fix at one point we had a pro, we were moving off of Heroku and we were going to, basically build a deployment platform using ECS on AWS. And so the deployment platform was a Rails app and we built a command line tool using Ruby. And it was fine, but it was a very complicated command line tool and it was very slow. And so one of the developers was like, I'm going to rewrite it in Go. I was like, ugh, you know, because I just was not a big fan. So he rewrote it in Go. It was a bazillion times faster. And then I was like, okay, I'm going to add, I'll add a feature to it. It was extremely easy. Like, it's just like what you said. I looked at it, like, I don't know anything about Go. I know what is happening here. I can copy and paste this and change things and make it work for what I want to do. And it did work. And it was, it was pretty easy. So there's that, I mean, aesthetically it's pretty ugly and it's, I, I can't really defend that as a real reason to not use it, but it is kind of gross. I did do Go, I did a small project in Go after Stitch Fix, and there's this vibe in Go about like, don't create abstractions.
I don't know where I got that from, but every Go [program] I look at, I'm like we should make an abstraction for this, but it's just not the vibe. They just don't like doing that. They like it all written out. And I see the value because you can look at the code and know what it does and you don't have to chase abstractions anywhere. But I felt like I was copying and pasting a lot of, a lot of things. Um, so I don't know. I mean, the, the team at Stitch Fix that did this like command line app in Go, they're the platform team. And so their job isn't to write like web apps all day, every day. They're kind of in and out of all kinds of things. They have to try to figure out something that they don't understand quickly to debug a problem. And so I can see the value of something like Go if that's your job, right? You want to go in and see what the issue is. Figure it out and be done and you're not going to necessarily develop deep expertise in whatever that thing is that you're kind of jumping into. Day to day though, I don't know. I think it would make me kind of sad. (laughs) [00:06:18] Jeremy: So, so when you say it would make you kind of sad, I mean, what, what about it? Is it, I mean, you mentioned that there's a lot of copy and pasting, so maybe there's code duplication, but are there specific things where you're like, oh, I just don't? [00:06:31] David: Yeah, so I had done a lot of Java in my past life and it felt very much like that. Where like, like the Go library for making an HTTP call for like, I want to call some web service. It's got every feature you could ever want. Everything is tweakable. You can really, you can see why it's designed that way. To dial in some performance issue or solve some really esoteric thing. It's there. But the problem is if you just want to get some JSON, it's just, like, a huge production. And I felt like that's all I really want to do and it's just not making it very easy. And it just felt very, very cumbersome. I think that having to declare types also is a little bit of a weird mindset because, I mean, I like to make types in Ruby, I like to make classes, but I also like to just use hashes and stuff to figure it out. And then maybe I'll make a class if I figure it out, but Go, you can't. You have to have a class, you have to have a type, you have to think all that ahead of time, and it just, I'm not used to working that way, so it felt, I mean, I guess I could get used to it, but I just didn't warm up to that sort of style of working, so it just felt like I was just kind of fighting with the vibe of the language, kind of. Yeah, [00:07:40] Jeremy: so it's more of the vibe or the feel where you're writing it and you're like this seems a little too... Explicit. I feel like I have to be too verbose. It just doesn't feel natural for me to write this. [00:07:53] David: Right, it's not optimized for what in my mind is the obvious case. And maybe that's not the obvious case for the people that write Go programs. But for me, like, I just want to like get this endpoint and get the JSON back as a map. Not any easier than any other case, right? Whereas like in Ruby, right? And you can, I think if you include Net::HTTP, you can just type get. And it will just return whatever that is. Like, that's amazing. It's optimized for what I think is a very common use case. So it makes me feel really productive. It makes me feel pretty good. And if that doesn't work out long term, I can always use something more complicated.
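[Editor's note: a minimal Ruby sketch of the contrast David is describing here, using only the standard library; the URL and JSON shape are hypothetical examples, not anything from the episode.]

```ruby
require 'net/http'
require 'json'

# The common case is a one-liner: fetch a URL and get the body back as a String.
body = Net::HTTP.get(URI('https://api.example.com/customers/1'))
customer = JSON.parse(body) # the JSON comes back as a plain Hash, no types declared up front

# The more configurable API is still there when the simple case stops being enough:
uri = URI('https://api.example.com/customers/1')
Net::HTTP.start(uri.host, uri.port, use_ssl: true, open_timeout: 5) do |http|
  response = http.get(uri.request_uri)
  puts response.code # HTTP status as a String, e.g. "200"
end
```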
But I'm not required to dig into the Net::HTTP library just to do what in my mind is something very simple. [00:08:37] Jeremy: Yeah, I think that's something I've noticed myself in working with Ruby. I mean, you have the standard library that's very... Comprehensive and the API surface is such that, like you said there, when you're trying to do common tasks, a lot of times they have a call you make and it kind of does the thing you expected or hoped for. [00:08:56] David: Yeah, yeah. It's kind of, I mean, it's that whole optimized for programmer happiness thing. Like it does. That is the vibe of Ruby and it seems like that is still the way things are. And, you know, I, I suppose if I had a different mindset, I mean, because I work with developers who did not like using Ruby or Rails. They loved using Go or Java. And I, I guess there's probably some psychological analysis we could do about their background and history and mindset that makes that make sense. But, to me, I don't know. It's, it's nice when it's pleasant. And Ruby seems pleasant. (laughs) Choosing Technology [00:09:27] Jeremy: as a... Software Architect, or as a CTO, when, when you're choosing technology, what are some of the things you look at in terms of, you know? [00:09:38] David: Yeah, I mean, I think, like, it's a weird criteria, but I think what is something that the team is capable of executing with? Because, like, most, right, most programming languages all kind of do the same thing. Like, you can kind of get most stuff done in most common popular programming languages. So, it's probably not... It's not true that if you pick the wrong language, you can't build the app. Like, that's probably not really the case. At least for like a web app or something. So it's more like, what is the team that's here to do it? What are they comfortable and capable of doing? I worked on a project with... It was a mix of like junior engineers who knew JavaScript, and then some senior engineers from Google. And for whatever reason someone had chosen a Rails app and none of them were comfortable or really yet competent with doing Ruby on Rails and they just all hated it and like it didn't work very well. Um, and so even though, yes, Rails is a good choice for doing stuff, for that team at that moment? Not a good choice. Right. So I think you have to go in and like, what, what are we going to be able to execute on so that when the business wants us to do something, we just do it. And we don't complain and we don't say, Oh, well we can't because this technology that we chose, blah, blah, blah. Like you don't ever want to say that if possible. So I think that's, that's kind of the, the top thing. I think second would be how widely supported is it? Like you don't want to be the cutting edge user that's finding all the bugs in something, really. Like you want to use something that's stable. Postgres, MySQL, like those work, those are fine. The bugs have been sorted out for most common use cases. Some super fancy edge database, I don't know if I'd want to be doing, doing that, you know? Choosing cloud services [00:11:15] Jeremy: How do you feel about the cloud-specific services and databases? Like are you comfortable saying like, oh, I'm going to use... Google Cloud, BigQuery. Yeah. [00:11:27] David: That sort of thing.
I think it would kind of fall under the same criteria that I was just, just saying like, so with AWS it's interesting 'cause when we moved from Heroku to AWS, we used EC2, RDS (their database thing), uh, S3. Those have been around for years; probably those are gonna work, but they always introduce new things. Like we, we use RabbitMQ and AWS came out with some... I forget what it was; it was a queuing service similar to Rabbit. We were like, Oh, maybe we should switch to that. But it was clear that they weren't really ready to support it. So, yeah, so we didn't, we didn't switch to that. So I, you gotta try to read the tea leaves of the provider to see are they committed to, to supporting this thing or is this there to get some enterprise client to move into the cloud. And then the idea is to move off of that transitional thing into what they do support. And it's hard to get a clear answer from them too. So it takes a little bit of research to figure out, Are they going to support this or not? Because that's what you don't want: to move everything into some very proprietary cloud system and have them sunset it and say, Oh yeah, now you've got to switch again. Uh, that kind of sucks. So, it's a little trickier. [00:12:41] Jeremy: And what kind of questions or research do you do? Is it purely a function of this thing has existed for X number of years so I feel okay? [00:12:52] David: I mean, it's kind of similar to looking at like some gem you're going to add to your project, right? So you'll, you'll look at how often does it change? Is it being updated? Uh, what is the documentation? Does it look like someone really cared about the documentation? Does the documentation look updated? Are there issues with it that are being addressed or, or not? Um, so those are good signals. I think, talking to other practitioners too can be good. Like if you've got someone who's experienced, you can say, hey, do you know anybody? Back channeling through, like, everybody knows somebody that works at AWS, you can probably try to get something there. At Stitch Fix, we had an enterprise support contract, and so your account manager will sometimes give you good information if you ask. Again, it's a, they're not going to come out and say, don't use this product that we have, but they might communicate that in a subtle way. So you have to triangulate from all these sources to try to, to try to figure out what, what you want to do. [00:13:50] Jeremy: Yeah, it kind of makes me wish that there was a, a site like, maybe not quite like, Can I Use, right? Can I Use, you can see like, oh, can I use this in my browser? Is there, uh, like an AWS or a Google Cloud? Can I trust this? Can I trust this? Yeah. Is this, is this solid or not? [00:14:04] David: Right, totally. It's like, there's that, that site where you, it has all the Apple products and it says whether or not you should buy it because one may or may not be coming out or they may be getting rid of it. Like, yeah, that would... For cloud services, that would be, that would be nice. [00:14:16] Jeremy: Yeah, yeah. That's like the Mac Buyer's Guide. And then we, we need the, uh, the technology. Yeah. Maybe not buyers. Cloud Provider Buyer's Guide, yeah. I guess we are buyers. [00:14:25] David: Yeah, yeah, totally, totally. [00:14:27] Jeremy: it's interesting that you, you mentioned how you want to see that, okay, this thing is mature. I think it's going to stick around because, I, interviewed, someone who worked on, I believe it was the CloudWatch team. Okay.
Daniel Vassallo, yeah. So he left AWS, uh, after I think about 10 years, and then he wrote a book called, uh, The Good Parts of AWS. Oh! And, if you read his book, most of the services he says to use are the ones that are, like, old. Yeah. He's, he's basically saying, like, S3, you know you're good. Yeah. Right? But then all these, if you look at the AWS webpage, they have who knows, I don't know how many hundreds of services. Yeah. He's, he's kind of like I worked there and I would not use, you know, all these new services. 'cause I myself, I don't trust [00:15:14] David: it yet. Right. And so, and they're working there? Yeah, they're working there. Yeah. No. One of the VPs at Stitch Fix had worked on Google Cloud and so when we were doing this transition from Heroku, he was like, we are not using Google Cloud. I was like, really? He's like, AWS is far ahead of the game. Do not use Google Cloud. I was like, all right, I don't need any more info. You work there. You said don't. I'm gonna believe you. So [00:15:36] Jeremy: what, what was his...did he have, like, a core point? [00:15:39] David: Um, so he never really had anything bad to say about Google per se. Like I think he enjoyed his time there and I think he thought highly of who he worked with and what he worked on and that sort of thing. But his, where he was coming from was like, AWS was so far ahead of Google on anything that we would use, he was like, there's, there's really no advantage to, to doing it. AWS is a known quantity, right? It's probably still the case. It's like, you know, you've heard the "nobody ever got fired for using IBM" or using Microsoft or whatever the thing is. Like, I think that's, that was kind of the vibe. And he was like, moving all of our infrastructure right before we're going to go public. This is a serious business. We should just use something that we know will work. And he was like, I know this will work. I'm not confident about Google, uh, for our use case. So we shouldn't, we shouldn't risk it. So I was like, okay, I trust you because I didn't know anything about any of that stuff at the time. I knew Heroku and that was it. So, yeah. [00:16:34] Jeremy: I don't know if it's good or bad, but like you said, AWS seems to be the default choice. Yeah. And I mean, there's people who use Azure. I assume it's mostly primarily Microsoft. Yeah. And then there's Google Cloud. It's not really clear why you would pick it, unless there was a specific service or something that only they had. [00:16:55] David: Yeah, yeah. Or you're invested in Google, you know, you want to keep everything there. I mean, I don't know. I haven't really been at that level to make that kind of decision, and I would probably choose AWS for the reasons discussed, but, yeah. Moving off Heroku [00:17:10] Jeremy: And then, so at Stitch Fix, you said you moved off of Heroku [00:17:16] David: yeah. Yeah, so we were heavy into Heroku. I think that we were told that at one point we had the biggest Heroku Postgres database on their platform. Not a good place to be, right? You never want to be the biggest customer person, usually. But the problem we were facing was essentially we were going to go public. And to do that, you're under all the scrutiny about many things, including the IT systems and the security around there. So, like, by default, a Postgres, a Heroku Postgres database is, like, on the internet. It's only secured by the password. All their services are on the internet. So, not, not ideal.
they were developing their private cloud service at that time. And so that would have given us, in theory, on paper, it would have solved all of our problems. And we liked Heroku and we liked the developer experience. It was great. But... Heroku private spaces, it was still early. There were a lot of limitations that, when they explained why those limitations existed, they were reasonable. And if we had started from scratch on Heroku Private Spaces, it probably would have worked great, but we hadn't. So we just couldn't make it work. So we were like, okay, we're going to have to move to AWS so that everything can be basically off the internet. Like our public website needs to be on the internet and that's kind of it. So we need to, so that's basically, that was the impetus for that. But it's too bad because I love Heroku. It was great. I mean, they were, they were a great partner. They were great. I think if Stitch Fix had started life a year later, Private Spaces... now it's, it's, it's way different than it was then. Cause it's been, it's a mature product now, so we could have easily done that, but you know, the timing didn't work out, unfortunately. [00:18:50] Jeremy: And that was a compliance thing too, [00:18:53] David: Yeah. And compliance is weird cause they don't tell you what to do, but they give you some parameters that you need to meet. And so one of them is like how you control access. So, so going public, the compliance is around the financial data and ensuring that the financial data is accurate. So a lot of the systems at Stitch Fix were storing the financial data. We, you know, the warehouse management system was custom made. Uh, all the credit card processing was all done, like it was all in some databases that we had running in Heroku. And so those needed to be subject to stricter security than we could achieve with just a single password that we just had to remember to rotate when someone like left the team. So that was, you know, the kind of, the kind of impetus for, for all of that. [00:19:35] Jeremy: when you were using Heroku, Salesforce would have already owned it then. Did you, did you get any sense that you weren't really sure about the future of the platform while you're on it or, [00:19:45] David: At that time, no, it seemed like they were still innovating. So like, Heroku has a Redis product now. They didn't at the time; we wish that they did. They told us they're working on it, but it wasn't ready. We didn't like using the third parties. Kafka was not a thing. We very much were interested in that. We would have totally used it if it was there. So they were still, like, doing bigger innovations then than it seems like they are now. I don't know. It's weird. Like they're still there. They still make money, I assume for Salesforce. So it doesn't feel like they're going away, but they're not innovating at the pace that they were kind of back in the day. [00:20:20] Jeremy: it used to feel like when somebody's asking, I want to host a Rails app. Then you would say like, well, use Heroku because it's basically the easiest to get started. It's a known quantity and it's, it's expensive, but, it seemed for, for most people, it was worth it. And then now if I talk to people, it's like... not what people suggest anymore. [00:20:40] David: Yeah, because there's, there's actual competitors. It's crazy to me that there were no competitors for years, and now there's like, Render and Fly.io seem to be the two popular alternatives. Um, I doubt they're any cheaper, honestly, but...
You get a sense, right, that they're still innovating, still building those platforms, and they can build with, you know, all of the knowledge of what has come before them, and do things differently that might help. So, I still use Heroku for personal things, just because I know it, and, you know, sometimes you don't feel like learning a new thing when you just want to get something done. But, yeah, if we were starting again, I don't know, maybe I'd look into those things. They seem like they're getting pretty mature, and Heroku's resting on its laurels, still.
[00:21:26] Jeremy: I guess I never quite understood the mindset, right? Where you have a platform that's doing really well and people really like it, and you acquire it, and then it just... It seems like you would want to keep it rolling, right? (laughs)
[00:21:38] David: Yeah, it is wild. I mean, I guess... what was Salesforce thinking they were going to get? Who knows? Maybe the person at Salesforce that really wanted to purchase it isn't there anymore, and so no one at Salesforce cares about it. I mean, there's all these weird company politics, and who knows what's going on; you could speculate all day. What's interesting is, there's definitely some people in the Ruby community who work there and still are working there. And that's a little bit of a canary for me. I'm like, all right, well, if that person's still working there, and that person seems like they're on the level and seems pretty good, it's gotta be still a cool place to be, or still doing something good. But, yeah, I don't know. I would love to know what was going on in all the Salesforce meetings about acquiring it, how to manage it, what their plans for it are. I would love to know that stuff.
[00:22:29] Jeremy: Maybe you had some experience with this at Stitch Fix, but I've heard with Heroku, some of their support staff, at least in the past, would, to some extent, actually help you troubleshoot, like, what's going on with your app. Like, if your app is using a whole bunch of memory and you're out of memory, um, they would actually kind of look into that for you. Which is interesting, because that's almost more of a services thing than just a platform.
[00:22:50] David: Yeah. I mean, with their support, you would get escalated to, like, an engineer sometimes, who worked on that stuff, and they would help figure out what the problem was. Like, you got the sense that everybody there really wanted the platform to be good, and that they were all sort of motivated to make sure that everybody, you know, did well and used the platform. And they also were good at... a thing that trips everybody up about Heroku is that your app restarts every day. And if you don't know anything about anything, you might think that is stupid. Why would I want that? That's annoying. And I definitely went through that, and I complained to them a lot. And I'm like, if you only could not restart... And they very patiently and politely explained to me why it needed to do that, that they weren't going to remove it, and how to think about my app given that reality, right? Which is great, because, like, what company does that, right? From the engineers that are working on it? No, nobody does that. So, yeah, no, I haven't escalated anything to support at Heroku in quite some time, so I don't know if it's still like that.
I hope it is, but I'm not really sure.
Building a platform team
[00:23:55] Jeremy: Yeah, that reminds me a little bit of, I think it's Rackspace? There's, like, another hosting provider that was pretty popular before, and they used to be famous for that type of support, where your app's having issues and somebody's actually SSHing into your box and trying to figure out, like, okay, what's going on? Which, if that's happening, then I can totally see where the price is justified. But if the support is kind of dropping off to where they don't do that kind of thing, then yeah, I can see why it's not so much of a...
[00:24:27] David: We used to think of Heroku as, like, they were the platform team before we had our own platform team, and they acted like it, which was great.
[00:24:35] Jeremy: Yeah, I don't have experience with Render, but I did talk to someone from there, and it does seem like they're trying to fill that role. So, yeah, hopefully they, and other companies, I guess like Vercel and things like that, are all trying to fill that space.
[00:24:55] David: Yeah, 'cause building our own internal platform, I mean, it was the right thing to do, but you can't just... you have to have a team on it. It's complicated. Getting all the stuff in AWS to work the way you want it to work, to have it be kind of like Heroku, is not trivial. If I'm a one-person company, I don't want to be messing around with that, particularly. I want to just push it up and have it go, and I'm willing to pay for that. So it seems logical that there would be competitors in that space. I'm glad there are. Hopefully that'll light a fire under everybody.
[00:25:26] Jeremy: So in your case, it sounds like you moved to having your own platform team and stuff like that, uh, partly because of the compliance thing where you're like, we need to be isolated from the internet, we're going to go to AWS. If you didn't have that requirement, do you still think that would have been the time to have your own platform team and manage that all yourself?
[00:25:46] David: I don't know. An issue that we were running into when we got bigger, um, was that, I mean, Heroku is obviously not as flexible as AWS, but it is still very flexible. And so we had a lot of internal documentation about, this is how you use Heroku to do X, Y, and Z. This is how you set up a Stitch Fix app for Heroku. Like, there was just the way that we wanted it to be used, to sort of make it all manageable. And so we were considering having a team spun up to add some tooling around that, to make it a little bit easier for everybody. So I think there may have been something around there. I don't know if it would have been called a platform team. We thought about calling it, like, developer happiness, or, you know, developer experience or something. We probably would have had something there, but I do wonder how easy it would have been to fund that team with developers if we hadn't had these sort of business constraints around it. Yeah, um, I don't know. You get to a certain size, you need some kind of manageability and consistency no matter what you're using underneath. So somebody has to own it, to make sure that it's happening.
[00:26:50] Jeremy: So even at your architect level, you still think it would have been a challenge to come to the executive team and go, like, I need funding to build this team?
[00:27:00] David: You know, certainly it's a challenge, because nobody wants to put developers on just anything, right? They are a commodity. And I mean, that is kind of the job of, like, the staff engineer or the architect at a company: you don't have the power to put anybody on anything. You have the power to schedule a meeting with a VP or the CTO, and they will listen to you. And basically, you've got to use that power to convince them of what you want done. And they're all reasonable people, but they're balancing 20 other priorities. So it would have been a harder case to make: hey, I want to take three engineers and have them write tooling to make Heroku easier to use. What? Heroku is not easy to use? You know, so it would be a little bit more of a stretch to walk them through it. I think a case could be made, but it definitely would take some more convincing than what was needed in our case.
[00:27:53] Jeremy: Yeah. And I guess if you're able to contrast that with... you were saying, oh, I need three people to help me make Heroku easier. Your actual platform team on AWS, I imagine, was much larger, right?
[00:28:03] David: Initially, there were three people who did the initial move over. And so by the time we went public, we'd been on this new system for, I don't know, six to nine months, I can't remember exactly. And at that time the platform team was four or five people. And, I mean, percentage-wise, the engineering team was maybe 150 to 200, so percentage-wise, maybe a little small, I don't know. But it kind of gets back to the power of, like, Rails and the one-person framework. Everything we did was very much the same, and so the Rails app that managed the deployment was very simple. The command line app, even the Go one with all of its verbosity, was very, very simple. So it was pretty easy for that small team to manage. But, yeah, for redundancy we probably needed more than three or four people, because, you know, somebody goes out sick or takes a vacation, and that's a significant part of the team. But in terms of just managing the complexity and building it and maintaining it, it worked pretty well with four or five people.
Where Rails fits in vs other technology
[00:29:09] Jeremy: So during the keynote today, they were talking about how companies like GitHub and Shopify and so on are using Rails, and they're successful, and they're fairly large. But I think the thing that was sort of unsaid was the fact that these companies, while they use Rails, use a lot of other technology as well, and in increasing amounts. So I wonder, from your perspective, either from your experience at Stitch Fix or maybe going forward, what is the role that Ruby and Rails plays? Like, where does it make sense for that to be used, versus, okay, we need to go and build something in Java or, you know, Go, that sort of thing?
[00:29:51] David: Right. I mean, I think for your standard database-backed web app, it's obviously great.
Especially if your mindset is sort of bought into server-side rendering, it's going to be great at that. So, like, internal tools, like the customer service dashboard, or, you know, something for somebody who works at a company to use. Like, it's really great, because you can go super fast. You're not going to be under a lot of performance constraints, so you kind of don't even have to think about it. You can, but you don't have to. Where it wouldn't work, I guess: if you have really strict performance requirements. You know, like, a Go version of some API server is going to use percentages of what Rails would use. If that's meaningful, if what you're spending on memory or compute is meaningful, then, yeah, that becomes worthy of consideration. I guess if you're making a mobile app, you probably need to make a mobile app and use those platforms. I mean, I guess you can wrap a Rails app, sort of, but you still need to make a mobile app that does something. Yeah. And then, you know, interestingly, the data science part of Stitch Fix was not part of the engineering team. They were kind of a separate org. I think Ruby and Rails was probably the only thing they didn't use over there. Like, all the ML stuff, everything is either Java or Scala or Python. They use all that stuff. And so, yeah, if you want to do AI and ML with Ruby, it's hard, 'cause there's just not a lot there. You really probably should use Python. It'll make your life easier. So yeah, those would be some of the considerations, I guess.
[00:31:31] Jeremy: Yeah, so I guess in the case of ML, Python, certainly, just because of the ecosystem. For maybe making a command line application, maybe Go, um, Go or Rust, perhaps?
[00:31:44] David: Right, 'cause you just get a single binary. Like, the problem... I mean, I wrote this book on Ruby command line apps, and the biggest problem is, how do I get the Ruby VM to be anywhere so that it can then run my awesome scripts? Like, that's kind of a huge pain. (laughs)
[00:31:59] Jeremy: And then you said, like, if it's very performance sensitive, which I am kind of curious about: in your experience with the companies you've worked at, when you're taking on a project like that, do you know up front where you're like, oh, the CPU and memory usage is going to be a problem? Or is it, like, you build it and you're like, oh, this isn't working, so now I know?
[00:32:18] David: Yeah, I mean, I don't have a ton of great experience there. At Stitch Fix, the biggest expense the company had was the inventory, so the cost of AWS was just de minimis compared to all that. So nobody ever came and said, hey, you've got to really save costs on that stuff, 'cause it just didn't really matter. At the mental health startup I was at, it was too early. But again, the labor costs just far, far exceeded the amount of money I was spending on, um, you know, compute and infrastructure and stuff like that. So, not knowing anything, I would probably just sort of wait and see if it's a problem. But I suppose you always take into account, like, what am I actually building? And what does this business have to scale to, to make it worthwhile? And therefore you can kind of do a little bit of planning ahead there. But, I dunno, I think it would kind of have to depend.
[00:33:07] Jeremy: There's a sort of, I guess you could call it a meme, where people say, like, oh, it's not Rails that's slow, it's the database that's slow. And, uh, I wonder, is that accurate in your experience, or...?
[00:33:20] David: I mean, most of the stuff that we had that was slow was the database, because it's really easy to write a crappy query in Rails if you're not careful, and then it's really easy to design a database that doesn't have any indexes if you're not careful. Like, you kind of need to know that. But of course, those are easy to fix, too, because you just add the index, especially if it's before the database gets too big, where adding indexes becomes problematic. But I think those are just easy performance mistakes to make, uh, especially with Rails, because, I mean, a lot of the Rails developers at Stitch Fix did not know SQL at all. They had to learn it eventually, but they didn't know it at all. So they're not even knowing that what they're writing could possibly be problematic. You're just writing it the Rails way, and it just kind of works. And at a small scale, it does. And it doesn't matter until, one day, it does.
[00:34:06] Jeremy: And then in the context of, let's say, using ActiveRecord and instantiating the objects, or the time it takes to render templates, those kinds of things: at least in your experience, that wasn't as much of an issue?
[00:34:20] David: No. And whenever we looked at why something was slow, it was always the database. And, like, you know, you're iterating over some ActiveRecords, and then you're going in there and you're just following this object graph. A lot of the software at Stitch Fix was internal stuff, and it was visualizing complicated data out of the database. And so if you didn't think about it, you would just start dereferencing and following those relationships, and you have this just massive view. And, like, the HTML is fine; it's just that to render this div, you're digging into some ActiveRecord super deep. And so, you know, those were usually the problems that we would see, and they're usually easy enough to fix by making an index, or sometimes you do some caching or something like that. And that solved most of the issues.
The different ways people learn
[00:35:09] Jeremy: So you're also the author of the book Sustainable Web Development with Ruby on Rails. And when you talk to people about how they learn things, a lot of them are going on YouTube, they're, uh, you know, looking for blogs and things like that. And so, as an author, what do you think the role of books is now?
[00:35:29] David: Yeah, I have thought about this a lot, because when I first got started... I'm pretty old, so books were all you had, really. Um, so they seem very normal and natural to me. But does someone want to sit down and read a 400-page technical book? I don't know. So Dave Thomas, who runs Pragmatic Bookshelf, was on a podcast and was asked the same question, and basically his answer, which is my answer, is that a long-form book is where you can really lay out your thinking, really clarify what you mean, really take the time to develop sometimes nuanced examples or nuanced takes on something that are pretty hard to do in a short-form video or in a blog post.
Because the expectation is, you know, if someone sends you an hour-long YouTube video, you're probably not going to watch that. A two-minute YouTube video, sure, but you can't get into as much nuanced detail. And so I thought that was right, and that was kind of my motivation for writing: I've got some thoughts, they're too detailed, it's too much setup for a blog post, there's too much of a nuanced element to really get across. So I need to write more, and that means that someone's going to have to read more to kind of get to it. But hopefully it'll be valuable. One of the sessions that we're doing later today is Ruby content creators, where it's going to be me and Noel Rappin and Dave Thomas representing the old-school dudes that write books, and probably a bunch of other people that do, you know, podcasts and videos. It'll be interesting to see. I really want to know, how do people learn stuff? Because if no one reads books to learn things, then there's not a lot of point in doing it. But if there is value, then, you know, it should be good and should be accessible to people. So, that's why I do it. But I definitely recognize maybe I'm too old and, uh, I'm not hip with the kids, or whatever the case is. I don't know.
[00:37:20] Jeremy: It's tricky, because I think it depends on where you are in the process of learning that thing. Because, let's say you know a fair amount about the technology already, and you look at a book: in a lot of cases it's sort of taking you from nothing to something. And so you're like, well, maybe half of this isn't relevant to me, but then if I don't read it, I'm probably missing a lot still. And so you're in this weird in-between zone. Another thing is that a lot of times when people are trying to learn something, they have a specific problem. And, um, I guess with books, you kind of don't know for sure if the thing you're looking for is going to be in the book.
[00:38:13] David: I mean, so my book, I would not say it's for a beginner. It's not a book to learn how to do Rails. It's: you already kind of know Rails, and you want to learn some comprehensive practices. That's what my book is for. And so sometimes people will ask me, I don't know Rails, should I get your book? And I'm like, no, you should not. But then you have the opposite thing, where, like, Agile Web Development with Rails is the beginner version. And some people are like, oh, it's being updated for Rails 7, should I get it? I'm like, probably not, because how to go from zero to Rails hasn't changed a lot in years. There's not that much that's going to be new. But how do you know that, right? Hopefully the table of contents tells you. I mean, with the first book I wrote with Pragmatic, they basically were like, the table of contents is the only thing the potential reader is going to have to have any idea what's in the book. So you need to write the table of contents with that in mind, which may not be how you'd write the subsections of a book. But since you know that it's going to serve these dual purposes of organizing the book, but also being promotional material that people can read, you've got to keep that in mind, because otherwise, how does anybody, like you said, know what's going to be in there? And they're not cheap. I mean, these books are 50 bucks sometimes, and that's a lot of money for people in the U.S. For people outside the U.S.,
that's a ton of money. So you want to make sure that they know what they're getting and don't feel ripped off.
[00:39:33] Jeremy: Yeah, I think the other challenge is, at least what I've heard, is that when people see a video course, for whatever reason, they set, like, a higher value on it. They go, like, oh, this video course is 200 dollars, and it seems like a lot of money, but for some people it's like, okay, I can do that. But then if you say, like, oh, this book I've been researching for five years, uh, I want to sell it for a hundred bucks, people are going to be like, no. No way.
[00:40:00] David: Yeah. Right. A hundred bucks for a book? There's no way. That's a lot. Yeah. I mean, producing video... I've thought about doing video content, but it seems so labor intensive. Um, and it's sort of like a performance. Like, I was mentioning before we started that I used to play in bands, and there's a lot that goes into making even a mediocre performance. And so I feel like, you know, video content is the same way. So I get that it does cost more to produce, but are you getting more information out of it? That, I don't know. Like, maybe not, but who knows? I mean, people learn things in different ways.
[00:40:35] Jeremy: It's just like this perception thing, I think. And, uh, I'm not sure why that is.
[00:40:40] David: Yeah, maybe it's newer, right? Maybe books feel older, so they're easier to make, and video seems newer. I mean, I don't know. I would love to talk to engineers who are, like, young, out of college, a few years into their career, to see what their perception of this stuff is. 'Cause, I mean, like I said, I read books 'cause that's all there was. There were no videos. You'd go to a conference and you'd read a book, and that was all you had. So I get it. Video seems fancier, it's newer. Yeah, I don't know. I would love to hear a wide variety of takes on it to see what's actually the future, you know?
[00:41:15] Jeremy: Sure, yeah. I mean, I think it probably can't just be one or the other, right? Like, I think there are benefits to each. Like, if you have the book, you can read it at your own pace without having to, like, scroll through the video, and you can easily copy and paste the code segments.
[00:41:35] David: Search it. Go back and forth.
[00:41:36] Jeremy: Yeah, search it. So, I think there's a place for it, but yeah, I think it would be very interesting, like you said, to see how people are learning.
[00:41:45] David: Right. Right. Yeah. Well, it's the same with blogs and podcasts. Like, a lot of podcasters, I think, used to be bloggers, and they realized that they can get out what they need by doing a podcast. And it's way easier, because it's more conversational. You don't have to do a bunch of research, you don't have to do a bunch of editing. As long as you're semi-coherent, you can just have a conversation with somebody and sort of get at some sort of thing that you want to talk about or have an opinion about. And so you see a lot more podcasts and a lot fewer blogs out there because of that. So the creators, I think, are kind of driving that a little bit. Yeah. So, I don't know.
[00:42:22] Jeremy: Yeah, I mean, I can say for myself, the thing about podcasts is that it's something that I can listen to while I'm doing something else.
And so you sort of passively can hopefully pick something up out of that conversation, but I think it's maybe not so good at the details, right? Like, if you're talking code, you can talk about it over voice, but can you really visualize it? I think if you sit down and you try to implement something somebody talked about, you're gonna be like, I don't know what's happening.
[00:42:51] David: Yeah.
[00:42:52] Jeremy: So, I think there are these different roles, almost. So, like, maybe, you know, the podcast is for you to maybe get some ideas or get some familiarity with a thing, and then when you're ready to go deeper, you can go look at a blog post or read a book. I think video kind of straddles those two, where sometimes video is good if you want to just see the general concept of a thing and have somebody explain it to you, maybe do some visuals. That's really good. But then it can also be kind of detailed, where, especially with the people who stream their process, right, you can see them: oh, let's build this thing together, you can ask me questions, you can see how I think. I think that can be really powerful. At the same time, like you said, it can be hard. Like, you know, I look at some of the streams and it's like, oh, this is a three-hour stream, and, well, I'm interested, I'm interested, but yeah, it's hard enough for me to sit through a three-hour movie.
[00:43:52] David: Well, that gets into, like... I mean, we're at a conference, and they're doing something a little different. There are conference talks at this conference, but there are also sort of less-defined activities that aren't a conference talk. And I think that could be a reaction to some of this, too. It's like, I could watch a conference talk on video. How different is that going to be than being there in person? Maybe it's not that different. Maybe I don't need to travel across the country to go do something that I could see on video. So there's gotta be something here that meets a need I can't meet any other way. So it's all these different... like, I would like to think that's how it is, right? All this media has a part to play, and it's all going to kind of continue and thrive, and it's not going to be like, oh, remember books? Like, maybe, but hopefully not. Hopefully it's like what you're saying: it's all kind of serving different purposes that all kind of work together. Yeah.
[00:44:43] Jeremy: I hope that's the case, because, um, I don't want to have to scroll through too many videos.
[00:44:48] David: Yeah. The video's not for me.
Large Language Models
[00:44:50] Jeremy: I actually do find it helpful, like I said, for the high-level thing, or just to see someone's thought process. But it's like, if you want to know a thing and you have a short amount of time, maybe not the best. Um, of course, now you have all the large language model stuff, where you feed the video in, like, hey, tell me what this video is about and give me the code snippets and all that stuff. I don't know how well it works, but it seems...
[00:45:14] David: It's gotta get better. 'Cause you go to a support site and they're like, here's how to fix your problem, and it's a video. And I'm like, can you just tell me? But I'd never thought about asking the AI to just look at the video and tell me. So yeah, it's not bad.
[00:45:25] Jeremy: I think that's probably where we're going. So it's, uh, a little weird to think about, but...
[00:45:29] David: Yeah, yeah. I was just updating... you know, like I said, I try to keep the book updated when new versions of Rails come out, so I'm getting ready to update it for Rails 7.1. And in Amazon's Kindle Direct Publishing, which is their sort of backend for where you publish, like, a Kindle book and stuff, they added a new question: was AI used in the production of this thing or not? And if you answer yes, they want you to say how much. And I don't know what they're gonna do with that exactly, but I thought it was pretty interesting, 'cause I would be very disappointed to pay 50 bucks for a book that the AI wrote, right? So it's good that they're asking that. Yeah.
[00:46:02] Jeremy: I think the problem Amazon is facing is where people wholesale have the AI write the book, and the person either doesn't review it at all, or maybe looks at it a little bit. And, I mean, the large language model stuff is very impressive, but if you have it generate a technical book for you, it's not going to be good.
[00:46:22] David: Yeah. And I guess, 'cause, like, think about Amazon scale: they're not looking at the book at all. Like, I can go click a button and have my book available, and no person's going to look at it. They might scan it or something, maybe, looking for bad words, I don't know, but there's no curation process there. So I could see where they could have that kind of problem. And you, as the buyer, don't necessarily... if you want a book on something really esoteric, and there are a lot of topics I wish there was a book on that there isn't, and someone generates one and puts it on Amazon, I could see a lot of people buying it, not realizing what they're getting, and feeling ripped off when it was not good.
[00:47:00] Jeremy: Yeah, I mean, I don't know if it's an issue with the technical stuff. It probably is. But I know they've definitely had problems where, with fiction, they have people just generating hundreds, thousands of books, submitting them all, just flooding it.
[00:47:13] David: Seeing what happens.
[00:47:14] Jeremy: And, um, I think that's probably the main reason why they ask you. 'Cause they want to be able to say, like, uh, you said it wasn't, and so now we can remove your book.
[00:47:24] David: Right. Right. Yeah. Yeah.
[00:47:26] Jeremy: I mean, it's not quite the same, but it's similar to... I don't know what Stack Overflow's policy is now, but when the large language model stuff started getting big, they had a lot of people answering the questions who were just pasting the question into the model—
[00:47:41] David: Which, because it got it from—
[00:47:42] Jeremy: and then—
[00:47:43] David: The model got it from Stack Overflow.
[00:47:45] Jeremy: —and then pasting the answer into Stack Overflow, and the person is not checking it. Right? So it could be right, could not be right. Um, 'cause to me, it's like, if you generate the answer, and the answer is right, and you checked it, I'm okay with that.
[00:48:00] David: Yeah. Yeah.
[00:48:01] Jeremy: But if you're just like, I need some karma, so I'm gonna answer these questions with this bot... I mean, then maybe—
[00:48:08] David: I could have done that. You're not adding anything. Yeah, yeah.
[00:48:11] Jeremy: It's gonna be a weird, weird world, I think.
[00:48:12] David: Yeah, no kidding. No kidding.
[00:48:15] Jeremy: That's a good place to end it on, but is there anything else you want to mention?
[00:48:19] David: No, I think we covered it all. You can find me online: I'm Davetron5000 on Ruby.social on Mastodon. I occasionally post on Twitter, but not that much anymore, so Mastodon's the place to go.
[00:48:31] Jeremy: David, thank you so much.
[00:48:32] David: All right. Well, thanks for having me.
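The "you just add the index" fix David describes in this conversation is worth a concrete illustration. The sketch below is hedged: the shipments table and customer_id column are hypothetical stand-ins for the kind of query ActiveRecord generates, written in Postgres-flavored SQL.

```sql
-- Hypothetical schema: a Rails-style "shipments" table with a customer_id column.
-- ActiveRecord code like Shipment.where(customer_id: 42) generates roughly:
SELECT "shipments".* FROM "shipments" WHERE "shipments"."customer_id" = 42;

-- Without an index, Postgres scans every row; EXPLAIN makes this visible:
EXPLAIN ANALYZE SELECT * FROM shipments WHERE customer_id = 42;
--   Seq Scan on shipments  (cost=0.00..25000.00 rows=10 ...)

-- The fix is one statement; CONCURRENTLY avoids locking writes while the
-- index builds, which matters once the table is large and live:
CREATE INDEX CONCURRENTLY idx_shipments_customer_id ON shipments (customer_id);

-- Re-running EXPLAIN should now show an Index Scan instead of a Seq Scan.
```

This also illustrates his timing point: on a small table the sequential scan is invisible, so the mistake costs nothing until the data grows.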

The Industrial Talk Podcast with Scott MacKenzie
Leonella and Edward Bass


Nov 16, 2023 · 22:30 · Transcription Available


On this episode of Industrial Talk, we're onsite at Accruent Insights and chatting with Leonella Bass and Edward Bass about data analytics: solutions to identify and access valuable operational data. Here are the key takeaways:

Industrial IoT security with Palo Alto Networks. 0:00
- Palo Alto Networks' Industrial IoT Security report analyzes improved ROI and reduced complexity.
- Leonella provides background on herself and her business, discussing her experience in Venezuela and her love for SQL products.

Data standardization and migration. 3:42
- Edward describes their experience working in Maintenance Connection and SQL reporting, and how they helped clients with data standardization and migrations.
- Edward explains their company's unique approach to mapping fields and assessing asset condition, risk, and maintenance connections.

Data analysis and normalization for various systems. 7:38
- Stakeholders want to validate data usage and identify areas for improvement in their systems.
- Leonella: Standardize data from different systems to create a unified view for reporting and decision-making.
- Leonella: Ensure data accuracy and security by normalizing data from legacy systems and spreadsheets.

Improving data accuracy in CMMS with SQL. 12:15
- Edward highlights the importance of standardization and reporting in a CMMS to identify untrustworthy data and improve accuracy.
- Edward and Scott MacKenzie discuss the need for a team effort to ensure quality data, involving stakeholders and end users in the process.
- Leonella discusses data cleaning, mentioning the importance of mapping and standardizing data, as well as the time it takes to complete the process (5 hours).
- Edward agrees that data cleaning is heavy lifting, but notes that SQL allows for running loops to make the process more efficient.

Data efficiency solutions with Team Data Efficiency Solutions. 18:41
- Leonella provides geeky insights on data QC and Excel use, with a focus on primary keys and VLOOKUP.
- Scott MacKenzie thanks listeners for tuning in and invites them to reach out to Leonella and Edward Bass for data analytics services.

Also, get your exclusive free access to the Industrial Academy and a series on "Why You Need To Podcast" for greater success in 2023. All links designed for keeping you current in this rapidly changing industrial market. Learn! Grow! Enjoy!

LEONELLA BASS' CONTACT INFORMATION: Personal LinkedIn: https://www.linkedin.com/in/leonella-bass-crl-13633aa9/
EDWARD BASS' CONTACT INFORMATION: Personal LinkedIn:
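The episode's contrast between per-cell spreadsheet edits and SQL cleanup can be sketched in a few statements. Everything here is hypothetical for illustration (the assets table, manufacturer_map lookup table, and column names are invented); the batching loop at the end is T-SQL-style, one plausible reading of the "running loops" Edward mentions for large CMMS tables.

```sql
-- Set-based standardization: one pass fixes every row, no per-cell edits.
UPDATE assets
SET manufacturer = UPPER(LTRIM(RTRIM(manufacturer)))
WHERE manufacturer <> UPPER(LTRIM(RTRIM(manufacturer)));

-- Map legacy spellings to a canonical value via a small lookup table,
-- the SQL analogue of the VLOOKUP workflow mentioned in the takeaways.
UPDATE a
SET a.manufacturer = m.canonical_name
FROM assets AS a
JOIN manufacturer_map AS m
  ON a.manufacturer = m.legacy_name;

-- For very large tables, batch the work in a loop so each transaction
-- stays small (T-SQL style).
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (10000) assets
    SET status = 'UNKNOWN'
    WHERE status IS NULL;
    SET @rows = @@ROWCOUNT;
END;
```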

Project Geospatial
FOSS4GNA 2023 | Geo Unleashed: How Apache Sedona is Revolutionizing Geospatial Data Analysis


Nov 15, 2023 · 29:01


Summary:
- The speaker introduces themselves as a co-founder and Chief Architect of Wherobots.
- They discuss the challenges of analyzing geospatial data due to its large scale.
- They introduce Apache Sedona as an open-source computing engine for processing geospatial data.
- Sedona provides distributed query algorithms and APIs for different programming languages.
- The speaker demonstrates examples of spatial SQL queries that can be performed in Sedona.
- They explain the importance of correctly calculating distances between geographic locations.
- Sedona also supports raster data processing, such as analyzing temperature observations.

Highlights:
- Apache Sedona is revolutionizing geospatial data analysis with its scalable and efficient processing capabilities.
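To give a flavor of the spatial SQL the talk demonstrates, here is a hedged sketch of Sedona-style queries. It assumes a cities(name, geom) table already registered in a Sedona-enabled Spark session; the coordinates and names are made up. The distance calls show the "correct distance" point from the summary: planar ST_Distance on lon/lat yields degrees, while ST_DistanceSphere (or ST_DistanceSpheroid, depending on your Sedona version) yields meters on the globe.

```sql
-- Planar distance on lon/lat points returns degrees, usually not what you want.
SELECT ST_Distance(ST_Point(-122.33, 47.61), ST_Point(-73.99, 40.73)) AS degrees_apart;

-- Spherical distance treats the inputs as lon/lat and returns meters.
SELECT ST_DistanceSphere(ST_Point(-122.33, 47.61), ST_Point(-73.99, 40.73)) AS meters_apart;

-- A distributed spatial predicate: which cities fall inside a polygon of interest.
SELECT c.name
FROM cities AS c
WHERE ST_Contains(
    ST_GeomFromWKT('POLYGON((-125 45, -115 45, -115 50, -125 50, -125 45))'),
    c.geom);
```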

Project Geospatial
FOSS4GNA 2023 | Feeding Your 911 Data from QGIS/PostGIS - Randal Hale


Nov 15, 2023 · 23:07


Summary: Feeding 911 data from QGIS/PostGIS is discussed by Randal Hale in his FOSS4GNA 2023 presentation. He shares insights into transitioning from proprietary to open-source tools, recounting experiences with a 911 proof-of-concept project in the Caribbean. The talk covers setting up PostGIS servers, overcoming challenges, and expanding GIS capabilities over time. Noteworthy aspects include leveraging GitHub for collaborative development, improving SQL scripts for efficiency, and adopting GeoPackage for streamlined data management. Highlights:
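As a taste of the PostGIS SQL such a 911 workflow involves, here is a hedged sketch; the address_points and road_centerlines tables and their columns are hypothetical stand-ins for common address-point QA checks, not Hale's actual scripts.

```sql
-- A spatial index is the first thing most PostGIS workflows add.
CREATE INDEX IF NOT EXISTS idx_address_points_geom
    ON address_points USING GIST (geom);

-- QA check: address points whose street name matches no nearby road centerline,
-- a common way to flag mis-typed street names before they feed dispatch.
SELECT a.address_id, a.full_address
FROM address_points AS a
LEFT JOIN road_centerlines AS r
  ON a.street_name = r.street_name
 AND ST_DWithin(a.geom, r.geom, 150)   -- within 150 map units of the named road
WHERE r.street_name IS NULL;

-- Exporting a cleaned layer to GeoPackage is a one-liner with GDAL's ogr2ogr:
-- ogr2ogr -f GPKG e911.gpkg PG:"dbname=gis" -sql "SELECT * FROM address_points"
```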

Ecommerce Brain Trust
Unpacking the Digital Shelf with Ross Walker: Wringing Performance out of Amazon Marketing Cloud - Episode 316


Nov 14, 2023 · 35:52


In this episode of The Ecommerce Braintrust, host Kiri Masters shares a special replay from "Unpacking the Digital Shelf," presented by our friends over at the Digital Shelf Institute, Peter Crosby and Lauren Livak. They're diving into the realm of Amazon Marketing Cloud, a database that provides deeply insightful data for advertisers, with Ross Walker, the retail media team lead at Acadia, joining them to unfold the complexities of AMC and, more importantly, how to use it to our advantage. Whether you're an executive, an e-commerce manager, or a budding data analyst, get ready for a knowledge-packed session. Make sure you tune in to find out more!

In today's episode, Peter, Lauren, and Ross discuss:
- AMC offers the ability to positively target customers based on specific behaviors, including customers who added items to the cart but didn't purchase.
- Use of AMC in remarketing to customers who were previously exposed to a campaign.
- AMC helps brands identify their prime products for generating new customers and aids in making wise marketing decisions.
- Introduction to the ASIN Overlap report, which aids in predicting customers' future purchases.
- Differences in the adoption and usage maturity of AMC among clients, ranging from unawareness to sophisticated usage.
- Successful clients often have internal capabilities and use AMC data for informed marketing decisions.
- Importance and significance of the "new to brand" metric for Amazon sellers and vendors.
- Insights offered by AMC allow a better understanding of marketing performance from a new-to-brand perspective.
- The emergence of the incremental ROAS concept for assessing sales lift and ROAS due to increased ad spending.
- Potential improvements for Amazon Marketing Cloud, such as AI capabilities for natural language queries.
- Recommendations for incorporating AMC into future strategies by tracking new-to-brand campaigns.
- Challenges in effective utilization of data when outsourcing AMC insights to third-party tools.
- The bystander effect of very few brands using AMC to upload and anonymize CRM data to target customers through Amazon DSP.
- The importance of the "new to brand" metric surfaced by AMC in evaluating the effectiveness of campaigns in attracting new customers.
- A possible shift toward a focus on acquiring new customers rather than upselling to existing ones in uncertain economic times.
- Features of AMC being complex and requiring an understanding of writing SQL code.
- Explanation of the queries AMC offers for various stakeholders to get insights into their advertising performance.
- The use of AMC in providing a broader picture of a brand's performance on Amazon.
- Use of AMC to target audiences based on shopper behavior, location, and other factors.
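Since the episode stresses that AMC requires writing SQL, here is a hedged sketch of what an AMC-style query might look like: a new-to-brand breakdown by campaign. The table and column names below follow the general shape of AMC's clean-room schema but are simplified and should be treated as illustrative only; check the schema explorer in your own AMC instance before running anything.

```sql
-- Illustrative only: new-to-brand purchases by campaign, AMC-style.
SELECT
    campaign,
    SUM(purchases)                                        AS total_purchases,
    SUM(CASE WHEN new_to_brand THEN purchases ELSE 0 END) AS ntb_purchases,
    SUM(CASE WHEN new_to_brand THEN purchases ELSE 0 END) * 1.0
        / NULLIF(SUM(purchases), 0)                       AS ntb_rate
FROM amazon_attributed_events_by_conversion_time
GROUP BY campaign
ORDER BY ntb_purchases DESC;
```

A ranking like this is the kind of output the hosts describe brands using to decide which campaigns actually acquire new customers rather than re-converting existing ones.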

Screaming in the Cloud
How Couchbase is Using AI to Enhance the User Experience with Laurent Doguin


Nov 14, 2023 · 31:52


Laurent Doguin, Director of Developer Relations & Strategy at Couchbase, joins Corey on Screaming in the Cloud to talk about the work that Couchbase is doing in the world of databases and developer relations, as well as the role of AI in their industry and beyond. Together, Corey and Laurent discuss Laurent's many different roles throughout his career, including what made him want to come back to a role at Couchbase after stepping away for 5 years. Corey and Laurent dig deep on how Couchbase has grown in recent years and how it's using artificial intelligence to offer an even better experience to the end user.

About Laurent
Laurent Doguin is Director of Developer Relations & Strategy at Couchbase (NASDAQ: BASE), a cloud database platform company that 30% of the Fortune 100 depend on.

Links Referenced:
Couchbase: https://couchbase.com
XKCD #927: https://xkcd.com/927/
dbdb.io: https://dbdb.io
DB-Engines: https://db-engines.com/en/
Twitter: https://twitter.com/ldoguin
LinkedIn: https://www.linkedin.com/in/ldoguin/

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Are you navigating the complex web of API management, microservices, and Kubernetes in your organization? Solo.io is here to be your guide to connectivity in the cloud-native universe! Solo.io, the powerhouse behind Istio, is revolutionizing cloud-native application networking. They brought you Gloo Gateway, the lightweight and ultra-fast gateway built for modern API management, and Gloo Mesh Core, a necessary step to secure, support, and operate your Istio environment. Why struggle with the nuts and bolts of infrastructure when you can focus on what truly matters: your application. Solo.io's got your back with networking for applications, not infrastructure. Embrace zero trust security, GitOps automation, and seamless multi-cloud networking, all with Solo.io. And here's the real game-changer: a common interface for every connection, in every direction, all with one API. It's the future of connectivity, and it's called Gloo by Solo.io. DevOps and Platform Engineers, your journey to a seamless cloud-native experience starts here. Visit solo.io/screaminginthecloud today and level up your networking game.
Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. This promoted guest episode is brought to us by our friends at Couchbase. And before we start talking about Couchbase, I would rather talk about not being at Couchbase. Laurent Doguin is the Director of Developer Relations and Strategy at Couchbase. First, Laurent, thank you for joining me.
Laurent: Thanks for having me. It's a pleasure to be here.
Corey: So, what I find interesting is that this is your second time at Couchbase, where you were a developer advocate for a couple of years, then you had five years of, we'll call it wilderness, I suppose, and then you returned to be the Director of Developer Relations. Which also ties into my personal working thesis of: the best way to get promoted at a lot of companies is to leave and then come back. But what caused you to decide, all right, I'm going to go work somewhere else? And what made you come back?
Laurent: So, I joined Couchbase in 2014. Spent about two or three years as a DA.
And during those three years as a developer advocate, I'd been advocating a NoSQL database, and at the time, it was mostly DBAs and ops I was talking to. And DBAs and ops are... well, recent, modern ops are writing code, but they were not the people I wanted to talk to when I was a developer advocate. I came from a developer background; I'd been a platform engineer for an enterprise content management company. I was writing code all day. And when I came to Couchbase, I realized I was mostly talking about Docker and Kubernetes, which is still cool, but not what I wanted to do. I wanted to talk to developers: how they use the database to build a better app, how they use key-value, and those weird things like MapReduce. At the time, MapReduce was still, like, a weird thing for a lot of people, and probably still is, because now everybody's doing SQL. So, that's what I wanted to talk about. I wanted to… engage with people I identify with, really. And so, it didn't happen. Left. Built a Platform as a Service company called Clever Cloud. They started about four or five years before I joined. We went from seven people to thirty-one, fully bootstrapped, no VC. That's an interesting way to build a company in this age.
Corey: Very hard to do, because it takes a lot of upfront investment to build software, but you can sort of subsidize that via services, which is what we've done here in some respects. But yeah, that's a hard road to walk.
Laurent: That's the model we had, and especially when your competition is AWS or Azure or GCP, so that was interesting. So entrepreneurship, it's not for everyone. I did my four years there, and then I realized, maybe I'm going to do something else. I met my former colleagues from Couchbase at a software conference called Devoxx, in France, and they told me, "Well, there's a new sheriff in town. You should come back and talk to us. It's all about developers; we are repositioning, rehandling the way we do marketing at Couchbase. Why not have a conversation with our new CMO, John Kreisa?" And I said, "Well, I mean, I don't have anything to do. I actually built a brewery during that past year with some friends. That was great, but that's not going to feed me or anything. So yeah, let's have a conversation about work." And so, I talked to John, I talked to a bunch of other people, and I realized [unintelligible 00:03:51] it actually changed; like, they were purposely going [after 00:03:55] developers, talking to developers. And that was not necessarily the case five, six years before that. So, that's why I came back. The product is still amazing, the people are still amazing. It was interesting to find a lot of people that still work there after, what, five years. And it's a company headquartered in California, so you would expect people to, you know, jump around a bit. And I was pleasantly surprised to find the same folks there. So, that was also one of the reasons why I came back.
Corey: It's always a strong endorsement when former employees rejoin a company. Because, I don't know about you, but I've always been aware of those companies you work for and leave, like, "Aw, I'm never doing that again for love or money," just because it was such an unpleasant experience. So, it speaks well when you see companies that do have a culture of boomerangs, for lack of a better term.
Laurent: That's the one we use internally, and there's a couple.
More than a couple.
Corey: So, one thing that seems to have been a thread through most of your career has been an emphasis on developer experience. And I don't know if we come at it from the same perspective, but to me, what drives me nuts is, honestly, with my work in cloud, bad developer experience manifests as the developer in question feeling like they're somehow not very good at their job. Like, they're somehow not understanding how all this stuff is supposed to work, and honestly, it leads to feeling like a giant fraud. And I find that it's pernicious, because even when I intellectually know for a fact that I'm not the dumbest person ever to use this tool, when I don't understand how something works, the bad developer experience manifests to me as, "You're not good enough." At least, that's where I come at it from.
Laurent: And also, I [unintelligible 00:05:34] to people that build these products, because if we build the products, the user might be in the same position that we are right now. And so, we might be responsible for that experience [unintelligible 00:05:43] a developer, and that's not a great feeling. So, I completely agree with you. I've tried to always be at software-focused companies, whether it was Nuxeo, Couchbase, Clever Cloud, and then Couchbase again. And I guess one of the good things about coming back to a developer-focused era is all the product alignment. Like, a lot of people talk about product-led [growth 00:06:08] and what it means. To me, what it meant, and what it still means, is building a product that developers want to use. And not just want to (sometimes it's imposed on you), but are actually happy to use, and, as you said, don't feel completely stupid in front of the product. It goes through different things. We've recently revamped our Couchbase Capella UI (Couchbase Capella is our managed cloud product), and so we've added a lot of in-product getting-started guidelines, snippets of code, to help developers get started better and not have that feeling of, "What am I doing? Why is it not working, and what's going on?"
Corey: That's an interesting decision to make, just because, historically, working with a bunch of tools, the folks who are building the documentation for that tool tend to be experts at it, so they tend to optimize for the experience of someone who has been using it for five years, as opposed to the newcomer. So, I find that the longer a product is in existence, in many cases, the worse the new-user experience becomes, because companies tend to grow and sprawl in different ways, and the product does likewise. And if you don't know the history behind it... "Oh, your company, what does it do?" And you look at the website and there are 50 different offerings, like the AWS landing page: it becomes overwhelming very quickly. So, it's neat to see that emphasis throughout the user interface on the new developer experience. On the other side of it, though, how are the folks who've been using it for a while responding to those changes? Because it's frustrating for me, at least, when I log into a new account, which happens periodically within AWS land, and I have this giant series of onboarding pop-ups that I have to click to make go away every single time. How are they responding to it?
Laurent: Yeah, it's interesting. One of the first things that struck me when I joined Couchbase the first time was the size of the technical documentation team.
Because the whole… well, not the whole point, but part of the reason why they exist is to do that: to make sure that you understand all the differences, and that it doesn't feel like the [unintelligible 00:08:18] what the documentation or the product pitch or everything. Like, they really, really, really emphasized this from the very beginning. So, that was interesting. So, when you get that culture built into the product, well, the good thing is, when people try Couchbase, they usually stick with Couchbase. My main issue as Director of Developer Relations is not to make people stick with Couchbase, because that works fairly well with the product that we have; it's to make them aware that we exist. That's the biggest issue I have. So, my goal as DevRel is to make sure that people get the trial, get through the trial, get all that in-app context, all that help, get that first sample going, get that first… I'm not going to say product built, because that's even a bit further down the line, but, you know, get that sample going. We have a code playground, so when you're in the application, you get to actually execute different pieces of code, different languages. And so, we get those numbers, and we're happy to see that people actually try that. And that's, well, that's a good feeling.
Corey: I think that there's a definite lack of awareness, almost industry-wide, around the fact that as the diversity of your customers increases, you have to have different approaches that meet them at various points along the journey. Because it's easy to assume a binary of, "Okay, I've done this before a thousand times; this is the thousand and first, I don't need the Hello World tutorial," versus, "Oh, I have no idea what I'm doing. Give me the Hello World tutorial." But there are other points along that continuum, such as, "Oh, I used to do something like this, but it's been three years. Can you give me a refresher?" and so on. I think that there's a desire to try and fit every new user into a predefined persona, and that just doesn't work very well as products become more sophisticated.
Laurent: It's interesting; we actually went through that work of defining those personas, because there are many. And that was the origin of my departure. I had one persona: ops slash DBA slash the person that maintains this thing. And I wanted to talk to all the other people that built applications on Couchbase. So, we broadly segment things into back-end, full-stack, and mobile, because Couchbase is also a mobile database. Well, we haven't talked too much about this, so I can quickly explain what Couchbase is. It's basically a distributed JSON database with an integrated caching layer, so it's reasonably fast. So it does cache, and when the key-value is JSON, then you can query it with SQL. You can do full-text search, you can do analytics, you can run user-defined functions, you get triggers, you get all that actual SQL going on. It's transactional, you get joins, ANSI joins, you get all those… windowing functions. It's modern SQL on a JSON database. So, it's a general-purpose database, and it's a general-purpose database that syncs. I think that's the important part of Couchbase. We are very good at syncing clusters of databases together. So, great for multi-cloud, hybrid cloud, on-prem, whatever suits you.
And we also sync to the device. There's a thing called Couchbase Mobile, which is a local database that runs in your phone, and it will sync automatically to the server. So, a general-purpose database that syncs, and that's quite modern. We try to fit as many ways of [querying 00:14:45] data as possible in our database. It's kind of a several-in-one database. We call that a data platform. It took me a while to warm up to the word platform, because I used to work for an enterprise content management platform, and then I've been working for a Platform as a Service, and then a data platform. So, it took me a bit of time to warm up to that term, but it explains fairly well the fact that it's a several-in-one product, and we empower people to make the trade-offs that they want. Not everybody needs… SQL. Some people just need key-value, some people need search, some people need to do SQL and search in the same query, which we also want people to do. So, it's about choices, it's about empowering people. And that's why the word platform, which can feel intimidating because it can seem complex, you know, [for 00:12:34] a lot of choices. And choice is maybe the enemy of a good developer experience. And, you know, we could talk for hours about this. The more services you offer, the more complicated it becomes. What's the sweet spot? Our own trade-off was to have good documentation and good in-app help to fix that complexity problem. That's the trade-off that we made.
Corey: Well, we should probably divert here just to make sure that we cover the basic groundwork for those who might not be aware: what exactly is Couchbase? I know that it's a database, and honestly, anything is a database if you hold it incorrectly enough; that's my entire shtick. But what is it exactly? Where does it start? Where does it stop?
Laurent: Oh, where does it start? That's an interesting question. It's a… a merge (some people would say a fork) of Apache CouchDB and Membase. Membase was a distributed key-value store, and CouchDB was this weird Erlang-and-C JSON REST API database that was built by Damien Katz from Lotus Notes, and that was in 2006 or seven. That was before Node.js. Let's not care about the exact date. The point is, a JSON and REST API-enabled database before Node.js was, like, a strong [laugh] power move. And so, those two merged and created the first version of Couchbase. And then we've added all those things that people want to do: so SQL, full-text search, analytics, user-defined functions, mobile sync, you know, all those things. So basically, a general-purpose database.
Corey: For what things is it not a great fit? This is always my favorite question to ask database folks, because the zealot is going to say, "It's good for every use case under the sun. Use it for everything, start to finish"—
Laurent: Yes.
Corey: —and very few databases can actually check that box.
Laurent: It's a very interesting question, because when I pitch, like, "We do all the things," because we are a platform, people say, "Well, you must be doing lots of trade-offs. Where is the trade-off?" The trade-off is basically that the way you store something is going to determine the efficiency of your [querying 00:14:45], or the way you [query 00:14:47] it. And that's one of the first things you learn in computer science. You learn about data structures, and you know that it's easier to get something out of a hashmap when you have the key than to pass over your whole list of elements, checking each one: is it the right one?
It's the same for databases.
So, our different services are different ways to store the data and to query it. So, where is it not good? It's where we don't have an index or a service that answers to the way you want to query data. We don't have a graph service right now. You can still do recursive common table expressions for the SQL nerds out there (see the sketch after this transcript), which will allow you to do a somewhat graph-like way of querying your data, but that's not, like, actual—that's not a great experience for people who were expecting a graph experience, like Neo4j or whatever graph database.
So, that's the trade-off that we made. We have a lot of things in the same place, and it can be a little hard, intimidating to operate, and the developer experience can be a little, "Oh, my God, what is this thing that can do all of those features?" At the same time, that's just, like, one SDK to learn for all of the features we've just talked about. So, that's what we did. That's a trade-off that we did.
It sucks to operate—well, [unintelligible 00:16:05] Couchbase Capella, which is a lot like a vendor-ish thing to say, but that's the value prop of our managed cloud. It's hard to operate; we'll operate this for you. We have a Kubernetes operator. If you are one of the few people that wants to do Kubernetes at home, that's also something you can do. So yeah, I guess what we cannot do is the thing that Route 53 and [Unbound 00:16:26] and [unintelligible 00:16:27] DNS do, which is this weird DNS database thing that you like so much.
Corey: One thing that's, I guess, a sign of the times, but I have to confess that I'm relatively skeptical around: when I pull up couchbase.com—as one does; you're publicly traded; I don't feel that your company has much of a choice in this—the first thing it greets me with is Couchbase Capella—which, yes, that is your hosted flagship product; that should be the first thing I see on the website—then it says, "Announcing Capella iQ, AI-powered coding assistance for developers." Which, oh great, not another one of these.
So, all right, give me the pitch. What is the story around, "Ooh, everything that has been a problem before, AI is going to make it way better." Because I've already talked to you about developer experience. I know where you stand on these things. I have a suspicion you would not be here to endorse something you don't believe in. How does the AI magic work in this context?
Laurent: So, that's the thing, like, who's going to be the one that gets their product out before the others? And so, we're announcing it on the website. It's available in private preview only right now. I've tried it. It works.
How does it work? The way most chatbot AI code generation works is there's a big model, a large language model, that people use and that people fine-tune in order to specialize it for the tasks that they want to do. The way we've built Capella iQ is we picked a very famous large language model, and when you ask a question to a bot, there's a context, there's a… the size of the window, basically, that allows you to fit as much contextual information as possible.
The way it works, and the reason why it's integrated into Couchbase Capella, is we make sure that we preload that context as much as possible and fine-tune that model, that [foundation 00:18:19] model, as much as possible to do whatever you want to do with Couchbase, which usually falls into several—a couple of categories, really—well, maybe three—you want to write SQL, you want to generate data—actually, that's four—you want to generate data, you want to generate code, and if you paste some SQL code or some application code, you want to ask that model, what does it do? It's especially true for SQL queries.
And one of the questions that many people ask and are scared of with chatbots is how it works in terms of learning. If you give a chatbot to someone that's very new to something, and they're just going to basically use the chatbot like Stack Overflow and not really think about what they're doing, well, it's not [great 00:19:03], right? But that's because the example people think of is most developers generating code. Writing code is, like, a small part of our job. Like, a substantial part of our job is understanding what the code does.
Corey: We spend a lot more time reading code than writing it, if we're, you know—
Laurent: Yes.
Corey: Not completely foolish.
Laurent: Absolutely. And sometimes reading a big SQL query can be a bit daunting, especially if you're new to that. And one of the good things that you get—
Corey: Oh, even if you're not, it can still be quite daunting, let me assure you.
Laurent: [laugh]. I think it's an acquired taste, let's be honest. Some people like to write assembly code and some people like to write SQL. I'm sort of in the middle right now. You pass in your SQL query, and it's going to tell you more or less what it does, and that's a very nice superpower of AI. I think that's [unintelligible 00:19:48] the one that interests me the most right now: using AI to understand and to work better with existing pieces of code.
Because a lot of people think that the cost of software is writing the software. It's not; it's maintaining the codebase you've written. That's the cost of the software. Our job as developers should be to write legacy code, because it means you've provided value long enough. And so, in a company that works pretty well, there's a lot of legacy code, and there's a lot of new people coming in who'll have to learn all those things, and to be honest, sometimes we don't document stuff as much as we should—
Corey: "The code is self-documenting," is one of the biggest lies I hear in tech.
Laurent: Yes, of course, which is why people are asking retired people to go back to COBOL again, because nobody can read it and it's not documented. Actually, if someone's looking for a company to build, I guess explaining COBOL code with AI would be a pretty good fit in many places.
Corey: Yeah, it feels like that's one of those things that would be of benefit to the larger world. The counterpoint to that is if you've got that many business processes wrapped around something running COBOL—and I assure you, if you didn't, you would have migrated off of COBOL long before now—it's making sure that, okay, computers, when they're in the form of AI, are very, very good at sounding confident when they talk about things, but they can also do that when they're completely wrong. It's basically a BS generator. And that is a scary thing when you're taking a look at something that broad.
I mean, I'll use the AI coding assistants for things all the time, but those things look a lot more like, "Okay, I haven't written CloudFormation from scratch in a while. Build out the template, just because I forget the exact sequence." And it's mostly right on things like that. But then you start getting into some of the real nuanced areas like race conditions and the rest, and often it can make things worse instead of better. That's the scary part, for me, at least.
Laurent: Most coding assistants are… and actually, each time you ask an AI for its opinion, it says, "Well, you should take this with a grain of salt and we are not a hundred percent sure that this is the case," and, "Make sure you proofread that." Which, again, from a learning perspective, can be a bit hard to give to new students. Like, you're giving something to someone that they might assume is about as right as Wikipedia, but actually, it's not. And it's part of why it works so well. Like, the anthropomorphism that you get with chatbots: it feels so human. That's why it gets people so excited, because if you think about it, it's not that new. It's just that the moment it took off was the moment it looked like an assertive human being.
Corey: As you take a look through, I guess, the larger ecosystem now, as well as the database space, given that is where you specialize, what do you think people are getting right and what do you think people are getting wrong?
Laurent: There's a couple of ways of seeing this. Right now, when I look at it from the outside, every database is going back to SQL. I think there's a good reason for that. And it's interesting to put into perspective with AI, because when you generate something, there's probably less chance to generate something wrong with SQL than generating code directly. And I think of generations of languages—was it fourth or fifth generation?—there are language generations: basically, the first is assembly [into 00:23:03], and then you get more evolved languages, and at some point you get SQL. And SQL is a way to very succinctly express a whole lot of business logic.
And I think what people are doing right now is going back to SQL. And it's been impressive to me how even new developers that were all about [ORMs 00:23:25] and [no-DMs 00:23:26], and, you know, avoiding writing SQL as much as possible, are actually back to it. And for an old guy like me—well, I mean, not that old—it feels good. I think SQL is coming back with a vengeance, and that makes me very happy. What people don't realize is that it also involves doing data modeling, right? Because databases like Couchbase that are schemaless exist, and even though you can store your data without thinking about it, you should still do data modeling. It's important. So, I think that's the interesting bit. What are people doing wrong in that space? I'm… I don't want to say bad things about other databases, so I cannot even process that thought right now.
Corey: That's okay. I'm thrilled to say negative things about any database under the sun. They all haunt me. I mean, someone once described SQL to me as the chess of the programming world, and I feel like that's very accurate. I have found that it is far easier in working with databases to make mistakes that don't wash off after a new deployment than it is in most other realms of technology.
And when you're lucky and have a particular aura, you tend to avoid that stuff, at least that was always my approach.
Laurent: I think if I had something to say, it's just like the XKCD about standards: "There are 14 standards. I'm going to do one that's going to unify them all." And it's the same with databases. There's a lot… a [laugh] lot of databases. Have you ever been on a website called dbdb.io?
Corey: Which one is it? I'm sorry.
Laurent: Dbdb.io is the database of databases, and it's a very [laugh] interesting website for database nerds. And so, if you're into databases, dbdb.io. And you will find Couchbase and you will find a whole bunch of other databases, and you'll get to know which database is derived from which other database, you get the history, you get all those things. It's actually pretty interesting.
Corey: I'm familiar with DB-Engines, which sort of ranks databases by popularity, and companies will bend over backwards to wind up hitting all of the various things that they want in that space. The counterpoint with all of it is that it… it feels like there haven't exactly been an awful lot of, shall we say, huge innovations in databases for the past few years. I mean, sure, we hear about vectors all the time now because of the joy that's AI, but smarter people than I are talking about how that's more of a feature than it is a core database. And the continual battle that we all hear about constantly—and deal with ourselves—of should we use a general-purpose database or a task-specific database for this thing that I'm doing remains largely unsolved.
Laurent: Yeah, what's new? And when you look at it, it's like, we are going back to our roots and bringing SQL again. So, is there anything new? I guess most of the new stuff, all the interesting stuff in the 2010s—well, basically with the cloud—was all about the distribution side of things, all about distributed consensus, ZooKeeper, etcd, all that stuff. Couchbase is using a Raft-like algorithm to keep every node happy and under the same cluster.
I think that's one of the most interesting things we've had for the past… well, not for the past ten years, but between, basically, the start of AWS and, well, let's say seven years ago. I think the end of the distribution game was brought to us by the people that have atomic clocks in every data center, because that's what you use to synchronize things. So, that was interesting. And then suddenly, there wasn't that much innovation in the distributed world, maybe because Aphyr disappeared from Twitter. That might be one of the reasons. He's not here to scare people enough into being better at that.
Aphyr was the person behind the test suite called Jepsen [shoot 00:27:12]. I think his blog series was called Call Me Maybe, and he was going through every distributed system and trying to break them. And that was super interesting. And it feels like we're not talking that much about this anymore. It really feels like databases have gone back to the status of infrastructure.
In 2010, it was not about infrastructure. It was about developer empowerment. It was about serving JSON and developer experience and making sure that you can code faster without some constraint in a distributed world. And, like, we fixed this for the most part. And the way we fixed this—and, as you said, the lack of innovation, maybe—has brought databases back to an infrastructure layer.
Again, it wasn't the case 15 years a—well, 2023—13 years ago. And that's interesting.
When you look at the new generation of databases, sometimes it's just a gateway on top of a well-known database and they call that a database, but it provides higher-level services, higher-level bricks, a better developer experience for developers to build stuff faster. We've been trying to do this with Couchbase App Services and our Sync Gateway, which is basically a gateway on top of a Couchbase cluster that allows you to manage authentication and authorization, and that allows you to manage synchronization with your mobile device or with websites. And yeah, I think that's the most interesting thing to me in this industry: how it's been relegated back to infrastructure, and all the cool, new stuff happens on the layer above that.
Corey: I really want to thank you for taking the time to speak with me. If people want to learn more, where's the best place for them to find you?
Laurent: Thanks for having me and for entertaining this conversation. I can be found anywhere on the internet with these six letters: L-D-O-G-U-I-N. That's actually seven letters. Ldoguin. That's my handle on pretty much any social network. So X, [BlueSky 00:29:21], LinkedIn. I don't know where to be anymore.
Corey: I hear you. We'll put links to all of it in the [show notes 00:29:27] and let people figure out where they want to go on that. Thank you so much for taking the time to speak with me today. I really do appreciate it.
Laurent: Thanks for having me.
Corey: Laurent Doguin, Director of Developer Relations and Strategy at Couchbase. I'm Cloud Economist Corey Quinn and this episode has been brought to us by our friends at Couchbase. If you enjoyed this episode, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that you're not going to be able to submit properly because that platform of choice did not pay enough attention to the experience of typing in a comment.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
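Laurent's aside about recursive common table expressions (flagged above) can be made concrete with a short sketch. This is a minimal, hypothetical example: the employees table and its columns are assumptions for illustration, and it is standard SQL rather than anything Couchbase-specific.

```sql
-- Hypothetical graph-style traversal of a reporting hierarchy using a
-- recursive CTE. Table and column names are assumptions, not from any
-- vendor's documentation.
WITH RECURSIVE reports AS (
  -- Anchor: direct reports of one starting node.
  SELECT employee_id, manager_id, 1 AS depth
  FROM employees
  WHERE manager_id = 42
  UNION ALL
  -- Recursive step: follow edges one level deeper.
  SELECT e.employee_id, e.manager_id, r.depth + 1
  FROM employees e
  JOIN reports r ON e.manager_id = r.employee_id
  WHERE r.depth < 5  -- cap the depth to guard against cycles
)
SELECT employee_id, depth FROM reports;
```

As Laurent notes, this gets you graph-like querying in plain SQL, but a purpose-built graph database will give a better experience for genuinely graph-heavy workloads.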

SaaS for Developers
Building a Serverless Streaming Platform

SaaS for Developers

Play Episode Listen Later Nov 13, 2023 42:23


Krishna Raman has many years of experience building platforms for developers. Now he's applying this experience at DeltaStream to build a serverless platform for stream processing in SQL. In our conversation, we discussed the serverless developer experience, some of the secret sauce behind DeltaStream's Flink operator, how to deliver a bring-your-own-account SaaS, K8s network policies and why they matter, and when you shouldn't use K8s at all.
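For readers new to stream processing in SQL, here is a minimal sketch in the style of Flink SQL, which the episode's Flink discussion suggests is the relevant dialect. The connector options, topic, and column names are assumptions for illustration, not DeltaStream's actual product syntax.

```sql
-- Declare an event stream as a table (illustrative Flink-SQL style;
-- stream, topic, and column names are assumptions).
CREATE TABLE pageviews (
  user_id STRING,
  url     STRING,
  ts      TIMESTAMP(3),
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- tolerate late events
) WITH (
  'connector' = 'kafka',
  'topic'     = 'pageviews',
  'format'    = 'json'
);

-- A continuous query: per-minute view counts per URL, updated forever.
SELECT
  url,
  TUMBLE_START(ts, INTERVAL '1' MINUTE) AS window_start,
  COUNT(*) AS views
FROM pageviews
GROUP BY url, TUMBLE(ts, INTERVAL '1' MINUTE);
```

The serverless pitch is that the platform runs and scales this continuous query for you; you never see the Flink cluster underneath.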

Data Engineering Podcast
Enhancing The Abilities Of Software Engineers With Generative AI At Tabnine

Data Engineering Podcast

Play Episode Listen Later Nov 13, 2023 67:52


Summary
Software development involves an interesting balance of creativity and repetition of patterns. Generative AI has accelerated the ability of developer tools to provide useful suggestions that speed up the work of engineers. Tabnine is one of the main platforms offering an AI-powered assistant for software engineers. In this episode Eran Yahav shares the journey that he has taken in building this product and the ways that it enhances the ability of humans to get their work done, and when the humans have to adapt to the tool.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs into your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold)
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It's the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it's real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface.
Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free!
Your host is Tobias Macey and today I'm interviewing Eran Yahav about building an AI powered developer assistant at Tabnine.
Interview
* Introduction
* How did you get involved in machine learning?
* Can you describe what Tabnine is and the story behind it?
* What are the individual and organizational motivations for using AI to generate code?
* What are the real-world limitations of generative AI for creating software? (e.g. size/complexity of the outputs, naming conventions, etc.)
* What are the elements of skepticism/oversight that developers need to exercise while using a system like Tabnine?
* What are some of the primary ways that developers interact with Tabnine during their development workflow?
* Are there any particular styles of software for which an AI is more appropriate/capable? (e.g. webapps vs. data pipelines vs. exploratory analysis, etc.)
* For natural languages there is a strong bias toward English in the current generation of LLMs. How does that translate into computer languages? (e.g. Python, Java, C++, etc.)
* Can you describe the structure and implementation of Tabnine? Do you rely primarily on a single core model, or do you have multiple models with subspecialization?
* How have the design and goals of the product changed since you first started working on it?
* What are the biggest challenges in building a custom LLM for code?
* What are the opportunities for specialization of the model architecture given the highly structured nature of the problem domain?
* For users of Tabnine, how do you assess/monitor the accuracy of recommendations?
* What are the feedback and reinforcement mechanisms for the model(s)?
* What are the most interesting, innovative, or unexpected ways that you have seen Tabnine's LLM powered coding assistant used?
* What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI assisted development at Tabnine?
* When is an AI developer assistant the wrong choice?
* What do you have planned for the future of Tabnine?
Contact Info
* LinkedIn (https://www.linkedin.com/in/eranyahav/?originalSubdomain=il)
* Website (https://csaws.cs.technion.ac.il/~yahave/)
Parting Question
* From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements
* Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
* Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
* If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers.
Links
* Tabnine (https://www.tabnine.com/)
* Technion University (https://www.technion.ac.il/en/home-2/)
* Program Synthesis (https://en.wikipedia.org/wiki/Program_synthesis)
* Context Stuffing (http://gptprompts.wikidot.com/context-stuffing)
* Elixir (https://elixir-lang.org/)
* Dependency Injection (https://en.wikipedia.org/wiki/Dependency_injection)
* COBOL (https://en.wikipedia.org/wiki/COBOL)
* Verilog (https://en.wikipedia.org/wiki/Verilog)
* MidJourney (https://www.midjourney.com/home)
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)

Seller Sessions
What's New In Amazon's AMC

Seller Sessions

Play Episode Listen Later Nov 12, 2023 20:11


AMC is a cloud-based database that gathers signals from all over Amazon to provide insights into customer behavior. It still requires technical expertise like SQL to maximize its potential. Since the last episode, AMC has expanded the data it provides beyond just sponsored ads; it now includes things like TV streaming ads, Alexa ads, etc.
* AMC makes it easier to analyze the customer purchasing journey across multiple touchpoints. Often purchases take 7-8 days and involve multiple ad exposures rather than a direct path.
* You can use AMC to build targeted audiences in DSP, like people who searched related terms but didn't convert or added to wishlist.
* Look at gateway products that lead to further purchases and maximize exposure. See which products have the highest overlap/cross-sell.
* Appoint someone in your team or agency to become the AMC expert through certifications and training. Queries take practice but it's accessible.
* Get AMC set up ASAP to start gathering historical data, even if you don't use it right away. The instance backfills 13 months of data.
* New-to-brand data is only available in AMC for sponsored product ads. See which keywords drive incremental sales.
Other Notes:
* Brent recommends getting started with AMC if you spend $10k+ per month on Amazon sponsored ads. It's free to use.
* Some of Brent's favorite uses are analyzing customer journey, frequency of ad exposure, gateway products, and CLTV.
* Brent offers Amazon PPC services through his agency Pathfinder. Reach out to him at brant.bike or amcpathfinder.com.
Podcast Details:
Show: Seller Sessions
Episode: AMC 1 Year Later - Tips and Strategies for 2022
Host: Danny McMillan
Guest: Brent Zahradnik, Amazon PPC Expert
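Since the episode stresses that AMC is queried with SQL, here is a minimal sketch of the kind of multi-touch analysis described above. All table and column names are hypothetical stand-ins, not AMC's actual schema, so treat this as illustration only.

```sql
-- Hypothetical AMC-style query: how many ad exposures shoppers saw
-- before purchasing. Table and column names are assumptions.
SELECT
  c.user_id,
  COUNT(DISTINCT i.impression_time) AS ad_exposures,
  MIN(c.purchase_time)              AS first_purchase
FROM conversions c
JOIN impressions i
  ON i.user_id = c.user_id
 AND i.impression_time < c.purchase_time
GROUP BY c.user_id
HAVING COUNT(DISTINCT i.impression_time) >= 2;
```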

Outgrow's Marketer of the Month
EPISODE 141- The Art of Data Governance: Just Eat's Data Quality Analyst Kirill Skaletskiy on the Power of Data

Outgrow's Marketer of the Month

Play Episode Listen Later Nov 9, 2023 19:59


Kirill Skaletskiy, a senior data quality analyst at Just Eat Takeaway, excels in collecting, interpreting, and analyzing data. His expertise spans front-end dashboard design, back-end data processing, and various programming languages like Python, SQL, DAX, Power Query M, and Excel VBA. He's dedicated to optimizing business by improving data value and clarity.
On The Menu:
1. Data Governance's Role: Vital for data management, a key aspect of being a data-driven company.
2. Good Data Quality Examples: Varies by use case, driven by data requirements and context.
3. Data Quality Challenges: Responsibility and issue remediation processes pose hurdles.
4. Open-Source vs. Commercial Tools: Pros and cons of both options for data quality management.
5. Overcoming Challenges: Data profiling and collaboration with data owners and consumers.
6. Data Monitoring: Importance of data lineage and continuous data quality measurement.
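Data profiling of the kind discussed in the episode often starts with simple SQL checks. A minimal sketch, with a hypothetical orders table standing in for real data (all names are assumptions for illustration):

```sql
-- Profile a table for completeness, uniqueness, and validity.
SELECT
  COUNT(*)                                         AS row_count,
  COUNT(*) - COUNT(customer_id)                    AS missing_customer_ids,
  COUNT(*) - COUNT(DISTINCT order_id)              AS duplicate_order_ids,
  SUM(CASE WHEN order_total < 0 THEN 1 ELSE 0 END) AS negative_totals
FROM orders;
```

Checks like these can be scheduled and trended over time, which is the continuous data quality measurement point raised in the episode.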

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
AGI is Being Achieved Incrementally (OpenAI DevDay w/ Simon Willison, Alex Volkov, Jim Fan, Raza Habib, Shreya Rajpal, Rahul Ligma, et al)

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Nov 8, 2023 142:33


SF folks: join us at the AI Engineer Foundation's Emergency Hackathon tomorrow, and consider the Newton if you'd like to cowork in the heart of the Cerebral Arena.
Our community page is up to date as usual!
~800,000 developers watched OpenAI Dev Day, ~8,000 of whom listened along live on our ThursdAI x Latent Space, and ~800 of whom got tickets to attend in person:
OpenAI's first developer conference easily surpassed most people's lowballed expectations - they simply did everything short of announcing GPT-5, including:
* ChatGPT (the consumer facing product)
* GPT4 Turbo already in ChatGPT (running faster, with an April 2023 cutoff), all noticed by users weeks before the conference
* Model picker eliminated, God Model chooses for you
* GPTs - "tailored version of ChatGPT for a specific purpose" - stopping short of "Agents". With custom instructions, expanded knowledge, and actions, and an intuitive no-code GPT Builder UI (we tried all these on our livestream yesterday and found some issues, but also were able to ship interesting GPTs very quickly) and a GPT store with revenue sharing (an important criticism we focused on in our episode on ChatGPT Plugins)
* API (the developer facing product)
* APIs for Dall-E 3, GPT4 Vision, Code Interpreter (RIP Advanced Data Analysis), GPT4 Finetuning and (surprise!) Text to Speech
* many thought each of these would take much longer to arrive
* usable in curl and in playground
* BYO Interpreter + Async Agents?
* Assistant API: stateful API backing "GPTs" like apps, with support for calling multiple tools in parallel, persistent Threads (storing message history, unlimited context window with some asterisks), and uploading/accessing Files (with a possibly-too-simple RAG algorithm, and expensive pricing)
* Whisper 3 announced and open sourced (HuggingFace recap)
* Price drops for a bunch of things!
* Misc: Custom Models for big spending ($2-3m) customers, Copyright Shield, Satya
The progress here feels fast, but it is mostly (incredible) last-mile execution on model capabilities that we already knew to exist. On reflection, it is important to understand that the one guiding principle of OpenAI, even more than being Open (we address that in part 2 of today's pod), is that slow takeoff of AGI is the best scenario for humanity, and that this is what slow takeoff looks like:
When introducing GPTs, Sam was careful to assert that "gradual iterative deployment is the best way to address the safety challenges with AI":
This is why, in fact, GPTs and Assistants are intentionally underpowered, and it is a useful exercise to consider what else OpenAI continues to consider dangerous (for example, many people consider a while(true) loop a core driver of an agent, which GPTs conspicuously lack, though Lilian Weng of OpenAI does not).
We convened the crew to deliver the best recap of OpenAI Dev Day in Latent Space pod style, with a 1hr deep dive with the Functions pod crew from 5 months ago, and then another hour with past and future guests live from the venue itself, discussing various elements of how these updates affect their thinking and startups.
Enjoy!
Show Notes
* swyx live thread (see pinned messages in Twitter Space for extra links from community)
* Newton AI Coworking Interest Form in the heart of the Cerebral Arena
Timestamps
* [00:00:00] Introduction
* [00:01:59] Part I: Latent Space Pod Recap
* [00:06:16] GPT4 Turbo and Assistant API
* [00:13:45] JSON mode
* [00:15:39] Plugins vs GPT Actions
* [00:16:48] What is a "GPT"?
* [00:21:02] Criticism: the God Model
* [00:22:48] Criticism: ChatGPT changes
* [00:25:59] "GPTs" is a genius marketing move
* [00:26:59] RIP Advanced Data Analysis
* [00:28:50] GPT Creator as AI Prompt Engineer
* [00:31:16] Zapier and Prompt Injection
* [00:34:09] Copyright Shield
* [00:38:03] Sharable GPTs solve the API distribution issue
* [00:39:07] Voice
* [00:44:59] Vision
* [00:49:48] In person experience
* [00:55:11] Part II: Spot Interviews
* [00:56:05] Jim Fan (Nvidia - High Level Takeaways)
* [01:05:35] Raza Habib (Humanloop) - Foundation Model Ops
* [01:13:59] Surya Dantuluri (Stealth) - RIP Plugins
* [01:21:20] Reid Robinson (Zapier) - AI Actions for GPTs
* [01:31:19] Div Garg (MultiOn) - GPT4V for Agents
* [01:37:15] Louis Knight-Webb (Bloop.ai) - AI Code Search
* [01:49:21] Shreya Rajpal (Guardrails.ai) - on Hallucinations
* [01:59:51] Alex Volkov (Weights & Biases, ThursdAI) - "Keeping AI Open"
* [02:10:26] Rahul Sonwalkar (Julius AI) - Advice for Founders
Transcript
[00:00:00] Introduction
[00:00:00] swyx: Hey everyone, this is Swyx coming at you live from the Newton, which is in the heart of the Cerebral Arena. It is a new AI coworking space that I and a couple of friends are working out of. There are hot desks available if you're interested, just check the show notes. But otherwise, obviously, it's been 24 hours since the opening of Dev Day, a lot of hot reactions, and, longstanding tradition, one of the longest traditions we've had.
[00:00:29] And the Latent Space pod is to convene emergency sessions and record the live thoughts of developers and founders going through and processing in real time. I think a lot of the role of podcasts isn't as perfect information delivery channels, but really as an audio and oral history of what's going on as it happens, while it happens.
[00:00:49] So this one's a little unusual. Previously, we only just gathered on Twitter Spaces, and then just had a bunch of people. The last one was the Code Interpreter one where 22,000 people showed up. But this one is a little bit more complicated because there's an in-person element and then an online element.
[00:01:06] So this is a two-part episode. The first part is a recorded session between our Latent Space people and Simon Willison and Alex Volkov from the ThursdAI pod, just kind of recapping the day. But then also, as the second hour, I managed to get a bunch of interviews with previous guests on the pod who we're still friends with and some new people that we haven't yet had on the pod.
[00:01:28] But I wanted to just get their quick reactions because most of you have known and loved Jim Fan and Div Garg and a bunch of other folks that we interviewed. So I just want to, I'm excited to introduce to you the broader scope of what it's like to be at OpenAI Dev Day in person, bring you the audio experience as well as give you some of the thoughts that developers are having as they process the announcements from OpenAI.
[00:01:51] So first off, we have the Latent Space Pod recap. One hour of OpenAI Dev Day.
[00:01:59] Part I: Latent Space Pod Recap
[00:01:59]
This is Alessio, partner and CTO of Residence at Decibel Partners, and as usual, I'm joined by Swyx, founder of SmallAI. Hey,[00:02:12] swyx: and today we have two special guests with us covering all the latest and greatest.[00:02:17] We, we, we love to get our band together and recap things, especially when they're big. And it seems like that every three months we have to do this. So Alex, welcome. From Thursday AI we've been collaborating a lot on the Twitter spaces and welcome Simon from many, many things, but also I think you're the first person to not, not make four appearances on our pod.[00:02:37] Oh, wow. I feel privileged. So welcome. Yeah, I think we're all there yesterday. How... Do we feel like, what do you want to kick off with? Maybe Simon, you want to, you want to take first and then Alex. Sure. Yeah. I mean,[00:02:47] Simon Willison: yesterday was quite exhausting, quite frankly. I feel like it's going to take us as a community several months just to completely absorb all of the stuff that they dropped on us in one giant.[00:02:57] Giant batch. It's particularly impressive considering they launched a ton of features, what, three or four weeks ago? ChatGPT voice and the combined mode and all of that kind of thing. And then they followed up with everything from yesterday. That said, now that I've started digging into the stuff that they released yesterday, some of it is clearly in need of a bit more polish.[00:03:15] You know, the the, the reality of what they look, what they released is I'd say about 80 percent of, of what it looks like it was yesterday, which is still impressive. You know, don't get me wrong. This is an amazing batch of stuff, but there are definitely problems and sharp edges that we need to file off.[00:03:29] And there are things that we still need to figure out before we can take advantage of all of this.[00:03:33] swyx: Yeah, agreed, agreed. And we can go into those, those sharp edges in a bit. I just want to pop over to Alex. What are your thoughts?[00:03:39] Alex Volkov: So, interestingly, even folks at OpenAI, there's like several booths and help desks so you can go in and ask people, like, actual changes and people, like, they could follow up with, like, the right people in OpenAI and, like, answer you back, etc.[00:03:52] Even some of them didn't know about all the changes. So I went to the voice and audio booth. And I asked them about, like, hey, is Whisper 3 that was announced by Sam Altman on stage just, like, briefly, will that be open source? Because I'm, you know, I love using Whisper. And they're like, oh, did we open source?[00:04:06] Did we talk about Whisper 3? Like, some of them didn't even know what they were releasing. But overall, I felt it was a very tightly run event. Like, I was really impressed. Shawn, we were sitting in the audience, and you, like, pointed at the clock to me when they finished. They finished, like, on... And this was after like doing some extra stuff.[00:04:24] Very, very impressive for a first event. Like I was absolutely like, Good job.[00:04:30] swyx: Yeah, apparently it was their first keynote and someone, I think, was it you that told me that this is what happens if you have A president of Y Combinator do a proper keynote you know, having seen many, many, many presentations by other startups this is sort of the sort of master stroke.[00:04:46] Yeah, Alessio, I think you were watching remotely. Yeah, we were at the Newton. 
Yeah, the Newton.
[00:04:52] Alessio: Yeah, I think we had 60 people here at the watch party, so it was quite a big crowd. Mixed reaction from different... founders and people, depending on what was being announced on the page. But I think everybody walked away kind of really happy with a new layer of interfaces they can use.
[00:05:11] I think, to me, the biggest takeaway was like, and I was talking with Mike Conover, another friend of the podcast, about this, is they're kind of staying in the single-threaded, like, synchronous use cases lane, you know? Like, the GPTs announcements are all like... still chat-based, one-on-one synchronous things.
[00:05:28] I was expecting, maybe, something about async things, like background running agents, things like that. But it's interesting to see there was nothing of that, so. I think if you're a founder in that space, you're, you're quite excited. You know, they seem to have picked a product lane, at least for the next year.
[00:05:45] So, if you're working on... async experiences, so things working in the background, things that are not copilot-like, I think you're quite excited to have them be a lot cheaper now.
[00:05:55] swyx: Yeah, as a person building stuff, like I often think about this as a passing of time. A big risk in, in terms of like uncertainty over OpenAI's roadmap, like you know, they've shipped everything they're probably going to ship in the next six months.
[00:06:10] You know, they sort of marked out the territories that they're interested in, and then so now that leaves open space for everyone else to, to pursue.
[00:06:16] GPT4 Turbo and Assistant API
[00:06:16] swyx: So I guess we can kind of go in order. Probably top of mind to mention is the GPT-4 Turbo improvements. Yeah, so longer context length, cheaper price.
[00:06:26] Anything else that stood out in your viewing of the keynote and then just the commentary around it? I
[00:06:34] Alex Volkov: was I was waiting for Stateful. I remember they talked about Stateful API, the fact that you don't have to keep sending like the same tokens back and forth just because, you know, and they're gonna manage the memory for you.
[00:06:45] So I was waiting for that. I knew it was coming at some point. I was kind of... I did not expect it to come at this event. I don't know why. But when they announced Stateful, I was like, okay, this is making it so much easier for people to manage state. The whole threads thing... I don't want to mix between the two things, so maybe you guys can clarify, but there's the GPT-4 Turbo, which is the model that has the capabilities, in a whopping 128k, like, context length, right?
[00:07:11] It's huge. It's like two and a half books. But also, you know, faster, cheaper, etc. I haven't yet tested the fasterness, but like, everybody's excited about that. However, they also announced this new API thing, which is the Assistants API. And part of it is Threads, which is, we'll manage the thread for you.
[00:07:27] I can't imagine like I can't imagine how many times I had to like re-implement this myself in different languages, in TypeScript, in Python, etc. And now it's like, it's so easy. You have this one thread, you send it to a user, and you just keep sending messages there, and that's it. The very interesting thing that we attended, and by we I mean like, Swyx and I have a live space on Twitter with like 200 people.
[00:07:46] So it's like me, Swyx, and 200 people in our earphones with us as well. They kept asking like, well, how's the price happening?
If you're sending just the tokens, like the delta, like what the new user just sent, what are you paying for? And I went to OpenAI people, and I was like, hey... How do we get paid for this?
[00:08:01] And nobody knew, nobody knew, and I finally got an answer. You still pay for the whole context that you have inside the thread. You still pay for all this, but now it's a little bit more complex for you to kind of count with tiktoken, right? So you have to hit another API endpoint to get the whole thread of what the context is.
[00:08:17] Then tokenize this, run this through tiktoken, and then calculate. This is now the new way, officially, for OpenAI. But I really did, like, have to go and find this. They didn't know a lot of, like, how the pricing is. Ouch! Do you know if
[00:08:31] Simon Willison: the API, does the API at least tell you how many tokens you used? Or is it entirely up to you to do the accounting?
[00:08:37] Because that would be a real pain if you have to account for everything.
[00:08:40] Alex Volkov: So in my head, the question I was asking is, like, if you want to know in advance, like with the tiktoken library, if you want to count in advance and, like, make a decision, like, in advance on that, how would you do this now? And they said, well, yeah, there's a way.
[00:08:54] If you hit the API, get the whole thread back, then count the tokens. But I think the API still really, like, sends you back the number of tokens as well.
[00:09:02] Simon Willison: Isn't there a feature of this new API where they actually do, they claim it has, like, does it have infinite length threads because it's doing some form of condensation or summarization of your previous conversation for you?
[00:09:15] I heard that from somewhere, but I haven't confirmed it yet.
[00:09:18] swyx: So I have, I have a source from Dave Valdman. I actually don't, don't know what his affiliation is, but he usually has pretty accurate takes on AI. So I, I think he works in AI circles in some capacity. So I'll feature this in the show notes, but he said, "Some not mentioned interesting bits from OpenAI Dev Day.
[00:09:33] One: unlimited context window and chat threads. From OpenAI's docs: it says once the size of messages exceeds the context window of the model, the thread smartly truncates them to fit." I'm not sure I want that intelligence.
[00:09:44] Alex Volkov: I want to chime in here just real quick, on that "not want this intelligence" bit. I heard this from multiple people over the next conversations that I had. Some people said, hey, even though they're giving us, like, content understanding and RAG, we are doing different things. Some people said this with Vision as well.
[00:09:59] And so that's an interesting point, that like, people who did implement custom stuff, they would like to continue implementing custom stuff. That's also like an additional point that I've heard people talk about.
[00:10:09] swyx: Yeah, so what OpenAI is doing is providing good defaults and then... Well, good is questionable.
[00:10:14] We'll talk about that. You know, I think the existing sort of LangChain and LlamaIndexes of the world are not very threatened by this because there's a lot more customization that they want to offer. Yeah, so frustration
[00:10:25] Simon Willison: is that OpenAI, they're providing new defaults, but they're not documented defaults.
[00:10:30] Like they haven't told us how their RAG implementation works. Like, how are they chunking the documents? How are they doing retrieval?
Which means we can't use it as software engineers, because it's this weird thing that we don't understand. And there's no reason not to tell us that. Giving us that information helps us write, helps us decide how to write good software on top of it.
[00:10:48] So that's kind of frustrating. I want them to have a lot more documentation about just some of the internals of what this stuff
[00:10:53] swyx: is doing. Yeah, I want to highlight.
[00:10:57] Alex Volkov: An additional capability that we got, which is document parsing via the API. I was, like, blown away by this, right? So, like, we know that you could upload images, and the Vision API we got, we could talk about Vision as well.
[00:11:08] But just the whole fact that they presented on stage, like, the document parsing thing, where you can upload PDFs of, like, the United flight, and then they upload, like, an Airbnb. That on the whole, like, that's a whole category of, like, products that's now open, too; OpenAI is just, like, giving developers the ability to very easily build products that previously were a...
[00:11:24] pain in the butt for many, many people. How do you even, like, parse a PDF? Then after you parse it, like, what do you extract? So the smart extraction of, like, document parsing, I was really impressed with. And they said, I think, yesterday, that they're going to open source that demo, if you guys remember, that, like, flights demo with the dots on the map and, like, the JSON stuff.
[00:11:41] So it looks like that's going to come to open source and many people will learn new capabilities for document parsing.
[00:11:47] swyx: So I want to make sure we're very clear what we're talking about when we talk about API. When you say API, there's no actual endpoint that does this, right? You're talking about the ChatGPT GPTs functionality.
[00:11:58] Alex Volkov: No, I'm talking about the Assistants API. The Assistants API that has threads now, that has agents, and you can run those agents. I actually, maybe let's clarify this point. I think I had to, somebody had to clarify this for me. There's the GPTs, which is a UI version of running agents. We can talk about them later, but like you and I and my mom can go and like, hey, create a new GPT that, like, you know, only does Chuck Norris jokes, like whatever. But there's the assistants thing, which is kind of a similar thing, but not the same.
[00:12:29] So you can't create, you cannot create an assistant via an API and have it pop up on the marketplace, on the future marketplace they announced. How can you not? No, no, no, not via the API. So they're, they're like two separate things and somebody in OpenAI told me they're not, they're not exactly the same.
[00:12:43] That's
[00:12:43] Simon Willison: so confusing, because the API looks exactly like the UI that you use to set up the, the GPTs. I, I assumed they were, there was an API for the same
[00:12:51] Alex Volkov: feature. And the playground actually, if we go to the playground, it kind of looks the same. There's like the configurable thing. The configure screen also has, like, you can allow browsing, you can allow, like, tools, but somebody told me they didn't do the full cross mapping, so, like, you won't be able to create GPTs with the API; you will be able to create the assistants, and then you'll be able to have those assistants do different things, including call your external stuff.
[00:13:13] So that was pretty cool. So this API is called the Assistants API. That's what we get, like, in addition to the model of the GPT-4 Turbo.
And that has document parsing. So you can upload documents there, and it will understand the context of them, and it'll return you, like, structured or unstructured output.
[00:13:30] I thought that that feature was, like, phenomenal, just on its own. Like, just on its own, uploading a document, a PDF, a long one, and getting, like, structured data out of it. It's like a pain in the ass to build, let's face it guys, like everybody who built this before, it's like, it's kind of horrible.
[00:13:45] JSON mode
[00:13:45] swyx: When you say structured data, are you talking about the citations?
[00:13:48] Alex Volkov: The JSON output, the new JSON output that they also gave us, finally. If you guys remember last time we talked together, I think it was, like, during the functions release, emergency pod. And back then, their answer to, like, hey, everybody wants structured data was, hey, we're gonna give you function calling.
[00:14:03] And now, they did both. They gave us both, like, a JSON output, like, structure. So, like, you can, the models are actually going to return JSON. Haven't played with it myself, but that's what they announced. And the second thing is, they improved the function calling significantly as well.
[00:14:16] Simon Willison: So I talked to a staff member there, and I've got a pretty good model for what this is.
[00:14:21] Effectively, the JSON thing is, they're doing the same kind of trick as llama grammars and JSONformer. They're doing that thing where the tokenizer itself is modified so it is impossible for it to output invalid JSON. Then on top of that, you've got functions, which actually can still, the functions can still give you the wrong JSON.
[00:14:41] They can give you JSON with keys that you didn't ask for, if you are unlucky. But at least it will be valid. At least it'll pass through a JSON parser. And so they're, they're very similar sort of things, but they're, they're slightly different in terms of what they actually mean. And yeah, the new function stuff is, is super exciting.
[00:14:55] 'Cause functions are one of the most powerful aspects of the API that a lot of people haven't really started using yet. But it's amazingly powerful what you can do with it.
[00:15:04] Alex Volkov: I saw that the functions, the functionality that they now have, is also pluggable as actions to those assistants. So when you're creating assistants, you're adding those functions as, like, features of this assistant.
[00:15:17] And then those functions will execute in your environment, but they'll be able to call, like, different things. Like, they showcased an example of, like, an integration with, I think, Spotify or something, right? And that was, like, an internal function that ran. But it is confusing, the kind of, the online assistant,
[00:15:32] API-able agents and the GPTs agents. So I think it's a little confusing because they demoed both. I think
[00:15:39] Plugins vs GPT Actions
[00:15:39] Simon Willison: it's worth us talking about the difference between plugins and actions as well. Because, you know, they launched plugins, what, back in February. And they've effectively... they've kind of deprecated plugins.
[00:15:49] They haven't said it out loud, but it's clear to a bunch of people that they are not going to be investing further in plugins, because the new actions thing is covering the same space, but actually, I think, is a better design for it.
Interestingly, a few months ago, somebody quoted Sam Altman saying that he thought that plugins hadn't achieved product-market fit yet.
[00:16:06] And I feel like that's sort of what we're seeing today. The problem with plugins is it was all a little bit messy. People would pick and mix the plugins that they needed. Nobody really knew which plugin combinations would work. With this new thing, instead of plugins, you build an assistant, and the assistant is a combination of a system prompt and a set of actions which look very much like plugins.
[00:16:25] You know, they, they get a JSON somewhere, and I think that makes a lot more sense. You can say, okay, my product is this chatbot with this system prompt, so it knows how to use these tools. I've given it this combination of plugin-like things that it can use. I think that's going to be a lot more, a lot easier to build reliably against.
[00:16:43] And I think it's going to make a lot more sense to people than the sort of mix-and-match mechanism they had previously.
[00:16:48] What is a "GPT"?
[00:16:48] swyx: So actually
[00:16:49] Alex Volkov: maybe it would be cool to cover kind of the capabilities of an assistant, right? So you have a custom prompt, which is akin to a system message. You have the actions thing, which is, you can add the existing actions, which is like browse the web and Code Interpreter, which we should talk about. Like, the assistant now can write code and execute it, which is exciting. But also you can add your own actions, which is like the function calling thing, like v2, etc. Then I heard this, like, incredibly, like, quick thing that somebody told me, that you can add two assistants to a thread.
[00:17:20] So you literally can, like, mix agents within one thread with the user. So you have one user and then, like, you can have, like, this assistant, that assistant. They just glanced over this and I was like, that, that is very interesting. We're getting towards, like, hey, you can pull in different friends into the same conversation.
[00:17:37] Everybody does a different thing. What other capabilities do we have there? You guys remember? Oh, remember, like, context. Uploading API documentation.
[00:17:48] Simon Willison: Well, that one's a bit more complicated. So, so you've got, you've got the system prompt, you've got optional actions, you've got, you can turn on DALL-E 3, you can turn on Code Interpreter, you can turn on Browse with Bing, those can be added or removed from your system.
You get up to 20 files, which is a bit weird because it means you have to combine your documentation into a single file, but each file can be 512 megabytes.[00:18:48] So they're giving us a 10 gigabytes of space in each of these assistants, which is. Vast, right? And of course, I tested, it'll handle SQLite databases. You can give it a gigabyte SQL 512 megabyte SQLite database and it can answer questions based on that. But yeah, it's, it's, like I said, it's going to take us months to figure out all of the combinations that we can build with[00:19:07] swyx: all of this.[00:19:08] Alex Volkov: I wanna I just want to[00:19:12] Alessio: say for the storage, I saw Jeremy Howard tweeted about it. It's like 20 cents per gigabyte per system per day. Just in... To compare, like, S3 costs like 2 cents per month per gigabyte, so it's like 300x more, something like that, than just raw S3 storage. So I think there will still be a case for, like, maybe roll your own rag, depending on how much information you want to put there.[00:19