Podcasts about Redis Labs

  • 55 podcasts
  • 69 episodes
  • 44m average episode duration
  • Infrequent episodes
  • Latest episode: Jul 16, 2024
Redis Labs

Popularity by year (chart): 2017–2024



Latest podcast episodes about Redis Labs

Humans of Martech
128: Vish Gupta: Why simplification should come before automation if you want to avoid a Frankenstack
Jul 16, 2024 · 51:15


What's up everyone, today we have the pleasure of sitting down with Vish Gupta, Marketing Operations Manager at Databricks.

Summary: This episode with Vish is jam-packed with advice for marketers making their way through the martech galaxy. We touch on the pitfalls of Frankenstein stacks and the perks of self-service martech. Vish explains why martech isn't just for engineers and highlights the efficiency of customized Asana intake forms. We also tackle the dangers of over-specialization for senior leaders. Additionally, we explore the intersection of martech and large language models (LLMs), providing insights on how to stay ahead in the evolving landscape.

About Vish
  • Vish started her career as a Business Analyst in sales ops at Riverbed, a network management company
  • She later joined Redis Labs – a real-time data platform – as a Marketing Coordinator and got her first taste of analytics and reporting, covering social, paid and events
  • She had a short contract at Brocade, where she was a Marketing Ops Specialist and worked closely with their data science team to develop marketing reporting using BI
  • She then joined VMware, the popular virtualization software giant, just before they were acquired by Broadcom. She worked as a Marketing Analyst and later shifted to Growth Analyst, where she focused more on go-to-market strategy
  • Today Vish is Marketing Operations Manager at Databricks, a leader in data and AI tech valued at more than $40B

Influences from a Tech-Infused Childhood
Vish's upbringing in a tech-savvy household shaped her career path significantly. Her parents, immigrants from India, transitioned into tech for better opportunities, despite initial dreams of cricket and architecture. This drive for a better lifestyle through technology was a core narrative in her family.

Interestingly, Vish initially rebelled against this tech-centric world. She pursued psychology, striving to carve out her own path. However, practicality led her back to tech, aligning her career with her desired lifestyle. This shift wasn't romantic, but it highlighted her adaptability and strategic thinking.

Her parents' relentless upskilling and enthusiasm for technology left a lasting impression. Their constant engagement with new tools and innovations inspired Vish to embrace learning and stay current with tech trends. This mindset proved invaluable in her role at Databricks, where technological adeptness is key.

Growing up in Silicon Valley provided Vish with a unique network and role models in tech. This environment, combined with her parents' stories and actions, underscored the importance of tech as a vehicle for advancement and success.

Key takeaway: Vish's tech-centric upbringing, driven by her immigrant parents' pursuit of better opportunities, significantly shaped her career. Despite initially rebelling by studying psychology, practicality led her back to tech, showcasing her adaptability. Her parents' continuous upskilling inspired her commitment to learning, crucial in her role at Databricks.

Why Your Frankenstein Martech Stack Is Sabotaging Your Success
A Frankenstein martech stack is like a tech monster stitched together from mismatched parts, always on the brink of chaos. Avoiding one is challenging for any marketing operations team trying to stay on top of new tools. Vish's mantra is that tools are not problem-solvers on their own; people and processes are the real drivers of solutions. She's a big proponent of understanding the role each tool plays within the organization.

It's crucial to ask, "What is this tool doing?" If a tool isn't effectively serving a business purpose or hasn't been adopted well, it might be time to retire it. Simplification is key before automation: an overly complex or constantly changing process isn't a good candidate for automation.

Vish points out a common misconception: the belief that automating everything is the ultimate solution. In reality, automating a clunky or inefficient process can exacerbate issues rather than resolve them. The focus should be on simplifying processes first. Only after streamlining should organizations consider tools that enhance efficiency.

In practice, this means critically assessing each tool's contribution to the business. If a tool no longer serves its purpose or complicates processes, it's time to reconsider its place in the stack. Automation should follow simplification, ensuring that processes are as straightforward as possible before adding layers of technology.

Key takeaway: Simplification should precede automation. Marketers must critically evaluate their tools and processes, focusing on streamlining before leveraging automation. This approach prevents the creation of a cumbersome, Frankenstein-like martech stack: a tech monster stitched together from mismatched parts, always on the brink of chaos.

Empowering Campaign Ops with Self-Serve Models
Setting up self-service models for campaigns is like an all-you-can-eat buffet, where the food is already prepared and you simply pick and choose what you want. In the realm of campaign operations, enabling self-service means providing users with the right tools and training, allowing them to be effective without the need for constant support.

One such tool, Knak, plays a pivotal role in this self-service approach at Databricks. Vish explains that Knak allows users to create emails independently without needing to delve into their automation platform. This keeps users out of the intricate details of their MAP, reducing the burden on the marketing operations team while still enabling efficient email creation. With Knak, the process is streamlined: users work within Knak, sync their work to their MAP, perform quality assurance, and then execute their campaigns. This seamless integration not only simplifies operations but also enhances efficiency.

Vish highlights the potential pitfalls of a full self-service model, where multiple users could create chaos within their MAP. Instead, she advocates for a balanced approach, where specific components of the campaign process are made self-service. This method is a win-win for both the operations team and the front-end users. The key is finding tools that allow for this partial self-service model, maintaining control while empowering users.

Knak was introduced to replace a previous tool that failed to meet expectations. Vish was part of the decision-making process, and the team had several champions for Knak and a supportive leader confident in their ability to select the right vendor. This collective decision-making and confidence in the tool led to a successful implementation, demonstrating the importance of team involvement and leadership support in adopting new technologies.

Key takeaway: Empowering users with the right self-service tools like Knak can streamline campaign operations and reduce the burden on the marketing team. A balanced approach to self-service can prevent chaos while maximizing efficiency.

Why Martech Shouldn't Cater Exclusively to Engineers
When asked if martech is really geared towards engineers, Vish offered a nuanced perspective. She finds the notion that martech should cater exclusively to engineers rather unsettling. For Vish, her expertise lies in mastering popular systems like Marketo and HubSpot, not engineering. She raises a compelling point about the value of specialized martech knowledge, emphasizing that the real worth of a martech professional is their ability to understand and implement what marketers need, not merely to build systems from scratch...

B2B Marketing: The Provocative Truth
Product-Led Growth for B2B Marketing, with Madhukar Kumar, CMO of SingleStore
Sep 27, 2023 · 32:40


In this episode of B2B Marketing: The Provocative Truth, Benedict talks to Madhukar Kumar about what a product-led growth strategy looks like in B2B.

Marketers in most businesses, especially in the SaaS space, understand the need to collaborate with their product teams to have an effective strategy. However, what B2B marketers might be missing out on is a "product-led" growth strategy, and it might be because they just don't know how PLG could impact their business. So what exactly are B2B marketers missing out on? In what ways can marketers include a PLG strategy among their wider marketing activities, and how could it uniquely support business objectives?

Madhukar Kumar is the CMO of SingleStore, and he has 15+ years of experience leading product and marketing teams for global organisations. Prior to joining SingleStore, Madhukar served as Head of Growth and Marketing at DevRev, VP Product and Solutions Marketing at Nutanix, VP Product and Developer Marketing at Redis Labs, and more. In addition to his career, Madhukar has previously guest lectured at Duke University's Fuqua School of Business, where he taught New Product Development. He is also currently writing a book, published chapter by chapter on Substack, called "Growth".

You can find Madhukar Kumar on LinkedIn. You can subscribe to Madhukar's Substack here. You can watch full video versions of the podcast on our YouTube channel. Ready to provoke the truth? Get in touch at alan-agency.com.

Software Sessions
Victor Adossi on Yak Shaving
Jan 2, 2023 · 110:47


Victor is a software consultant in Tokyo who describes himself as a yak shaver. He writes on his blog at vadosware and curates Awesome F/OSS, a mailing list of open source products. He's also a contributor to the Open Core Ventures blog. Before our conversation Victor wrote a structured summary of how he works on projects. I recommend checking that out in addition to the episode.

Topics covered:
  • Most people should use Dokku or CapRover
  • But he uses Kubernetes anyways
  • Hosting a database in Kubernetes
  • Learning technology
  • You don't really know a thing until something goes wrong
  • History of frontend development
  • Context from lower layers of the stack and historical projects
  • Good project pages have comparisons to other products
  • Choosing technologies
  • Language choice affects maintainability
  • Knowing an ecosystem
  • Victor's preferred stack
  • Technology bake offs
  • Posting findings means you get free corrections
  • Why people use Medium instead of personal sites

Victor
  • VADOSWARE - Blog
  • How Victor works on Projects - Companion post for this episode
  • Awesome FOSS - Curated list of OSS projects
  • NimbusWS - Hosted OSS built on top of budget cloud providers
  • Unvalidated Ideas - Startup ideas for side project inspiration
  • PodcastSaver - Podcast index that allows you to choose Postgres or MeiliSearch and compare performance and results of each

Victor's preferred stack
  • Docker - Containers
  • Kubernetes - Container provisioning (though at the beginning of the episode he suggests Dokku for a single server or CapRover for multiple)
  • TypeScript - JavaScript with syntax for types. Victor's default choice.
  • Rust - Language he uses if doing embedded work, performance is critical, or more correctness is desired
  • Haskell - Language he uses if correctness and the type system are the most important for the project
  • PostgreSQL - General purpose database that's good enough for most use cases, including full text search
  • KeyDB - Redis-compatible database for caching. Acquired by Snap and then made open source. Victor uses it over Redis because it is multi-threaded and supports flash storage without a Redis Enterprise license. (See the TypeScript sketch just after this list.)
  • Pulumi - Provision infrastructure with the languages you're already using instead of a specialized one or YAML
  • Svelte and SvelteKit - Preferred frontend stack. Previously used Nuxt.
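The KeyDB entry above is easy to try because KeyDB speaks the Redis protocol, so any Redis client works unchanged. Here is a minimal cache-aside sketch in TypeScript, assuming the `ioredis` client and a KeyDB (or Redis) instance on localhost; `loadProductFromDb` is a hypothetical stand-in for your real database query:

```ts
// cache.ts - cache-aside lookup against a Redis-compatible server (KeyDB here).
import Redis from "ioredis";

const cache = new Redis({ host: "127.0.0.1", port: 6379 });

async function getProduct(id: string): Promise<unknown> {
  const key = `product:${id}`;

  // 1. Try the cache first.
  const hit = await cache.get(key);
  if (hit !== null) return JSON.parse(hit);

  // 2. Fall back to the database, then populate the cache with a 60s TTL.
  const fresh = await loadProductFromDb(id);
  await cache.set(key, JSON.stringify(fresh), "EX", 60);
  return fresh;
}

// Placeholder so the sketch type-checks; replace with a real query.
async function loadProductFromDb(id: string): Promise<{ id: string }> {
  return { id };
}
```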
Search engines
  • Postgres Full Text Search vs the rest
  • Optimizing Postgres Text Search with Trigrams
  • OpenSearch - Amazon's fork of Elasticsearch
  • typesense
  • meilisearch
  • sonic
  • Quickwit

JavaScript build tools
  • Babel
  • SWC
  • Webpack
  • esbuild
  • parcel
  • Vite
  • Turbopack

JavaScript frameworks
  • React
  • Vue
  • Svelte
  • Ember

Frameworks built on top of frameworks
  • Next - React
  • Nuxt - Vue
  • SvelteKit - Svelte
  • Astro - Multiple

Historical JavaScript tools and frameworks
  • Underscore
  • jQuery
  • MooTools
  • Backbone
  • AngularJS
  • Knockout
  • Aurelia
  • GWT
  • Bower - Frontend package manager
  • Grunt - Task runner
  • Gulp - Task runner

Related Links
  • Dokku - Open source single-host alternative to Heroku
  • Cloud Native Buildpacks - Buildpacks created by Heroku and Pivotal and used by Dokku
  • CapRover - An open source PaaS-like abstraction built on top of Docker Swarm
  • Kelsey Hightower's tweet about being cautious about running databases on Kubernetes
  • Settling the Myth of Transparent HugePages for Databases
  • Kubernetes Container Storage Interface (CSI)
  • Kubernetes Local Persistent Volumes
  • Longhorn - Distributed block storage for Kubernetes
  • Postgres docs
  • Postgres TOAST
  • Everything I've seen on optimizing Postgres on ZFS
  • Kubernetes Workload Resources
  • Kubernetes Network Plugins
  • Kubernetes Ingress
  • Traefik
  • Kubernetes the Hard Way (setting up a cluster in a way that optimizes for learning)
  • How does TLS work
  • Let's Encrypt
  • Cert manager for Kubernetes
  • Choose Boring Technology
  • A Linux user's guide to Logical Volume Management
  • Docker networking overview
  • Kubernetes Scheduler
  • Tauri - Build desktop applications with web technology and Rust
  • ripgrep - CLI tool to recursively search a directory for a regex pattern (meant to be a Rust replacement for grep)
  • angle-grinder / ag - CLI tool to parse and process log files, written in Rust
  • Object.observe ECMAScript Proposal to be Withdrawn
  • Ruby on Rails - Ruby web framework
  • Django - Python web framework
  • Laravel - PHP web framework
  • Adonis - JavaScript
  • NestJS - JavaScript
  • What is a NullPointerException, and how do I fix it?
  • Mastodon
  • Clap - CLI argument parser for Rust
  • AWS CDK - Provision AWS infrastructure using programming languages
  • Terraform - Provision infrastructure with the Terraform language
  • URL canonicalization of duplicate pages and the use of the canonical tag - Used by dev.to to send Google traffic to the original blogpost instead of dev.to

Transcript
You can help edit this transcript on GitHub.

[00:00:00] Jeremy: This episode, I talk to Victor Adossi who describes himself as a yak shaver. Someone who likes trying a whole bunch of different technologies, seeing the different options. We talk about what he uses, the evolution of front end development, and his various projects. Talking to just different people it's always good to get where they're coming from because something that works for Google at their scale is going to be different than what you're doing with one of your smaller projects.

[00:00:31] Victor: Yeah, the context. Of course in direct conflict with that statement, I definitely use Google technology despite not needing to at all right? Like, you know, 99% of people who are doing like people like to call it indiehacking or building small products could probably get by with just Dokku. If you know Dokku or like CapRover. Are two projects that'll be like, Oh, you can just push your code here, we'll build it up like a little mini Heroku PaaS thing and just go on one big server, right? Like 99% of the people could just use that. But of course I'm not doing that. So I'm a bit of a hypocrite in that sense.
I know what I should be doing, but I'm not doing that. I am writing a Kubernetes cluster with like five nodes for no reason. Uh, yeah, I dunno, people don't normally count the controllers. [00:01:24] Jeremy: Dokku and CapRover, I think those are where it's supposed to create a heroku like experience I think it's based off of the heroku buildpacks right? At least Dokku is? [00:01:36] Victor: Yeah Buildpacks has actually been spun out into like a community thing so like pivotal and heroku, it's like buildpacks.io, they're trying to build a wider standard around it so that more people can get involved. And buildpacks are actually obviously fantastic as a technology and as a a process piece. There's not much else like them and you know, that's obvious from like Heroku's success and everything. I know Dokku uses that. I don't know that Caprover does, but I haven't, I haven't really run Caprover that much. They, they probably do. Like at this point if you're going to support building from code, it seems silly to try and build your own buildpacks. Cause that's what you will do, eventually. So you might as well use what's there. Anyway, this is like just getting to like my personal opinions at this point, but like, if you think containers are a bad idea in 2022, You're wrong, you should, you should stop. Like you should, you should stop. Think about it. I mean, obviously there's not, um, I got a really great question at an interview once, which is, where are containers a bad idea? That's probably one of the best like recent interview questions I've ever gotten cause I was like, Oh yeah, I mean, like, you can't, it can't be perfect everywhere, right? Nothing's perfect everywhere. So it's like, where is it? Uh, and of course the answer was networking, right? (unintelligible) So if you need absolute performance, but like for just about everything else. Containers are kind of it at this point. Like, time has born it out, I think. So yeah, I always just like bias at taking containers at this point. So I'm probably more of a CapRover person than a Dokku person, even though I have not used, I don't use CapRover. [00:03:09] Jeremy: Well, like something that I've heard with containers, and maybe it's changed recently, but, but something that was kind of holdout was when people would host a database sometimes they would oh we just don't wanna put this in a container and I wonder if like that matches with your thinking or if things have changed. [00:03:27] Victor: I am not a database administrator right like I read postgres docs and I read the, uh, the Postgres documentation, and I think I know a bit about postgres but I don't commit right like so and I also haven't, like, oh, managed X terabytes on one server that you are making sure never goes down kind of deal. But the stickiness for me, at least from when I've run, So I've done a lot of tests with like ZFS and Postgres and like, um, and also like just trying to figure out, and I run Postgres in Kubernetes of course, like on my cluster and a lot of the stuff I found around is, is like fiddly kernel things like sort of base kernel settings that you need to have set. Like, you know, stuff like should you be using transparent huge pages, like stuff like that. But once you have that settled. Containers are just processes with name spacing and resource control, right? Like, that's it. there are some other ins and outs, but for the most part, if you're fine running a process, so people ran processes, right? And they were just completely like unprotected. 
Then people made users for the processes and they limited the users and ran the processes, right? Then the next step is now you can run a process and then do the limiting the name spaces in cgroups dynamically. Like there, there's, there's sort of not a humongous difference, unless you're hitting something very specific. Uh, but yeah, databases have been a point of contention, but I think, Kelsey Hightower had that tweet yeah. That was like, um, don't run databases in Kubernetes. And I think he called it back. [00:04:56] Victor: I don't know, but I, I know that was uh, was one of those things that people were really unsure about at first, but then after people sort of like felt it out, they were like, Oh, it's actually fine. Yeah. [00:05:06] Jeremy: Yeah I vaguely remember one of the concerns having to do with persistent storage. Like there were challenges with Kubernetes and needing to keep that storage around and I don't know if that's changed yeah or if that's still a concern. [00:05:18] Victor: Uh, I'd say that definitely has changed. Uh, and it was, it was a concern, depending on where you were. Mostly people who are running AKS or EKS or you know, all those other managed Kubernetes, they're just using EBS or like whatever storage provider is like offering for storage. Most of those people don't actually have that much of a problem with, storage in general. Now, high performance storage is obviously different, right? So like, so you'll, you're gonna have to start doing manual, like local volume management and stuff like that. it was a problem, because obviously CSI (Kubernetes Container Storage Interface) didn't exist for some period of time, and like there was, it was hard to know what to do for if you were just running a Kubernetes cluster. I think a lot of people were just using local, first of all, local didn't even exist for a bit. Um, they were just using host path, right? And just like, Oh, it's on the disk somewhere. Where do we, we have to go get it right? Or we have to like, sort of manage that. So that was something most people weren't ready for, especially if you were just, if you weren't like sort of a, a, a traditional sysadmin and used to doing that stuff. And then of course local volumes came out, but I think they still had to be, um, pre-provisioned. So that's sysadmin stuff that most people, you know, maybe aren't, aren't necessarily ready for. Uh, and then most of the general solutions were slow. So like, I used Longhorn (https://longhorn.io) for a long time and Longhorn, Longhorn's great. And super easy to set up, but it can be slower and you can have some, like, delays in mount time. it wasn't ideal for, for most people. So yeah, I, overall it's true. Databases, Databases in Kubernetes were kind of fraught with peril for a while, but it wasn't for the reason that, it wasn't for the fundamental reason that Kubernetes was just wrong or like, it wasn't the reason most people think of, which is just like, Oh, you're gonna break your database. It's more like, running a database is hard and Kubernetes hasn't solved all the hard problems. Like, cuz that's what Kubernetes does. It basically solves a lot of problems in a very generic way. Right. So it just hadn't solved all those problems yet at this point. I think it's got decent answers on a lot of them. So I, I mean, I don't know. I I do it. Don't, don't take what I'm saying to your, you know, PM meeting or your standup meeting, uh, anyone who's listening. 
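To make the storage discussion concrete: the pattern that has settled out is a StatefulSet whose volumeClaimTemplates ask the cluster's storage layer (Longhorn, a cloud CSI driver, or a local-volume provisioner) for a PersistentVolumeClaim per replica. This is an editorial illustration rather than something from the episode, written with Pulumi's TypeScript Kubernetes provider from the show-notes stack; the "longhorn" storage class and the "pg-credentials" Secret are assumptions.

```ts
// postgres.ts - a single-replica Postgres StatefulSet with its own
// PersistentVolumeClaim, declared via Pulumi's Kubernetes provider.
import * as k8s from "@pulumi/kubernetes";

const labels = { app: "postgres" };

const postgres = new k8s.apps.v1.StatefulSet("postgres", {
  spec: {
    serviceName: "postgres",
    replicas: 1,
    selector: { matchLabels: labels },
    template: {
      metadata: { labels },
      spec: {
        containers: [{
          name: "postgres",
          image: "postgres:15",
          ports: [{ containerPort: 5432 }],
          env: [
            // The postgres image wants an empty data dir, so point PGDATA at a subdirectory.
            { name: "PGDATA", value: "/var/lib/postgresql/data/pgdata" },
            {
              name: "POSTGRES_PASSWORD",
              valueFrom: { secretKeyRef: { name: "pg-credentials", key: "password" } },
            },
          ],
          volumeMounts: [{ name: "data", mountPath: "/var/lib/postgresql/data" }],
        }],
      },
    },
    // The cluster's CSI driver (Longhorn, EBS, local volumes, ...) satisfies this claim.
    volumeClaimTemplates: [{
      metadata: { name: "data" },
      spec: {
        accessModes: ["ReadWriteOnce"],
        storageClassName: "longhorn",
        resources: { requests: { storage: "10Gi" } },
      },
    }],
  },
});
```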
But it's more like if you could solve the problems with databases in the sense before. You could probably solve 'em on Kubernetes now with a good understanding of Kubernetes. Cause at the end of the day, it's all the same stuff. Just Kubernetes makes it a little easier to, uh, do it dynamically. [00:07:50] Jeremy: It sounds like you could do it before, but some of the, I guess the tools or the ways of doing persistent storage were not quite there yet, or they were difficult to use. And so that was why people at the start were like, Okay, maybe it's not a good idea, but, now maybe there's some established practices for how you should run a database in Kubernetes. And I, I suppose the other aspect too is that, like you were saying, Kubernetes is its own thing. You gotta learn Kubernetes and all its intricacies. And then running a database is also its own challenge. So if you stack the two of them together and, and the path was not really clear then maybe at the start it wasn't the best idea. Um, uh, if somebody was going to try it out now, was there like a specific resource you looked at or a specific path to where like okay this is is how I'm going to do it. [00:08:55] Victor: I'll just say what I normally recommend to everybody. Cause it depends on which path you wanna go right? If you wanna go down like running a database path first and figure that out, fill out that skill tree. Like go read the Postgres docs. Well, first of all, use Postgres. That's the first tip there. But like, read those documents. And obviously you don't have to understand everything. You won't understand everything. But knowing the big pieces and sort of letting your brain see the mention of like a whole bunch of things, like what is toast? Oh, you can do compression on columns. Like, you can do some, some things concurrently. Um, you know, what ALTER TABLE looks like. You get all that stuff kind of in your head. Um, and then I personally really believe in sort of learning by building and just like iterating. you won't get it right the first time. It's just like, it's not gonna happen. You're get, you can, you can get better the first time, right? By being really prepared and like, and leave yourself lots of outs, but you kind of have to like, get it out there. Do do your best to make sure that you can't fail, uh, catastrophically, right? So this is like, goes back to that decision to like use ZFS as the bottom of this I'm just like, All right, well, I, I'm not a file systems expert, but if I. I could delegate some of that, you know, some of that, I can get some of that knowledge from someone else. Um, and I can make it easier for me to not fail catastrophically. For the database side, actually read documentation on Postgres or the whatever database you're going to use, make sure you at least understand that. Then start running it like locally or whatever. Again, Docker use, use Docker locally. It's, it's, it's fine. and then, you know, sort of graduate to running sort of more progressively, more complicated versions. what I would say for the Kubernetes side is actually similar. the Kubernetes docs are really good. they're very large. but they're good. So you can actually go through and know all the, like, workload, workload resources, know, like what a config map is, what a secret is, right? Like what etcd is doing in this whole situation. you know, what a kublet is versus an API server, right? 
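Stepping back to the Postgres-first advice at the start of this answer (and the full-text-search links in the show notes): the built-in tsvector/tsquery machinery covers a surprising amount of search before you reach for OpenSearch or Meilisearch. A small hedged sketch using the node-postgres (`pg`) client against a hypothetical posts table and a local Postgres such as the Docker container Victor suggests starting with:

```ts
// fts.ts - Postgres full-text search from TypeScript with node-postgres.
import { Client } from "pg";

async function searchPosts(term: string) {
  const db = new Client({
    connectionString: "postgres://postgres:postgres@localhost:5432/app",
  });
  await db.connect();

  // One-time setup: a GIN index over the tsvector, built without blocking writes.
  // (CREATE INDEX CONCURRENTLY must run outside a transaction block.)
  await db.query(
    `CREATE INDEX CONCURRENTLY IF NOT EXISTS posts_body_fts
       ON posts USING GIN (to_tsvector('english', body))`
  );

  // Query: does the document's tsvector match the parsed search terms?
  const { rows } = await db.query(
    `SELECT id, title,
            ts_rank(to_tsvector('english', body), plainto_tsquery('english', $1)) AS rank
       FROM posts
      WHERE to_tsvector('english', body) @@ plainto_tsquery('english', $1)
      ORDER BY rank DESC
      LIMIT 10`,
    [term]
  );

  await db.end();
  return rows;
}
```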
Like the, the general stuff, like if you go through all that, you should have like a whole bunch of ideas at least floating around in your head. And then once you try and start setting up a server, they will all start to pop up again, right? And they'll all start to like, you, like, Oh, okay, I need a CNI (Container Networking) plugin because something needs to make the services available, right? Or something needs to power the ingress, right? Like, if I wanna be able to get traffic, I need an ingress object. But what listens, what does that, what makes that ingress object do anything? Oh, it's an ingress controller. nginx, you know, almost everyone's heard of nginx, so they're like, okay. Um, nginx, has an ingress control. Actually there's, there used to be two, I assume there's still two, but there's like one that's maintained by Kubernetes, one that's maintained by nginx, the company or whatever. I use traefik, it's fantastic. but yeah, so I think those things kind of fall out and that is almost always my first way to explain it and to start building. And tinkering iteratively. So like, read the documentation, get a good first grasp of it, and then start building yourself because you'll, you'll get way more questions that way. Like, you'll ask way more questions, you won't be able to make progress. Uh, and then of course you can, you know, hop into slacks or like start looking around and, and searching on the internet. oh, one of the things that really helped me out early learning Kubernetes was, Kelsey Hightower's, um, learn Kubernetes the hard way. I'm also a big believer in doing things the hard way, at least knowing what you're choosing to not know, right? distributing file system, Deltas, right? Or like changes to a file system over the network is not a new problem. Other people have solved it. There's a lot of complexity there. but if you at least know the sort of surface level of what the thing does and what it's supposed to do and how it's supposed to do it, you can make a decision on, Oh, how deep am I going to go? Right? To prevent yourself from like, making a mistake or going too deep in the rabbit hole. If you have an idea of the sort of ecosystem and especially like, Oh, here, like the basics of how I can use this thing, that's generally very good. And doing things the hard way is a great way to get a, a feel for that, right? Cause if you take some chunk and like, you know, the first level of doing things the hard way, uh, or, you know, Kelsey Hightower's guide is like, get a machine, right? Like, so, like, if you somehow were like, Oh, I wanna run a Kubernetes cluster. but, you know, I don't want use necessarily EKS and you wanna learn it the hard way. You have to go get a machine, right? If you, if you're not familiar, if you run on Heroku the whole time, like you didn't manage your own machines, you gotta go like, figure out EC2, right? Or, I personally use, hetzner I love hetzner, so you have to go figure out hetzner, digital ocean, whatever. Right. And then the next thing's like, you know, the guide's changed a lot, and I haven't, I haven't looked at it in like, in years, actually a while since I, since I've sort of been, I guess living it, but it's, it's like generate certificates, right? So if you've never dealt with SSL and like, sort of like, or I should say TLS uh, and generating certificates and how that whole dance works, right? 
Which is fascinating because it's like, oh, right, nothing's secure on the internet, except that we distribute root certificates on computers that are deployed in every OS, right? Like, that's a sort of fundamental understanding you may not go deep enough to realize, but if you are fascinated by it, trying to do it manually would lead you down that path. You'd be like, Oh, what, like what is this thing? What is a CSR? Like, why, who is signing my request? Right? And it's like, why do we trust those people? Right? And it's like, you know, that kind of thing comes out and I feel like you can only get there from trying to do it, you know, answering the questions you can. Right. And again, it takes some judgment to know when you should not go down a rabbit hole. uh, and then iterating. of course there are people who are excellent at explaining. you can find some resources that are shortcuts. But, uh, I think particularly my bread and butter has been just to try and do it the hard way. Avoid pitfalls or like rabbit holes when you can. But know that the rabbit hole is there, and then keep going. And sometimes if something's just too hard, you're not gonna get it the first time. Like maybe you'll have to wait like another three months, you'll try again and you'll know more sort of ambiently about everything else. You get a little further that time. that's how I feel about that. Anyway. [00:15:06] Jeremy: That makes sense to me. I think sometimes when people take on a project, they try to learn too many things at the same time. I, I think the example of Kubernetes and Postgres is pretty good example, where if you're not familiar with how do I install Postgres on bare metal or a vm, trying to make sense of that while you're trying to into is probably gonna be pretty difficult. So, so splitting them up and learning them individually, that makes a lot of sense to me. And the whole deciding how deep you wanna go. That's interesting too, because I think that's very specific to the person right because sometimes you wanna go a little deeper because otherwise you don't understand how the two things connect together. But other times it's just like with the example with certificates, some people they may go like, I just put in let's encrypt it gives me my cert I don't care right then, and then, and some people they wanna know like okay how does the whole certificate infrastructure work which I think is interesting, depending on who you are, maybe you go ahh maybe it doesn't really matter right. [00:16:23] Victor: Yeah, and, you know, shout out to Let's Encrypt . It's, it's amazing, right? think Singlehandedly the most, most of the deployment of HTTPS that happens these days, right? so many so many of like internet providers and uh, sort of service providers will use it right? Under the covers. Like, Hey, we've got you free SSL through Let's Encrypt, right? Like, kind of like under the, under the covers. which is awesome. And they, and they do it. So if you're listening to this, donate to them. I've done it. So now that, now the pressure is on whoever's listening, but yeah, and, and I, I wanna say I am that person as well, right? Like, I use, Cert Manager on my cluster, right? So I'm just like, I don't wanna think about it, but I, you know, but I, I feel like I thought about it one time. I have a decent grasp. If something changes, then I guess I have to dive back in. I think it, you've heard the, um, innovation tokens idea, right? I can't remember the site. 
It's like, um, do, like do boring tech or something.com (https://boringtechnology.club/) . Like it shows up on sort of hacker news from time to time, essentially. But it's like, you know, you have a certain amount of tokens and sort of, uh, we'll call them tokens, but tolerance for complexity or tolerance for new, new ideas or new ways of doing things, new processes. Uh, and you spend those as you build any project, right? you can be devastatingly effective by just sticking to the stack, you know, and not introducing anything new, even if it's bad, right? and there's nothing wrong with LAMP stack, I don't wanna annoy anybody, but like if you, if you're running LAMP or if you run on a hostgator, right? Like, if you run on so, you know, some, some service that's really old but really works for you isn't, you know, too terribly insecure or like, has the features you need, don't learn Kubernetes then, right? Especially if you wanna go fast. cuz you, you're spending tokens, right? You're spending, essentially brain power, right? On learning whatever other thing. So, but yeah, like going back to that, databases versus databases on Kubernetes thing, you should probably know one of those before you, like, if you're gonna do that, do that thing. You either know Kubernetes and you like, at least feel comfortable, you know, knowing Kubernetes extremely difficult obviously, but you feel comfortable and you feel like you can debug. Little bit of a tangent, but maybe that's even a better, sort of watermark if you know how to debug a thing. If, if it's gone wrong, maybe one or five or 10 or 20 times and you've gotten out. Not without documentation, of course, cuz well, if you did, you're superhuman. But, um, but you've been able to sort of feel your way out, right? Like, Oh, this has gone wrong and you have enough of a model of the system in your head to be like, these are the three places that maybe have something wrong with them. Uh, and then like, oh, and then of course it's just like, you know, a mad dash to kind of like, find, find the thing that's wrong. You should have confidence about probably one of those things before you try and do both when it's like, you know, complex things like databases and distributed systems management, uh, and orchestration. [00:19:18] Jeremy: That's, that's so true in, in terms of you are comfortable enough being able to debug a problem because it's, I think when you are learning about something, a lot of times you start with some kind of guide or some kind of tutorial and you follow the steps. And if it all works, then great. Right? But I think it's such a large leap from that to something went wrong and I have to figure it out. Right. Whether it's something's not right in my Dockerfile or my postgres instance uh, the queries are timing out. so many things that could go wrong, that is the moment where you're forced to figure out, okay, what do I really know about this not thing? [00:20:10] Victor: Exactly. Yeah. Like the, the rubber's hitting the road it's uh you know the car's about to crash or has already crashed like if I open the bonnet, do I know what's happening right or am I just looking at (unintelligible). And that's, it's, I feel sort a little sorry or sad for, for devs that start today because there's so much. Complexity that's been built up. And a lot of it has a point, but you need to kind of have seen the before to understand the point, right? So I like, I like to use front end as an example, right? 
Like the front end ecosystem is crazy, and it has been crazy for a very long time, but the steps are actually usually logical, right? Like, so like you start with, you know, HTML, CSS and JavaScript, just plain, right? And like, and you can actually go in lots of directions. Like HTML has its own thing. CSS has its own sort of evolution sort of thing. But if we look at JavaScript, you're like, you're just writing JavaScript on every page, right? And like, just like putting in script tags and putting in whatever, and it's, you get spaghetti, you get spaghetti, you start like writing, copying the same function on multiple pages, right? You just, it, it's not good. So then people, people make jquery, right? And now, now you've got like a, a bundled set of like good, good defaults that you can, you can go for, right? And then like, you know, libraries like underscore come out for like, sort of like not dom related stuff that you do want, you do want everywhere. and then people go from there and they go to like backbone or whatever. it's because Jquery sort of also becomes spaghetti at some point and it becomes hard to manage and people are like, Okay, we need to sort of like encapsulate this stuff somehow, right? And like the new tools or whatever is around at the same timeframe. And you, you, you like backbone views for example. and you have people who are kind of like, ah, but that's not really good. It's getting kind of slow. Uh, and then you have, MVC stuff comes out, right? Like Angular comes out and it's like, okay, we're, we're gonna do this thing called dirty checking, and it's gonna be, it's gonna be faster and it's gonna be like, it's gonna be less sort of spaghetti and it's like a little bit more structured. And now you have sort of like the rails paradigm, but on the front end, and it takes people to get a while to get adjusted to that, but then that gets too heavy, right? And then dirty checking is realized to be a mistake. And then, you get stuff like MVVM, right? So you get knockout, like knockout js and you got like Durandal, and like some, some other like sort of front end technologies that come up to address that problem. Uh, and then after that, like, you know, it just keeps going, right? Like, and if you come in at the very end, you're just like, What is happening? Right? Like if it, if it, if someone doesn't sort of boil down the complexity and reduce it a little bit, you, you're just like, why, why do we do this like this? Right? and sometimes there's no good reason. Sometimes the complexity is just like, is unnecessary, but having the steps helps you explain it, uh, or helps you understand how you got there. and, and so I feel like that is something younger people or, or newer devs don't necessarily get a chance to see. Cause it just, it would take, it would take very long right? And if you're like a new dev, let's say you jumped into like a coding bootcamp. I mean, I've got opinions on coding boot camps, but you know, it's just like, let's say you jumped into one and you, you came out, you, you made it. It's just, there's too much to know. sure, you could probably do like HTML in one month. Well, okay, let's say like two weeks or whatever, right? If you were, if you're literally brand new, two weeks of like concerted effort almost, you know, class level, you know, work days right on, on html, you're probably decently comfortable with it. Very comfortable. CSS, a little harder because this is where things get hard. 
Cause if you, if you give two weeks for, for HTML, CSS is harder than HTML kind of, right? Because the interactions are way more varied. Right? Like, and, and maybe it's one of those things where you just, like, you, you get somewhat comfortable and then just like know that in the future you're gonna see something you don't understand and have to figure it out. Uh, but then JavaScript, like, how many months do you give JavaScript? Because if you go through that first like, sort of progression that I, I I, I, I mentioned everyone would have a perfect sort of, not perfect but good understanding of the pieces, right? Like, why did we start transpiling at all? Right? Like, uh, or why did you know, why did we adopt libraries? Like why did Bower exist? No one talks about Bower anymore, obviously, but like, Bower was like a way to distribute front end only packages, right? Um, what is it? Um, Uh, yes, there's grunt. There's like the whole build system thing, right? Once, once we decide we're gonna, we're gonna do stuff to files before we, before we push. So there's grunt, there's, uh, gulp, which is like grunt, but like, Oh, we're gonna do it all in memory. We're gonna pipe, we're gonna use this pipes thing to make sure everything goes fast. then there's like, of course that leads like the insanity that's webpack. And then there's like parcel, which did better. There's vite there's like, there's all this, there's this progression, but how many months would it take to know that progression? It, it's too long. So they end up just like, Hey, you're gonna learn react. Which is the right thing because it's like, that's what people hire for, right? But then you're gonna be in react and be like, What's webpack, right? And it's like, but you can't go down. You can't, you don't have the time. You, you can't sort of approach that problem from the other direction where you, which would give you better understanding cause you just don't have the time. I think it's hard for newer devs to overcome this. Um, but I think there are some, there's some hope on the horizon cuz some things are simpler, right? Like some projects do reduce complexity, like, by watching another project sort of innovate so like react. Wasn't the first component, first framework, right? Like technically, I, I think, I think you, you might have to give that to like, to maybe backbone because like they had views and like marionette also went with that. Like maybe, I don't know, someone, someone I'm sure will get in like, send me an angry email, uh, cuz I forgot you Moo tools or like, you know, Ember Ember. They've also, they've also been around, I used to be a huge Ember fan, still, still kind of am, but I don't use it. but if you have these, if you have these tools, right? Like people aren't gonna know how to use them and Vue was able to realize that React had some inefficiencies, right? So React innovates the sort of component. So Reintroduces the component based model component first, uh, front end development model. Vue sees that and it's like, wait a second, if we just export this like data object, and of course that's not the only innovation of Vue, but if we just export this data object, you don't have to do this fine grained tracking yourself anymore, right? You don't have to tell React or tell your the system which things change when other things change, right? Like you, you don't have to set up this watching and stuff, right? 
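What Victor is getting at with "export this data object" is Vue's automatic dependency tracking: you mutate plain-looking state and Vue works out what depended on it. A tiny sketch of that idea, using Vue 3's standalone reactivity API rather than the Vue 2 options object he would have first seen, so an illustration of the concept rather than a quote of any particular version's API:

```ts
// reactivity.ts - "just export a data object" shown with Vue 3's reactivity API.
import { reactive, watchEffect } from "vue";

const state = reactive({ count: 0 });

// The effect re-runs whenever any reactive value it read changes.
// We never tell Vue *which* property to watch - it tracks that itself.
// (flush: "sync" makes re-runs immediate for this demo; the default batches them.)
watchEffect(() => {
  console.log(`count is now ${state.count}`); // runs once right away: "count is now 0"
}, { flush: "sync" });

state.count++; // logs "count is now 1" - no setState, no manual subscription
state.count++; // logs "count is now 2"
```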
Um, and that's one of the reasons, like Vue is just, I, I, I remember picking up Vue and being like, Oh, I'm done. I'm done with React now. Because it just doesn't make sense to use React because they Vue essentially either, you know, you could just say they learned from them or they, they realize a better way to do things that is simpler and it's much easier to write. Uh, and you know, functionally similar, right? Um, similar enough that it's just like, oh they boil down some of that complexity and we're a step forward and, you know, in other ways, I think. Uh, so that's, that's awesome. Every once in a while you get like a compression in the complexity and then it starts to ramp up again and you get maybe another compression. So like joining the projects that do a compression. Or like starting to adopting those is really, can be really awesome. So there's, there's like, there's some hope, right? Cause sometimes there is a compression in that complexity and you you might be lucky enough to, to use that instead of, the thing that's really complex after years of building on it. [00:27:53] Jeremy: I think you're talking about newer developers having a tough time making sense of the current frameworks but the example you gave of somebody starting from HTML and JavaScript going to jquery backbone through the whole chain, that that's just by nature of you've put in a lot of time right you've done a lot of work working with each of these technologies you see the progression as if someone is starting new just by nature of you being new you won't have been able to spend that time [00:28:28] Victor: Do you think it could work? again, the, the, the time aspect is like really hard to get like how can you just avoid spending time um to to learn things that's like a general problem I think that problem is called education in the general sense. But like, does it make sense for a, let's say a bootcamp or, or any, you know, school right? To attempt to guide people through the previous solutions that didn't work, right? Like in math, you don't start with calculus, right? It just wouldn't, it doesn't make sense, right? But we try and start with calculus in software, right? We're just like, okay, here's the complexity. You've got all of it. Don't worry. Just look at this little bit. If, you know, if the compiler ever spits out a weird error uh oh, like, you're, you're, you're in for trouble cuz you, you just didn't get the. get the basics. And I think that's maybe some of what is missing. And the thing is, it is like the constraints are hard, right? No one has infinite time, right? Or like, you know, even like, just tons of time to devote to learning, learning just front end, right? That's not even all of computing, That's not even the algorithm stuff that some companies love to throw at you, right? Uh, or the computer sciencey stuff. I wonder if it makes more sense to spend some time taking people through the progression, right? Because discovering that we should do things via components, let's say, or, or at least encapsulate our functionality to components and compose that way, is something we, we not everyone knew, right? Or, you know, we didn't know wild widely. And so it feels like it might make sense to touch on that sort of realization and sort of guide the student through, you know, maybe it's like make five projects in a week and you just get progressively more complex. But then again, that's also hard cause effort, right? It's just like, it's a hard problem. 
But, but I think right now, uh, people who come in at the end and sort of like see a bunch of complexity and just don't know why it's there, right? Like, if you've like, sort of like, this is, this applies also very, this applies to general, but it applies very well to the Kubernetes problem as well. Like if you've never managed nginx on more than one machine, or if you've never tried to set up a, like a, to format your file system on the machine you just rented because it just, you know, comes with nothing, right? Or like, maybe, maybe some stuff was installed, but, you know, if you had to like install LVM (Logical Volume Manager) yourself, if you've never done any of that, Kubernetes would be harder to understand. It's just like, it's gonna be hard to understand. overlay networks are hard for everyone to understand, uh, except for network people who like really know networking stuff. I think it would be better. But unfortunately, it takes a lot of time for people to take a sort of more iterative approach to, to learning. I try and write blog posts in this way sometimes, but it's really hard. And so like, I'll often have like an idea, like, so I call these, or I think of these as like onion, onion style posts, right? Where you either build up an onion sort of from the inside and kind of like go out and like add more and more layers or whatever. Or you can, you can go from the outside and sort of take off like layers. Like, oh, uh, Kubernetes has a scheduler. Why do they need a scheduler? Like, and like, you know, kind of like, go, go down. but I think that might be one of the best ways to learn, but it just takes time. Or geniuses and geniuses who are good at two things, right? Good at the actual technology and good at teaching. Cuz teaching is a skill and it's very hard. and, you know, shout out to teachers cuz that's, it's, it's very difficult, extremely frustrating. it's hard to find determinism in, in like methods and solutions. And there's research of course, but it's like, yeah, that's, that's a lot harder than the computer being like, Nope, that doesn't work. Right? Like, if you can't, if you can't, like if you, if the function call doesn't work, it doesn't work. Right. If the person learned suboptimally, you won't know Right. Until like 10 years down the road when, when they can't answer some question or like, you know, when they, they don't understand. It's a missing fundamental piece anyway. [00:32:24] Jeremy: I think with the example of front end, maybe you don't have time to walk through the whole history of every single library and framework that came but I think at the very least, if you show someone, or you teach someone how to work with css, and you have them, like you were talking about components before you have them build a site where there's a lot of stuff that gets reused, right? Maybe you have five pages and they all have the same nav bar. [00:33:02] Victor: Yeah, you kind of like make them do it. [00:33:04] Jeremy: Yeah. You make 'em do it and they make all the HTML files, they copy and paste it, and probably your students are thinking like, ah, this, this kind of sucks [00:33:16] Victor: Yeah [00:33:18] Jeremy: And yeah, so then you, you come to that realization, and then after you've done that, then you can bring in, okay, this is why we have components. And similarly you brought up, manual dom manipulation with jQuery and things like that. I, I'm sure you could come up with an example of you don't even necessarily need to use jQuery. 
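Jeremy's nav-bar exercise translates almost directly into code: once the same markup lives on five pages, pulling it into a single function, still using nothing but the DOM APIs that ship with the browser (as he says next), is the first taste of "components". A hypothetical sketch, not from the episode:

```ts
// navbar.ts - one step past copy-paste: every page calls the same function
// instead of duplicating the markup. Plain browser APIs only, no jQuery.
function renderNavBar(activeHref: string): void {
  const nav = document.getElementById("nav");
  if (nav === null) return;

  nav.innerHTML = `
    <a href="/index.html">Home</a>
    <a href="/about.html">About</a>
    <a href="/contact.html">Contact</a>
  `;

  // Highlight the current page by toggling a class on the matching link.
  for (const link of Array.from(nav.querySelectorAll("a"))) {
    link.classList.toggle("active", link.getAttribute("href") === activeHref);
  }
}

// Each of the five pages includes <div id="nav"></div> and calls:
renderNavBar("/about.html");
```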
I think people can probably skip that step and just use the the, the API that comes with the browser. But you can have them go in like, Oh, you gotta find this element by the id and you gotta change this based on this, and let them experience the. I don't know if I would call it pain, but let them experience like how it was. Right. And, and give them a complex enough task where they feel like something is wrong right. Or, or like, there, should be something better. And then you can go to you could go straight to vue or react. I'm not sure if we need to go like, Here's backbone, here's knockout. [00:34:22] Victor: Yeah. That's like historical. Interesting. [00:34:27] Jeremy: I, I think that would be an interesting college course or something that. Like, I remember when, I went through school, one of the classes was programming languages. So we would learn things like, Fortran and stuff like that. And I, I think for a more frontend centered or modern equivalent you could go through, Hey, here's the history of frontend development here's what we used to do and here's how we got to where we are today. I think that could be actually a pretty interesting class yeah [00:35:10] Victor: I'm a bit interested to know you learned fortran in your PL class. I, think when I went, I was like, lisp and then some, some other, like, higher classes taught haskell but, um, but I wasn't ready for haskell, not many people but fortran is interesting, I kinda wanna hear about that. [00:35:25] Jeremy: I think it was more in terms of just getting you exposed to historically this is how things were. Right. And it wasn't so much of like, You can take strategies you used in Fortran into programming as a whole. I think it was just more of like a, a survey of like, Hey, here's, you know, here's Fortran and like you were saying, here's Lisp and all, all these different languages nd like at least you, you get to see them and go like, yeah, this is kind of a pain. [00:35:54] Victor: Yeah [00:35:55] Jeremy: And like, I understand why people don't choose to use this anymore but I couldn't take away like a broad like, Oh, I, I really wish we had this feature from, I think we were, I think we were using Fortran 77 or something like that. I think there's Fortran 77, a Fortran 90, and then there's, um, I think, [00:36:16] Victor: Like old fortran, deprecated [00:36:18] Jeremy: Yeah, yeah, yeah. So, so I think, I think, uh, I actually don't know if they're, they're continuing to, um, you know, add new things or maintain it or it's just static. But, it's, it's more, uh, interesting in terms of, like we were talking front end where it's, as somebody who's learning frontend development who is new and you get to see how, backbone worked or how Knockout worked how grunt and gulp worked. It, it's like the kind of thing where it's like, Oh, okay, like, this is interesting, but let us not use this again. Right? [00:36:53] Victor: Yeah. Yeah. Right. But I also don't need this, and I will never again [00:36:58] Jeremy: yeah, yeah. It's, um, but you do definitely see the, the parallels, right? Like you were saying where you had your, your Bower and now you have NPM and you had Grunt and Gulp and now you have many choices [00:37:14] Victor: Yeah. [00:37:15] Jeremy: yeah. I, I think having he history context, you know, it's interesting and it can be helpful, but if somebody was. Came to me and said hey I want to learn how to build websites. I get into front end development. I would not be like, Okay, first you gotta start moo tools or GWT. 
I don't think I would do that but it I think at a academic level or just in terms of seeing how things became the way they are sure, for sure it's interesting. [00:37:59] Victor: Yeah. And I, I, think another thing I don't remember who asked or why, why I had to think of this lately. um but it was, knowing the differentiators between other technologies is also extremely helpful right? So, What's the difference between ES build and SWC, right? Again, we're, we're, we're leaning heavy front end, but you know, just like these, uh, sorry for context, of course, it's not everyone a front end developer, but these are two different, uh, build tools, right? For, for JavaScript, right? Essentially you can think of 'em as transpilers, but they, I think, you know, I think they also bundle like, uh, generally I'm not exactly sure if, if ESbuild will bundle as well. Um, but it's like one is written in go, the other one's written in Rust, right? And sort of there's, um, there's, in addition, there's vite which is like vite does bundle and vite does a lot of things. Like, like there's a lot of innovation in vite that has to have to do with like, making local development as fast as possible and also getting like, you're sort of making sure as many things as possible are strippable, right? Or, or, or tree shakeable. Sorry, is is is the better, is the better term. Um, but yeah, knowing, knowing the, um, the differences between projects is often enough to sort of make it less confusing for me. Um, as far as like, Oh, which one of these things should I use? You know, outside of just going with what people are recommending. Cause generally there is some people with wisdom sometimes lead the crowd sometimes, right? So, so sometimes it's okay to be, you know, a crowd member as long as you're listening to the, to, to someone worth listening to. Um, and, and so yeah, I, I think that's another thing that is like the mark of a good project or, or it's not exclusive, right? It's not, the condition's not necessarily sufficient, but it's like a good projects have the why use this versus x right section in the Readme, right? They're like, Hey, we know you could use Y but here's why you should use us instead. Or we know you could use X, but here's what we do better than X. That might, you might care about, right? That's, um, a, a really strong indicator of a project. That's good cuz that means the person who's writing the project is like, they've done this, the survey. And like, this is kind of like, um, how good research happens, right? It's like most of research is reading what's happening, right? To knowing, knowing the boundary you're about to push, right? Or try and sort of like push one, make one step forward in, um, so that's something that I think the, the rigor isn't in necessarily software development everywhere, right? Which is good and bad. but someone who's sort of done that sort of rigor or, and like, and, and has, and or I should say, has been rigorous about knowing the boundary, and then they can explain that to you. They can be like, Oh, here's where the boundary was. These people were doing this, these people were doing this, these people were doing this, but I wanna do this. So you just learned now whether it's right for you and sort of the other points in the space, which is awesome. Yeah. 
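On the esbuild/SWC comparison: the practical differentiator Victor is pointing at is easiest to see from the API surface. Below is a minimal sketch of esbuild's JavaScript API (the file names are hypothetical); the summary of how each tool is typically used is the editor's gloss, not a quote from the episode.

```ts
// build.ts - driving esbuild from its JavaScript API (options trimmed).
// esbuild (written in Go) both transpiles TypeScript and bundles;
// SWC (written in Rust) is mostly used as a transpiler, e.g. underneath Next.js.
import { build } from "esbuild";

await build({
  entryPoints: ["src/app.ts"], // hypothetical entry file
  bundle: true,                // follow imports and emit a single file
  minify: true,
  sourcemap: true,
  target: ["es2020"],
  outfile: "dist/app.js",
});
```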
Going to your point, I feel like that's also important. It's probably not a good idea to try and get everyone to go through historical artifacts, but just a quick explainer and a note on the differentiation could help, for sure. Yeah. I feel like we've skewed too much frontend. No more frontend discussion past this point. [00:41:20] Jeremy: It's just that I think there are so many more choices, where the mental effort that has to go into "Okay, what do I use next?" feels bigger on frontend. I guess it depends on the project you're working on, but if you're going to work on anything front end and you haven't done it before, or you don't have a lot of experience, there are so many build tools, so many frameworks, so many libraries that... yeah. [00:41:51] Victor: Iterate, yeah, in every direction. It's good and bad, but frontend just goes in every direction at the same time. There are so many people who are so enthusiastic and so committed, and it's so approachable, that everyone just goes in every direction at once, and a lot of people make progress, and then unfortunately you have to try and pick which branch makes sense. [00:42:20] Jeremy: We've been kind of talking about some of your experiences with a few things, and I wonder if you could explain the context you're thinking of in terms of the types of projects you typically work on. What are they, what's the scale of them, that sort of thing. [00:42:32] Victor: So I guess I've gone through a lot of phases in what I use in my tooling and what I thought was cool. I wrote enterprise Java like everybody else. Nobody really talks about it, but at some point it was like you're either a Rails shop or a Java shop, for so many people. And I wrote enterprise Java for a long time, and I was lucky enough to have friends who were really into other kinds of computing and other kinds of programming. A lot of my projects were wrapped around ideas that I was expressing via some new technology, let's say. So I wrote a lot of Haskell for a while, right? But what did I end up building with it? A job board that honestly didn't go very far, because I was spending much more time doing Haskell things. And so I learned a lot about what I think is the pinnacle of type-driven development in the non-research world, right on the edge of research and actual usability. But a lot of my ideas, getting back to the ideas question, are just things I want to build for myself, or things I think could be commercially viable, or well used and profitable, things that I think should be built. Or if I see some projects and think, oh, I wish they were doing this in this way, I often consider whether I could build something separate, inspired by other projects I should say, which also makes me understand a different ecosystem. But a lot of times I have to say the stuff I build is mostly to scratch an itch I have, and/or something I think would be profitable, or utilizing technology that I've seen that I don't think anyone's done in the same way. Right?
So like learning Kubernetes for example, or like investing the time to learn Kubernetes opened up an entire world of sort of like infrastructure ideas, right? Because like the leverage you get is so high, right? So you're just like, Oh, I could run an aws, right? Like now that I, now that I know this cuz it's like, it's actually not bad, it's kind of usable. Like, couldn't I do that? Right? That kind of thing. Right? Or um, I feel like a lot of the times I'll learn a technology and it'll, it'll make me feel like certain things are possible that they, that weren't before. Uh, like Rust is another one of those, right? Like, cuz like Rust will go from like embedded all the way to WASM, which is like a crazy vertical stack. Right? It's, that's a lot, That's a wide range of computing that you can, you can touch, right? And, and there's, it's, it's hard to learn, right? The, the, the, the, uh, the, the ramp to learning it is quite steep, but, it opens up a lot of things you can write, right? It, it opens up a lot of areas you can go into, right? Like, if you ever had an idea for like a desktop app, right? You could actually write it in Rust. There's like, there's, there's ways, there's like is and there's like, um, Tauri is one of my personal favorites, which uses web technology, but it's either I'm inspired by some technology and I'm just like, Oh, what can I use this on? And like, what would this really be good at doing? or it's, you know, it's one of those other things, like either I think it's gonna be, Oh, this would be cool to build and it would be profitable. Uh, or like, I'm scratching my own itch. Yeah. I think, I think those are basically the three sources. [00:46:10] Jeremy: It's, it's interesting about Rust where it seems so trendy, I guess, in lots of people wanna do something with rust, but then in a lot of they also are not sure does it make sense to write in rust? Um, I, I think the, the embedded stuff, of course, that makes a lot of sense. And, uh, you, you've seen a sort of surge in command line apps, stuff ripgrep and ag, stuff like that, and places like that. It's, I think the benefits are pretty clear in terms of you've got the performance and you have the strong typing and whatnot and I think where there's sort of the inbetween section that's kind of unclear to me at least would I build a web application in rust I'm not sure that sort of thing [00:47:12] Victor: Yeah. I would, I characterize it as kind of like, it's a tool toolkit, so it really depends on the problem. And think we have many tools that there's no, almost never a real reason to pick one in particular right? Like there's, Cause it seems like just most of, a lot of the work, like, unless you're, you're really doing something interesting, right? Like, uh, something that like, oh, I need to, I need to, like, I'm gonna run, you know, billions and billions of processes. Like, yeah, maybe you want erlang at that point, right? Like, maybe, maybe you should, that should be, you know, your, your thing. Um, but computers are so fast these days, and most languages have, have sort of borrowed, not borrowed, but like adopted features from others that there's, it's really hard to find a, a specific use case, for one particular tool. Uh, so I often just categorize it by what I want out of the project, right? Or like, either my goals or project goals, right? Depending on, and, or like business goals, if you're, you know, doing this for a business, right? 
Um, so like, uh, I, I basically, if I want to go fast and I want to like, you know, reduce time to market, I use type script, right? Oh, and also I'm a, I'm a, like a type zealot. I, I'd say so. Like, I don't believe in not having types, right? Like, it's just like there's, I think it's crazy that you would like have a function but not know what the inputs could be. And they could actually be anything, right? , you're just like, and then you have to kind of just keep that in your head. I think that's silly. Now that we have good, we, we have, uh, ways to avoid the, uh, ceremony, right? You've got like hindley Milner type systems, like you have a way to avoid the, you can, you know, predict what types of things will be, and you can, you don't have to write everything everywhere. So like, it's not that. But anyway, so if I wanna go fast, the, the point is that going back to that early, like the JS ecosystem goes everywhere at the same time. Typescript is excellent because the ecosystem goes everywhere at the same time. And so you've got really good ecosystem support for just about everything you could do. Um, uh, you could write TypeScript that's very loose on the types and go even faster, but in general it's not very hard. There's not too much ceremony and just like, you know, putting some stuff that shows you what you're using and like, you know, the objects you're working with. and then generally if I wanna like, get it really right, I I'll like reach for haskell, right? Cause it's just like the sort of contortions, and again, this takes time, this not fast, but, right. the contortions you can do in the type system will make it really hard to write incorrect code or code that doesn't, that isn't logical with itself. Of course interfacing with the outside world. Like if you do a web request, it's gonna fail sometimes, right? Like the network might be down, right? So you have to, you basically pull that, you sort of wrap that uncertainty in your system to whatever degree you're okay with. And then, but I know it'll be correct, right? But and correctness is just not important. Most of like, Oh, I should , that's a bad quote. Uh, it's not that correct is not important. It's like if you need to get to market, you do not necessarily need every single piece of your code to be correct, Right? If someone calls some, some function with like, negative one and it's not an important, it's not tied to money or it's like, you know, whatever, then maybe it's fine. They just see an error and then like you get an error in your back and you're like, Oh, I better fix that. Right? Um, and then generally if I want to be correct and fast, I choose rust these days. Right? Um, these days. and going back to your point, a lot of times that means that I'm going to write in Typescript for a lot of projects. So that's what I'll do for a lot of projects is cuz I'll just be like, ah, do I need like absolute correctness or like some really, you know, fancy sort of type stuff. No. So I don't pick haskell. Right. And it's like, do I need to be like mega fast? No, probably not. Cuz like, cuz so I don't necessarily don't necessarily need rust. Um, maybe it's interesting to me in terms of like a long, long term thing, right? Like if I, if I'm think, oh, but I want x like for example, tight, tight, uh, integration with WASM, for example, if I'm just like, oh, I could see myself like, but that's more of like, you know, for a fun thing that I'm doing, right? Like, it's just like, it's, it's, you don't need it. 
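A small illustration of the low-ceremony typing Victor describes earlier: with inference, most annotations can be omitted while misuse is still caught at compile time. The function and values here are made up for the example.

```typescript
// Only the parameter needs an annotation; everything else is inferred.
function totalCents(prices: number[]) {
  // Inferred return type: number.
  return prices.reduce((sum, price) => sum + Math.round(price * 100), 0);
}

const cart = [19.99, 4.5, 0.99];   // inferred as number[]
const total = totalCents(cart);    // inferred as number

// totalCents("19.99");            // rejected at compile time: string is not number[]
console.log(`${total} cents`);     // prints "2548 cents"
```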
You don't, that's premature, like, you know, that's a premature optimization thing. But if I'm just like, ah, I really want the ability to like maybe consider refactoring some of this out into like a WebAssembly thing later, then I'm like, Okay, maybe, maybe I'll, I'll pick Rust. Or like, if I, if I like, I do want, you know, really, really fast, then I'll like, then I'll go Rust. But most of the time it's just like, I want a good ecosystem so I don't have to build stuff myself most of the time. Uh, and you know, type script is good enough. So my stack ends up being a lot of the time just in type script, right? Yeah. [00:52:05] Jeremy: Yeah, I think you've encapsulated the reason why there's so many packages on NPM and why there's so much usage of JavaScript and TypeScript in general is that it, it, it fits the, it's good enough. Right? And in terms of, in terms of speed, like you said, most of the time you don't need of rust. Um, and so typescript I think is a lot more approachable a lot of people have to use it because they do front end work anyways. And so that kinda just becomes the I don't know if I should say the default but I would say it's probably the most common in terms of when somebody's building a backend today certainly there's other languages but JavaScript and TypeScript is everywhere. [00:52:57] Victor: Yeah. Uh, I, I, I, another thing is like, I mean, I'm, of ignored the, like, unreasonable effectiveness of like rails Cause there's just a, there's tons of just like rails warriors out there, and that's great. They're they're fantastic. I'm not a, I'm not personally a huge fan of rails but that's, uh, that's to my own detriment, right? In, in some, in some ways. But like, Rails and Django sort of just like, people who, like, I'm gonna learn this framework it's gonna be excellent. It most, they have a, they have carved out a great ecosystem for themselves. Um, or like, you know, even php right? PHP and like Laravel, or whatever. Uh, and so I'm ignoring those, like, those pockets of productivity, right? Those pockets of like intense productivity that people like, have all their needs met in that same way. Um, but as far as like general, general sort of ecosystem size and speed for me, um, like what you said, like applies to me. Like if I, if I'm just like, especially if I'm just like, Oh, I just wanna build a backend, Like, I wanna build something that's like super small and just does like, you know, maybe a few, a couple, you know, endpoints or whatever and just, I just wanna throw it out there. Right? Uh, I, I will pick, yeah. Typescript. It just like, it makes sense to me. I also think note is a better. VM or platform to build on than any of the others as well. So like, like I, by any of the others, I mean, Python, Perl, Ruby, right? Like sort of in the same class of, of tool. So I I am kind of convinced that, um, Node is better, than those as far as core abilities, right? Like threading Right. Versus the just multi-processing and like, you know, other, other, other solutions and like, stuff like that. So, if you want a boring stack, if I don't wanna use any tokens, right? Any innovation tokens I reach for TypeScript. [00:54:46] Jeremy: I think it's good that you brought up. 
Rails and Django because, personally, I've done work with Rails, and you're right that Rails has so much built in, and the ways to do things are so well established, that your ability to be productive and build something really fast is hard to compete with, at least in my experience, with what's available in the Node ecosystem. On the other hand, I also see what you mean about the runtimes. With Node, you're built on top of V8, and there are so many resources being poured into making it fast and making it run pretty much everywhere. I think you probably don't do too much work with managed services, but if you go to a managed service to run your code, like a platform as a service, they're going to support Node. Will they support your other preferred language? Maybe, maybe not. You know they'll be able to run Node apps. So yeah, I don't know if it will ever happen, or maybe I'm just not familiar with it, but I feel like there isn't a real Rails of JavaScript. [00:56:14] Victor: Yeah, you're totally right. It's weird that there isn't, and I kind of agree with you. There are projects trying it recently. There's Adonis, and there are backends that will also do basic templating, like Nest. NestJS is really excellent, one of the best backend projects out there. But back in the day there were projects like Sails, which was very much trying to do exactly what Rails did, but it just didn't seem to take off and reach that critical mass, possibly because of the size of the ecosystem. Like, how many alternatives to Rails are there? Not many, right? And now, anyway, let's say the rest of them sort of died out over the years. There's also hapi, which was similarly angling itself to be that, but it just never found the traction it needed, or at least never became as widely known as Rails is for the Ruby ecosystem, and never got people to know the magic. Because I feel like you're productive in Rails only when you imbibe the magic, right? You know all the magic context, you know the incantations, and they're comforting to you. You have the convention, and if you're living and breathing the convention, everything's amazing. You can't beat that; you're in the zone. But you need people to get in that zone, and I don't think Node has that. People are just too frazzled. There are too many options, and it's hard to commit. Imagine if you'd committed to Backbone. It's over. Oh, it's not over; I don't want to disparage the Backbone project. I don't use it, but maybe they're still doing stuff, and I'm sure people are still working on it. But it's hard to commit and really imbibe that convention, or make yourself breathe that product, when there are ten products that are kind of similar and could be useful as well. Yeah, I think that's kind of big. It's weird that there isn't a Rails for NodeJS, but people are working on it, obviously. Like I mentioned, Adonis, and there's more.
I'm leaving a bunch of them out, but that's part of the problem. [00:58:52] Jeremy: On one hand, it's really cool that people are trying so many different things, because hopefully they can find something other people wouldn't have thought of if they all stuck with the same framework. But on the other hand, it's... how much time have we spent jumping between all these different frameworks, compared to what we could have had if we had a Rails? [00:59:23] Victor: Yeah, the sort of wasted time is crazy to think about. I do think about that from time to time. And personally, I waste a lot of my own time. Like, just rec

Geeks Of The Valley
#76: Developer Productivity & Technical Challenges in Machine Learning with Bain Capital Ventures' Rak Garg

Geeks Of The Valley

Play Episode Listen Later Nov 7, 2022 31:04


Rak is a Principal at Bain Capital Ventures, which manages over $6b and was an early investor in companies like LinkedIn, Docusign, Docker, and Redis Labs. Rak leads early-stage investments in cybersecurity and developer infrastructure companies. Before investing, he was a product manager at Atlassian, where he led the identity management tool Atlassian Access. LinkedIn: https://www.linkedin.com/in/rakgarg/ Twitter: https://twitter.com/rak_garg --- Support this podcast: https://anchor.fm/geeksofthevalley/support

Tank Talks
How to Develop the Best Buyer Personas in Sales with Leena Joshi of CloseFactor

Tank Talks

Play Episode Listen Later Nov 3, 2022 38:42


What is a buyer persona, and why are they important to your startup? Buyer personas are models of your ideal customer, but what is the best way to generate those personas, and more importantly, are those the right personas to model in the first place? Our guest today is Leena Joshi, Co-Founder and CEO of CloseFactor, a company that helps build better buyer personas. We cover how companies can collect and use data to make their buyer personas more targeted, and how CloseFactor assists in this process. We also talk about building negative buyer personas, and how startups can use all of this to their advantage.

About Leena:
Leena is the Co-founder & CEO of CloseFactor. She is an enterprise software GTM veteran with over 20 years of experience spanning product management, product marketing, inside sales, corporate marketing and business operations at Splunk, VMware, Redis Labs, Intel, and advanced AI company Petuum.

A word from our sponsor:
At Ripple, we manage all of our fund expenses and employee credit cards using Jeeves. The team at Jeeves helped get me and my team set up with physical and virtual credit cards in days. I was able to allow my teammates to expense items in multiple currencies, letting them pay for anything, anywhere, at any time. We weren't asked for any personal guarantees or to pay any setup or monthly SaaS fees. Not only does Jeeves save us time, but they also give us cash back on our purchases, including expenses like Google, Facebook, or AWS, every month. New users can earn up to 3% cashback for their first 90 days. The best part is Jeeves puts up the cash, and you settle up once every 30 days in any currency you want, unlike some other corporate card companies that make you pre-pay every month. Jeeves also recently launched its Jeeves Growth and Working Capital initiative for startups and fast-growing companies to enable more financial freedom for companies. The best thing of all is that Jeeves is live in 24 countries, including Canada, the US, and many other countries around the world. Jeeves truly offers the best all-in-one expense management corporate card program for all startups, especially the ones at Ripple, and we at Tank Talks could not be more excited to officially partner with them. Listeners of Tank Talks can get set up with a demo of Jeeves today and take advantage of our Tank Talks special with a $250 statement credit after the first $2,500 in spend, or a $500 statement credit after the first $5,000 in spend.
Lastly, all Jeeves cardholders receive access to their Lounge Pass program and access to over 1300 airports globally. Visit tryjeeves.com/tanktalks to learn more.

In this episode we discuss:
02:33 Leena's journey before co-founding CloseFactor
03:33 Leena's experience at Splunk
05:28 What it was like to go through the Splunk and VMware IPOs
06:15 Dealing with rapid growth and hiring sprees
07:27 The opportunity she saw with CloseFactor
10:13 Building the initial buyer persona for CloseFactor
13:32 How companies can start building their own buyer personas
16:05 What early stage founders need to think about in terms of selling today's customers versus tomorrow's customers
17:20 How CloseFactor helps in building buyer personas
19:33 Balancing customer discovery and sales
20:54 Finding customer interviewees for research
22:25 Defining negative buyer personas and why they are helpful
23:56 How buyer personas can help with marketing
24:59 Getting internal stakeholders aligned with each persona
26:16 How CloseFactor helps distill down buyer personas
27:49 Early success stories on CloseFactor
30:56 How CloseFactor's ICP and buyer persona has evolved
32:47 Plans for CloseFactor's recent raise of $4.5M with Sequoia and Bogomil

Fast Favorites*

AI in Action Podcast
E369 Howard Ting, CEO at CyberHaven

AI in Action Podcast

Play Episode Listen Later Aug 1, 2022 19:50


Today's guest is Howard Ting, CEO at Cyberhaven in Palo Alto, CA. Every day, millions of sensitive documents, files and other data are exfiltrated because traditional data security products have failed to do their job, resulting in an estimated $600 billion annual cost to the US economy. These data breaches cause severe near- and long-term impact, including stolen IP, business interruptions, loss of customer trust, weakened competitiveness and significant financial penalties. Cyberhaven will help organizations secure all of the data they must protect in order to compete and thrive in the digital economy. It's one big hairy problem, but Cyberhaven is up to the challenge. Howard joined Cyberhaven as CEO in June 2020, and in the past decade he has played a critical role in scaling Palo Alto Networks and Nutanix from initial sales to over $1 billion in revenue, generating massive value for customers, employees and shareholders. Howard has also served in GTM and product roles at Redis Labs, Zscaler, Microsoft and RSA Security. In the episode, Howard discusses: the work Cyberhaven is doing in data protection, what's new and innovative about their approach to the market, how they are applying AI at Cyberhaven, day-to-day life within the tech team, what's in store for the near future, and why Cyberhaven is a great place to work.

Eat Sleep Code Podcast
Guy Royse - Redis Beyond the Cache

Eat Sleep Code Podcast

Play Episode Listen Later Jul 7, 2022 75:07


On this episode of Eat Sleep Code, Guy Royse from Redis Labs talks about how Redis is used for more than just caching data. Guy also discusses his passions for Bigfoot, UFO encounters, and D&D.

Screaming in the Cloud
Would You Kindly Remind with Peter Hamilton

Screaming in the Cloud

Play Episode Listen Later Mar 31, 2022 40:17


About Peter

Peter's spent more than a decade building scalable and robust systems at startups across adtech and edtech. At Remind, where he's VP of Technology, Peter pushes for building a sustainable tech company with mature software engineering. He lives in Southern California and enjoys spending time at the beach with his family.

Links:
Redis: https://redis.com/
Remind: https://www.remind.com/
Remind Engineering Blog: https://engineering.remind.com
LinkedIn: https://www.linkedin.com/in/hamiltop
Email: peterh@remind101.com

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Today's episode is brought to you in part by our friends at MinIO, the high-performance Kubernetes-native object store that's built for the multi-cloud, creating a consistent data storage layer for your public cloud instances, your private cloud instances, and even your edge instances, depending upon what the heck you're defining those as, which depends probably on where you work. Getting that unified is one of the greatest challenges facing developers and architects today. It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run any workload, and the footprint to run anywhere, and that's exactly what MinIO offers. With superb read speeds in excess of 360 gigs and a 100 megabyte binary that doesn't eat all the data you've got on the system, it's exactly what you've been looking for. Check it out today at min.io/download, and see for yourself. That's min.io/download, and be sure to tell them that I sent you.

Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that, while sure they claim it's better than AWS pricing, when they say that they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting vultr.com/screaming, and you'll receive $100 in credit. That's V-U-L-T-R dot com slash screaming.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn and this is a fun episode. It is a promoted episode, which means that our friends at Redis have gone ahead and sponsored this entire episode.
I asked them, “Great, who are you going to send me from, generally, your executive suite?” And they said, “Nah. You already know what we're going to say. We want you to talk to one of our customers.” And so here we are. My guest today is Peter Hamilton, VP of Technology at Remind. Peter, thank you for joining me.Peter: Thanks, Corey. Excited to be here.Corey: It's always interesting when I get to talk to people on promoted guest episodes when they're a customer of the sponsor because to be clear, you do not work for Redis. This is one of those stories you enjoy telling, but you don't personally have a stake in whether people love Redis, hate Redis, adopt that or not, which is exactly what I try and do on these shows. There's an authenticity to people who have in-the-trenches experience who aren't themselves trying to sell the thing because that is their entire job in this world.Peter: Yeah. You just presented three or four different opinions and I guarantee we felt all at the different times.Corey: [laugh]. So, let's start at the very beginning. What does Remind do?Peter: So, Remind is a messaging tool for education, largely K through 12. We support about 30 million active users across the country, over 2 million teachers, making sure that every student has, you know, equal opportunities to succeed and that we can facilitate as much learning as possible.Corey: When you say messaging that could mean a bunch of different things to a bunch of different people. Once on a lark, I wound up sitting down—this was years ago, so I'm sure the number is a woeful underestimate now—of how many AWS services I could use to send a message from me to you. And this is without going into the lunacy territory of, “Well, I can tag a thing and then mail it to you like a Snowball Edge or something.” No, this is using them as intended, I think I got 15 or 16 of them. When you say messaging, what does that mean to you?Peter: So, for us, it's about communication to the end-user. We will do everything we can to deliver whatever message a teacher or district administrator has to the user. We go through SMS, text messaging, we go through Apple and Google's push services, we go through email, we go through voice call, really pulling out all the stops we can to make sure that these important messages get out.Corey: And I can only imagine some of the regulatory pressure you almost certainly experience. It feels like it's not quite to HIPAA levels, where ohh, there's a private cause of action if any of this stuff gets out, but people are inherently sensitive about communications involving their children. I always sort of knew this in a general sense, and then I had kids myself, and oh, yeah, suddenly I really care about those sorts of things.Peter: Yeah. One of the big challenges, you can build great systems that do the correct thing, but at the end of the day, we're relying on a teacher choosing the right recipient when they send a message. And so we've had to build a lot of processes and controls in place, so that we can, kind of, satisfy two conflicting needs: One is to provide a clear audit log because that's an important thing for districts to know if something does happen, that we have clear communication; and the other is to also be able to jump in and intervene when something inappropriate or mistaken is sent out to the wrong people.Corey: Remind has always been one of those companies that has a somewhat exalted reputation in the AWS space. 
You folks have been early adopters of a bunch of different services—which let's be clear, in the responsible way, not the, “Well, they said it on stage; time to go ahead and put everything they just listed into production because we for some Godforsaken reason, view it as a todo list.”—but you've been thoughtful about how you approach things, and you have been around as a company for a while. But you've also been making a significant push toward being cloud-native by certain definitions of that term. So, I know this sounds like a college entrance essay, but what does cloud-native mean to you?Peter: So, one of the big gaps—if you take an application that was written to be deployed in a traditional data center environment and just drop it in the cloud, what you're going to get is a flaky data center.Corey: Well, that's unfair. It's also going to be extremely expensive.Peter: [laugh]. Sorry, an expensive, flaky data set.Corey: There we go. There we go.Peter: What we've really looked at–and a lot of this goes back to our history in the earlier days; we ran a top of Heroku and it was kind of the early days what they call the Twelve-Factor Application—but making aggressive decisions about how you structure your architecture and application so that you fit in with some of the cloud tools that are available and that you fit in, you know, with the operating models that are out there.Corey: When you say an aggressive decision, what sort of thing are you talking about? Because when I think of being aggressive with an approach to things like AWS, it usually involves Twitter, and I'm guessing that is not the direction you intend that to go.Peter: No, I think if you look at Twitter or Netflix or some of these players that, quite frankly, have defined what AWS is to us today through their usage patterns, not quite that.Corey: Oh, I mean using Twitter to yell at them explicitly about things—Peter: Oh.Corey: —because I don't do passive-aggressive; I just do aggressive.Peter: Got it. No, I think in our case, it's been plotting a very narrow path that allows us to avoid some of the bigger pitfalls. We have our sponsor here, Redis. Talk a little bit about our usage of Redis and how that's helped us in some of these cases. One of the pitfalls you'll find with pulling a non-cloud-native application and put it in the cloud is state is hard to manage.If you put state on all your machines and machines go down, networks fail, all those things, you now no longer have access to that state and we start to see a lot of problems. One of the decisions we've made is try to put as much data as we can into data stores like Redis or Postgres or something, in order to decouple our hardware from the state we're trying to manage and provide for users so that we're more resilient to those sorts of failures.Corey: I get the sense from the way that we're having this conversation, when you talk about Redis, you mean actual Redis itself, not ElastiCache for Redis, or as to I'm tending to increasingly think about AWS's services, Amazon Basics for Redis.Peter: Yeah. I mean, Amazon has launched a number of products. They have their ElastiCache, they have their new MemoryDB, there's a lot different ways to use this. 
We've relied pretty heavily on Redis, previously known as Redis Labs, and their enterprise product in their cloud, in order to take care of our most important data—which we just don't want to manage ourselves—trying to manage that on our own using something like ElastiCache, there's so many pitfalls, so many ways that we can lose that data. This data is important to us. By having it in a trusted place and managed by a great ops team, like they have at Redis, we're able to then lean in on the other aspects of cloud data to really get as much value as we can out of AWS.Corey: I am curious. As I said you've had a reputation as a company for a while in the AWS space of doing an awful lot of really interesting things. I mean, you have a robust GitHub presence, you have a whole bunch of tools that have come out Remind that are great, I've linked to a number of them over the years in the newsletter. You are clearly not afraid, culturally, to get your hands dirty and build things yourself, but you are using Redis Enterprise as opposed to open-source Redis. What drove that decision? I have to assume it's not, “Wait. You mean, I can get it for free as an open-source project? Why didn't someone tell me?” What brought you to that decision?Peter: Yeah, a big part of this is what we could call operating leverage. Building a great set of tools that allow you to get more value out of AWS is a little different story than babysitting servers all day and making sure they stay up. So, if you look through, most of our contributions in open-source space have really been around here's how to expand upon these foundational pieces from AWS; here's how to more efficiently launch a suite of servers into an auto-scaling group; here's, you know, our troposphere and other pieces there. This was all before Amazon CDK product, but really, it was, here's how we can more effectively use CloudFormation to capture our Infrastructure as Code. And so we are not afraid in any way to invest in our tooling and invest in some of those things, but when we look at the trade-off of directly managing stateful services and dealing with all the uncertainty that comes, we feel our time is better spent working on our product and delivering value to our users and relying on partners like Redis in order to provide that stability we need.Corey: You raise a good point. An awful lot of the tools that you've put out there are the best, from my perspective, approach to working with AWS services. And that is a relatively thin layer built on top of them with an eye toward making the user experience more polished, but not being so heavily opinionated that as soon as the service goes in a different direction, the tool becomes completely useless. You just decide to make it a bit easier to wind up working with specific environment variables or profiles, rather than what appears to be the AWS UX approach of, “Oh, now type in your access key, your secret key and your session token, and we've disabled copy and paste. Go, have fun.” You've really done a lot of quality of life improvements, more so than you have this is the entire system of how we do deploys, start to finish. It's opinionated and sort of a, like, a take on what Netflix, did once upon a time, with Asgard. It really feels like it's just the right level of abstraction.Peter: We did a pretty good job. I will say, you know, years later, we felt that we got it wrong a couple times. 
It's been really interesting to see that, that there are times when we say, “Oh, we could take these three or four services and wrap it up into this new concept of an application.” And over time, we just have to start poking holes in that new layer and we start to see we would have been better served by sticking with as thin a layer as possible that enables us, rather than trying to get these higher-level pieces.Corey: It's remarkably refreshing to hear you say that just because so many people love to tell the story on podcasts, or on conference stages, or whatever format they have of, “This is what we built.” And it is an aspirationally superficial story about this. They don't talk about that, “Well, firstly, without these three wrong paths first.” It's always a, “Oh, yes, obviously, we are smart people and we only make the correct decision.”And I remember in the before times sitting in conference talks, watching people talk about great things they'd done, and I'll turn next to the person next to me and say, “Wow, I wish I could be involved in a project like that.” And they'll say, “Yeah, so do I.” And it turns out they work at the company the speaker is from. Because all of these things tend to be the most positive story. Do you have an example of something that you have done in your production environment that going back, “Yeah, in hindsight, I would have done that completely differently.”Peter: Yeah. So, coming from Heroku moving into AWS, we had a great open-source project called Empire, which kind of bridge that gap between them, but used Amazon's ECS in order to launch applications. It was actually command-line compatible with the Heroku command when it first launched. So, a very big commitment there. And at the time—I mean, this comes back to the point I think you and I were talking about earlier, where architecture, costs, infrastructure, they're all interlinked.And I'm a big fan of Conway's Law, which says that an organization's structure needs to match its architecture. And so six, seven years ago, we're heavy growth-based company and we are interns running around, doing all the things, and we wanted to have really strict guardrails and a narrow set of things that our development team could do. And so we built a pretty constrained: You will launch, you will have one Docker image per ECS service, it can only do these specific things. And this allowed our development team to focus on pretty buttons on the screen and user engagement and experiments and whatnot, but as we've evolved as a company, as we built out a more robust business, we've started to track revenue and costs of goods sold more aggressively, we've seen, there's a lot of inefficient things that come out of it.One particular example was we used PgBouncer for our connection pooling to our Postgres application. In the traditional model, we had an auto-scaling group for a PgBouncer, and then our auto-scaling groups for the other applications would connect to it. And we saw additional latency, we saw additional cost, and we eventually kind of twirl that down and packaged that PgBouncer alongside the applications that needed it. And this was a configuration that wasn't available on our first pass; it was something we intentionally did not provide to our development team, and we had to unwind that. 
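For readers who haven't seen that pattern, here is a rough sketch of what colocating PgBouncer with an application can look like on ECS using the AWS CDK. This is illustrative rather than Remind's actual configuration: the image names, environment variables, and database endpoint are placeholders, and PgBouncer images differ in how they are configured.

```typescript
import { App, Stack } from "aws-cdk-lib";
import * as ecs from "aws-cdk-lib/aws-ecs";

const app = new App();
const stack = new Stack(app, "AppStack");

// One task definition holds both containers, so every copy of the app gets
// its own local connection pooler instead of a shared PgBouncer fleet.
const taskDef = new ecs.FargateTaskDefinition(stack, "AppTaskDef", {
  cpu: 512,
  memoryLimitMiB: 1024,
});

taskDef.addContainer("app", {
  image: ecs.ContainerImage.fromRegistry("example/app:latest"),
  // The app talks to PgBouncer over localhost; containers in the same
  // Fargate task share a network namespace.
  environment: { DATABASE_URL: "postgres://app@127.0.0.1:6432/app" },
  logging: ecs.LogDrivers.awsLogs({ streamPrefix: "app" }),
});

taskDef.addContainer("pgbouncer", {
  image: ecs.ContainerImage.fromRegistry("example/pgbouncer:latest"),
  environment: {
    DB_HOST: "app-db.internal.example.com", // the real Postgres endpoint
    POOL_MODE: "transaction",               // variable names depend on the image
  },
  portMappings: [{ containerPort: 6432 }],
});
```

The point of the change is removing the extra network hop and the separate PgBouncer auto-scaling group.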
And when we did, we saw better performance, we saw better cost efficiency, all sorts of benefits that we care a lot about now that we didn't care about as much, many years ago.Corey: It sounds like you're describing some semblance of an internal platform, where instead of letting all your engineers effectively, “Well, here's the console. Ideally, you use some form of Infrastructure as Code. Good luck. Have fun.” You effectively gate access to that. Is that something that you're still doing or have you taken a different approach?Peter: So, our primary gate is our Infrastructure as Code repository. If you want to make a meaningful change, you open up a PR, got to go through code review, you need people to sign off on it. Anything that's not there may not exist tomorrow. There's no guarantees. And we've gone around, occasionally just shut random servers down that people spun up in our account.And sometimes people will be grumpy about it, but you really need to enforce that culture that we have to go through the correct channels and we have to have this cohesive platform, as you said, to support our development efforts.Corey: So, you're a messaging service in education. So, whenever I do a little bit of digging into backstories of companies and what has made, I guess, an impression, you look for certain things and explicit dates are one of them, where on March 13th of 2020, your business changed just a smidgen. What happened other than the obvious, we never went outside for two years?Peter: [laugh]. So, if we roll back a week—you know, that's March 13th, so if we roll back a week, we're looking at March 6th. On that day, we sent out about 60 million messages over all of our different mediums: Text, email, push notifications. On March 13th that was 100 million, and then, a few weeks later on March 30th, that was 177 million. And so our traffic effectively tripled over the course of those three weeks. And yeah, that's quite a ride, let me tell you.Corey: The opinion that a lot of folks have who have not gotten to play in sophisticated distributed systems is, “Well, what's the hard part there you have an auto-scaling group. Just spin up three times the number of servers in that fleet and problem solved. What's challenging?” A lot, but what did you find that the pressure points were?Peter: So, I love that example, that your auto-scaling group will just work. By default, Amazon's auto-scaling groups only support 1000 backends. So, when your auto-scaling group goes from 400 backends to 1200, things break, [laugh] and not in ways that you would have expected. You start to learn things about how database systems provided by Amazon have limits other than CPU and memory. And they're clearly laid out that there's network bandwidth limits and things you have to worry about.We had a pretty small team at that time and we'd gotten this cadence where every Monday morning, we would wake up at 4 a.m. Pacific because as part of the pandemic, our traffic shifted, so our East Coast users would be most active in the morning rather than the afternoon. And so at about 7 a.m. on the east coast is when everyone came online. And we had our Monday morning crew there and just looking to see where the next pain point was going to be.And we'd have Monday, walk through it all, Monday afternoon, we'd meet together, we come up with our three or four hypotheses on what will break, if our traffic doubles again, and we'd spend the rest of that next week addressing those the best we could and repeat for the next Monday. 
And we did this for three, four or five weeks in a row, and finally, it stabilized. But yeah, it's all the small little things, the things you don't know about, the limits in places you don't recognize that just catch up to you. And you need to have a team that can move fast and adapt quickly.Corey: You've been using Redis for six, seven years, something along those lines, as an enterprise offering. You've been working with the same vendor who provides this managed service for a while now. What are the fruits of that relationship? What is the value that you see by continuing to have a long-term relationship with vendors? Because let's be serious, most of us don't stay in jobs that long, let alone work with the same vendor.Peter: Yeah. So, coming back to the March 2020 story, many of our vendors started to see some issues here that various services weren't scaled properly. We made a lot of phone calls to a lot of vendors in working with them, and I… very impressed with how Redis Labs at the time was able to respond. We hopped on a call, they said, “Here's what we think we need to do, we'll go ahead and do this. We'll sort this out in a few weeks and figure out what this means for your contract. We're here to help and support in this pandemic because we recognize how this is affecting everyone around the world.”And so I think when you get in those deeper relationships, those long-term relationships, it is so helpful to have that trust, to have a little bit of that give when you need it in times of crisis, and that they're there and willing to jump in right away.Corey: There's a lot to be said for having those working relationships before you need them. So often, I think that a lot of engineering teams just don't talk to their vendors to a point where they may as well be strangers. But you'll see this most notably because—at least I feel it most acutely—with AWS service teams. They'll do a whole kickoff when the enterprise support deal is signed, three years go passed, and both the AWS team and the customer's team have completely rotated since then, and they may as well be strangers. Being able to have that relationship to fall back on in those really weird really, honestly, high-stress moments has been one of those things where I didn't see the value myself until the first time I went through a hairy situation where I found that that was useful.And now it's oh, I—I now bias instead for, “Oh, I can fit to the free tier of this service. No, no, I'm going to pay and become a paying customer.” I'd rather be a customer that can have that relationship and pick up the phone than someone whining at people in a forum somewhere of, “Hey, I'm a free user, and I'm having some problems with production.” Just never felt right to me.Peter: Yeah, there's nothing worse than calling your account rep and being told, “Oh, I'm not your account rep anymore.” Somehow you missed the email, you missed who it was. Prior to Covid, you know—and we saw this many, many years ago—one of the things about Remind is every back-to-school season, our traffic 10Xes in about three weeks. And so we're used to emergencies happening and unforeseen things happening. And we plan through our year and try to do capacity planning and everything, but we been around the block a couple of times.And so we have a pretty strong culture now leaning in hard with our support reps. We have them in our Slack channels. Our AWS team, we meet with often. Redis Labs, we have them on Slack as well. 
We're constantly talking about databases that may or may not be performing as we expect them, too. They're an extension of our team, we have an incident; we get paged. If it's related to one of the services, we hit them in Slack immediately and have them start checking on the back end while we're checking on our side. So.Corey: One of the biggest takeaways I wish more companies would have is that when you are dependent upon another company to effectively run your production infrastructure, they are no longer your vendor, they're your partner, whether you want them to be or not. And approaching it with that perspective really pays dividends down the road.Peter: Yeah. One of the cases you get when you've been at a company for a long time and been in relationship for a long time is growing together is always an interesting approach. And seeing, sometimes there's some painful points; sometimes you're on an old legacy version of their product that you were literally the last customer on, and you got to work with them to move off of. But you were there six years ago when they're just starting out, and they've seen how you grow, and you've seen how they've grown, and you've kind of been able to marry that experience together in a meaningful way.Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of “Hello, World” demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking, databases, observability, management, and security. And—let me be clear here—it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build. With Always Free, you can do things like run small-scale applications or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free? This is actually free, no asterisk. Start now. Visit snark.cloud/oci-free that's snark.cloud/oci-free.Corey: Redis is, these days, of data platform back once upon a time, I viewed it as more of a caching layer. And I admit that the capabilities of the platform has significantly advanced since those days when I viewed it purely through lens of cache. But one of the interesting parts is that neither one of those use cases, in my mind, blends particularly well with heavy use of Spot Fleets, but you're doing exactly that. What are your folks doing over there?Peter: [laugh]. Yeah, so as I mentioned earlier, coming back to some of the Twelve-Factor App design, we heavily rely on Redis as sort of a distributed heap. One of our challenges of delivering all these messages is every single message has its in-flight state: Here's the content, here's who we sent it to, we wait for them to respond. On a traditional application, you might have one big server that stores it all in-memory, and you get the incoming requests, and you match things up. By moving all that state to Redis, all of our workers, all of our application servers, we know they can disappear at any point in time.We use Amazon's Spot Instances and their Spot Fleet for all of our production traffic. 
Every single web service, every single worker that we have runs on this infrastructure, and we would not be able to do that if we didn't have a reliable and robust place to store this data that is in-flight and currently being accessed. So, we'll have a couple hundred gigs of data at any point in time in a Redis Database, just representing in-flight work that's happening on various machines.Corey: It's really neat seeing Spot Fleets being used as something more than a theoretical possibility. It's something I've always been very interested in, obviously, given the potential cost savings; they approach cheap is free in some cases. But it turns out—we talked earlier about the idea of being cloud-native versus the rickety, expensive data center in the cloud, and an awful lot of applications are simply not built in a way that yeah, we're just going to randomly turn off a subset of your systems, ideally, with two minutes of notice, but all right, have fun with that. And a lot of times, it just becomes a complete non-starter, even for stateless workloads, just based upon how all of these things are configured. It is really interesting to watch a company that has an awful lot of responsibility that you've been entrusted with who embraces that mindset. It's a lot more rare than you'd think.Peter: Yeah. And again, you know, sometimes, we overbuild things, and sometimes we go down paths that may have been a little excessive, but it really comes down to your architecture. You know, it's not just having everything running on Spot. It's making effective use of SQS and other queueing products at Amazon to provide checkpointing abilities, and so you know that should you lose an instance, you're only going to lose a few seconds of productive work on that particular workload and be able to kick off where you left off.It's properly using auto-scaling groups. From the financial side, there's all sorts of weird quirks you'll see. You know, the Spot market has a wonderful set of dynamics where the big instances are much, much cheaper per CPU than the small ones are on the Spot market. And so structuring things in a way that you can colocate different workloads onto the same hosts and hedge against the host going down by spreading across multiple availability zones. I think there's definitely a point where having enough workload, having enough scale allows you to take advantage of these things, but it all comes down to the architecture and design that really enables it.Corey: So, you've been using Redis for longer than I think many of our listeners have been in tech.Peter: [laugh].Corey: And the key distinguishing points for me between someone who is an advocate for a technology and someone who's a zealot—or a pure critic—is they can identify use cases for which is great and use cases for which it is not likely to be a great experience. In your time with Redis, what have you found that it's been great at and what are some areas that you would encourage people to consider more carefully before diving into it?Peter: So, we like to joke that five, six years ago, most of our development process was, “I've hit a problem. Can I use Redis to solve that problem?” And so we've tried every solution possible with Redis. We've done all the things. We have number of very complicated Lua scripts that are managing different keys in an atomic way.Some of these have been more successful than others, for sure. 
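As a rough sketch of the distributed-heap pattern Peter describes, and not Remind's actual code: in-flight state lives in Redis rather than in worker memory, and a claim key keeps two workers from picking up the same message. This assumes the ioredis client; the key names and TTLs are illustrative.

```typescript
import Redis from "ioredis";

// The connection string is a placeholder; in this pattern it would point at a
// managed Redis endpoint, not anything running on the worker itself.
const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

interface InFlightMessage {
  recipient: string;
  body: string;
  attempts: number;
}

// Persist the message state before doing any work, so a terminated Spot
// instance loses nothing but the few seconds of work it was in the middle of.
async function enqueue(id: string, msg: InFlightMessage): Promise<void> {
  await redis.set(`msg:${id}`, JSON.stringify(msg), "EX", 60 * 60);
}

// Atomically claim a message so two workers never process the same one.
// SET NX acts as a lightweight lock, with an expiry as a safety valve.
async function claim(id: string, workerId: string): Promise<InFlightMessage | null> {
  const locked = await redis.set(`lock:${id}`, workerId, "EX", 30, "NX");
  if (locked !== "OK") return null;
  const raw = await redis.get(`msg:${id}`);
  return raw ? (JSON.parse(raw) as InFlightMessage) : null;
}

async function complete(id: string): Promise<void> {
  await redis.del(`msg:${id}`, `lock:${id}`);
}
```

Whether a worker dies from a Spot interruption or a deploy, another worker can pick up the same key once the lock expires, which is what makes the two-minute interruption notice survivable.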
Right now, our biggest philosophy is, if it is data we need quickly, and it is data that is important to us, we put it in Enterprise Redis, the cloud product from Redis. Other use cases, there's a dozen things that you can use for a cache, Redis is great for cache, memcache does a decent job as well; you're not going to see a meaningful difference between those sorts of products. Where we've struggled a little bit has been when we have essentially relational data that we need fast access to. And we're still trying to find a clear path forward here because you can do it and you can have atomic updates and you can kind of simulate some of the ACID characteristics you would have in a relational database, but it adds a lot of complexity.And that's a lot of overhead to our team as we're continuing to develop these products, to extend them, to fix any bugs you might have in there. And so we're kind of recalibrating a bit, and some of those workloads are moving to other data stores where they're more appropriate. But at the end of the day, it's data that we need fast, and it's data that's important, we're sticking with what we got here because it's been working pretty well.Corey: It sounds almost like you started off with the mindset of one database for a bunch of different use cases and you're starting to differentiate into purpose-built databases for certain things. Or is that not entirely accurate?Peter: There's a little bit of that. And I think coming back to some of our tooling, as we kind of jumped on a bit of the microservice bandwagon, we would see, here's a small service that only has a small amount of data that needs to be stored. It wouldn't make sense to bring up a RDS instance, or an Aurora instance, for that, you know, in Postgres. Let's just store it in an easystore like Redis. And some of those cases have been great, some of them have been a little problematic.And so as we've invested in our tooling to make all our databases accessible and make it less of a weird trade-off between what the product needs, what we can do right now, and what we want to do long-term, and reduce that friction, we've been able to be much more deliberate about the data source that we choose in each case.Corey: It's very clear that you're speaking with a voice of experience on this where this is not something that you just woke up and figured out. One last area I want to go into with you is when I asked you what is you care about primarily as an engineering leader and as you look at serving your customers well, you effectively had a dual answer, almost off the cuff, of stability and security. I find the two of those things are deeply intertwined in most of the conversations I have, but they're rarely called out explicitly in quite the way that you do. Talk to me about that.Peter: Yeah, so in our wild journey, stability has always been a challenge. And we've alway—you know, been an early startup mode, where you're constantly pushing what can we ship? How quickly can we ship it? And in our particular space, we feel that this communication that we foster between teachers and students and their parents is incredibly important, and is a thing that we take very, very seriously. 
And so, a couple years ago, we were trying to create this balance and create not just a language that we could talk about on a podcast like this, but really recognizing that framing these concepts to our company internally: To our engineers to help them to think as they're building a feature, what are the things they should think about, what are the concerns beyond the product spec; to work with our marketing and sales team to help them to understand why we're making these investments that may not get a particular feature out by X date but it's still a worthwhile investment.So, from the security side, we've really focused on building out robust practices and robust controls that don't necessarily lock us into a particular standard, like PCI compliance or things like that, but really focusing on the maturity of our company and, you know, our culture as we go forward. And so we're in a place now where we are ISO 27001; we're heading into our third year. We leaned in hard on our disaster recovery processes, we've leaned in hard on our bug bounties, pen tests, kind of, found this incremental approach that, you know, day one, I remember we turned on our bug bounty and it was a scary day as the reports kept coming in. But we take on one thing at a time and continue to build on it and make it an essential part of how we build systems.Corey: It really has to be built in. It feels like security is not something that can be slapped on as an afterthought, however much companies try to do that. Especially, again, as we started this episode with, you're dealing with communication with people's kids. That is something that people have remarkably little sense of humor around. And rightfully so.Seeing that there is as much if not more care taken around security than there is stability is generally the sign of a well-run organization. If there's a security lapse, I expect certain vendors to rip the power out of their data centers rather than run in an insecure fashion. And your job done correctly—which clearly you have gotten to—means that you never have to make that decision because you've approached this the right way from the beginning. Nothing's perfect, but there's always the idea of actually caring about it being the first step.Peter: Yeah. And the other side of that was talking about stability, and again, it's avoiding the either/or situation. We can work in as well along those two—stability and security—we work in our cost of goods sold and our operating leverage in other aspects of our business. And in every single one of them, our co-number one priorities are stability and security. And if it costs us a bit more money, if it takes our dev team a little longer, there's not a choice at that point. We're doing the correct thing.Corey: Saving money is almost never the primary objective of any company that you really want to be dealing with unless something bizarre is going on.Peter: Yeah. Our philosophy on, you know, any cost reduction has been this should have zero negative impact on our stability. If we do not feel we can safely do this, we won't. And coming back to the Spot Instance piece, that was a journey for us. And you know, we tested the waters a bit and we got to a point, we worked very closely with Amazon's team, and we came to that conclusion that we can safely do this. And we've been doing it for over a year and seen no adverse effects.Corey: Yeah. And a lot of shops I've talked to folks about well, when we go and do a consulting project, it's, "Okay.
There's a lot of things that could have been done before we got here. Why hasn't any of that been addressed?" And the answer is, "Well. We tried to save money once and it caused an outage and then we weren't allowed to save money anymore. And here we are." And I absolutely get that perspective. It's a hard balance to strike. It always is.Peter: Yeah. The other aspect where stability and security kind of intertwine is you can think about security as InfoSec in our systems and locking things down, but at the end of the day, why are we doing all that? It's for the benefit of our users. And for Remind, as a communication platform, the safety and security of our users is dependent on us being up and available so that teachers can reach out to parents with important communication. And things like attendance, things like natural disasters, or lockdowns, or any of the number of difficult situations schools find themselves in. This is part of why we take that stewardship so seriously: being up and protecting a user's data just has such a huge impact on education in this country.Corey: It's always interesting to talk to folks who insist they're making the world a better place. And it's, "What do you do?" "We're improving ad relevance." I mean, "Okay, great, good for you." You're serving a need, and I would not shy away from classifying what you do, fundamentally, as critical infrastructure, and that is always a good conversation to have. It's nice being able to talk to folks who are doing things that you can unequivocally look at and say, "This is a good thing."Peter: Yeah. And around 80% of public schools in the US are using Remind in some capacity. And so we're not a product that's used in a few civic regions; it's all across the board. One of my favorite things about working at Remind is meeting people and telling them where I work, and they recognize it.They say, "Oh, I have that app, I use that app. I love it." And I spent years in ads before this, and you know, I've been there and no one ever told me they were glad to see an ad. That's never the case. And it's been quite a rewarding experience coming in every day, and as you said, being part of this critical infrastructure. That's a special thing.Corey: I look forward to installing the app myself as my eldest prepares to enter public school in the fall. So, now at least I'll have a hotline of exactly where to complain when I don't get the attendance message because, you know, there's no customer quite like a whiny customer.Peter: They're still customers. [laugh]. Happy to have them.Corey: True. We tend to be. I want to thank you for taking so much time out of your day to speak with me. If people want to learn more about what you're up to, where's the best place to find you?Peter: So, from an engineering perspective at Remind, we have our blog, engineering.remind.com. If you want to reach out to me directly, I'm on LinkedIn; it's a good place to find me, or you can just reach out over email directly, peterh@remind101.com.Corey: And we will put all of that into the show notes. Thank you so much for your time. I appreciate it.Peter: Thanks, Corey.Corey: Peter Hamilton, VP of Technology at Remind. This has been a promoted episode brought to us by our friends at Redis, and I'm Cloud Economist Corey Quinn.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry and insulting comment that you will then hope that Remind sends out to 20 million students all at once.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

Screaming in the Cloud
Keeping Life on the Internet Friction Free with Jason Frazier

Screaming in the Cloud

Play Episode Listen Later Feb 16, 2022 37:12


About JasonJason Frazier is a Software Engineering Manager at Ekata, a Mastercard Company. Jason's team is responsible for developing and maintaining Ekata's product APIs. Previously, as a developer, Jason led the investigation and migration of Ekata's Identity Graph from AWS Elasticache to Redis Enterprise Redis on Flash, which brought an average savings of $300,000/yr.Links: Ekata: https://ekata.com/ Email: jason.frazier@ekata.com LinkedIn: https://www.linkedin.com/in/jasonfrazier56 TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.Corey: Today's episode is brought to you in part by our friends at MinIO the high-performance Kubernetes native object store that's built for the multi-cloud, creating a consistent data storage layer for your public cloud instances, your private cloud instances, and even your edge instances, depending upon what the heck you're defining those as, which depends probably on where you work. It's getting that unified is one of the greatest challenges facing developers and architects today. It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run any workload, and the footprint to run anywhere, and that's exactly what MinIO offers. With superb read speeds in excess of 360 gigs and 100 megabyte binary that doesn't eat all the data you've gotten on the system, it's exactly what you've been looking for. Check it out today at min.io/download, and see for yourself. That's min.io/download, and be sure to tell them that I sent you.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This one is a bit fun because it's a promoted episode sponsored by our friends at Redis, but my guest does not work at Redis, nor has he ever. Jason Frazier is a Software Engineering Manager at Ekata, a Mastercard company, which I feel, like, that should have some sort of, like, music backstopping into it just because, you know, large companies always have that magic sheen on it. Jason, thank you for taking the time to speak with me today.Jason: Yeah. Thanks for inviting me. Happy to be here.Corey: So, other than the obvious assumption, based upon the fact that Redis is kind enough to be sponsoring this episode, I'm going to assume that you're a Redis customer at this point. But I'm sure we'll get there. Before we do, what is Ekata? 
What do you folks do?Jason: So, the whole idea behind Ekata is—I mean, if you go to our website, our mission statement is, “We want to be the global leader in online identity verification.” What that really means is, in more increasingly digital world, when anyone can put anything they want into any text field they want, especially when purchasing anything online—Corey: You really think people do that? Just go on the internet and tell lies?Jason: I know. It's shocking to think that someone could lie about who they are online. But that's sort of what we're trying to solve specifically in the payment space. Like, I want to buy a new pair of shoes online, and I enter in some information. Am I really the person that I say I am when I'm trying to buy those shoes? To prevent fraudulent transactions. That's really one of the basis that our company goes on is trying to reduce fraud globally.Corey: That's fascinating just from the perspective of you take a look at cloud vendors at the space that I tend to hang out with, and a lot of their identity verification of, is this person who they claim to be, in fact, is put back onto the payment providers. Take Oracle Cloud, which I periodically beat up but also really enjoy aspects of their platform on, where you get to their always free tier, you have to provide a credit card. Now, they'll never charge you anything until you affirmatively upgrade the account, but—“So, what do you do need my card for?” “Ah, identity and fraud verification.” So, it feels like the way that everyone else handles this is, “Ah, we'll make it the payment networks' problem.” Well, you're now owned by Mastercard, so I sort of assume you are what the payment networks, in turn, use to solve that problem.Jason: Yeah, so basically, one of our flagship products and things that we return is sort of like a score, from 0 to 400, on how confident we are that this person is who they are. And it's really about helping merchants help determine whether they should either approve, or deny, or forward on a transaction to, like, a manual review agent. As well as there's also another use case that's even more popular, which is just, like, account creation. As you can imagine, there's lots of bots on everyone's [laugh] favorite app or website and things like that, or customers offer a promotion, like, “Sign up and get $10.”Well, I could probably get $10,000 if I make a thousand random accounts, and then I'll sign up with them. But, like, make sure that those accounts are legitimate accounts, that'll prevent, like, that sort of promo abuse and things like that. So, it's also not just transactions. It's also, like, account openings and stuff, make sure that you actually have real people on your platform.Corey: The thing that always annoyed me was the way that companies decide, oh, we're going to go ahead and solve that problem with a CAPTCHA on it. It's, “No, no, I don't want to solve machine learning puzzles for Google for free in order to sign up for something. I am the customer here; you're getting it wrong somewhere.” So, I assume, given the fact that I buy an awful lot of stuff online, but I don't recall ever seeing anything branded with Ekata that you do this behind the scenes; it is not something that requires human interaction, by which I mean, friction.Jason: Yeah, for sure. Yeah, yeah. It's behind the scenes. That's exactly what I was about to segue to is friction, is trying to provide a frictionless experience for users. 
In the US, it's not as common, but when you go into Europe or anything like that, it's fairly common to get confirmations on transactions and things like that.You may have to, I don't know text—or get a code text or enter that online to basically say, like, “Yes, I actually received this.” But, like, helping—and the reason companies do that is for that, like, extra bit of security and assurance that that's actually legitimate. And obviously, companies would like to prefer not to have to do that because, I don't know, if I'm trying to buy something, this website makes me do something extra, the site doesn't make me do anything extra, I'm probably going to go with that one because it's just more convenient for me because there's less friction there.Corey: You're obviously limited in how much you can say about this, just because it's here's a list of all the things we care about means that great, you've given me a roadmap, too, of things to wind up looking at. But you have an example or two of the sort of the data that you wind up analyzing to figure out the likelihood that I'm a human versus a robot.Jason: Yeah, for sure. I mean, it's fairly common across most payment forms. So, things like you enter in your first name, your last name, your address, your phone number, your email address. Those are all identity elements that we look at. We have two data stores: We have our Identity Graph and our Identity Network.The Identity Graph is what you would probably think of it, if you think of a web of a person and their identity, like, you have a name that's linked to a telephone, and that name is also linked to an address. But that address used to have previous people living there, so on and so forth. So, the various what we just call identity elements are the various things we look at. It's fairly common on any payment form, I'm sure, like, if you buy something on Amazon versus eBay or whatever, you're probably going to be asked, what's your name? What's your address? What's your email address? What's your telephone?Corey: It's one of the most obnoxious parts of buying things online from websites I haven't been to before. It's one of the genius ideas behind Apple Pay and the other centralized payment systems. Oh, yeah. They already know who you are. Just click the button, it's done.Jason: Yeah, even something as small as that. I mean, it gets a little bit easier with, like, form autocompletes and stuff like, oh, just type J and it'll just autocomplete everything for me. That's not the worst of the world, but it is still some amount of annoyance and friction. [laugh].Corey: So, as I look through all this, it seems like one of the key things you're trying to do since it's in line with someone waiting while something is spinning in their browser, that this needs to be quick. It also strikes me that this is likely not something that you're going to hit the same people trying to identify all the time—if so, that is its own sign of fraud—so it doesn't really seem like something can be heavily cached. Yet you're using Redis, which tells me that your conception of how you're using it might be different than the mental space that I put Redis into what I'm thinking about where this ridiculous architecture diagram is the Redis part going to go?Jason: Yeah, I mean, like, whenever anyone says Redis, thinks of Redis, I mean, even before we went down this path, you always think of, oh, I need a cache, I'll just stuff in Redis. Just use Redis as a cache here and there. 
I don't know, some small—I don't know, a few tens, hundreds gigabytes, maybe—cache, spin that up, and you're good. But we actually use Redis as our primary data store for our Identity Graph, specifically for the speed that we can get. Because if you're trying to look for a person, like, let's say you're buying something for your brother, how do we know if that's true or not? Because you have this name, you're trying to send it to a different address, like, how does that make sense? But how do we get from Corey to an address? Like, oh, maybe used to live with your brother?Corey: It's funny, you pick that as your example; my brother just moved to Dublin, so it's the whole problem of how do I get this from me to someone, different country, different names, et cetera? And yeah, how do you wind up mapping that to figure out the likelihood that it is either credit card fraud, or somebody actually trying to be, you know, a decent brother for once in my life?Jason: [laugh]. So, I mean, how it works is how you imagine you start at some entry point, which would probably be your name, start there and say, “Can we match this to this person's address that you believe you're sending to?” And we can say, “Oh, you have a person-person relationship, like he's your brother.” So, it maps to him, which we can then get his address and say, “Oh, here's that address. That matches what you're trying to send it to. Hey, this makes sense because you have a legitimate reason to be sending something there. You're not just sending it to some random address out in the middle of nowhere, for no reason.”Corey: Or the drop-shipping scams, or brushing scams, or one of—that's the thing is every time you think you've seen it all, all you have to do is look at fraud. That's where the real innovation seems to be happening, [laugh] no matter how you slice it.Jason: Yeah, it's quite an interesting space. I always like to say it's one of those things where if you had the human element in it, it's not super easy, but it's like, generally easy to tell, like, okay, that makes sense, or, oh, no, that's just complete garbage. But trying to do it at scale very fast in, like, a general case becomes an actual substantially harder problem. [laugh]. It's one of those things that people can probably do fairly well—I mean, that's why we still have manual reviews and things like that—but trying to do it automatically or just with computers is much more difficult. [laugh].Corey: Yeah, “Hee hee, I scammed a company out of 20 bucks is not the problem you're trying to avoid for.” It's the, “Okay, I just did that ten million times and now we have a different problem.”Jason: Yeah, exactly. I mean, one of the biggest losses for a lot of companies is, like, fraudulent transactions and chargebacks. Usually, in the case on, like, e-commerce companies—or even especially like nowadays where, as you can imagine, more people are moving to a more online world and doing shopping online and things like that, so as more people move to online shopping, some companies are always going to get some amount of chargebacks on fraudulent transactions. But when it happens at scale, that's when you start seeing many losses because not only are you issuing a chargeback, you probably sent out some products, that you're now out some physical product as well. So, it's almost kind of like a double-whammy. [laugh].Corey: So, as I look through all this, I tended to always view Redis in terms of, more or less, a key-value store. Is that still accurate? 
Is that how you wind up working with it? Or has it evolved significantly past them to the point where you can now do relational queries against it?Jason: Yeah, so we do use Redis as a key-value store because, like, Redis is just a traditional key-value store, very fast lookups. When we first started building out Identity Graph, as you can imagine, you're trying to model people to telephones to addresses; your first thought is, “Hey, this sounds a whole lot like a graph.” That's sort of what we did quite a few years ago is, let's just put it in some graph database. But as time went on and as it became much more important to have lower and lower latency, we really started thinking about, like, we don't really need all the nice and shiny things that, like, a graph database or some sort of graph technology really offers you. All we really need to do is I need to get from point A to point B, and that's it.Corey: Yeah, [unintelligible 00:10:35] graph database, what's the first thing I need to do? Well, spend six weeks in school trying to figure out exactly what the hell of graph database is because they're challenging to wrap your head around at the best of times. Then it just always seemed overpowered for a lot of—I don't want to say simple use cases; what you're doing is not simple, but it doesn't seem to be leveraging the higher-order advantages that graph database tends to offer.Jason: Yeah, it added a lot of complexity in the system, and [laugh] me and one of our senior principal engineers who's been here for a long time, we always have a joke: If you search our GitHub repository for… we'll say kindly-worded commit messages, you can see a very large correlation of those types of commit messages to all the commits to try and use a graph database from multiple years ago. It was not fun to work with, just added too much complexity, and we just didn't need all that shiny stuff. So, that's how we really just took a step back. Like, we really need to do it this way. We ended up effectively flattening the entire graph into an adjacency list.So, a key is basically some UUID to an entity. So, Corey, you'd have some UUID associated with you and the value would be whatever your information would be, as well as other UUIDs to links to the other entities. So, from that first retrieval, I can now unpack it, and, “Oh, now I have a whole bunch of other UUIDs I can then query on to get that information, which will then have more IDs associated with it,” is more or less sort of how we do our graph traversal and query this in our graph queries.Corey: One of the fun things about doing this sort of interview dance on the podcast as long as I have is you start to pick up what people are saying by virtue of what they don't say. Earlier, you wound up mentioning that we often use Redis for things like tens, or hundreds of gigabytes, which sort of leaves in my mind the strong implication that you're talking about something significantly larger than that. Can you disclose the scale of data we're talking about her?Jason: Yeah. So, we use Redis as our primary data store for our Identity Graph, and also for—soon to be for our Identity Network, which is our other database. 
But specifically for our Identity Graph, the scale we're talking about—we do have some compression added on there, but uncompressed it's about 12 terabytes of data; compressed, with replication, it's about four.Corey: That's a relatively decent compression factor, given that I imagine we're not talking about huge datasets.Jason: Yeah, so this is actually basically driven directly by cost: If you need to store less data, then you need less memory, therefore, you need to pay for less.Corey: So, our users once again have shored up my longtime argument that when it comes to cloud, cost and architecture are in fact the same thing. Please, continue by all means.Jason: I would be lying if I said that we didn't do weekly slash monthly reviews of costs. Where are we spending costs in AWS? How can we improve costs? How can we cut down on costs? How can you store less—Corey: You are singing my song.Jason: It is a [laugh] it is a constant discussion. But yeah, so we use Zstandard compression, which was developed at Facebook, and it's a dictionary-based compression. And the reason we went for this is—I mean, like if I say I want to compress, like, a Word document down, like, you can get a very, very, very high level of compression. It exists. It's not that interesting, everyone does it all the time.But with this we're talking about—so in that, basically, four or so terabytes of compressed data that we have, it's something around four to four-and-a-half billion keys and values, and so in that we're talking about each key-value only really having anywhere between 50 and 100 bytes. So, we're not compressing very large pieces of information. We're compressing very small 50 to 100 byte JSON values—we have UUID keys and JSON strings stored as values. So, we're compressing these 50 to 100 byte JSON strings with around 70, 80% compression. I mean, that's using Zstandard with a custom dictionary, which probably gave us the biggest cost savings of all; if you can [unintelligible 00:14:32] your dataset size by 60, 70%, that's huge. [laugh].Corey: Did you start off doing this on top of Redis, or was this an evolution that eventually got you there?Jason: It was an evolution over time. We were formerly Whitepages. I mean, Whitepages started back in the late-90s. It really just started off as a—we just—Corey: You were a very early adopter of Redis [laugh]. Yeah, at that point, like, "We got a time machine and started using it before it existed." Always a fun story. Recruiters seem to want that all the time.Jason: Yeah. So, when we first started, I mean, we didn't have that much data. It was basically just one provider that gave us some amount of data, so it was kind of just a—we just needed to start something quick, get something going. And so, I mean, we just did what most people do and just did the simplest thing: Just stuff it all in a Postgres database and call it good. Yeah, it was slow, but hey, it was back a long time ago, people were kind of okay with a little bit—Corey: The world moved a bit slower back then.Jason: Everything was a bit slower, no one really minded too much, the scale wasn't that large. But business requirements always change over time and they evolve, and so to meet those ever-evolving business requirements, we moved from Postgres, and where a lot of the fun commit messages that I mentioned earlier can be found is when we started working with Cassandra and Titan. That was before my time, before I had started, but from what I understand, that was a very fun time.
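To make the dictionary-based compression described above concrete, here is a minimal sketch using the python-zstandard bindings; the sample records and the 16 KB dictionary size are assumptions for illustration, not Ekata's actual schema or settings. The point of the trained dictionary is that tiny 50-to-100-byte JSON values share boilerplate that a generic compressor can't exploit on each value in isolation.

```python
# Hypothetical sketch of dictionary-trained Zstandard compression for many small
# JSON values, in the spirit of what Jason describes; the numbers are made up.
import json
import zstandard as zstd

# Train a shared dictionary on a sample of values; it captures the JSON structure
# the payloads have in common, which is where the bulk of the savings comes from.
samples = [
    json.dumps({"name": f"user{i}", "links": [f"uuid-{i + 1}", f"uuid-{i + 2}"]}).encode()
    for i in range(10_000)
]
dictionary = zstd.train_dictionary(16_384, samples)  # 16 KB dictionary, an assumed size

compressor = zstd.ZstdCompressor(dict_data=dictionary)
decompressor = zstd.ZstdDecompressor(dict_data=dictionary)

value = json.dumps({"name": "corey", "links": ["uuid-42"]}).encode()
compressed = compressor.compress(value)
assert decompressor.decompress(compressed) == value
print(f"{len(value)} bytes -> {len(compressed)} bytes")
```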
But then from there, that's when we really kind of just took a step back and just said, like, "There's so much stuff that we just don't need here. Let's really think about this, and let's try to optimize a bit more."Like, we know our use case, why not optimize for our use case? And that's how we ended up with the flattened graph storage stuffed into Redis. Because everyone thought of Redis as a cache, but everyone also knows that—why is it a cache? Because it's fast. [laugh]. We need something that's very fast.Corey: I still conceptualize it as an in-memory data store, just because when I turned on the disk persistence model back in 2011, give or take, it suddenly started slamming the entire data store to a halt for about three seconds every time it did it. It was, "What's this piece of crap here?" And it was, "Oh, yeah. Turns out there was a regression in Xen, which is what AWS used as a hypervisor back then." And, "Oh, yeah."So, fork became an expensive call, it took forever to wind up running. So oh, the obvious lesson we take from this is, oh, yeah, Redis is not designed to be used with disk persistence. Wrong lesson to take from the behavior, but it did cement, in my mind at least, the idea that this is something that we tend to use only as an in-memory store. It's clear that the technology has evolved, and in fact, I'm super glad that Redis threw you my direction to talk to you about this stuff because until talking to you, I was still—I got to admit—sort of in the position of thinking of it still as an in-memory data store because the fact that Redis says otherwise because they're envisioning it being something else, well okay, marketers gonna market. You're a customer; it's a lot harder for me to talk smack about your approach to this thing, when I see you doing it for, let's be serious here, what is a very important use case. If identity verification starts failing open and everyone claims to be who they say they are, that's something that is visible from orbit when it comes to the macroeconomic effect.Jason: Yeah, exactly. It's actually funny because before we moved to primarily just using Redis, before going to fully Redis, we did still use Redis. But we used ElastiCache, we had it loaded into ElastiCache, but we also had it loaded into DynamoDB as sort of a, I don't want this to fail because we weren't comfortable with actually using Redis as a primary database. So, we used to use ElastiCache with a fallback to DynamoDB, just in that off chance, which, you know, sometimes it happens, sometimes it didn't. But that's when we basically just went searching for new technologies, and that's actually how we landed on Redis on Flash, which kind of breaks the whole idea of Redis as an in-memory database to where it's Redis, but it's not just an in-memory database, you also have flash-backed storage.
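For readers trying to picture the flattened-graph storage Jason describes—UUID keys pointing at small JSON values that carry links to other UUIDs—here is a minimal sketch of a breadth-first traversal over such an adjacency list with redis-py; the field names are assumptions for illustration, not Ekata's schema.

```python
# Hypothetical sketch of the flattened-graph pattern: each entity is a UUID key
# whose JSON value holds its data plus the UUIDs it links to. Field names such
# as "links" are assumptions for illustration only.
import json
import redis

r = redis.Redis(decode_responses=True)

def traverse(start_uuid: str, max_hops: int = 2) -> list[dict]:
    """Breadth-first walk: one MGET round trip per hop over the current frontier."""
    seen = {start_uuid}
    frontier = [start_uuid]
    entities = []
    for _ in range(max_hops):
        if not frontier:
            break
        blobs = r.mget(frontier)  # fetch the whole frontier in a single round trip
        frontier = []
        for blob in blobs:
            if blob is None:
                continue
            entity = json.loads(blob)
            entities.append(entity)
            for linked in entity.get("links", []):
                if linked not in seen:
                    seen.add(linked)
                    frontier.append(linked)
    return entities

# e.g. start from a person entity and pull in their linked addresses and phones
# print(traverse("uuid-corey"))
```

Fetching each hop's frontier with a single MGET keeps the number of round trips proportional to the hop count rather than the number of entities touched, which is how sub-millisecond lookups add up to a fast end-to-end traversal.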
Does that data live in your AWS account? Is it something you're using as their managed service and throwing over the wall so it shows up as data transfer on your side? How is that implemented? I know they've got a few different models.Jason: There's a couple of aspects to how we're actually billed. I mean, so like, when you have ElastiCache, you're just billed for your, I don't know, whatever nodes you're using, cache dot, like, r5 or whatever they are… [unintelligible 00:19:12]Corey: I wish most people were using things that modern. But please, continue.Jason: But yeah, so you're basically just billed for whatever ElastiCache nodes you have, you have your hourly rate, I don't know, maybe you might reserve them. But with Redis Enterprise, the way that we're billed is there's two aspects. One is, well, the contract that we signed that basically allows us to use their technology [unintelligible 00:19:31] with a managed service, a managed solution. So, there's some amount that we pay them directly within some contract, as well as the actual nodes themselves that exist in the cluster. And so basically the way that this is set up is we effectively have a sub-account within our AWS account that Redis Labs has—or not Redis Labs; Redis Enterprise—has access to, which they deploy directly into, and effectively using VPC peering; that's how we allow our applications to talk directly to it.So, we're billed directly for the actual nodes of the cluster, which are i3.8x, I believe; they basically just run as EC2 instances. All of those instances, those exist on our bill. Like, we get billed for them; we pay for them. It's just basically some sub-account that they have access to that they can deploy into. So, we get billed for the instances of the cluster as well as whatever we pay for our enterprise contract. So, there's sort of two aspects to the actual billing of it.Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim it's better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive $100 in credit. That's V-U-L-T-R.com slash screaming.Corey: So, it's easy to sit here as an engineer—and believe me, having been one for most of my career, I fall subject to this bias all the time—where it's, "Oh, you're going to charge me a management fee to run this thing? Oh, that's ridiculous.
I can do it myself instead,” because, at least when I was learning in my dorm room, it was always a “Well, my time is free, but money is hard to come by.” And shaking off that perspective as my career continued to evolve was always a bit of a challenge for me. Do you ever find yourself or your team drifting toward the direction of, “Well, what we're paying for Redis Enterprise for? We could just run it ourselves with the open-source version and save whatever it is that they're charging on top of that?”Jason: Before we landed on Redis on Flash, we had that same thought, like, “Why don't we just run our own Redis?” And the decision to that is, well, managing such a large cluster that's so important to the function of our business, like, you effectively would have needed to hire someone full time to just sit there and stare at the cluster the whole time just to operate it, maintain it, make sure things are running smoothly. And it's something that we made a decision that, no, we're going to go with a managed solution. It's not easy to manage and maintain clusters of that size, especially when they're so important to business continuity. [laugh]. From our eyes, it was just not worth the investment for us to try and manage it ourselves and go with the fully managed solution.Corey: But even when we talk about it, it's one of those well—it's—everyone talks about, like, the wrong side of it first, the oh, it's easier if things are down if we wind up being able to say, “Oh, we have a ticket open,” rather than, “I'm on the support forum and waiting for people to get back to me.” Like, there's a defensibility perspective. We all just sort of, like sidestep past the real truth of it of, yeah, the people who are best in the world running and building these things are right now working on the problem when there is one.Jason: Yeah, they're the best in the world at trying to solve what's going on. [laugh].Corey: Yeah, because that is what we're paying them to do. Oh, right. People don't always volunteer for for-profit entities. I keep forgetting that part of it.Jason: Yeah, I mean, we've had some very, very fun production outages that just randomly happened because to our knowledge, we would just like—I would, like… “I have no idea what's going on.” And, you know, working with their support team, their DevOps team, honestly, it was a good, like, one-week troubleshooting. When we were validating the technology, we accidentally halted the database for seemingly no reason, and we couldn't possibly figure out what's going on. We kept talking to—we were talking to their DevOps team. They're saying, “Oh, we see all these writes going on for some reason.” We're like, “We're not sending any writes. Why is there writes?”And that was the whole back and forth for almost a week, trying to figure out what the heck was going on, and it happened to be, like, a very subtle case, in terms of, like, the how the keys and values are actually stored between RAM and flash and how it might swap in and out of flash. And like, all the way down to that level where I want to say we probably talked to their DevOps team at least two to three times, like, “Could you just explain this to me?” Like, “Sure,” like, “Why does this happen? I didn't know this was a thing.” So, on and so forth. 
Like, there's definitely some things that are fairly difficult to try and debug, which definitely helps having that enterprise-level solution.Corey: Well, that's the most valuable thing in any sort of operational experience where, okay, I can read the documentation and all the other things, and it tells me how it works. Great. The real value of whether I trust something in production is whether or not I know how it breaks where it's—Jason: Yeah.Corey: —okay—because the one thing you want to hear when you're calling someone up is, "Oh, yeah. We've seen this before. This is what you do to fix it." The worst thing in the world is, "Oh, that's interesting. We've never seen that before." Because then oh, dear Lord, we're off in the mists of trying to figure out what's going on here, while production is down.Jason: Yeah, kind of like, "What does this database do, like, in terms of what do we do?" Like, I mean, this is what we store our Identity Graph in. This has the graph of people's information. If we're trying to do identity verification for transactions or anything, for any of our products, I mean, we need to be able to query this database. It needs to be up.We have a certain requirement in terms of uptime, where we want at least, like, four nines of uptime. So, we also want a solution that, hey, even if it wants to break, don't break that bad. [laugh]. There's a difference between, "Oh, a node failed and okay, like, we're good in 10, 20 seconds," versus, "Oh, node failed. You lost data. You need to start reloading your dataset, or you can't query this anymore." [laugh]. There's a very large difference between those two.Corey: A little bit, yeah. That's also a great story to drive things across. Like, "Really? What is this going to cost us if we pay for the enterprise version? Great. Is it going to be more than some extortionately large number because if we're down for three hours in the course of a year, that's what we owe our customers back for not being able to deliver, so it seems to me this is kind of a no-brainer for things like that."Jason: Yeah, exactly. And, like, that's part of the reason—I mean, a lot of the things we do at Ekata, we usually go with enterprise-level for a lot of things we do. And it's really for that support factor in helping reduce any potential downtime for what we have because, well, if we don't consider ourselves comfortable or expert-level in that subject, I mean, then yeah, if it goes down, that's terrible for our customers. I mean, it's needed for literally every single query that comes through us.Corey: I did want to ask you—you keep talking about, "The database" and, "The cluster." That seems like you have a single database or a single cluster that winds up being responsible for all of this. That feels like the blast radius of that thing going down must be enormous. Have you done any research into breaking that out into smaller databases? What is it that's driven you toward this architectural pattern?Jason: Yeah, so right now, we actually have three regions we're deployed into. We have a copy of it in us-west in AWS, we have one in eu-central-1, and we also have one in ap-southeast-1. So, we have a complete copy of this database in three separate regions, as well as we're spread across all the available availability zones for that region. So, we try and be as multi-AZ as we can within a specific region.
So, we have thought about breaking it down, but having high availability, having multiple replication factors, having also, you know, it stored in multiple data centers, provides us at least a good level of comfortability.Specifically, in our US cluster, we actually have two. We literally also—with a lot of the cost savings that we got, we actually have two. We have one that literally sits idle 24/7 that we just call our backup and our standby where it's ready to go at a moment's notice. Thankfully, we haven't had to use it since I want to say its creation about a year-and-a-half ago, but it sits there in that doomsday scenario: “Oh, my gosh, this cluster literally cannot function anymore. Something crazy catastrophic happened,” and we can basically hot swap back into another production-ready cluster as needed, if needed.Because the really important thing is that if we broke it up into two separate databases if one of them goes down, that could still fail your entire query. Because what if that's the database that held your address? We can still query you, but we're going to try and get your address and well, there, your traversal just died because you can no longer get that. So, even trying to break it up doesn't really help us too much. We can still fail the entire traversal query.Corey: Yeah, which makes an awful lot of sense. Again, to be clear, you've obviously put thought into this goes way beyond the me hearing something in passing and saying, “Hey, you considered this thing?” Let's be very clear here. That is the sign of a terrible junior consultant. “Well, it sounds like what you built sucked. Did you consider building something that didn't suck?” “Oh, thanks, Professor. Really appreciate your pointing that out.” It's one of those useful things.Jason: It's like, “Oh, wow, we've been doing this for, I don't know, many, many years.” It's like, “Oh, wow, yeah. I haven't thought about that one yet.” [laugh].Corey: So, it sounds like you're relatively happy with how Redis has worked out for you as the primary data store. If you were doing it all again from scratch, would you make the same technology selection there or would you go in a different direction?Jason: Yeah, I think I'd make the same decision. I mean, we've been using Redis on Flash for at this point three, maybe coming up to four years at this point. There's a reason we keep renewing our contract and just keep continuing with them is because, to us, it just fits our use case so well, and we very much choose to continue going with this direction in this technology.Corey: What would you have them change as far as feature enhancements and new options being enabled there? Because remember, asking them right now in front of an audience like this puts them in a situation where they cannot possibly refuse. Please, how would you improve Redis from where it is now?Jason: I like how you think. That's [laugh] a [fair way to 00:28:42] to describe it. There's a couple of things for optimizations that can always be done. And, like, specifically with, like, Redis on Flash, there's some issue we had with storing as binary keys that to my knowledge hasn't necessarily been completed yet that basically prevents us from storing as binary, which has some amount of benefit because well, binary keys require less memory to store. 
When you're talking about 4 billion keys, even if you're just saving 20 bytes of key, like you're talking about potentially hundreds of gigabytes of savings once you—Corey: It adds up with the [crosstalk 00:29:13].Jason: Yeah, it adds up pretty quick. [laugh]. So, that's probably one of the big things that we've been in contact with them about fixing that hasn't gotten there yet. The other thing is, like, there's a couple of, like, random… gotchas that we had to learn along the way. It does add a little bit of complexity in our loading process.Effectively, when you first write a value into the database it'll write to RAM, but then once it gets flushed to flash, the database effectively asks itself, "Does this value already exist in flash?" Because once it's first written, it's just written to RAM, it isn't written to backing flash. And if it says, "No it's not," the database then does a write to write it into Flash and then evict it out of RAM. That sounds pretty innocent, but if it already exists in flash when you read it, it says, "Hey, I need to evict this; does it already exist in Flash?" "Yep." "Okay, just chuck it away. It already exists, we're good."It sounds pretty nice, but this is where we accidentally halted our database: once we started putting a huge amount of load on the cluster—our general throughput on a peak day is somewhere in the order of 160 to 200,000 Redis operations per second. So, you're starting to think of, hey, you might be evicting 100,000 values per second into Flash, you're talking about an added 100,000 write operations per second into your cluster, and that accidentally halted our database. So, the way we actually get around this is once we write our data store, we actually basically read the whole thing once because if you read every single key, you pretty much guarantee to cycle everything into Flash, so it doesn't have to do any of those writes. For right now, there is no option to basically say that, if I write—for our use case, we do very little writes except for upfront, so it'd be super nice for our use case, if we can say, "Hey, our write operations, no, I want you to actually do a full write-through to flash." Because, you know, that would effectively cut our entire database prep in half. We'd no longer have to do that read to cycle everything through. Those are probably the two big things, and one of the biggest gotchas that we ran into [laugh] that maybe isn't so well known.Corey: I really want to thank you for taking the time to speak with me today. If people want to learn more, where can they find you? And I will also theorize wildly, that if you're like basically every other company out there right now, you're probably hiring on your team, too.Jason: Yeah, I very much am hiring; I'm actually hiring quite a lot right now. [laugh]. So, they can reach me, my email is simply jason.frazier@ekata.com. I, unfortunately, don't have a Twitter handle. Or you can find me on LinkedIn. I'm pretty sure most people have LinkedIn nowadays.But yeah, and also feel free to reach out if you're also interested in learning more or opportunities, like I said, I'm hiring quite extensively. I'm specifically on the team that builds our actual product APIs that we offer to customers, so a lot of the sort of latency optimizations that we do usually are kind of through my team, in coordination with all the other teams, since we need to build a new API with this requirement. How do we get that requirement? [laugh]. Like, let's go start exploring.Corey: Excellent.
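Circling back to the Redis on Flash warm-up Jason describes a moment earlier—reading every key once after a bulk load so values settle into flash before production traffic arrives—a minimal sketch of that full pass might look like the following; the batch size is an assumption, and the flash-tiering behavior itself is specific to Redis Enterprise rather than open-source Redis.

```python
# Hypothetical sketch of the "read everything once" warm-up described above:
# a full SCAN over the keyspace, reading values in batches so every key gets
# touched once without issuing a storm of single-key round trips.
import redis

r = redis.Redis()

def warm_all_keys(batch_size: int = 1000) -> int:
    read = 0
    batch = []
    # scan_iter walks the keyspace incrementally without blocking the server
    for key in r.scan_iter(count=batch_size):
        batch.append(key)
        if len(batch) >= batch_size:
            r.mget(batch)  # touching the values is what matters, not the result
            read += len(batch)
            batch = []
    if batch:
        r.mget(batch)
        read += len(batch)
    return read

print(f"warmed {warm_all_keys():,} keys")
```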
I will, of course, throw a link to that in the [show notes 00:32:10] as well. I want to thank you for spending the time to speak with me today. I really do appreciate it.Jason: Yeah. I appreciate you having me on. It's been a good chat.Corey: Likewise. I'm sure we will cross paths in the future, especially as we stumble through the wide world of, you know, data stores in AWS, and this ecosystem keeps getting bigger, but somehow feels smaller all the time.Jason: Yeah, exactly. You know, we'll still be where we are hopefully, approving all of your transactions as they go through, make sure that you don't run into any friction.Corey: Thank you once again, for speaking to me, I really appreciate it.Jason: No problem. Thanks again for having me.Corey: Jason Frazier, Software Engineering Manager at Ekata. This has been a promoted episode brought to us by our friends at Redis. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment telling me that Enterprise Redis is ridiculous because you could build it yourself on a Raspberry Pi in only eight short months.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

Screaming in the Cloud
The Redis Rebrand with Yiftach Shoolman

Screaming in the Cloud

Play Episode Listen Later Feb 9, 2022 39:55


About YiftachYiftach is an experienced technologist, having held leadership engineering and product roles in diverse fields from application acceleration, cloud computing and software-as-a-service (SaaS), to broadband networks and metro networks. He was the founder, president and CTO of Crescendo Networks (acquired by F5, NASDAQ:FFIV), the vice president of software development at Native Networks (acquired by Alcatel, NASDAQ: ALU) and part of the founding team at ECI Telecom broadband division, where he served as vice president of software engineering.Yiftach holds a Bachelor of Science in Mathematics and Computer Science and has completed studies for Master of Science in Computer Science at Tel-Aviv University.Links: Redis, Inc.: https://redis.com/ Redis open source project: https://redis.io LinkedIn: https://www.linkedin.com/in/yiftachshoolman/ Twitter: https://twitter.com/yiftachsh TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is, in AWS, with no changes, and run it inside of their data centers that span multiple regions. I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem, and they get out there in public and say, “We're solving a problem,” it's very hard to snark about that. Multus Medical, Construx.ai, and Stax have seen significant results by using them, and it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or batch, consider checking them out. Visit risingcloud.com/benefits. That's risingcloud.com/benefits, and be sure to tell them that I said you because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted episode is brought to us by a company that I would have had to introduce differently until toward the end of last year. 
Today, they're Redis, but for a while they've been Redis Labs, and here to talk with me about that and oh, so much more is their co-founder and CTO, Yiftach Shoolman. Yiftach, thank you for joining me.Yiftach: Hi, Corey. Nice to be a guest of you. This is a very interesting podcast, and I often happen to hear it.Corey: I'm always surprised when people tell me that they listen to this because unlike a newsletter or being obnoxious on Twitter, I don't wind up getting a whole lot of feedback from people via email or whatnot. My operating theory has been that it's like a—when I send an email out, people will get that, "Oh, an email. I know how to send one of those." And they'll fire something back. But podcasts are almost like a radio show, and who calls into radio shows? Well, lunatics, generally, and if I give feedback, I'll feel like a lunatic.So, I get very little email response on stuff like this. But when I talk to people, they mention the show. It's, "Oh, right. Good. I did remember to turn the microphone on. People are out there listening." Thank you.So you, back in August of 2021, the company formerly known as Redis Labs became known as Redis. What caused the name change? And sure, it is a small change as opposed to, you know, completely rebranding a company like Square to Block, but what was it that really drove that, I guess, rebrand?Yiftach: Yeah, a great question. And by the way, if you look at our history, we started the company under the name of Garantia Data, which is a terrible name. [laugh]. And initially, what we wanted to do is to accelerate databases with both technologies, like memcached and Redis. Eventually, we built a solution for both, and we found out that Redis is much more used by people. That was back in 2011.So, in 2021, we finally decided to say let's unify the brand because, you know, we have been contributors to Redis from day one, and the creator of Redis, Salvatore Sanfilippo, is also part of the company. We believed that we should not confuse the market with multiple messages about Redis. Redis is not just a cache and we don't want people to interpret it differently. Redis is more than a cache; actually, if you look at our customers, like, 66% of them are using it as a real-time database. And we wanted to unify everyone around this naming to avoid different interpretations. So, that was the motivation, maybe.Corey: It's interesting you talk about, I guess, the evolution of the use cases for Redis. Back in 2011, I was using Redis in an AWS environment, and, "Ah, disk persistence, we're going to turn that on." And it didn't go so well back in those days because I found that the entire web app that we were using would periodically just slam to a halt for about three seconds whenever Redis wound up doing its disk persistence stuff; diving in far deeper than I really had any right to be doing, I figured out this was a regression in the Xen hypervisor and Xen kernel that AWS was using back then around the fork call. Not Redis's fault, to be very clear. But I looked at this and figured, "Ah. I know how to fix this."And that's right. We badgered AWS into migrating to Nitro years later and not using Xen anymore, and that solved that particular problem. But this was early on in my technical career. It sort of led to the impression of, "Oh, Redis. That's a cache, I should never try and use it as anything approaching a database." Today, that guidance no longer holds; you are positioning yourself as a data platform. What did that dawning awareness look like?
How did you get to where you are from where Redis was once envisioned in the industry: primarily as a cache?
Yiftach: Yeah, very good question. So, I think we should look at this problem from the application perspective, or from the user perspective. Sounds like a marketing term, but we all know we are in the age of real-time. Like, you expect everything to be instant. You don't want to wait, no one wants to wait, especially after COVID and everything that's brought to, you know, online services. And the expectation today from a real-time application is to be able to reply in less than 100 milliseconds in order to feel real-time. When I say 100 milliseconds, I mean from the time you click the button until you get the first byte of the response. Now, if you do the math, you can see that, like, 50% of this goes to the network and 50% of this goes to the data center. And inside the data center, in order to complete the transaction in less than 50 milliseconds, you need a database that replies in no time, like, less than a millisecond. And today, I must say, only Redis can guarantee that. If you use Redis as a cache, for every transaction there is a potential, at least, that not all the information will be in Redis when the transaction is happening, and you need to bring it, probably from the main database, and you need to process it, and you need to update Redis about it. And this takes a while. And eventually, it will hurt the end-user experience. And just to mention, if you look at our support tickets, like, I would say the majority of them are, why did Redis latency grow from 0.25 millisecond to 0.5 millisecond—because there is a multiplier effect for the end-user. So, I'm hoping I managed to answer what are the new challenges that we see today in the market.
Corey: Tell me a little bit more about the need for latency around things like that. Because as we look at modern web apps across the board, people are often accessing them through mobile devices, where, you know, we look at this spinning circle of regret as it winds up loading a site or whatnot, and it takes a few seconds. So, the idea that the database call has to complete in a millisecond or less seems a little odd viewed purely from a perspective of, “Really? Because I spent a lot of time waiting for the webpage to load.” What drives that latency requirement?
Yiftach: First of all, I agree with you. A lot of the time, you know, applications were not built for it. This is why I think we still have an opportunity to improve existing applications. But for those applications that were built for real-time, for instance, in the gaming space, it is clear that if you delay the reaction of your avatar by more than two frames, like, I mean, 60 milliseconds, the experience is very bad, and customers are not happy with this. Or, in the transaction scoring example, when you swipe the card, you want the card issuer to approve or not approve it immediately. You don't want to wait. [unintelligible 00:07:19] is another example. But in addition to that, there are systems like mobility as a service, like the Ubers of the world, or the Airbnbs of the world. Or any e-commerce site. In order to be able to reply in a second, they need to process, behind the scenes, thousands and sometimes millions of operations per second in order to get to the right decision. Yeah? You need to match between riders and drivers. Yeah, and you need to match between guests and free rooms in the hotel.
And you need to see that the inventory is up-to-date with the shoppers.And all these takes a lot of transactions and a lot of processing behind the scene in order just to reply in second in a consistent manner. And this is why that this is useful in all these application. And by the way, just a note, you know, we recently look at how many operations per second actually happening in our cloud environment, and I must tell you that I was surprised to see that we have over one thousand clusters or databases with the speed of between 1 million to 10 million operation per second. And over 150 databases with over 10 million operations per second, which is huge. And if you ask yourself how come, this is exactly the reason. This is exactly the reason. For every user interaction, usually you need to do a lot of interaction with your data.Corey: That kind of transaction volume, it would never occur to me to try and get that on a single database. It would, “All right, let's talk about sharding for days and trying to break things out.” But it's odd because a lot of the constraints that I was used to in my formative years, back when I was building software—badly—are very much no longer the case. The orders of magnitude are different. And things that used to require incredibly expensive, dedicated hardware now just require, “Oh yeah, you can click the button and get one of those things in the cloud, and it's dirt cheap.”And it's been a very strange journey. Speaking of clicking buttons, and getting things available in the cloud, Redis has been a thing, and its rise has almost perfectly tracked the rise of the cloud itself. There's of course the Redis open-source project, which has been around for ages and is what you're based on top of. And then obviously AWS wind up launching—“Ah, we're going to wind up ‘collaborating'”—and the quotes should be visible from orbit on that—“With Redis by launching ElasticCache for Redis.” And they say, “Oh, no, no, it's not competition. It's validating your market.”And then last year, they looked at you folks again, like, “Ah, we're launching a second service: MemoryDB in this case.” It's like Redis, except bad. And I look at this, and I figure what is their story this time? It's like, “Oh, we're going to validate the shit out of your market now.” It's, on some level, you should be flattered having multiple services launched trying to compete slash offer the same types of things.Yet Redis is not losing money year-over-year. By all accounts, you folks are absolutely killing it in the market. What is it like to work both in cloud environments and with the cloud vendors themselves?Yiftach: This is a very good question. And we use the term frenemy, like, they're our friend, but sometimes they are our enemy. We try to collaborate and compete them fairly. And, you know, AWS is just one example. I think that the other cloud took a different approach.Like with GCP, we are fully integrated in the console, what is called, “Third-party first-class service.” You click the button through the GCP console and then you're redirected to our cloud, Redis Enterprise cloud. With Azure even, we took a one step further and we provide a fully integrated solution, which is managed by Azure, Azure Cache for Redis, and we are the enterprise tier. But we are also cooperating with AWS. We cooperating on the marketplace, and we cooperate in other activities, including the open-source itself.Now, to tell you that we do not have, you know, a competition in the market, the competition is good. 
And I think MemoryDB is a validation of your first question, like, how can you use Redis [more than occasion 00:11:33], and I encourage users to test the differences between these services and to decide what fits to their use case. I promise you my perspective, at least, that we provide a better solution. We believe that any real-time use case should eventually be served by Redis, and you don't need to create another database for that, and you don't need to create another caching layer; you have everything in a single data platform, including all the popular models, data models, not only key-value, but also JSON, and search, and graph, and time-series… and probably AI, and vector embedding, and other stuff. And this is our approach.Corey: Now, I know it's unpopular in AWS circles to point this out, but I have it on good authority that they are not the only large-scale cloud provider on the planet. And in fact, if I go to the Google Cloud Console, they will sell me Redis as well, but it's through a partner affinity as a first-party offering in the console called Redis Enterprise. And it just seems like a very different interaction model, as in, their approach is they're going to build their own databases that solve for a wide variety of problems, some of them common and some of them ridiculous, but if you want actual Redis or something that looks like Redis, their solution is, “Oh, well, why don't we just give you Redis directly, instead of making a crappy store-brand equivalent of Redis?” It just seems like a very different go to market approach. Have you seen significant uptake of Redis as a product, through partnering with Google Cloud in that way?Yiftach: I would do answer this politely and say that I can no more say that the big cloud momentum is only on AWS. [laugh]. We see a lot of momentum in other clouds in terms of growth. And I would challenge the AWS guys to think differently about partnership with ISV. I'm not saying that they're not partnering with us, but I think the partnerships that we have with other clouds are more… closer. Yeah. It's like there is less friction. And it's up to them, you know? It's up to any cloud vendor to decide the approach they wants to take in this market. And it's good.Corey: It's a common refrain that I hear is that AWS is where we see the eight-hundred-pound gorilla in the space, it's very hard to deny that. But it also has been very tricky to wind up working with them in a partnership sense. Partnering is not really a language that Amazon speaks super well, kind of like, you know, toddlers and sharing. Because today, they aren't competing directly with you, but what about tomorrow? And it's such a large distributed company that in many cases, your account manager or your partner manager there doesn't realize that they're launching a competitor until 12 hours before it launches. And that's—yeah, feels great. It just feels very odd.That said, you are a partner with AWS and you are seeing significant adoption via the AWS Marketplace, and the reason I know that is because I see it in my own customer accounts on the consulting side, I'm starting to see more and more spend via the marketplace, partially due to offset spend commitments that they've made contractually with AWS, but also, privately I tend to believe a lot of that is also an end-run around their own internal procurement department, who, “Oh, you want some Redis. Great. 
Give me nine months, and then find three other vendors to give me competitive bids on it.” And yeah, that's not how the world works anymore. Have you found that the marketplace is a discovery mechanism for customers to come to Redis, or are they mostly going into the marketplace saying, “I need Redis. I want it to be Redis Enterprise, from Redis, but this is the way I'm going to procure it.”Yiftach: My [unintelligible 00:15:17], you know, there are people that are seeing differently, that marketplace is how to be discovered through the marketplace. I still see it, I still see it as a billing mechanism for us, right? I mean, AWS helping us in sell. I mean, their sell are also sell partner and we have quite a few deals with them. And this mechanism works very nicely, I must say.And I know that all the marketplaces are trying to change it, for years. That customer whenever they look at something, they will go through the marketplace and find it there, but it's hard for us to see the momentum there. First of all, we don't have the metrics on the marketplace; we cannot say it works, it doesn't works. What we do see that works is that when we own the customer and when the customer is ascertaining how to pay, through the credit card or through the wire, they usually prefer to pay through the commit from the cloud, whether it is AWS, GCP, or Azure. And for that, we help them to do the transaction seamlessly.So, for me, the marketplace, the number one reason for that is to use your existing commit with the cloud provider and to pay for ourselves. That said, I must say that [with disregard 00:16:33] [laugh] AWS should improve something because not the entire deal is committed. It's like 50% or 60%, don't remember the exact number. But in other clouds when ISVs are interacting with them, the entire deal is credited for the commit, which is a big difference.Corey: I do point out, this is an increasing trend that I'm seeing across the board. For those who are unaware, when you have a large-scale commitment to spend a certain dollar amount per year on AWS Marketplace spend counts at a 50% rate. So, 50 cents of every dollar you spend to the marketplace counts toward your commit. And once upon a time, this was something that was advertised by AWS enterprise sales teams, as, “Ah. This is a benefit.”And they're talking about moving things over that at the time are great, you can move that $10,000 a year thing there. And it's, “You have a $50 million annual commit. You're getting awfully excited about knocking $5,000 off of it.” Now, as we see that pattern starting to gain momentum, we're talking millions a year toward a commit, and that is the game changer that they were talking about. It just looks ridiculous at the smaller-scale.Yiftach: Yeah. I agree. I agree. But anyway, I think this initiative—and I'm sure that AWS will change it one day because the other cloud, they decided not to play this game. They decided to give the entire—you know, whatever you pay for ISVs, it will be credited with your commit.Corey: We're all biased by our own experiences, so I have a certain point of view based upon what I see in my customer profile, but my customers don't necessarily look like the bulk of your customers. Your website talks a lot about Redis being available in all cloud providers, in on-prem environments, the hybrid and multi-cloud stories. Do you see significant uptake of workloads that span multiple clouds, or are they individual workloads that are on multiple providers? 
Like for example Workload A lives on Azure, Workload B lives on GCP? Or is it just Workload A is splattered across all four cloud providers?Yiftach: Did the majority of the workloads is splitted between application and each of them use different cloud. But we started to see more and more use cases in which you want to use the same data sets across cloud, and between hybrid and cloud, and we provide this solution as well. I don't want to promote myself so much because you worried me at the beginning, but we create these products that is called Active-Active Redis that is based on CRDT, Conflict-free Replicated Data Type. But in a nutshell, it allows you to run across multiple clouds, or multiple region in the same cloud, or hybrid environment with the speed the of Redis while guaranteeing that eventually all your rights will be converged to the same value, and while maintaining the speed of Redis. So, I would say quite a few customers have found it very attractive for them, and very easy to migrate between clouds or between hybrid to the cloud because in this approach of Active-Active, you don't need the single cut-off.A single cut-off is very complex process when you want to move a workload from one cloud to another. Think about it, it is not only data; you want to make sure that the whole entire application works. It never works in one shot and you need to return back, and if you don't have the data with you, you're stuck. So, that mechanism really helps. But the bigger picture, like you mentioned, we see a lot of [unintelligible 00:20:12] distribution need, like, to maintain the five nines availability and to be closer to the user to guarantee the real-time. Send dataset deployment across multiple clouds, and I agree, we see a growth there, but it is still not the mainstream, I would say.Corey: I think that my position on multi-cloud has been misconstrued in a variety of corners, which is doubtless my fault for failing to explain it properly. My belief has been when you're building something on day-one, greenfield pickup provider—I don't care which one—go all in. But I also am not a big fan of potentially closing off strategic or theoretical changes down the road. And if you're picking, let's say, DynamoDB, or Cloud Spanner, or Cosmos DB, and that is the core of your application, moving a workload from Cloud A to Cloud B is already very hard. If you have to redo the entire interface model for how it talks to his data store and the expectations built into that over a number of years, it becomes basically impossible.So, I'm a believer in going all-in but only to a certain point, in some cases, and for some workloads. I mean, I done a lot of work with DynamoDB, myself for my newsletter production pipeline, just because if I can't use AWS anymore, I don't really need to write Last Week in AWS. I have a hard time envisioning a scenario in which I need to go cross-cloud but still talk about the existing thing. But my use case is not other folks' use case. So, I'm a big believer in the theoretical exodus, just because not doing that in many corporate environments becomes a lot less defensible. And Redis seems like a way to go in that direction.Yiftach: Yeah. Totally with you. I think that this is a very important—and by the way, it is not… to say that multi-cloud is wrong, but it allows you to migrate workload from one cloud to another, once you decide to do it. And it's put you in a position as a consumer—no one wants—why no one likes [unintelligible 00:22:14]. 
You know, because of the pricing model [laugh], okay, right?You don't want to repeat this story, again with AWS, and with any of them. So, you want to provide enough choices, and in order to do that, you need to build your application on infrastructures that can be migrated from one cloud to another and will not be, you know, reliant on single cloud database that no one else has, I think it's clear.Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim its better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less that sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive a $100 in credit. Thats v-u-l-t-r.com slash screaming.Corey: Well, going greenfield story of building something out today, “I'm going to go back to my desk after this and go ahead and start building out a new application.” And great, I want to use Redis because this has been a great conversation, and it's been super compelling. I am probably not going to go to redis.com and sign up for an enterprise Redis agreement to start building out.It's much likelier that I would start down the open-source path because it turns out that I believe ‘docker pull redis' is pretty close to—or ‘docker run redis latest' or whatever it is, or however it is you want to get Redis—I have no judgment here—is going to get you the open-source tool super well. What is the nature of your relationship between the open-source Redis and the enterprise Redis that runs on money?Yiftach: So, first of all, we are, like, the number one contributor to the Redis open-source. So, I would say 98% of the code of Redis contributed by our team. Including the creator of Redis, Salvatore Sanfilippo, was part of our team. Salvatore has stepped back in, like—when was it? Like, one-and-a-half, almost two years ago because the project became, like, a monster, and he said, “Listen, this is too much. I worked, like, 10 years or 11 years. I want to rest a bit.”And the way we built the core team around Redis, we said we will allocate three people from the company according to their contribution. So, the leaders—the number two after Salvatore in terms of contribution, I mean, significant contribution, not typo and stuff [laugh] like this. 
And we also decided to make it, like, a community-driven project, and we invited people from other places, including AWS, Madelyn, and Zhao Zhao from Alibaba.And this is based on the past contribution to Redis, not because they are from famous cloud providers. And I think it works very well. We have a committee which is driven by consensus, and this is how we agree what we put in the open-source and what we do not. But in addition to the pure open-source, we also invested a lot in what we call Source Available. Source Available is a new approach that, I think, we were the first who started it, back in 2018, when we wanted to have a mechanism to be able to monetize the company.And what we did by then, we added all the modules which are extensions to the latest open-source that allow you to do the model, like JSON and search and graph and time series and AI and many others with Redis under the Source Available license. That mean you can use it like BSD; you can change everything, without copyleft, you don't need to contribute back. But there is one restriction. You cannot create a service or a product that compete directly with us. And so far, it works very well, and you can launch Docker containers with search, and with JSON—or with all the modules combined; we also having this—and get the experience from day zero.We also made sure that all your clients are now working very well with these modules, and we even created the object mapping client for each of the major language. So, we can easily integrate it with Spring, in Django, and Node.js platform, et cetera. This is called when OM .NET, OM Java, OM Node.js, OM Python, et cetera, very nicely. You don't need to know all the commands associated. We just speak [unintelligible 00:26:22] level with Redis and get all the functionality.Corey: It's a hard problem to solve for, and whenever we start talking about license changes for things like this, it becomes a passionate conversation. People with all sorts of different perspectives and assumptions baked in—and a remembrance of yesteryear—all have different thoughts on coulda, woulda, shoulda, et cetera, et cetera. But let's be very clear, running a business is hard. And building a business on top of an open-source model is way harder. Now, if we were launching an open-source company today in 2022, there are different things we would do; we would look at it very differently. But over a decade ago, it didn't work that way. If you were to be looking at how open-source companies can continue to thrive in a time of cloud, what guidance do you have for him?Yiftach: This is a great question, and I must say that the every month or every few weeks, I have a discussion with a new team of founders that want to create an open-source, and they asked me what is my opinion here. And I would say, today, that we and other ISV, we built a system for you to decide what you want to freely open-source, and take into account that if this goes very well, the cloud provider will pick it up and will make a service out of it. Because this is the way they work. And the way for you to protect yourself is to have different types of licenses, like we did. Like you can decide about Source Available and restrict it to the minimum.By the way, I think that Source Available is much better than AGPL with the copyleft and everything that it's provide. So, AGPL is a pure open-source, but it has so many other implications that people just don't want to touch it. 
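To give a flavor of the object-mapping (OM) clients Yiftach describes, here is a minimal Python sketch. It assumes the redis-om package and a local Redis server (for example one started from the open-source image Corey mentions above); the Customer model and its fields are invented for illustration, and the exact API may vary between versions.

# Minimal sketch of the object-mapping style described above, assuming the
# redis-om Python package (pip install redis-om) and a local Redis server,
# e.g. one started with: docker run -p 6379:6379 redis
# The Customer model and its fields are invented for illustration.
from redis_om import HashModel

class Customer(HashModel):
    first_name: str
    last_name: str
    age: int

# The object is persisted as a Redis hash; no raw HSET/HGETALL commands needed.
jane = Customer(first_name="Jane", last_name="Doe", age=35)
jane.save()

fetched = Customer.get(jane.pk)   # look up by the generated primary key
print(fetched.first_name, fetched.age)

Richer queries, such as searching by field, lean on the search module shipped with Redis Stack, which is where the Source Available extensions discussed here come in.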
So, it's available, you can do whatever you want, you just cannot create a competing product. And of course, if there are some code that you want to close, use closed-source. So, I would say think very seriously about your licensing model. This is my advice. It's not to say that open-source is not great. I truly believe that it helps you to get the adoption; there are a lot of other benefits that open-source creates.Corey: Historically, it feels that open-source was one of those things that people wanted the upside of the community, and the adoption, and getting people to work. Especially on a shoestring budget, and people can go in and fix these things. Like, that's the naive approach of, “Oh, it just because we get a whole bunch of free, unpaid labor.” Okay, fine, whatever. It also engenders trust and lets people iterate rapidly and build things to suit their use cases, and you learn so much more about the use cases as you continue to grow.But on the other side of it, there's always the Docker problem where they gave away the thing that added stupendous value. If they hadn't gone open-source with Docker, it never would have gotten the adoption that it did, but by going open-source, they wound up, effectively, being forced more or less than to say, “Okay, we're going to give away this awesome thing and then sell services around it.” And that's not really a venture-scaled business, in most cases. It's a hard market.Yiftach: And the [gate 00:29:26] should never be the cloud. Because people, like you mentioned, people doesn't start with the cloud. They start to develop with on the laptop or somewhere with Docker or whatever. And this is where Source Available can shine because it allows you to do the same thing like open-source—and be very clear, please do not confuse your user. Tells them that this is Source Available; they should know in advance, so they will be not surprise later on when they move to the production stage.Then if they have some question, legal questions, for Redis, we're able to answer, yeah. And if they don't, they need to deal with the implication of this. And so far, we found it suitable to most of the users. Of course, there will be always open-source gurus.Corey: If there's one thing people have on the internet, it's opinions.Yiftach: Yeah. I challenge the open-source gurus to change their mindset because the world has changed. You know, we cannot treat the open-source like we used to treat it there in 2008 or early-90s. It is a different world. And you want companies like Redis, you want people to invest in open-source. And we need to somehow survive, right? We need to create a business. So, I challenge these [OSI 00:30:38] committees to think differently. I hope they will, one day.Corey: One last topic that I want to cover is the explosion of AI—artificial intelligence—or machine-learning, or bias-laundering, depending upon who you ask. It feels in many ways like a marketing slogan, and I viewed it as more or less selling pickaxes into a digital gold rush on the part of the cloud providers, until somewhat recently, when I started encountering a bunch of customers who are in fact using it for very interesting and very appropriate use cases. Now, I'm seeing a bunch of databases that are touting their machine-learning capabilities on top of the existing database offerings. What's Redis's story around that?Yiftach: Great question. Great question. So, I think today, I have two story, which are related to the real-time AI, yeah, we are in the real-time world. 
One of them is what we call the online feature store. Just to explain to the audience what a feature store is: usually, when you do inferencing, you need to enhance the transaction with some data in order to get the right quality. Where do you store this data? So, for online transactions, usually you want to store it in Redis because you don't want to delay your application whenever you do inferencing. So, the way it works, you get a transaction, you bring the features, you combine them together, send them to inferencing, and then do whatever you want to do with the results. One of the things that we did with Redis, we combined AI inferencing inside with this, and we allow you to do that in one API call, which makes the real-time much, much faster. You can decide to use Redis just as a [unintelligible 00:32:16] feature store; this is also great.
The other aspect of AI is vector embedding. Just to make sure that we are all aligned on the vector embedding term: a vector embedding allows you to provide a context for your text, for your video, for your image in just, say, 128 bytes of floating point—it really depends on the quality of the vector. And think about it: tomorrow, every profile in your database will have a vector that explains the context of the product, the context of the user, everything, like, in one single object in your profile. So, Redis has it. So, what can you do once you have it? For instance, you can search for the similar vectors—this is called vector similarity search—for recommendation engines, and for many, many, many other applications. And you would like to combine it with metadata, like, not only bring me all the similar context, but also, you know, some information about the visitor, like the age, like the height, like where the person lives. So, it's not only vector similarity search, it's search with vector similarity search.
Now, the question could be asked, do you want to create a totally different database just for this vector similarity search, and then have to make it as fast as Redis, because you need everything to run in real-time? And this is why I encourage people to look at what they have in Redis. And again, I don't want to be a marketeer here, but I don't think that a single feature requires deploying a new database. And we added this capability because we do see the need to support it in real-time. I hope my answer was not too long.
Corey: No, no, it's the right answer because the story that you're telling about this is not about how smart you are; it's not about hype-driven stuff. You're talking about using this to meet actual customer needs. And if you tell me that, “Oh, we built this thing because we're smart,” yeah, I can needle you to death on that and make fun of you until I'm blue in the face. But when you say, “I'm going to go ahead and do this because our customers have this pain,” well, that's a lot harder for me to criticize because, yeah, you have to meet your customers where they are; that's the right thing to do. So, this is the kind of story that is slowly but surely changing my view on the idea of machine-learning across the board.
Yiftach: I'm happy that you like it. We like it as well. And we see a lot of traction today. Vector similarity search is becoming, like, a real thing. And also feature stores.
Corey: I want to thank you so much for taking the time to speak with me today. If people want to learn more, where can they find you?
Yiftach: Ah, I think first of all, you can go to redis.io or redis.com and look for our solution.
And I'm available on LinkedIn and Twitter, and you can find me.Corey: And we will of course put links to all of that in the [show notes 00:35:10]. Thank you so much for your time today. I appreciate it.Yiftach: Thank you, Corey. It was very nice conversation. I really enjoy it. Thank you very much.Corey: Thank you. You as well. Yiftach Shoolman, CTO and co-founder at Redis. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with a long rambling angry comment about open-source licensing that no one is going to be bothered to read.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
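To ground the vector similarity search discussion from this episode, here is a minimal sketch of the “search with vector similarity search” pattern using the redis-py client. It assumes a Redis instance with the search module loaded (for example Redis Stack); the index name, field names, keys, and tiny four-dimensional vectors are all invented for illustration.

# Minimal sketch of "search with vector similarity search" via redis-py,
# assuming a Redis instance with the search module loaded (e.g. Redis Stack).
# Index name, field names, keys, and the 4-dimensional vectors are invented.
import struct
import redis
from redis.commands.search.field import TagField, VectorField
from redis.commands.search.query import Query

r = redis.Redis(host="localhost", port=6379)

# Index hashes on a metadata tag plus a FLOAT32 vector field.
r.ft("idx:products").create_index([
    TagField("category"),
    VectorField("embedding", "FLAT",
                {"TYPE": "FLOAT32", "DIM": 4, "DISTANCE_METRIC": "COSINE"}),
])

def vec(*xs):
    return struct.pack(f"<{len(xs)}f", *xs)   # raw little-endian float32 bytes

r.hset("product:1", mapping={"category": "shoes", "embedding": vec(0.1, 0.2, 0.3, 0.4)})
r.hset("product:2", mapping={"category": "shoes", "embedding": vec(0.9, 0.1, 0.0, 0.2)})

# Filter on metadata, then rank matches by distance to the query vector (KNN).
q = (Query("(@category:{shoes})=>[KNN 2 @embedding $vec AS score]")
     .sort_by("score")
     .return_fields("category", "score")
     .dialect(2))
results = r.ft("idx:products").search(q, query_params={"vec": vec(0.1, 0.2, 0.3, 0.5)})
for doc in results.docs:
    print(doc.id, doc.score)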

Investor Connect Podcast
Investor Connect - 577 - Daniel Cohen of Viola Ventures

Investor Connect Podcast

Play Episode Listen Later Jul 26, 2021 12:28


In this episode, Hall welcomes Daniel Cohen, General Partner at Viola Ventures. Viola Ventures is part of Viola, Israel's leading tech-focused investment group with over $3.5B AUM. Founded in 2000, Viola Ventures empowers early-stage start-ups to become global leaders. The fund manages over $1B and has backed some of Israel's unicorns such as Payoneer, ironSource, Lightricks, Outbrain, Redis Labs, Pagaya, and more. To date, the fund has invested in 82 companies and has recorded over 40 exits. Daniel joined Viola Ventures after 11 years at Gemini Israel Ventures where he invested in various companies including Adap.tv (acquired by AOL for $450M), Outbrain, WatchDox (acquired by Blackberry for ~$100M), and Minute Media. Daniel's investment focus is the B2C market, including consumer internet, e-Commerce, DTC, games, and digital media. Daniel began his career as a developer and product manager in a few Israeli high-tech companies, including Commtouch and Scitex. He has a BA in computer science and psychology from Tel-Aviv University and an MBA from INSEAD. He currently serves on the board of EX.CO (formerly Playbuzz), Lightricks (creator of Facetune), and Maapilim. He was also on the board of Tapingo (acquired by Grubhub for $150M) and Origami Logic (acquired by Intuit). Daniel advises investors and entrepreneurs, shares how he sees the industry evolving, and discusses his investment thesis. You can find Viola Ventures on the web, LinkedIn, and Twitter, and Daniel can be reached via email, LinkedIn, and Twitter.

The 6 Figure Developer Podcast
Episode 199 – Redis with Guy Royse

The 6 Figure Developer Podcast

Play Episode Listen Later Jun 7, 2021 34:51


Guy works for Redis Labs as a Developer Advocate. Combining his decades of experience in writing software with a passion for sharing what he has learned, Guy goes out into developer communities and helps others build great software. Fun fact: Redis stands for Remote Dictionary Server. Links https://twitter.com/guyroyse https://www.linkedin.com/in/groyse https://github.com/guyroyse http://guyroyse.com/ https://redislabs.com/blog/author/guy-royse/ Resources https://redis.io/ https://redislabs.com/ https://redis.io/topics/streams-intro https://redislabs.com/blog/introduction-to-redisgears/ https://developer.redislabs.com/ https://www.youtube.com/redislabs https://redislabs.com/blog/7-redis-worst-practices/ https://redislabs.com/blog/goodbye-cache-redis-as-a-primary-database/ "Tempting Time" by Animals As Leaders used with permissions - All Rights Reserved

Software Defined Talk
Episode 293: Don’t steal my kid’s bike, steal my bike

Software Defined Talk

Play Episode Listen Later Apr 9, 2021 63:39


This week we discuss the Supreme Court’s Ruling in Google vs. Oracle and the future of open source business models. Plus, do you really need a yard? Rundown API’s for Everyone Supreme Court sides with Google in Oracle’s API copyright case (https://www.theverge.com/2021/4/5/22367851/google-oracle-supreme-court-ruling-java-android-api) SCOTUS Decision Oracle vs. Google (https://www.supremecourt.gov/opinions/20pdf/18-956_d18f.pdf) Xinuos—owners of what used to be SCO—file suit against Red Hat and IBM (https://arstechnica.com/gadgets/2021/04/xinuos-finishes-picking-up-scos-mantle-by-suing-red-hat-and-ibm/) OSS Business Models The Identity Crisis Facing Open Source Companies in the Cloud (https://www.tomtunguz.com/open-source-cloud-identity-crisis/) GitHub Sponsors (https://github.com/sponsors/community) Everyone uses OpenSSL, but nobody's willing to fix it — except the Linux Foundation (https://venturebeat.com/2014/05/29/everyone-uses-openssl-but-nobodys-willing-to-fix-it-except-the-linux-foundation/) Troubles with the Open Source Gig Economy and Sustainability Tip Jar (https://www.aniszczyk.org/2019/03/25/troubles-with-the-open-source-gig-economy-and-sustainability-tip-jar/) Tidelift | A managed open source subscription backed by creators and maintainers (https://tidelift.com/) Relevant to your interests Briefing: Amazon Plans for Return to ‘Office-Centric Culture’ (https://www.theinformation.com/briefings/11454c) Theme Studio: Create VS Code Themes! (https://themes.vscode.one/) Software development is a losers game (https://thehosk.medium.com/software-development-is-a-losers-game-fc68bb30d7eb) Amazon apologizes (https://www.theverge.com/2021/4/3/22365330/amazon-apology-pee-bottles-worker-warehouse-union-pocan) Gitlab acquired Peach Fuzzer Pro then open-sourced most of it (https://gitlab.com/gitlab-org/security-products/protocol-fuzzer-ce) VMware Taps Knative for Tanzu Kubernetes Abstraction - SDxCentral (https://www.sdxcentral.com/articles/news/vmware-taps-knative-for-tanzu-kubernetes-abstraction/2021/04/) LG confirms it’s getting out of the smartphone business (https://www.theverge.com/2021/4/4/22346084/lg-exits-smartphone-business) Personal data of 533 million Facebook users leaks online (https://www.theverge.com/2021/4/4/22366822/facebook-personal-data-533-million-leaks-online-email-phone-numbers) Spotify opens a second personalized playlist to sponsors, after ‘Discover Weekly’ in 2019 (https://techcrunch.com/2021/04/05/spotify-opens-a-second-personalized-playlist-to-sponsors-after-discover-weekly-in-2019/) Google will stop using Oracle's finance software and adopt SAP instead (https://www.cnbc.com/2021/04/05/google-will-stop-using-oracle-finance-software-switch-to-sap.html) Biden allows Trump admin's H-1B visa program suspension to expire (https://www.ciodive.com/news/h1B-program-suspension-biden/597813/) Clubhouse Discusses Funding at About $4 Billion Value (https://www.bloomberg.com/news/articles/2021-04-06/clubhouse-is-said-to-discuss-funding-at-about-4-billion-value) Note-taking app Mem raises $5.6 million from Andreessen Horowitz (https://techcrunch.com/2021/04/06/note-taking-app-mem-raises-5-6-million-from-andreessen-horowitz/) IBM Delivers on Cloud Promise for Financial Services - Container Journal (https://containerjournal.com/features/ibm-delivers-on-cloud-promise-for-financial-services/) IBM creates a COBOL compiler – for Linux on x86 (https://www.theregister.com/2021/04/07/ibm_cobol_x86_linux/) Target CIO Mike McNamara makes a cloud declaration of independence 
(https://www.protocol.com/enterprise/target-cio-mike-mcnamara-multicloud) T-Mobile’s 5G home internet service has become a reality (https://www.theverge.com/2021/4/7/22312155/t-mobile-5g-home-internet-wireless-broadband) KKR hands Box a $500M lifeline (https://techcrunch.com/2021/04/08/kkr-hands-box-a-500m-lifeline/) Cisco and HashiCorp Join Forces to Deliver Infrastructure as Code Automation Across Hybrid Cloud (https://blogs.cisco.com/cloud/cisco-and-hashicorp-join-forces-to-deliver-infrastructure-as-code-automation-across-hybrid-cloud) Redis Labs doubles value to $2bn in 9 months with $110m Series G funding round (https://www.theregister.com/2021/04/07/redis_labs_doubles_value_to/) Google Is Testing Its Controversial New Ad Targeting Tech in Millions of Browsers. Here’s What We Know. (https://www.eff.org/deeplinks/2021/03/google-testing-its-controversial-new-ad-targeting-tech-millions-browsers-heres) JAB Guidance on CentOS Linux End of Life (https://www.fedramp.gov/2021-03-30-CentOS-Linux-End-of-Life/) Nonsense Inside a viral website (https://notfunatparties.substack.com/p/inside-a-viral-website) Engineers Sneakily Upgrade Apple M1 Mac Mini With More Storage, RAM (https://www.tomshardware.com/news/mac-m1-mod) Sponsors CBT Nuggets — Training available for IT Pros anytime, anywhere. Start learning today at cbtnuggets.com/sdt (http://cbtnuggets.com/sdt) Listener Feedback Ryan from DataDog wants you to know they are hiring a Technical Writer (https://www.datadoghq.com/careers/detail/?gh_jid=2220727) and Technical Curriculum Developer (https://www.datadoghq.com/careers/detail/?gh_jid=2220736) REMOTE available. Conferences SpringOne.io (https://springone.io), Sep 1st to 2nd - CFP is open until April 9th (https://springone.io/cfp). Two SpringOne Tours: (1.) developer-bonanza in for NA, March 10th and 11th (https://tanzu.vmware.com/developer/tv/springone-tour/0014/), and, (2.) EMEA dev-fest on April 28th (https://tanzu.vmware.com/developer/tv/springone-tour/0015/). VMware Tanzu Up Close Virtual Event (https://connect.tanzu.vmware.com/EMEA_P5_FE_Q122_Event_VMware-Tanzu-Up-Close.html), April 27, 2021, 10:00am - 5:50pm CET SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! Follow us on Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/) and LinkedIn (https://www.linkedin.com/company/software-defined-talk/). Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Use the code SDT to get $20 off Coté’s book, (https://leanpub.com/digitalwtf/c/sdt) Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)! Recommendations Brandon: Formula 1: Drive to Survive (https://www.netflix.com/title/80204890). Coté: Peanut M&M’s; Hitchhiker’s Guide to the Galaxy. Photo Credit (https://unsplash.com/photos/Vq__yk6faOI) Photo Credit (https://www.supremecourt.gov/opinions/20pdf/18-956_d18f.pdf)

Redis Stars Podcast
Redis for Fraud Detection Systems

Redis Stars Podcast

Play Episode Listen Later Mar 18, 2021 15:15


[:30] Tell us a little bit about yourself?
[1:00] What have you been doing to stay busy during this COVID world?
[1:30] Can you talk about Walmart and the role you are working on?
[2:15] When did you get to know about Redis for the first time?
[3:20] Which are the most recent Redis modules you worked on and why did you choose them?
[6:05] What type of infrastructure did you use for Kubernetes?
[7:10] I was going through your Medium blog post titled “Redis Centric Fraud Detection System” where you mentioned that “AI plays a huge part in Fraud detection and Redis has extensive support for that.” Can you elaborate?
[9:50] Apart from real-time Ad Fraud detection, what are the other exciting things around Redis you are working on?
[11:50] Which feature would you love to see in Redis?
[12:30] What developer tools do you enjoy using the most?
Relevant Links:
Repo - Building Real Time Fraud Detection Application using Redis
Blog - Redis Centric Real-Time Fraud Detection
Blog - Introducing the Redis Data Source Plug-in for Grafana

Azure Friday (HD) - Channel 9
Scale your cloud app with Azure Cache for Redis

Azure Friday (HD) - Channel 9

Play Episode Listen Later Feb 6, 2021 17:03


Ye Gu joins Scott Hanselman to discuss Azure Cache for Redis, a popular open-source in-memory data store that uses DRAM to store the most frequently used or time-sensitive data for fast retrieval. With it, you can create applications on Azure that handle millions of requests per second at down to sub-millisecond latency. Now, Azure Cache for Redis is becoming even more powerful through the integration of Redis Enterprise in partnership with Redis Labs.
[0:00:00] – Introduction
[0:00:46] – Presentation
[0:06:34] – Demo
[0:15:48] – Discussion & wrap-up
Meeting developer needs with powerful new features in Azure Cache for Redis
Azure Cache for Redis
Quickstart: Create an Enterprise tier cache (preview)
How to Improve Your Azure SQL Performance by up to 800%
Optimize your web applications by caching read-only data with Redis
Create a free account (Azure)

Azure Friday (Audio) - Channel 9
Scale your cloud app with Azure Cache for Redis

Azure Friday (Audio) - Channel 9

Play Episode Listen Later Feb 6, 2021 17:03


Ye Gu joins Scott Hanselman to discuss Azure Cache for Redis, a popular open-source in-memory data store that uses DRAM to store the most frequently used or time-sensitive data for fast retrieval. With it, you can create applications on Azure that handle millions of requests per second at down to sub-millisecond latency. Now, Azure Cache for Redis is becoming even more powerful through the integration of Redis Enterprise in partnership with Redis Labs.
[0:00:00] – Introduction
[0:00:46] – Presentation
[0:06:34] – Demo
[0:15:48] – Discussion & wrap-up
Meeting developer needs with powerful new features in Azure Cache for Redis
Azure Cache for Redis
Quickstart: Create an Enterprise tier cache (preview)
How to Improve Your Azure SQL Performance by up to 800%
Optimize your web applications by caching read-only data with Redis
Create a free account (Azure)
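For a feel of what “sub-millisecond latency” looks like from the client side, here is a minimal sketch that times round-trips with the redis-py client. It assumes a locally running Redis; the commented Azure connection line, the key name, and the loop count are placeholders for illustration.

# Minimal sketch: time client-side round-trips to a Redis server.
# Assumes a local server (e.g. started with: docker run -p 6379:6379 redis)
# and the redis-py client (pip install redis). For Azure Cache for Redis you
# would instead point at your cache endpoint, for example:
#   redis.Redis(host="<name>.redis.cache.windows.net", port=6380,
#               password="<access-key>", ssl=True)
import time
import redis

r = redis.Redis(host="localhost", port=6379)
r.set("user:42:profile", "cached-profile-blob")

samples = []
for _ in range(10_000):
    start = time.perf_counter()
    r.get("user:42:profile")          # one network round-trip per GET
    samples.append(time.perf_counter() - start)

samples.sort()
print(f"p50 = {samples[len(samples) // 2] * 1000:.3f} ms, "
      f"p99 = {samples[int(len(samples) * 0.99)] * 1000:.3f} ms")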

Software Engineering Radio - The Podcast for Professional Software Developers

Tug Grall of Redis Labs discusses Redis, its evolution over the years and emerging use cases today, its module-based ecosystem, and Redis' applicability in a wide range of applications beyond being a layer for caching data, such as search and machine learning.

Mad Over Videos by guch
MOV Podcast - Ep 23 Feat. Doug Tidwell of Redis Labs | Mad Over Videos by guch

Mad Over Videos by guch

Play Episode Listen Later Dec 4, 2020 70:31


Doug Tidwell is the Senior Technical Marketing Manager at Redis Labs where he is currently working to build out the Redis Labs Developer Program. Redis Labs is the home of Redis, the world's most popular in-memory database, and commercial provider of Redis Enterprise. Doug is also a programmer and writer who creates videos, articles, sample code, container images, and other useful things. He has given hundreds of presentations at dozens of conferences around the world and is the author of O'Reilly's XSLT, a copy of which makes a perfect gift for all occasions. In a conversation with Pranav, host of the Mad Over Videos podcast, in episode 23 Doug shares his insights on creating humorous marketing videos to open new doors, create an affinity, and leave a lasting impact on your target audience. We've also covered video-based evangelism, how to speak to developers effectively, and a lot more. So without further ado, tune in for more video content and marketing insights on the Mad Over Videos Podcast, episode 23, featuring Doug Tidwell of Redis Labs. WEBSITE: guch.me/ LINKEDIN: @guch www.linkedin.com/company/guchme/ INSTAGRAM: @guch.me www.instagram.com/guch.me/ FACEBOOK: @guchHQ www.facebook.com/guchHQ TWITTER: @guchHQ twitter.com/guchHQ

Coffee, Collaboration, and Enablement
Building a Manager-Led Coaching Program

Coffee, Collaboration, and Enablement

Play Episode Listen Later Nov 3, 2020 35:03 Transcription Available


Jerry Pharr is the Director of Global Sales Enablement at Redis Labs. He stopped in to share the presentation he just did for the Sales Enablement Society Annual Conference. Jerry provided great insights on:
1️⃣ How to get executive buy-in for this type of program.
2️⃣ How to structure the program.
3️⃣ The types of materials you will want in place to support the managers.
4️⃣ Key technologies.
5️⃣ Measuring success.
And so much more.

Data on Kubernetes Community
#9 DoK community: Geospatial Sensor Networks and Partitioning Data // Alex Miłowski

Data on Kubernetes Community

Play Episode Listen Later Sep 17, 2020 54:04


For our 9th installment of the DoKc Data on Kubernetes meetup, we will be talking with Alex Milowski from Redis Labs. // Key takeaways: How are data collection and consumption workloads fundamentally different? What are the main challenges for sensor networks? How are those challenges addressed within the context of K8s? // Abstract: We use resources like weather reports or air quality measurements to navigate the world. These resources become especially important when faced with extreme events like the current wildfires in the Western USA. The data for the reports, predictions, and maps all start as real-time sensor networks. In this talk, Alex will present some of his research into scientific data representation on the Web and how the key mechanism is the partitioning, annotation, and naming of data representations. We'll take a look at a few examples, including some recent work on air quality data relating to the current wildfires in the western USA. We'll explore the central question of how geospatial sensor network data can be collected and consumed within K8s deployments. // Alex Bio: Dr. Milowski is a researcher, developer, entrepreneur, mathematician, and computer scientist. He has been involved in the development of Web and Semantics technologies since the early 1990s, primarily focusing on data representation, algorithms, and processing data at scale; he is also an experienced developer skilled in a variety of functional and imperative languages. He received his PhD in Informatics (Computer Science) from the renowned University of Edinburgh School of Informatics (Scotland) in 2014, on large-scale computation over scientific data on the Web. He has varied experience in scientific computing - geospatial and genome data pipelines - and big data platforms. Recently, he has been working in telecommunications on various mobile financial applications and researching how to improve the productivity of machine learning systems and data scientists by utilizing Kubernetes as a platform. He has experience teaching, mentoring, and developing within various data science/ML domains including topics such as cloud computing, Kubernetes, Spark, Hadoop, text processing/NLP, deep learning, data acquisition, and a whole lot of Python.

FLOSS Weekly (Video LO)
FLOSS Weekly 595: Redis Redux - In Memory Data Structure Store

FLOSS Weekly (Video LO)

Play Episode Listen Later Sep 9, 2020 66:37


In-memory data structure store. Redis is an open-source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Doc Searls and Shawn Powers check in with Christoph Zimmermann, who is a community manager of Redis Labs. They discuss the multiple ways Redis can be used. They talk about how Redis has expanded in the last ten years and the future of Redis. Christoph believes the future lies with the module ecosystem. Christoph gives examples of how Redis is used with large projects and companies all over the world. Hosts: Doc Searls and Shawn Powers Guest: Christoph Zimmermann Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Sponsor: privacy.com/floss
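A side note on the “message broker” use mentioned in this description: the pattern is easy to picture with Redis pub/sub. A minimal sketch, assuming a local server and the redis-py client; the channel name and payload are arbitrary.

# Minimal sketch of Redis as a lightweight message broker via pub/sub,
# assuming a local server and the redis-py client (pip install redis).
# The channel name and payload are arbitrary.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

pubsub = r.pubsub()
pubsub.subscribe("orders")                  # subscriber registers interest in a channel

r.publish("orders", "order:1001 created")   # any client can publish to that channel

for message in pubsub.listen():             # the first message is the subscribe confirmation
    if message["type"] == "message":
        print(message["channel"], message["data"])
        break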

All TWiT.tv Shows (Video HD)
FLOSS Weekly 595: Redis Redux

All TWiT.tv Shows (Video HD)

Play Episode Listen Later Sep 9, 2020 66:37


In-memory data structure store. Redis is an open-source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Doc Searls and Shawn Powers check in with Christoph Zimmermann, who is a community manager of Redis Labs. They discuss the multiple ways Redis can be used. They talk about how Redis has expanded in the last ten years and the future of Redis. Christoph believes the future lies with the module ecosystem. Christoph gives examples of how Redis is used with large projects and companies all over the world. Hosts: Doc Searls and Shawn Powers Guest: Christoph Zimmermann Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Sponsor: privacy.com/floss

אנשים ומחשבים
אנשים ומחשבים Episode 158 – Rediscover Redis: Beyond Cache

אנשים ומחשבים

Play Episode Listen Later Sep 8, 2020 9:55


Guy Korland, Redis Labs, CTO of Incubations.

CapitalGeek
Nate Aune, CEO and founder of AppSembler and Tech Learning Geek

CapitalGeek

Play Episode Listen Later Sep 3, 2020 42:55


Nate is the founder and CEO of Appsembler, a B2B SaaS company he started in 2011 that is now a 100% distributed team hailing from 8 countries. Appsembler helps companies like Redis Labs, Chef Software, and Dremio deliver online hands-on technical training at scale. Nate has been heavily involved with several open source communities over his 25+ year tech career, and he loves tinkering with emerging technologies and playing jazz saxophone in his spare time.

Software Defined Talk
Episode 253: People don’t understand how pay works

Software Defined Talk

Play Episode Listen Later Aug 28, 2020 52:25


This week we give our “expert analysis” of all the impending enterprise IPO’s, discuss Multi-Cloud and try to make sense of Roblox and TikTok. Plus, are salary bands good or bad…? The Rundown IPOs the Palantir IPO mission statement is an amazing artifact (https://twitter.com/MikeIsaac/status/1298370679033573376). Software Developer Tools Company JFrog Adds to Tech IPO Rush (https://www.bloomberg.com/news/articles/2020-08-24/software-developer-tools-company-jfrog-adds-to-tech-ipo-rush). Asana files to go public via direct listing (https://www.axios.com/asana-files-to-go-public-via-direct-listing-679fe73c-e249-44b3-b501-dc6e287ab587.html). Snowflake files for IPO, taking on Amazon and Microsoft cloud database businesses (https://www.cnbc.com/2020/08/24/snowflake-files-s-1-for-ipo.html). Unity’s IPO filing shows how big a threat it poses to Epic and the Unreal Engine (https://www.theverge.com/2020/8/24/21399611/unity-ipo-game-engine-unreal-competitor-epic-app-store-revenue-profit). Redis Labs, Maker Of Database Software, Hits $1 Billion Valuation With New Fundraise (https://www.forbes.com/sites/kenrickcai/2020/08/25/redis-labs-database-startup-series-f-unicorn/#59b5fe6c3b3a). Gamers are logging millions of hours a day on Roblox (https://www.economist.com/graphic-detail/2020/08/21/gamers-are-logging-millions-of-hours-a-day-on-roblox). Relevant to your Interests Apple A14X Bionic to Be ‘Nearly on Par’ With 8-Core Intel Core i9-9880H, According to Fresh Performance Analysis (https://wccftech.com/apple-a14x-bionic-leaked-performance-results-comparable-to-core-i9-9880h/). Apple apologizes to WordPress, won’t force the free app to add purchases after all (https://www.theverge.com/2020/8/22/21397424/apple-wordpress-apology-iap-free-ios-app). Epic Games wins temporary ruling barring Apple from retaliation (https://www.axios.com/epic-apple-court-app-store-fortnite-temporary-ruling-3f80074b-d4d6-42d8-88c6-c68c5d1afac5.html?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top). Online Retailing Executives The second-most important Jeff at Amazon is leaving the company (https://www.theverge.com/2020/8/21/21395776/amazon-executive-wilke-retire-logistics-consumer-division). Downtown Las Vegas fixture Tony Hsieh leaves Zappos (https://vegasinc.lasvegassun.com/business/2020/aug/24/downtown-las-vegas-fixture-tony-hsieh-leaves-zappo/). Tony Hsieh out as CEO of shoe and clothing giant Zappos (https://www.ktnv.com/news/tony-hsieh-out-as-ceo-of-shoe-and-clothing-giant-zappos?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top). MSFT Cloud Revenue: Microsoft Bigger than Amazon and Google, 2X IBM (https://cloudwars.co/microsoft/cloud-revenue-microsoft-bigger-amazon-google-ibm/). Microsoft plans cloud contract push with foreign governments after $10 billion JEDI win (https://www.cnbc.com/2020/08/21/microsoft-plans-cloud-push-with-foreign-governments-after-jedi-win.html). Microsoft’s new Transcribe in Word feature is designed for students, reporters, and more (https://www.theverge.com/2020/8/25/21400623/microsoft-transcribe-in-word-transcription-audio-microsoft-365). TikTok Plans to Challenge Trump Administration Over Executive Order (https://www.nytimes.com/2020/08/22/technology/tiktok-lawsuit-trump-executive-order.html?referringSource=articleShare). At scale, all B2B products converge. It’s reports, alerts, workflow, permissions and approvals all the way down. 
(https://twitter.com/parkerconrad/status/1297696630355816448?s=21) Malicious Chinese SDK In 1,200 iOS Apps With Billions Of Installs Causing ‘Major Privacy (https://www.forbes.com/sites/johnkoetsier/2020/08/24/malicious-chinese-sdk-in-1200-ios-apps-with-billions-of-installs-causing-major-privacy-concerns-to-hundreds-of-millions-of-consumers). Concerns To Hundreds Of Millions Of Consumers (https://www.forbes.com/sites/johnkoetsier/2020/08/24/malicious-chinese-sdk-in-1200-ios-apps-with-billions-of-installs-causing-major-privacy-concerns-to-hundreds-of-millions-of-consumers). (https://www.forbes.com/sites/johnkoetsier/2020/08/24/malicious-chinese-sdk-in-1200-ios-apps-with-billions-of-installs-causing-major-privacy-concerns-to-hundreds-of-millions-of-consumers)’ (https://www.forbes.com/sites/johnkoetsier/2020/08/24/malicious-chinese-sdk-in-1200-ios-apps-with-billions-of-installs-causing-major-privacy-concerns-to-hundreds-of-millions-of-consumers) Freedom of cloud choice: The myths and truths about multi-cloud (https://www.zdnet.com/article/the-myths-and-truths-about-multi-cloud/). Application modernization requires more than just technology, lots of consultative analysis (https://cote.io/2020/08/26/application-modernization-requires-more-than-just-technology-lots-of-consultative-analysis/). Fun fact: Nvidia is now worth more than Intel + AMD combined. (https://twitter.com/jwangark/status/1297993022278377475?s=21) TikTok CEO Kevin Mayer resigns (https://www.axios.com/tiktok-ceo-kevin-mayer-resigns-4b9e53f6-3785-41c7-938b-33b0a325704f.html?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axiosprorata&stream=top) Then he asked me “Is Kubernetes right for us?” (https://medium.com/@alexellisuk/then-he-asked-me-is-kubernetes-right-for-us-78695ee35289) Nonsense @PopeyesChicken mobile app doesn't think @heyhey email addresses are real (https://twitter.com/t3rabytes/status/1297979836271558660?s=21). Sponsor CloudBees Register for DevOps Worlds by CloudBees (https://www.cloudbees.com/devops-world/) and visit cloudbees.com (https://www.cloudbees.com/) to learn more about their products. Listener Feedback Eric wants you to work on monitoring Get a job at DigitalOcean (https://www.digitalocean.com/careers/position/apply/?jid=2271225&gh_jid=2271225&gh_src=0c0ea5a01us). Conferences SpringOne Platform (https://springone.io/2020/sessions?utm_campaign=cote), Sep 2nd and 3rd. Devops World 2020 by CloudBees | The Future of DevOps & Jenkins (https://www.cloudbees.com/devops-world). September 22-24, 2020 SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! Follow us on Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/) and LinkedIn (https://www.linkedin.com/company/software-defined-talk/). Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Use the code SDT to get $20 off Coté’s book, (https://leanpub.com/digitalwtf/c/sdt) Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Recommendations Matt Ray: Connected:Digits (https://www.netflix.com/title/81031737) Benford’s Law. Anti-recommendation: San Miguel Nacho Cheese (https://latindeli.com.au/product/san-miguel-nacho-cheese-can-2-8k/). 
Brandon: Yellowstone Season 3 Finale (https://www.paramountnetwork.com/shows/yellowstone). Coté: Microsoft OneNote…? Photo Credit (https://unsplash.com/photos/uJhgEXPqSPk)

How I Launched This: A SaaS Story
Open Source In-Memory Database Systems with Redis Labs Chief Product Officer Alvin Richards

How I Launched This: A SaaS Story

Play Episode Listen Later Aug 16, 2020 50:07 Transcription Available


We're back with How I Launched This: A SaaS story! This week, Stephanie Wong (@swongful) talks with Alvin Richards about Redis Labs, a company optimizing the Redis open source in-memory database system to build better managed tools for enterprise clients. Alvin begins the show describing how his love of solving complex development problems and great people skills have put him in a unique position to act as intermediary between engineers and clients, gaining insights into real-world problems and how to solve them. Looking to the future, Alvin's team also anticipates client needs, creating database products that will continue to help clients as their projects evolve. Later in the show, Alvin describes how the Redis system built in the cloud was reworked to also provide on-prem offerings. We learn how Redis Labs was able to fill a gap in the market by offering a database product that both developers and clients could understand, adapt, and use. Alvin introduces us to other Redis Labs products, including Redis Enterprise, which allows tiering between memory forms, in-memory caching, scaling, and more for a flexible database experience. We wrap up the show with a discussion of what it's like coordinating the development of such a large open source project and why Redis Labs supports open source. Alvin offers advice to other companies, stressing the importance of building solutions with both the creator and client in mind and educating clients and developers to use the software effectively. We talk about the future of open source in SaaS companies and how important it will be for scaling SaaS technology. Alvin concludes by encouraging everyone to ultimately find joy in what they do. Episode Links: Redis, Redis Labs, Docker, Memorystore, MongoDB, Elastic, Redis University

Redis Stars Podcast
Anshuman Agarwal | FaceMark-Redis Beyond Cache Winner!

Redis Stars Podcast

Play Episode Listen Later Aug 13, 2020 20:43


About FaceMark: People's Choice winner and First Runner-Up. FaceMark, built by Anshuman Agarwal, is a real-time video solution that can take student attendance in a classroom using contactless face recognition, so no one has to handle the same sign-in sheet. It was built with RedisAI, RedisGears, RedisTimeSeries, and TensorFlow. Anshuman said this hackathon was his first time using Redis, and that he began the process thinking that advanced Redis features, like Redis Streams and Redis modules, would be extremely complicated. To his surprise and delight, that was not the case. "I built the first prototype of my architecture using Redis Streams and RedisGears within two to three hours of reading for the first time what they are," he says. "I truly believe Redis has come a long way from what it's known for, and the submissions in this hackathon are a true demonstration of the superpowers Redis has."
Notes: "What's your history with software development?" (1:45), "How did you get started with Redis?" (2:25), "How did you find out about the hackathon?" (3:00), "Can you give a quick intro of what you built and your experience building it with Redis?" (5:40), "How did Redis help speed up your development?" (6:10), "How did you arrive at your process of recognizing each person's specific face?" (10:00), "Did you create the AI layers?" (12:40), "What machine learning library are you using?" (13:30), "What are you working on now?" (14:20), "What is coming that you are excited about?" (16:30), "What technologies are you excited about?" (18:25)
Related links: Anshuman Agarwal: https://linkedin.com/in/anshuman73/ | Anshuman's Twitter: https://twitter.com/anshumanagr73 | Winning FaceMark submission: https://devpost.com/software/facemark | Hackathon video (Redis Pub/Sub & Redis Streams): https://youtu.be/O_jNJ32s6x8 | Redis Modules: https://redislabs.com/community/oss-projects/ | To stay up to date on upcoming hackathons: https://forum.redislabs.com & Redis Community Slack
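Redis Streams, one of the features Anshuman calls out, can be tried in a few lines. The sketch below is only illustrative and uses made-up stream and field names; it is not FaceMark's actual code:

```python
import redis

r = redis.Redis(decode_responses=True)

# Producer: append attendance events to a stream (XADD assigns an auto-generated ID).
r.xadd("attendance", {"student": "s-101", "status": "present"})

# Consumer: read everything from the start of the stream (XREAD); "0-0" means "from the beginning".
for stream_name, entries in r.xread({"attendance": "0-0"}, count=10):
    for entry_id, fields in entries:
        print(stream_name, entry_id, fields)
```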

Redis Stars Podcast
Srivatsa Katta | Redis usage at Rapido

Redis Stars Podcast

Play Episode Listen Later Jul 23, 2020 26:09


Srivatsa Katta is Head of Engineering at Rapido in Bengaluru, India; previously he was head of Engineering at Dunya Labs. Srivatsa is passionate about building scalable solutions for complex problems and has a keen interest in distributed systems. Rapido, one of the largest bike-taxi platforms in India, focuses on last-mile mobility and has recently expanded into last-mile logistics; it operates in 100+ cities with more than 1 million captains and 10 million customers on its platform.
[:40] "How's life in Bangalore?", [2:00] "What does Rapido do and what do you do at Rapido?", [3:30] "How did you get your first start with Redis?", [4:45] "What do you like most about Redis?", [6:10] "What do you use Redis for at Rapido?", [7:45] "What is distributed locking?", [15:20] "What do you do when services get overloaded?", [16:30] "Are you using rate limiting?", [20:10] "What do you want to do next with Redis?"
Related links: Rapido: https://rapido.bike | Srivatsa's LinkedIn profile: https://www.linkedin.com/in/srivatsa-katta/ | Srivatsa's Twitter: @vatsakatta | Redis client libraries: http://redis.io/clients | Geospatial index: https://redislabs.com/redis-best-practices/indexing-patterns/geospatial/ | Rate limiting: https://redislabs.com/redis-best-practices/basic-rate-limiting/ | Keyspace expiry notifications: https://redis.io/topics/notifications | Distributed locks: https://redislabs.com/redis-best-practices/communication-patterns/ | Clean Rivers during COVID: http://f24.my/6YH0
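The basic rate-limiting pattern linked above boils down to a counter per caller per time window. A minimal, hedged sketch (the key layout, limit, and window are assumptions, not Rapido's implementation):

```python
import time
import redis

r = redis.Redis(decode_responses=True)

def allow_request(client_id: str, limit: int = 10, window: int = 60) -> bool:
    """Fixed-window limiter: allow at most `limit` calls per `window` seconds."""
    key = f"rate:{client_id}:{int(time.time()) // window}"
    pipe = r.pipeline()
    pipe.incr(key)            # count this request
    pipe.expire(key, window)  # the window cleans itself up
    count, _ = pipe.execute()
    return count <= limit

print(allow_request("captain-7"))
```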

All Jupiter Broadcasting Shows
2020-07-01 | Linux Headlines 173

All Jupiter Broadcasting Shows

Play Episode Listen Later Jul 1, 2020


Mozilla's Firefox 78 rollout is not going smoothly, antirez steps down as the Redis Labs leader, Couchbase debuts a new managed service, the ArcMenu GNOME extension introduces new features, and manjaro32 closes its doors.

Software Daily
Redis with Alvin Richards (Summer Break Repeat)

Software Daily

Play Episode Listen Later Jun 18, 2020


Originally published October 24, 2019. We are taking a few weeks off. We'll be back soon with new episodes. Redis is an in-memory database that persists to disk. Redis is commonly used as an object cache for web applications. Applications are composed of caches and databases. A cache typically stores the data in memory, and a database typically stores the data on disk. Memory has significantly faster access times, but is more expensive and is volatile, meaning that if the computer that is holding that piece of data in memory goes offline, the data will be lost. When a user makes a request to load their personal information, the server will try to load that data from a cache. If the cache does not contain the user's information, the server will go to the database to find that information. Alvin Richards is chief product officer with Redis Labs, and he joins the show to discuss how Redis works. We explore different design patterns for making Redis highly available, or using it as a volatile cache, and we talk through the read and write path for Redis data. Full disclosure: Redis Labs is a sponsor of Software Engineering Daily.
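The read path described here is the classic cache-aside pattern. A minimal sketch of it in Python; `load_user_from_db`, the key format, and the TTL are placeholders for illustration, not anything from the episode:

```python
import json
import redis

r = redis.Redis(decode_responses=True)

def load_user_from_db(user_id: str) -> dict:
    # Stand-in for a real database query.
    return {"id": user_id, "name": "Ada"}

def get_user(user_id: str) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:                # cache hit: serve from memory
        return json.loads(cached)
    user = load_user_from_db(user_id)     # cache miss: fall back to the database
    r.set(key, json.dumps(user), ex=300)  # repopulate the cache with a TTL
    return user

print(get_user("42"))
```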

Roaring Elephant
Episode 197 – Exploring Redis with Kyle Davis Part 2

Roaring Elephant

Play Episode Listen Later Jun 16, 2020 30:14


We're joined by Kyle Davis, head of Developer Advocacy at Redis Labs, to discuss the ins and outs of Redis. It's the in-memory data store that everybody probably uses, whether you know it or not, and Kyle does a great job discussing the pros and cons of deploying Redis in the many use cases where it can add tremendous value. This is the final part of our interview with Kyle. Redis Microservices for Dummies: Make sure to check out the free "Redis Microservices for Dummies" e-book by Kyle Davis with Loris Cro on the Redis Labs website! Please use the Contact Form on this blog or our twitter feed to send us your questions, or to suggest future episode topics you would like us to cover.

Roaring Elephant
Episode 195 – Exploring Redis with Kyle Davis – Part 1

Roaring Elephant

Play Episode Listen Later Jun 2, 2020 31:26


We're joined by Kyle Davis, head of Developer Advocacy at Redis Labs, to discuss the ins and outs of Redis. It's the in-memory data store that everybody probably uses, whether you know it or not, and Kyle does a great job discussing the pros and cons of deploying Redis in the many use cases where it can add tremendous value. This is the first part of the interview with Kyle. Redis Microservices for Dummies: Make sure to check out the free "Redis Microservices for Dummies" e-book by Kyle Davis with Loris Cro on the Redis Labs website! Please use the Contact Form on this blog or our twitter feed to send us your questions, or to suggest future episode topics you would like us to cover.

Redis Stars Podcast
Aaron Chambers | Ember CLI Deploy and Immutable Deployments

Redis Stars Podcast

Play Episode Listen Later Apr 2, 2020 23:07


Aaron is the author and co-maintainer of the popular Ember addon EmberCLI Deploy, which has become the de facto addon for shipping Ember applications. He has spent the past 5 years evolving deployment patterns for JS web applications to allow faster shipping with more confidence, using patterns such as parallel deployments that leverage tools such as Redis. By day, Aaron is a tech lead at Phorest, where he helps lead the development of their salon management software built in Ember.js and loves to ship at 4pm on Fridays :)
"What do you do?" (1:25), "What is Ember?" (1:53), "What is Ember CLI Deploy?" (3:35) (multiple versions of the app), "What inspired you to create it?" (4:30) (a team member saw it used at Square; the web server looks for a parameter), "What patterns are you interested in these days?" (10:40) (immutable deployments: normal on the backend, but not normal on the front end), "Where do you use Redis?" (12:35), "S3 (assets), CDN & Redis (store links, versions to files in Redis)" (15:10), "Patterns to make dev easy" (14:35) (JavaScript, CSS, images), "Parallel deployments" (19:15), "How do you integrate with other app code?" (20:00)
Related links: https://twitter.com/grandazz | https://cli.emberjs.com/release/basic-use/deploying/#emberclideploy | https://immutablewebapps.org | https://www.phorest.com
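The immutable-deployment idea Aaron describes (every build lives under its own revision key, and "deploying" just moves a pointer) can be sketched roughly as below. The key names only follow the general idea and are not ember-cli-deploy's actual conventions:

```python
import redis

r = redis.Redis(decode_responses=True)

def upload_revision(app: str, revision: str, index_html: str) -> None:
    # Each build's index.html is stored under an immutable, revision-specific key.
    r.set(f"{app}:index:{revision}", index_html)

def activate_revision(app: str, revision: str) -> None:
    # Deploying (or rolling back) is just repointing the "current" marker.
    r.set(f"{app}:index:current", revision)

def serve_index(app: str) -> str:
    current = r.get(f"{app}:index:current")
    return r.get(f"{app}:index:{current}")

upload_revision("myapp", "abc123", "<html>build abc123</html>")
activate_revision("myapp", "abc123")
print(serve_index("myapp"))
```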

DMRadio Podcast
Database Innovations: A Whole Host of Options

DMRadio Podcast

Play Episode Listen Later Mar 13, 2020 53:54


Guests: Shane Johnson, MariaDB James Serra, Microsoft Joshua Drake, Postgres Conference Kyle Davis, Redis Labs

Redis Stars Podcast
Loris Cro | Rust vs Go, and What Makes Live-Coding So Compelling

Redis Stars Podcast

Play Episode Listen Later Mar 10, 2020 22:47


Loris is a bioinformatician who has worked on everything from big-data problems in academia to consulting for fintech startups in Singapore. He now works at Redis Labs as a Developer Advocate, where he writes for the company tech blog, speaks about Redis at conferences, and live codes on Twitch.
Show Highlights: "Why Go and not Rust?" (1:14), "Live Coding a Redis Client in Python from Scratch" (7:25), "Salvatore (antirez) live coding on Twitch" (10:31), "Why do people watch live coding streams?" (12:59), "What makes for an interesting talk?" (15:39), "Redis as a toolkit for building distributed systems" (18:01)
Relevant links: Redis' company tech blog, Loris' Twitch account
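One concrete example of "Redis as a toolkit for building distributed systems" is a simple lock: SET with NX and EX acquires it atomically, and a small Lua script releases it only if the caller still owns it. This is a rough sketch under those assumptions, not a production-grade lock (see the Redlock algorithm or redis-py's built-in Lock helper for that):

```python
import uuid
import redis

r = redis.Redis(decode_responses=True)

# Delete the lock key only if it still holds our token (atomic compare-and-delete).
RELEASE_SCRIPT = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
else
    return 0
end
"""

def acquire(lock_name: str, ttl: int = 10):
    token = str(uuid.uuid4())
    # NX: only set if absent; EX: auto-expire so a crashed holder cannot block forever.
    if r.set(lock_name, token, nx=True, ex=ttl):
        return token
    return None

def release(lock_name: str, token: str) -> bool:
    return bool(r.eval(RELEASE_SCRIPT, 1, lock_name, token))

token = acquire("lock:nightly-report")
if token:
    try:
        pass  # exclusive work goes here
    finally:
        release("lock:nightly-report", token)
```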

Storage Unpacked Podcast
#147 – Introduction to Key Value Stores and Redis

Storage Unpacked Podcast

Play Episode Listen Later Mar 6, 2020 35:06


This week, Chris and Martin look at key-value stores, and in particular Redis, with Kyle Davis, Head of Developer Advocacy at Redis Labs. Key-value stores are at first glance a lightweight way to store structured data. As it turns out, the implementation of Redis includes significantly more features and functionality, as well as multiple […]
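The "significantly more than a key-value store" point comes down to Redis's server-side data structures. A few representative commands via redis-py, with made-up key names for illustration:

```python
import redis

r = redis.Redis(decode_responses=True)

# Plain key-value with an expiry, e.g. a session token.
r.set("session:abc", "user-42", ex=1800)

# Hash: a small record addressable by field.
r.hset("device:9", mapping={"model": "rpi4", "site": "lab"})
print(r.hgetall("device:9"))

# Sorted set: a leaderboard kept ordered by the server, no client-side sorting needed.
r.zadd("leaderboard", {"alice": 120, "bob": 95})
print(r.zrevrange("leaderboard", 0, 1, withscores=True))
```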

The New Stack Podcast
Episode #98 Microservices for Dummies

The New Stack Podcast

Play Episode Listen Later Jan 3, 2020 42:55


This week on The New Stack Context podcast we discuss databases and microservices. We chat with Kyle Davis, Redis Labs' head of developer advocacy, and Loris Cro, Redis Labs' developer advocacy manager, about their new e-book, "Redis Microservices for Dummies." We discuss the new requirements for database systems in the world of microservices, as well as the emergence of data streaming. We also discuss the news of the week with TNS founder and publisher Alex Williams and TNS Managing Editor Joab Jackson. Libby Clark, editorial and marketing director at TNS, hosted this podcast.
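Data streaming between microservices, one of the e-book's themes, is usually done with Redis Streams plus consumer groups, so each service instance receives its share of events and acknowledges them. A bare-bones sketch with invented stream, group, and field names:

```python
import redis

r = redis.Redis(decode_responses=True)

STREAM, GROUP = "orders", "billing-service"

# Create the consumer group once; mkstream creates the stream if it does not exist yet.
try:
    r.xgroup_create(STREAM, GROUP, id="0", mkstream=True)
except redis.ResponseError:
    pass  # the group already exists

# Producer service: append an event.
r.xadd(STREAM, {"order_id": "o-1001", "total": "42.50"})

# Consumer instance: read new events for this group, process, then acknowledge.
for stream_name, entries in r.xreadgroup(GROUP, "worker-1", {STREAM: ">"}, count=10, block=1000):
    for entry_id, fields in entries:
        print("processing", fields)
        r.xack(STREAM, GROUP, entry_id)
```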

Late Night Linux All Episodes
Late Night Linux – Episode 79

Late Night Linux All Episodes

Play Episode Listen Later Dec 23, 2019 40:19


It’s almost Christmas, so it’s time to look back at 2019 and talk about some of the news stories that shaped the year. January: Amazon launches Mongo-compatible DocumentDB; MongoDB removed from major distros. February: Redis Labs raises $60 million for its NoSQL database; Redis Labs changes its open-source license — again …

The New Stack Podcast
Redis Labs on Why NoSQL is a Safe Bet

The New Stack Podcast

Play Episode Listen Later Dec 16, 2019 42:52


This is what one can say with a reasonably high degree of certainty: organizations are deploying applications in hybrid environments consisting of legacy datacenters and often different cloud services, while the open source business models allowing them to do that are changing. In this context, for database management and use, the choice of NoSQL remains a safe bet for today's deployments, especially for multi-cloud environments, said Alvin Richards, chief product officer at Redis Labs, in this latest edition of The New Stack Makers podcast.

Reality 2.0
Episode 24: A Chat About Redis Labs

Reality 2.0

Play Episode Listen Later Aug 2, 2019 50:11


Doc Searls and Katherine Druckman talk to Yiftach Shoolman, CTO and co-founder of Redis Labs, about Redis, open source licenses, company culture, and more. Links mentioned: Time for Net Giants to Pay Fairly for the Open Source on Which They Depend; Redis Labs and the "Common Clause"; Redis Labs Changing Its Licensing for Redis Modules Again; Redis Labs' Modules License Changes. Special Guest: Yiftach Shoolman.

Pivotal Insights
Episode 124: Grappling with Data and Application Modernization with Redis Labs' Adi Foulger and Cassie Zimmerman

Pivotal Insights

Play Episode Listen Later May 2, 2019 33:20


Enterprises across industries are modernizing legacy applications to improve performance and provide great customer experiences. But it doesn't matter how snappy your application is or how pretty the user interface is if the data supporting the application can't keep up. In this episode of Pivotal Conversations, Redis Labs' Adi Foulger and Cassie Zimmerman talk about the challenges of modernizing your data architecture.

Linux Action News
Linux Action News 101

Linux Action News

Play Episode Listen Later Apr 14, 2019 29:36


Google's important news this week, why Linux is fueling PowerShell Growth, and the Matrix breach that might be worse than it sounds. Plus more good work by Mozilla, and the Chinese crackdown on Bitcoin mining.

Open Source Underdogs
Episode 20: Redis Labs – Database for the Instant Experience with Ofer Bengal

Open Source Underdogs

Play Episode Listen Later Apr 9, 2019 33:18


Ofer Bengal is the co-founder and CEO of Redis Labs, home of Redis, one of the world's fastest instant-experience databases. In this episode, Ofer discusses the evolution of open source software in the cloud-hosted market. To note: this interview took place a few weeks prior to Redis Labs' announcement of an updated license for modules...

Les Cast Codeurs Podcast
LCC 208 - Si après 10 ans d'open source, t'as pas ta fondation, t'as raté ta vie

Les Cast Codeurs Podcast

Play Episode Listen Later Apr 8, 2019 97:53


Dans cet épisode en tête à tête Arnaud et Audrey discutent des nouveautés de Java 12, des dernières versions de Vert.x, Kubernetes ou Traefik mais aussi open source et fondations, et bien d’autres choses encore. Enregistré le 4 avril 2019 Téléchargement de l’épisode LesCastCodeurs-Episode–208.mp3 News Posez nous toutes vos questions pour l’épisode live des Cast Codeurs à Devoxx L’ASF a 20 ans Langages The arrival of Java 12! Alex Buckley demande du feedback sur les switch expressions de Java 12 39 fonctionnalités et APIs de Java 12 JEP draft: Add detailed message to NullPointerException describing what is null Frameworks Spring Boot 2.2 M1 Utiliser JUnit 5 avec Spring-Boot Librairies Flight of the Flux 1 - Assembly vs Subscription Middleware Eclipse Vert.x 3.7.0 released! Infrastructure Testcontainers-java 1.11.0 Introducing Kraken, an Open Source Peer-to-Peer Docker Registry Kubernetes 1.14: Production-level support for Windows Nodes, Kubectl Updates, Persistent Local Volumes GA Pimp My Kubernetes Shell Back to Traefik 2.0 Web Mozilla lance WASI: WebAssembly System Interface wasi.dev Fastly annonce Lucet, un compilateur/runtime natif WASI Exemple d’utilisation de Rust et WASI Preact X is here Le TC39 a maintenant son repository GitHub Introducing the OpenJS Foundation: The Next Phase of JavaScript Ecosystem Growth Cache-Control for Civilians Outillage Nouvelle Continuous Delivery Foundation et aussi New CI/CD Foundation Draws Tech’s Big Beasts, Open Source Donations Gradle Entreprise pour accélerer votre build maven Creating a commit on behalf of an organization Architecture Nouvelle GraphQL Foundation Loi, société et organisation La guerre de l’open source continue : Redis Labs drops Commons Clause for a new license Keeping Open Source Open – Open Distro for Elasticsearch A propos des distributions “ouvertes”, de l’open source et de la création d’entreprise Deprecation Notice: MIT and BSD Le parlement européen a voté pour la directive sur le droit d’auteur: EU’s Parliament Signs Off on Disastrous Internet Law: What Happens Next? « Qwant va rémunérer les éditeurs de presse pour l’indexation de leurs articles », dit son patron Après avoir viré les travailleurs en remote, IBM vire les vieux Les effets des interruptions au travail Turing Award Won by 3 Pioneers in Artificial Intelligence Qui est Cédric O, nouveau secrétaire d’État au numérique et remplaçant de Mounir Mahjoubi ? Outils de l’épisode Peacock v1 Released Conférences Devoxx France du 17 au 19 avril 2019 - sold out VoxxedCERN le 1er mai 2019 Riviera Dev du 15 au 17 mai 2019 NCrafts les 16 et 17 mai 2019 Mix-it les 23 et 24 mai 2019 BestOfWeb les 6 et 7 juin 2019 DevFest Lille le 14 juin 2019 Voxxed Days Luxembourg les 20 et 21 juin 2019 Sunny Tech les 27 & 28 juin 2019 JugSummerCamp le 13 septembre 2019 - Le CfP ouvre bientôt. Codeurs en Seine le 21 novembre 2019 Nous contacter Soutenez Les Cast Codeurs sur Patreon https://www.patreon.com/LesCastCodeurs Faire un crowdcast ou une crowdquestion Contactez-nous via twitter https://twitter.com/lescastcodeurs sur le groupe Google https://groups.google.com/group/lescastcodeurs ou sur le site web https://lescastcodeurs.com/  

Software Defined Talk
Episode 167: "Write this on your hand: July 9, 2019.”

Software Defined Talk

Play Episode Listen Later Feb 22, 2019 60:25


Google goes enterprise, Time to upgrade Win 2008, Redis changes licenses again. All this and more in this episode. Plus, Matt explains good parenting to Brandon. Relevant to your interests Google makes $13 billion worth of cloud plans for 2019 (https://news.google.com/articles/CBMiUWh0dHA6Ly90ZWxlY29tcy5jb20vNDk1NTM3L2dvb2dsZS1tYWtlcy0xMy1iaWxsaW9uLXdvcnRoLW9mLWNsb3VkLXBsYW5zLWZvci0yMDE5L9IBAA?hl=en-US&gl=US&ceid=US%3Aen) Google acquires cloud migration platform Alooma (https://techcrunch.com/2019/02/19/google-acquires-cloud-migration-platform-alooma/) Google emits a beta of Cloud Service Platform to entice hold-outs with hybrid goodness (https://www.theregister.co.uk/2019/02/21/google_cloud_service_platform/) Google's .dev domains now available for a cool $11k, sensible pricing due later this month (https://www.androidpolice.com/2019/02/19/googles-dev-domains-now-available-for-a-cool-11k-sensible-pricing-due-later-this-month/) Google says the built-in microphone it never told Nest users about was 'never supposed to be a secret (https://www.businessinsider.com/nest-microphone-was-never-supposed-to-be-a-secret-2019-2)’ AT&T signed an '8-digit' deal that isn't good news for VMware, Cisco, or Huawei — but could be great for Google Cloud (https://www.businessinsider.com/att-airship-cisco-vmware-google-cloud-2019-2) Warren Buffett's Berkshire Hathaway dumped its stake in Oracle after just one quarter (https://qz.com/1551778/the-oracle-of-omaha-has-given-up-on-oracle-the-company/) There's No Good Reason to Trust Blockchain Technology (https://www.wired.com/story/theres-no-good-reason-to-trust-blockchain-technology/) Open source startup Redis Labs raises $60 million and starts planning for an IPO, as it takes a stand against Amazon Web Services (https://www.businessinsider.com/redis-labs-funding-amazon-2019-2) Redis Labs changes its open-source license — again (https://techcrunch.com/2019/02/21/redis-labs-changes-its-open-source-license-again/) TravisCI Acquired (https://techcrunch.com/2019/01/23/idera-acquires-travis-ci/), Layoffs 1 month later (https://twitter.com/carmatrocity/status/1098538649908666368) CNCF Annual Report (https://www.cncf.io/wp-content/uploads/2019/02/CNCF_Annual_Report_2018.pdf) Chef Hires Cloud Industry Veteran Goldfarb (https://www.geekwire.com/2019/chef-hires-cloud-industry-veteran-brian-goldfarb-chief-marketing-officer/) GitLab Bolsters C-Suite, Hires Chief Revenue Officer and Chief Marketing Officer (https://globenewswire.com/news-release/2019/02/19/1734337/0/en/GitLab-Bolsters-C-Suite-Hires-Chief-Revenue-Officer-and-Chief-Marketing-Officer.html) Sponsors Solarwinds To learn more or try the company’s DevOps products for free, visit https://solarwinds.com/devops. Conferences, et. al. ALERT! DevOpsDays Discount - DevOpsDays MSP (https://www.devopsdays.org/events/2019-minneapolis/welcome/), August 6th to 7th, $50 off with the code SDT2019 (https://www.eventbrite.com/e/devopsdays-minneapolis-2019-tickets-51444848928?discount=SDT2019). 2019, a city near you: The 2019 SpringTours are posted (http://springonetour.io/). Coté will be speaking at many of these, hopefully all the ones in EMEA. They’re free and all about programming and DevOps things. Free lunch and stickers! Mar 7th to 8th, 2019 - Incontro DevOps in Bologna (https://2019.incontrodevops.it/), Coté speaking. Mar 13th, 2019 - Coté speaking at (platform as a product) (https://www.meetup.com/Continuous-Delivery-Amsterdam/events/258120367/) - Continuous Delivery, Amsterdam. 
Mar 18th to 19th, 2019 - SpringOne Tour London (https://springonetour.io/2019/london). Get £50 off ticket price of £150 with the code S1Tour2019_100. Mar 21st to 2nd, 2019 (https://springonetour.io/2019/amsterdam) - SpringOne Tour Amsterdam. Get €50 off ticket price of €150 with the code S1Tour2019_100. ChefConf 2019 (http://chefconf.chef.io/) May 20-23. Early bird pricing ends February 28th! Get a Free SDT T-Shirt Write an iTunes review of SDT and get a free SDT T-Shirt. Write an iTunes Review on the SDT iTunes Page. (https://itunes.apple.com/us/podcast/software-defined-talk/id893738521?mt=2) Send an email to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and include the following: T-Shirt Size (Only Large or X-Large remain), Preferred Color (Gray, Black) and Postal address. First come, first serve. while supplies last! Can only ship T-Shirts within the United State SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you a free laptop sticker! Follow us on Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/) or LinkedIn (https://www.linkedin.com/company/software-defined-talk/) Listen to the Software Defined Interviews Podcast (https://www.softwaredefinedinterviews.com/). Check out the back catalog (http://cote.coffee/howtotech/). Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Recommendations Brandon: The Dropout (http://abcradio.com/podcasts/the-dropout/),Vanity Fair Article (https://www.vanityfair.com/news/2019/02/inside-elizabeth-holmess-final-months-at-theranos), Bad Blood (https://www.audible.com/pd/Bad-Blood-Audiobook/B07C8GVTB5) Matt: HazeOver for reducing distractions on MacOS (https://hazeover.com)

Coder Radio
330: Vinny's Unit Tests

Coder Radio

Play Episode Listen Later Oct 23, 2018 53:22


What's the future of .NET? With .NET Core growing, the future of the original .NET seems uncertain, but Chris and Mike suspect there is a clear possibility. Plus a few more thoughts on unit testing, embedded productivity companion devices, and the hoopla of the week.

Destination Linux
Destination Linux EP87 – Unpredictably Ephemeral

Destination Linux

Play Episode Listen Later Sep 12, 2018 75:45


On this episode of Destination Linux, we discuss a lot of new releases from GNOME, KaOS, Nitrux and even Linux From Scratch. We also discuss the licensing issues regarding Redis Labs, Linus’ peace of mind for the future of the kernel, and later in the show we cover some Linux Gaming. All that and much […]

Software Defined Talk
Episode 145: Redis be like “I just stepped into a big pile of…SaaSy!”

Software Defined Talk

Play Episode Listen Later Aug 31, 2018 59:56


Related image https://media1.tenor.com/images/e83b2b5aef8c8af0dd36a0d33d3046a4/tenor.gif?itemid=5038124 This week, we discuss Redis’ license changing move, open source business models in general (of course), SUSE revenue, and some VMworld selections. Relevant to your interests Istio Aims To Be The Mesh Plumbing For Containerized Microservices (https://www.nextplatform.com/2018/08/15/istio-aims-to-be-the-mesh-plumbing-for-containerized-microservices/) Michael Cot (https://soundcloud.com/infoq-engineering-culture/michael-cote-from-pivotal-on-programming-the-business)é (https://soundcloud.com/infoq-engineering-culture/michael-cote-from-pivotal-on-programming-the-business) from Pivotal on Programming the Business by Engineering Culture by InfoQ (https://soundcloud.com/infoq-engineering-culture/michael-cote-from-pivotal-on-programming-the-business) Mobile App Development Services | Web Development services - The NineHertz (https://theninehertz.com/blog/becoming-an-iot-developer/) Has Bezos Become More Powerful in D.C. Than Trump? (https://www.vanityfair.com/news/2018/08/has-bezos-become-more-powerful-in-dc-than-trump) What Will Be the Real Impact From Knative? (https://www.sdxcentral.com/articles/news/what-will-be-the-real-impact-from-knative/2018/08/) Google just gave control over data center cooling to an AI (https://www.technologyreview.com/s/611902/google-just-gave-control-over-data-center-cooling-to-an-ai/) O11yCon 2018: Notes and Observations (https://dev.to/dangolant/o11ycon-2018-notes-and-observations-4nbf) Slack just raised a whopping $427 million to become a $7.1 billion company. Now, it has to defeat Microsoft. (https://www.businessinsider.com/slack-funding-valuation-microsoft-teams-2018-8) Apple Pay Now Accepted at All Costco Warehouses in United States (https://www.macrumors.com/2018/08/20/costco-now-widely-accepts-apple-pay/). 10 AWS Lambda Use Cases to Start Your Serverless Journey (https://www.simform.com/serverless-examples-aws-lambda-use-cases/). Announcing resource-based pricing for Google Compute Engine (https://cloudplatform.googleblog.com/2018/07/announcing-resource-based-pricing-for-google-compute-engine.html). DevOps Report 2018 released (https://www.prnewswire.com/news-releases/devops-research-and-assessment-dora-announces-the-2018-accelerate-state-of-devops-report-300703837.html). Will talk about it next week. Until then, enjoy 78 pages of landscape PDF glory. Spoiler alert: elite high performers are elite high performers. Pivotal has a webinar on Oct 11th (https://content.pivotal.io/webinars/oct-11-the-accelerate-state-of-devops-report-webinar) about it. Community management is a career cul-de-sac (https://thenewstack.io/why-community-manager-is-a-dead-end-job-and-what-to-do-about-it/). See interview next week (http://www.softwaredefinedinterviews.com/). Good example of corpdev thinking, in the US (legal) drugs market (https://contrarianedge.com/2018/08/28/investors-have-misdiagnosed-amazons-push-into-the-pharmacy-business/). 
“Google today announced that it is providing the Cloud Native Computing Foundation (CNCF) with $9 million in Google Cloud credits (https://techcrunch.com/2018/08/29/google-steps-back-from-running-the-kubernetes-infrastructure/) to help further its work on the Kubernetes container orchestrator and that it is handing over operational control of the project to the community.” Armory lands $10M Series A to bring continuous delivery to enterprise masses (https://techcrunch.com/2018/08/23/armory-lands-10m-series-a-to-bring-continuous-delivery-to-enterprise-masses/). VMworld NA 2018 VMware acquires CloudHealth Technologies for multi-cloud management (https://techcrunch.com/2018/08/27/vmware-acquires-cloudhealth-technologies-for-multi-cloud-management/) - Carl@451 (https://clients.451research.com/reportaction/95582/Toc?ref=Email%3Amis): “Primarily a cost management and analysis platform, it has roughly 3,500 users and has also grown to cover automation, security and governance with a broad, API-based management platform for the major public clouds: AWS, Azure and GCP. CloudHealth mainly operates in the US, meaning VMware will have to square overseas operations and data management with other jurisdictions – primarily the EU GDPR regulations – going forward.” Est. $500m valuation. They monitor your cloud costs. Cf. Dr. Cloud Pricing Guy at 451 (https://twitter.com/owenrog/status/970708698879406080). Still that MoM in the Clouds vision. “With CloudHealth, VMware not only gets the multi-cloud management solution, it gains its 3000 customers which include Yelp, Dow Jones, Zendesk and Pinterest.” VMware CEO: A Virtual Machine Is Still the Best Place to Run Kubernetes (https://thenewstack.io/vmware-ceo-a-virtual-machine-is-still-the-best-place-to-run-kubernetes/). Cameo (https://twitter.com/camhaight/status/1034101496332279810) from the Hill Country’s favorite systems management (former) analyst (https://twitter.com/camhaight). VMware's Software-Defined Vision (https://www.actualtech.io/vmwares-software-defined-vision/). Coté remember when he met with Kit Colbert at DockerCon EU 2014 (https://blog.docker.com/2015/01/dockercon-europe-the-future-of-micro-services/), and Coté had no idea what this “cloud native” stuff was. Now, it seems like it’s slowly moving to be the new word for PaaS, but more like the under-girding of PaaS. Also, went back to the NEMO recently. They no longer have the closet of dead things (https://www.flickr.com/photos/cote/shares/R61a89), sadly. Project Dimension (https://blogs.vmware.com/vsphere/2018/08/introducing-project-dimension.html) - on-demand private clouds, driven by SDDC stuff. Pat’s Pillars (https://www.linkedin.com/pulse/next-step-forward-capturing-full-potential-tech-pat-gelsinger/): ‘“Superpowers” that are unlocking game-changing opportunities on a global scale – Cloud, Mobile, Artificial Intelligence and the Internet of Things.’ Redis stinkup - the mysteries of making money by actually selling something Coté: now, what’s the deal here? They closed source some stuff that maybe others had contributed to, taking advantage of good will, and/or they’re just now charging for what used to be free? (Are there other open source scandal scenarios?) 
Joab and Lawrence at (https://thenewstack.io/redis-pulls-back-on-open-source-licensing-citing-stingy-cloud-services/) The New Stack (https://thenewstack.io/redis-pulls-back-on-open-source-licensing-citing-stingy-cloud-services/): “While the core of Redis itself remains under the permissive BSD license, the company has reworded the licensing for some of its add-on modules, in effect blocking their use by third parties offering commercial Redis-based services — most notably cloud providers. Redis Labs was able to make this change because it retains the copyright to the open source code.” Commons Clause (https://redislabs.com/community/commons-clause/), (https://redislabs.com/community/commons-clause/) Redis Labs (https://redislabs.com/community/commons-clause/). Adam Jacob Twitter thread on commons clause (https://twitter.com/adamhjk/status/1032285457978208257). SUSE Revenue Watch SUSE is all like “PE Mane, call me!” https://usatftw.files.wordpress.com/2016/11/mcgregor-cash.jpg?w=1000&h=600&crop=1 Somehow, this has become a bit in the show. Blame Coté. Something like ~$360m based on trailing 6 months runrat’ed to 12 trailing. Also, likely non-GAAP reporting (not clear if it’s ACV vs. TCV), but whatever. Grind and stack: “EBITDA for that period was $56 million, nearly 23 percent year-over-year growth.” So: ~$112m profit, ~31% margins. That’s the kind of stable (they claim to run 70% of SAP apps), growing cash-throw-off that should make PE people drool on their Patagonia puffy vests: “Following last week's shareholder approval of Micro Focus' proposed sale of SUSE to EQT Partners for $2.535 billion, the transaction is expected to complete in the first quarter of calendar 2019, subject to customary regulatory approvals.” If my math (https://docs.google.com/spreadsheets/d/1tq65HkucftfmO7YUDpfAWEK7Eq3vb9DYfwYeuNqLQ1U/edit#gid=0) is right (it’s established that I don’t know how numbers work), clawing in all profits would pay that $2.5bn off by 2026: 8 or 10 years of holding growth and profit %. Of course, you’d sell it off before that. Conferences, et. al. Sep 24th to 27th - SpringOne Platform (https://springoneplatform.io/), in DC/Maryland (crabs!) get $200 off registration with the code S1P200_Cote. Also, check out the Spring One Tour - coming to a city near you (https://springonetour.io/)! DevOpsDays Berlin (https://www.devopsdays.org/events/2018-berlin/welcome/), September 12th to 13th. DevOpsDays Paris (https://www.devopsdays.org/events/2018-paris/welcome/), October 16th. Cloud Expo Asia October 10-11 (https://www.cloudexpoasia.com/cloud-asia-2018). Matt’s presenting! DevOps Days Singapore October 11-12 (https://www.devopsdays.org/events/2018-singapore/). Matt’s presenting! DevOps Days Newcastle October 24-25 (https://devopsdaysnewy.org/). DevOps Days Wellington November 5-6 (https://www.devopsdays.org/events/2018-wellington/). Devoxx Belgium (https://devoxx.be/), Antwerp, November 12th to 16th. SpringOne Tour (https://springonetour.io/) - all over the earth! Listener Feedback Bryan wants you to know about DevOps Days Galway (https://www.devopsdays.org/events/2018-galway/welcome/) November 18-20th DevOps Days Singapore (https://www.devopsdays.org/events/2018-singapore/) wanted us to let folks know it’s October 11-12! 
Camille sent us some feedback and really liked Matt’s Red Atlas recommendation (https://www.amazon.com/dp/022638957X/ref=asc_df_022638957X5555077?tag=shopz0d-20&ascsubtag=shopzilla_mp_1475-20;15350415381076093153510070301008005&creative=395261&creativeASIN=022638957X&linkCode=asn) because she lives near a missile site. Joshua built a service that creates an RSS feed of podcasts based on keywords. Here’s an example: https://prod.mypod.online/feed?q=kubernetes Try it out. Soon to be Honey Ninja subscribes to SDT (https://twitter.com/michaelwilde/status/1035392934110273536), citing host “wit” as driver. SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Subscribe to Software Defined Interviews Podcast (http://www.softwaredefinedinterviews.com/) Dustin on Linux and Google Cloud (http://www.softwaredefinedinterviews.com/73) Rachel Stephens from RedMonk on Numbers (http://www.softwaredefinedinterviews.com/74) Buy some t-shirts (https://fsgprints.myshopify.com/collections/software-defined-talk)! DISCOUNT CODE: SDTFSG (40% off) Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you a sticker. Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Recommendations Brandon: Acquired Podcast (http://www.acquired.fm/). Matt: Blindsight (http://www.rifters.com/real/Blindsight.htm) by Peter Watts; The Good Fight (https://www.imdb.com/title/tt5853176/). Coté: Friendly Fire (http://www.maximumfun.org/shows/friendly-fire) podcast (http://www.maximumfun.org/shows/friendly-fire), the intros. Ikea knives.

L8ist Sh9y Podcast
Redis Lab Licensing Change to Common Clause

L8ist Sh9y Podcast

Play Episode Listen Later Aug 29, 2018 21:14


VM Brasseur from the Open Source Initiative joins Rob Hirschfeld and Stephen Spector to talk about the license announcement made by Redis Labs to add the Commons Clause (https://redislabs.com/community/licenses/) to some of their software. Also discussed is the limited success of the open core business model.

Red Hat X Podcast Series
Redis Labs – home to both open source Redis and Redis Enterprise

Red Hat X Podcast Series

Play Episode Listen Later May 9, 2018 11:03


Join Leena Joshi, Vice President of Product Marketing at Redis Labs, as she introduces her company and discusses several topics, including the opportunity for digital transformation brought on by the cloud and her thoughts on microservices, cloud native, and orchestration.

L8ist Sh9y Podcast
Dave Nielsen talks Redis and usage at the Edge

L8ist Sh9y Podcast

Play Episode Listen Later Mar 25, 2018 46:08


Joining us this week is Dave Nielsen, Head of Ecosystem Programs at Redis Labs. Dave provides background on the Redis project and discusses ideas for using Redis in edge devices. Highlights:
• Background of the Redis project and Redis Labs
• Redis and edge computing
• Where is the edge?
• Raspberry Pi for edge devices? It's about management
• Wasteland of IT management at the edge

The Changelog
Blockchains and Databases at OSCON

The Changelog

Play Episode Listen Later Dec 14, 2017 56:54 Transcription Available


We went back into the archives to conversations we had around blockchains and databases at OSCON 2017. We talked with Monty Widenius, creator of MariaDB, the open source forever-fork of MySQL; Brian Behlendorf, Executive Director of Hyperledger, the open source collaborative effort hosted by The Linux Foundation to advance blockchain technologies; and Tague Griffith, Head of Developer Advocacy at Redis Labs, the home of open source Redis and commercial provider of Redis Enterprise.

AWS re:Invent 2017
ENT224: Redis Enterprise for Large-Scale Deployment

AWS re:Invent 2017

Play Episode Listen Later Nov 30, 2017 60:19


C.H. Robinson, Intuit, and Scopely hold a fireside chat with Redis Labs' CMO to discuss the challenges of deploying large-scale applications that need to support personalized user experiences based on real-time insights. The architects from these companies share how Redis Enterprise operates as the primary datastore and tackles the issues of maintaining consistency in geo-distributed deployments, ingesting massive amounts of data while executing hybrid transaction-analytics functions, and balancing workloads between RAM and SSDs while performing hundreds of thousands of operations per second with sub-millisecond latency. In this business-technical session, you learn about diverse use cases and capabilities of Redis, a highly popular in-memory NoSQL database, including job and queue management, machine learning, streaming, search, geospatial indexing, fast data ingest, and high-speed transactions. Session sponsored by Redis Labs
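Of the use cases listed, job and queue management is the simplest to show: producers LPUSH work onto a list and workers block on BRPOP. A small sketch with made-up queue and payload names, not the architecture discussed in the session:

```python
import json
import redis

r = redis.Redis(decode_responses=True)

# Producer: enqueue a job as a JSON payload.
r.lpush("jobs:email", json.dumps({"to": "user@example.com", "template": "welcome"}))

# Worker: block for up to 5 seconds waiting for the next job (LPUSH + BRPOP gives FIFO order).
item = r.brpop("jobs:email", timeout=5)
if item:
    queue_name, payload = item
    job = json.loads(payload)
    print("processing job from", queue_name, job)
```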

Chinchilla Squeaks
Manish Gupta of Redis Labs

Chinchilla Squeaks

Play Episode Listen Later May 24, 2017 32:53


Chris speaks with Manish Gupta of Redis Labs about how Redis became one of the most popular data stores for developers.

Heavybit Podcast Network: Master Feed
Ep. #2, Developing Your Influencer Marketing Strategy

Heavybit Podcast Network: Master Feed

Play Episode Listen Later Aug 26, 2015 36:20


I had the pleasure of speaking with Cameron Peron following his Heavybit Speaker Series presentation on Creating Killer Trend Stories. Cameron recently served as VP Marketing at Redis Labs, where he led their post-Series A marketing activities to accelerate SaaS-based customer acquisition fivefold within 24 months. Previously, as VP Marketing at Newvem, he led their developer marketing team to generate 2.5K users in 14 months from Series A to acquisition. Listen in as we discuss how to make your organization a source for news, the art of media relations, and tactics for effective influencer marketing.
In the age of continuous delivery, there are more frequent software releases than ever before. This decoupling of 'code' from 'product' creates challenges for marketing teams that are planning dates to announce new features. In this episode of The Pitch Room, Cameron offers advice for smaller companies on how to overcome this challenge.
Avoid assumptions: Once a feature update is live, it is rare for your end users and external community to know about it unless you tell them. Do not assume that end users are aware of updates, and be sure to communicate changes in a timely manner.
Latch onto a trend: Small feature updates are not as attractive for bloggers and influencers to write about unless they latch onto a trend or controversial topic. For example, for a data monitoring product, one tactic is to align your company with big data trend stories and provide insight into how a particular monitoring feature is solving a big data challenge.
Do not get discouraged: Continue to announce feature updates because it keeps your company on the radar. If the announcement does not yield press coverage, continue to post to your blog and share the news with end users. Feature updates are an easy opportunity to connect with your community and establish loyalty.
Have your customers tell the story: A feature that solves a specific challenge for your customer makes for great content. Track customer feedback and ask customers if they would be willing to speak with the media or contribute to a byline on your behalf.
As the software development market continues to evolve, expect to see traditional communications strategies evolve with it.