Podcasts about open core

A business model for monetizing commercial open-source software

  • 37 PODCASTS
  • 50 EPISODES
  • 37m AVG DURATION
  • 1 NEW EPISODE MONTHLY
  • LATEST: Dec 26, 2024
Popularity of "open core" episodes, 2017–2024 (chart)

Best podcasts about open core

Latest podcast episodes about open core

PodRocket - A web development podcast from LogRocket
void(0) with Evan You [Repeat]

Dec 26, 2024 · 46:48


In this holiday repeat episode, Evan You, creator of Vue and Vite, discusses his new venture, void(0). He covers the motivations behind founding void(0), the inefficiencies in JavaScript tooling, and the future of unified tooling stacks. Links: https://evanyou.me https://x.com/youyuxi https://github.com/yyx990803 https://sg.linkedin.com/in/evanyou https://voidzero.dev We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Evan You.

Open Source Startup Podcast
E145: Bootstrapping an Open Source Monitoring Platform

Aug 15, 2024 · 38:46


Aliaksandr Valialkin and Roman Khavronenko are Co-Founders of VictoriaMetrics, the company behind the open source time series database and monitoring platform of the same name. In this episode, we discuss the limitations of Prometheus and how ClickHouse inspired the founders to build VictoriaMetrics, how open source helped them attract their early users and gain momentum, the importance of simplicity and saying no to feature requests that would complicate the product, their approach to an Open Core model, their unique view on funding and why bootstrapping has been an advantage for them, and more!

The Cloudcast
Identifying Successful Open Source Projects

Jun 30, 2024 · 32:37


As open source companies and projects enter a transition phase of funding, licensing, and community participation, let's look at the characteristics of successful open source projects.
SHOW: 834
SHOW TRANSCRIPT: The Cloudcast #834 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"
SHOW SPONSOR: Panoptica, Cisco's Cloud Application Security solution
SHOW NOTES: 10 Years of Kubernetes, 30 Years of Linux, 10 Years of OpenStack, 25 Years of Java
WHAT DOES SUCCESS MEAN IN OPEN SOURCE?
  • There are 1000s of widely used open source projects, from Linux to Java to MySQL to Docker to Kubernetes to MongoDB
  • Fewer projects are successful from a monetization perspective, but that's about individual business models
  • Plenty of companies during the 2010s-2020s treated open source as "marketing" and not really a development model
WHAT DOES COMMERCIAL SUCCESS LOOK LIKE IN OPEN SOURCE?
  • Wide usage infrastructure - Linux, Kubernetes, OpenStack
  • Complex projects vs. "it just works"
  • Critical security - Vault
  • Complex data services - MongoDB, Kafka
  • Many companies contributing (share the costs) - Linux, Kubernetes
  • Management / Observability Engines - Crossplane, Prometheus
  • Programming Languages - Java, Rust, Go, Python, Ruby, etc.
  • Cloud services hiding complexity
  • Cloud services stretching across clouds
  • Single vendor projects - often move to non-OSS licensing
  • Single vendor projects - difficult to maintain at scale
FEEDBACK?
  • Email: show at the cloudcast dot net
  • Twitter: @cloudcastpod
  • Instagram: @cloudcastpod
  • TikTok: @cloudcastpod

Hanselminutes - Fresh Talk and Tech for Developers
Open Core Open Source with Mermaid Chart's Knut Sveidqvist

May 9, 2024 · 25:00


This week we talk to Knut Sveidqvist, who brings over 20 years of software expertise to the table. Knut is the creator of the award-winning Mermaid open-source project and the CTO at Mermaid Chart, the company building a business around the powerful JavaScript-based diagramming and charting tool on an Open Core business model.

Next in Tech
Ep. 159 - KubeCon Preview

Mar 19, 2024 · 32:14


The spring edition of the Cloud Native Computing Foundation's KubeCon/CloudNativeCon is about to get underway, and Jean Atelsek and William Fellows return to discuss what will be playing out in the sessions and on the exhibit floor with host Eric Hanselman. The next stage of licensing pullback from a number of open source providers is on display as Open Core revenue models are realigned, FinOps efforts try to tame operational costs, and the evolution of platform engineering approaches continues.

The Business of Open Source
Buyer-Based Open Core with Zach Wasserman

Mar 6, 2024 · 37:50


This week on The Business of Open Source, I spoke with Zach Wasserman, co-founder and CTO of Fleet. This was a fabulous episode for many reasons, but then again I never do crappy episodes, right? The first thing I wanted to call your attention to is that Zach talked about how he's building an open core business because building an open source business is what he wants to do. When his previous company turned away from open source, Zach left to do consulting around OSquery and Fleet (the project). I always like to talk about how companies / founders need a solid reason for building an open source company… and "this is the kind of company I want to build" is a very good reason. ("Everyone else is doing it," on the other hand, is not a good reason.) Everyone puts constraints around the type of company they want to build, and as long as you are intentional about the decisions, there is nothing wrong with this, business-wise.
Second, we talked about the tension that exists between making a great project and still leaving room for a commercial product that people will pay for, and Zach talked through how Fleet uses a buyer-based open core strategy to decide which functionality to put in the enterprise version or in the open core. We also talked about:
  • Leaving his first company, Kolide, when the founders had divergent visions about where the company should go
  • How his investor arranged a 'co-founder marriage' for Zach and his co-founder Mike McNeil
  • How the transparency aspect of open source can be extremely important, especially for anything in the security space
Lastly, Fleet happens to be a former client of mine. You can check out what Mike, Zach's co-founder, said about working with me here. And if you're interested in more conversations like this… but in person!!! you should come to Open Source Founders Summit May 27th and 28th in Paris.

Open Hardware Manufacturing Podcast
Ep. 12 - Improving FreeCAD ft. Brad Collette from Ondsel

Jan 18, 2024 · 79:24


In this episode, we're excited to have guest Brad Collette! Brad is the cofounder and CTO of Ondsel, an Open-Core company working to improve FreeCAD. Ondsel is also building features on top of FreeCAD that are aimed specifically at commercial applications. We hear how Brad has been involved with FreeCAD development for over a decade, started Ondsel, and has been balancing them both! Lots of great discussion about open software UX, and what the future holds for FreeCAD. You can find Ondsel on Twitter, and their website where you can download their release of FreeCAD with huge UX and functionality improvements. As always, let us know what you thought of the episode on Discord in the #ohm-episode-discussion thread for Episode 12! https://discordapp.com/invite/TCwy6De Hosted on Acast. See acast.com/privacy for more information.

This Week in Startups
Drawing the Future with AI featuring tldraw's Steve Ruiz | E1863

Dec 13, 2023 · 37:55


This Week in Startups is brought to you by…
Miro. Working remotely doesn't mean you need to feel disconnected from your team. Miro is an online whiteboard that brings teams together - anytime, anywhere. Go to https://www.miro.com/startups to sign up for a FREE account with unlimited team members.
The Equinix Startup program offers a hybrid infrastructure solution for startups, including up to $100K in credits and personalized consultations and guidance from the Equinix team. Go to https://www.equinixstartups.com to apply today.
NetSuite. Once your business gets to a certain size, the cracks start to emerge. Things you used to do in a day take a week. You deserve a customized solution - and that's NetSuite. Learn more when you download NetSuite's popular KPI Checklist - absolutely free, at http://www.netsuite.com/twist
Today's show: Steve Ruiz, Founder of tldraw, joins Jason to discuss how Make Real went viral just one week after a recent funding round (5:35), diving into the debate of creating consciousness with AI (20:41), highlighting tldraw's versatility with multiple demonstrations including creating a stopwatch from simple sketches (24:21), and much more!
Timestamps:
(0:00) Steve Ruiz, Founder of tldraw, joins Jason.
(2:26) The Story Behind tldraw and exploring its origins.
(5:35) Discussing the business plan of tldraw and how Make Real went viral just one week after a recent venture round.
(6:46) Diving into the popular topic of "Open Core"
(7:59) Jumping into demos of tldraw and Make Real including the creation of a color picker.
(12:01) Miro - Sign up for a free account at https://www.miro.com/startups
(14:42) Further exploring tldraw demos using Iterations.
(16:18) How multi-modal AI responds to different instructions.
(20:41) Are we creating consciousness in AI or merely simulating it?
(21:25) Equinix - Join the Equinix Startup Program for up to $100K in credits and much more at https://www.deploy.equinix.com/startups
(24:21) More demos! Creating a stopwatch application with varied approaches.
(31:21) NetSuite - Download your free KPI Checklist at http://www.netsuite.com/twist
(32:42) The future of tldraw and its potential to integrate across various AI models.
Check out tldraw: https://www.tldraw.com
Thank you to our partners:
(12:01) Miro - Sign up for a free account at https://www.miro.com/startups
(21:25) Equinix - Join the Equinix Startup Program for up to $100K in credits and much more at https://www.deploy.equinix.com/startups
(31:21) NetSuite - Download your free KPI Checklist at http://www.netsuite.com/twist
Follow Steve: X: https://twitter.com/steveruizok LinkedIn: https://www.linkedin.com/in/steve-ruiz-61a150239?originalSubdomain=uk
Follow Jason: X: https://twitter.com/jason Instagram: https://www.instagram.com/jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis
Great 2023 interviews: Steve Huffman, Brian Chesky, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland
Check out Jason's suite of newsletters: https://substack.com/@calacanis
Follow TWiST: Substack: https://twistartups.substack.com Twitter: https://twitter.com/TWiStartups YouTube: https://www.youtube.com/thisweekin
Subscribe to the Founder University Podcast: https://www.founder.university/podcast

Project Geospatial
FOSS4G NA 2023 | Moving Protomaps from Open Core to Open Source - Brandon Liu

Nov 15, 2023 · 28:25


Summary: Brandon Liu discusses the shift of Protomaps from open core to open source, highlighting the challenges and trends in open source projects, particularly in web mapping. He explores the dynamics between open source software and software as a service (SaaS), emphasizing the advantages of SaaS in terms of adoption, digital rights management (DRM), and price discrimination. Liu delves into the dilemma of building a SaaS product using open source software and addresses the sustainability of open source projects. Protomaps, an open-source map of the world and a vector tiling system, aims to create a sustainable open ecosystem for web maps, addressing underserved use cases. Highlights:

Open Source Startup Podcast
E111: The Highs & Lows of Open Source with Adam Jacob of System Initiative & Chef

Oct 16, 2023 · 43:22


Adam Jacob is CEO of DevOps platform System Initiative and Co-Founder of infrastructure automation platform Chef. This is Adam's second time on the Open Source Startup Podcast, and this episode is packed with learnings. We discuss the distribution benefits of open source and why some products should be open source and others should not, challenges with the Open Core business model, HashiCorp's license change and the community's response to fork Terraform to create OpenTofu, and much more!

Crossing the Enterprise Chasm
Building Successful Developer Products in a Crowded Landscape

May 16, 2023 · 23:43


In this episode, WorkOS CEO Michael Grinich and Cockroach Labs Co-Founder and CEO Spencer Kimball discuss the importance of execution over ideas, the need for exploratory sales in early GTM teams, and leveraging technical content to target developers. 

Crossing the Enterprise Chasm
Adapting Open Source Developer Products for Commercial Enterprise

May 2, 2023 · 23:05


In this episode, WorkOS CEO Michael Grinich and HashiCorp Co-Founder and CTO Armon Dadgar discuss open core strategy, the challenges surrounding cloud-based product adoption for traditional enterprise, and the evolution of enterprise commercial structure.

The React Show
Profitable Open Source With react-admin Founder François Zaninotto

Mar 24, 2023 · 90:26


react-admin is a popular SPA React project. We join founder François Zaninotto to discuss react-admin, profitable open source projects, software environmental sustainability, and engineering design.
My Book - https://www.thereactshow.com/book
The React Show - https://www.thereactshow.com/
react-admin: https://marmelab.com/react-admin/
greenframe: https://greenframe.io/
marmelab: https://marmelab.com/en/
marmelab open-source projects: https://github.com/marmelab
marmelab twitter: https://twitter.com/marmelab
News about react-admin and other marmelab projects: https://marmelab.com/en/blog
Music by DRKST DWN: https://soundcloud.com/drkstdwn
David C Barnett Small Business and Deal Making M&A SMB: I discuss buying, selling, financing and managing small and medium sized businesses... Listen on: Apple Podcasts, Spotify
Support the show

Engineering Kiosk
#59: Kann man mit Open Source Geld verdienen? (Can you make money with open source?)

Feb 21, 2023 · 54:11


Funding for open-source projects is essential - but what options are there? Open-source projects matter more than ever in today's society. Projects like cURL, OpenSSL, SQLite, and others are often maintained by just a few people, yet millions use them every day, often without even knowing it. Most open-source projects are maintained in people's spare time. How does that add up, especially when rent has to be paid and food has to be on the table? That's where the (not entirely simple) topic of funding open-source projects comes in. In this episode we dig into exactly that and present a few ways you can earn money with, or for, your open-source project. It's not just about the dominant player, GitHub Sponsors, but also professional sponsorship from companies, the early-access model, government grants, and boring topics like taxes. Bonus: what battery-powered radios have to do with open source, and whether money is really motivating.

Open Source Startup Podcast
E70: Making Distributed Systems More Accessible With Diagrid

Jan 18, 2023 · 39:07


Mark Fussell & Yaron Schneider are Co-founders of Diagrid, the platform that simplifies and provides access to the power of distributed systems. Diagrid's founders co-created open source Dapr which Diagrid provides a fully managed service on top of. Dapr has over 20K stars and works on any language or framework. Diagrid has raised over $24M from investors including Norwest Venture Partners and Amplify. In this episode, we discuss contributors rather than stars as a strong engagement metric, why Open Core wasn't the right business model for Diagrid, learnings for other open source founders & more!

Beyond Coding
Open Core, Pricing and AI product development with Dat Tran

Dec 21, 2022 · 55:11


Dat Tran shares how he co-founded Priceloop, what problem it solves and how he's choosing to open source the core product. I've seen more and more organisations choose this open core strategy. It removes lots of the hurdles for customer and developer adoption. On top of that, you join forces / create a community to make the product better. Enjoy!

Bringing Design Closer with Gerry Scullion
Scott Jenson 'From Apple, Symbian to Google - Exploring the World of Free and Open Source Software (FOSS)'

Dec 7, 2022 · 33:32


I caught up with Scott Jenson recently - Scott refers to himself as a battle-scarred veteran of the software industry. He has been doing user interface design and strategic planning for over 30 years. He worked at Apple on System 7, Newton, and the Apple Human Interface guidelines. He was UX director of Symbian, VP of product design for Cognima, managed mobile UX for Google and was a creative director at frog design in San Francisco. Scott returned to Google in 2013 to lead the Physical Web project and research future Android UX concepts. In 2021, Scott left Google to explore life outside. In this episode we drill into Scott's focus at the moment, Design within FOSS (free and open source software). We plan on recording two episodes, so this is Part 1. Part 2 will follow in early 2023. Become a Patron of This is HCD / https://www.thisishcd.com/become-a-patron Buy Gerry a Coffee / https://thisishcd.ck.page/products/buy-me-a-coffee Sign up to This is HCD Newsletter / https://www.thisishcd.com/community/stay-up-to-date-with-this-is-hcd Follow Gerry Scullion on Twitter / https://twitter.com/gerrycircus Follow This is HCD on Twitter / https://twitter.com/thisishcd Connect with Scott on LinkedIn / https://www.linkedin.com/in/scottjenson/ Connect with Scott on Mastodon / https://social.coop/@scottjenson Connect with Scott on Twitter / https://twitter.com/scottjenson View Scott's website / https://jenson.org/ Penpot / https://penpot.app/ Open Core / https://dortania.github.io/OpenCore-Install-Guide/ Elastio / https://elastio.com/ Learn more about your ad choices. Visit megaphone.fm/adchoices

This is HCD - Human Centered Design Podcast
Scott Jenson 'From Apple, Symbian to Google - Exploring the World of Free and Open Source Software (FOSS)'

Nov 29, 2022 · 33:17


I caught up with Scott Jenson recently - Scott refers to himself as a battle-scarred veteran of the software industry. He has been doing user interface design and strategic planning for over 30 years. He worked at Apple on System 7, Newton, and the Apple Human Interface guidelines. He was UX director of Symbian, VP of product design for Cognima, managed mobile UX for Google and was a creative director at frog design in San Francisco. Scott returned to Google in 2013 to lead the Physical Web project and research future Android UX concepts. In 2021, Scott left Google to explore life outside. In this episode we drill into Scott's focus at the moment, Design within FOSS (free and open source software). We plan on recording two episodes, so this is Part 1. Part 2 will follow in early 2023. Become a Patron of This is HCD / https://www.thisishcd.com/become-a-patron Buy Gerry a Coffee / https://thisishcd.ck.page/products/buy-me-a-coffee Sign up to This is HCD Newsletter / https://www.thisishcd.com/community/stay-up-to-date-with-this-is-hcd Follow Gerry Scullion on Twitter / https://twitter.com/gerrycircus Follow This is HCD on Twitter / https://twitter.com/thisishcd Connect with Scott on LinkedIn / https://www.linkedin.com/in/scottjenson/ Connect with Scott on Mastodon / https://social.coop/@scottjenson Connect with Scott on Twitter / https://twitter.com/scottjenson View Scott's website / https://jenson.org/ Penpot / https://penpot.app/ Open Core / https://dortania.github.io/OpenCore-Install-Guide/ Elastio / https://elastio.com/ Learn more about your ad choices. Visit megaphone.fm/adchoices

The Changelog
Harmonai revisited, lessons learned from public salary, Open Core Ventures, Stripe is Paypal in 2010 & Helix

Oct 17, 2022 · 6:38 · Transcription Available


We revisit our Harmonai story from last week, Jamie Tanna reviews posting his salary history publicly, Sid Sijbrandij's new (open core) venture fund, Zed Shaw thinks Stripe is like Paypal in 2010 & Helix is a new Rust-based terminal.

Changelog News
Harmonai revisited, lessons learned from public salary, Open Core Ventures, Stripe is Paypal in 2010 & Helix

Oct 17, 2022 · 6:38 · Transcription Available


We revisit our Harmonai story from last week, Jamie Tanna reviews posting his salary history publicly, Sid Sijbrandij's new (open core) venture fund, Zed Shaw thinks Stripe is like Paypal in 2010 & Helix is a new Rust-based terminal.

Changelog Master Feed
Harmonai revisited, lessons learned from public salary, Open Core Ventures, Stripe is Paypal in 2010 & Helix (Changelog News)

Oct 17, 2022 · 6:38 · Transcription Available


We revisit our Harmonai story from last week, Jamie Tanna reviews posting his salary history publicly, Sid Sijbrandij's new (open core) venture fund, Zed Shaw thinks Stripe is like Paypal in 2010 & Helix is a new Rust-based terminal.

Secure Ventures with Kyle McNulty
FleetDM: Mike McNeill on the BEST Way to Monetize a Product (Open Core)

Sep 20, 2022 · 41:53


Mike: Founder at FleetDM, helping organizations manage and optimize their OSquery deployments. Previously founded Sails.js, the most popular MVC framework for node.js, with over 50 million downloads per year. A strong believer in Open Source and Open Core software products.
Check out the episode for our conversation on open source security software, pivoting from an open source contributor to a full-time founder, and more!
Links: https://fleetdm.com/
GitLab article about Open Core: https://about.gitlab.com/company/pricing/

Open Source Startup Podcast
E49: Momento, the World's Fastest Cache

Sep 6, 2022 · 39:56


Daniela Miao is Cofounder of Momento, the serverless cache that automatically optimizes, scales, and manages your cache for you. Momento works with open source caching engine Pelikan which was created at Twitter. Daniela is joined in this episode by Yao Yue, a Principal Software Engineer at Twitter who is a core part of Twitter's Pelikan Caching team. Today, Momento provides a SaaS service on top of Pelikan in an Open Core model. In this episode, we discuss launching a company on top of an open source project started by a team outside of the founders, messaging and positioning for technical companies, team building, and much more!

The Swyx Mixtape
[Biz] Open Core Ventures - Sid Sijbrandij (@sytses)

Aug 4, 2022 · 3:53


Listen to 20VC: https://thetwentyminutevc.libsyn.com/20vc-gitlab-ceo-sid-sijbrandij-on-why-you-are-not-allowed-to-present-in-meetings-at-gitlab-why-it-is-a-pipedream-we-will-go-back-to-offices-and-what-is-the-future-of-work-ceo-coaches-what-makes-the-best-when-to-have-them-and-when-to-change-them (31 mins in)
Discuss this episode: https://twitter.com/swyx/status/1555237139633283072
We want to hear from you! The Swyx Mixtape Listener Survey: fill out our 2022 Survey! https://forms.gle/g2s1Np9wS5qmrKSRA
Survey context: https://mixtape.swyx.io/episodes/swyx-mixtape-survey-refactor-and-deadpool-swyx
Results will be summed up in a future episode

Coder Radio
466: Luxury Emotional Manipulation

May 18, 2022 · 51:40


Why Mike feels like Heroku is in a failed state, what drove us crazy about Google I/O this year, how Chris botched something super important, and some serious Python love sprinkled throughout.

Coder Radio
465: Mike's Magic Mom

May 11, 2022 · 59:47


After solving a moral dilemma in our particular kind of way, Mike dishes on some ambitious plans that might kick off a new era of development for him.

Giant Robots Smashing Into Other Giant Robots
405: RackN Digital Rebar with Rob Hirschfeld

Dec 23, 2021 · 47:24


Chad talks to Rob Hirschfeld, the Founder and CEO of RackN, which develops software to help automate data centers, which they call Digital Rebar. RackN is focused on helping customers automate infrastructure. They focus on customer autonomy and self-management, and that's why they're a software company, not a services or as-a-service platform company. Digital Rebar is a platform that helps connect all of the different pieces and tools that people use to manage infrastructure into infrastructure pipelines through the seamless multi-component automation across all of the different pieces and parts that have to be run to bring up infrastructure. RackN's Website (https://rackn.com/); Digital Rebar (https://rackn.com/rebar/). Follow Rob on Twitter (https://twitter.com/zehicle) or LinkedIn (https://www.linkedin.com/in/rhirschfeld/). Visit his website at robhirschfeld.com (https://robhirschfeld.com/). Follow RackN on Twitter (https://twitter.com/rackngo), LinkedIn (https://www.linkedin.com/company/rackn/), or YouTube (https://www.youtube.com/channel/UCr3bBtP-pMsDQ5c0IDjt_LQ). Follow thoughtbot on Twitter (https://twitter.com/thoughtbot), or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots!
Transcript: CHAD: This is the Giant Robots Smashing Into Other Giant Robots Podcast where we explore the design, development, and business of great products. I'm your host, Chad Pytel. And with me today is Rob Hirschfeld, Founder, and CEO of RackN, which develops software to help automate data centers, which they call Digital Rebar. Rob, welcome to the show. ROB: Chad, it is a pleasure to be here. Looking forward to the conversation. CHAD: Why don't we start with a little bit more information about what RackN and the Digital Rebar platform actually is. ROB: I would be happy to. RackN is focused on helping customers automate infrastructure. And for us, it's really important that the customers are doing the automation. We're very focused on customer autonomy and self-management. It's why we're a software company, not a services or as a service platform company. But fundamentally, what Digital Rebar does is it is the platform that helps connect all of the different pieces and tools that people use to manage infrastructure into infrastructure pipelines through the seamless multi-component automation across all of the different pieces and parts that have to be run to bring up infrastructure. And we were talking data centers do a lot of on-premises all the way from the bare metal up. But multi-cloud, you name it, we're doing infrastructure at that level. CHAD: So, how agnostic to the actual bare metal are you? ROB: We're very agnostic to the bare metal. The way we look at it is data centers are heterogeneous, diverse places. And that the thing that sometimes blocks companies from being innovative is when they decide, oh, we're going to use this one vendor for this one platform. And that keeps them actually from moving forward. So when we look at data centers, the heterogeneity and sometimes the complexity of that environment is a feature. It's not a bug from that perspective. And so it's always been important to us to be multi-vendor, to do things in a vendor-neutral way to accommodate the quirks and the differences between...and it's not just vendors; it's actually user choice. A lot of companies have a multi-vendor problem (I'm air quoting) that is actually a multi-team problem where teams have chosen to make different choices.
TerraForm has no conformance standard built into it. [laughs] And so you might have everybody in your company using TerraForm and Ansible happily but all differently. And that's the problem that we walk into when we walk into a data center challenge. And you can't sweep that under the rug. So we embraced it. CHAD: What kind of companies are your primary customers? ROB: We're very wide-ranging, from the top banks use us and deploy us, telcos, service providers, very large scale service providers use us under the covers, media companies. It really runs the gamut because it's fundamentally for us just about infrastructure. And our largest customers are racing to be the first to deploy. And it's multi-site, but 20,000 machines that they're managing under our Digital Rebar management system. CHAD: It's easy, I think, depending on where you sit and your experiences. The cloud providers today can overshadow the idea that there are even people who still have their own data centers or rent a portion of a data center. In today's ecosystem, what are some of the factors that cause someone to do that who isn't an infrastructure provider themselves? ROB: You know the funny thing about these cloud stories (And we're talking just the day after Amazon had a day-long outage.) is that even the cloud providers don't have you give up operation. You're still responsible for the ops. And for our customers, it's not like they can all just use Lambdas and API gateways. At the end of the day, they're actually doing multi-site distributed operations. And they have these estates that are actually it's more about how do I control distributed infrastructure as much as it is about repatriating. Now, we do a lot to help people repatriate. And they do that because they want more control. Cost savings is a significant component with this. You get into the 1000s of machines, and that's a big deal. Even at hundreds of machines, you can save a lot of money compared to what you get in cloud. And I think people get confused with it being an or choice. It really is an and choice. Our best customers are incredibly savvy cloud users. They want that dynamic, resilient very API-driven environment. And they're looking to bring that throughout the organization. And so those are the ones that get excited when they see what we've done because we spend a lot of time doing infrastructure as code and API-driven infrastructure. That's really what they want. CHAD: Cool. So, how long have you been working on RackN? When did you found it? ROB: [laughs] Oh my goodness. So RackN is seven years old. Digital Rebar, we consider it to be at its fourth generation, but those numbers actually count back before that. They go back to 2009. The founding team was actually at Dell together in the OpenStack heyday and even before the OpenStack heyday. And we were trying to ship clouds from the Dell Factory. And what we found was that every customer had this bespoke data center we've already talked about. And we couldn't write automation that would work customer to customer to customer. And it was driving us nuts. We're a software team, not a hardware team inside of Dell. And the idea that if I fixed something in the delivery or in their data center, and couldn't go back to their data center because it was different than what the next customer needed and the next customer needed, we knew that we would never have a community. It's very much about this community and reuse pattern. 
There's an interesting story that I picked up from SREcon actually where they were talking about the early days of boilers. This is going back a few centuries ago. But when they first started putting boilers into homes and buildings, there was no pattern, there was no standard. And everybody would basically hire a plumber or a heating architect. Heating architect was a thing. But you'd build a boiler and every one was custom, and every one was different. And no surprise, they blew up a lot, and they caused fires. And buildings were incredibly unsafe because they were working on high-pressure systems with no pattern. And it took regulation and laws and standards. And now nobody even thinks about it. You just take standard parts, and you connect them together in standard ways. And that creates actually a much more innovative system. You wouldn't want every house to be wired uniquely either. And so when we look at the state of automation today, we see it as this pre-industrial pre-standardization process and that companies are actually harmed and harming themselves because they don't have standards, and patterns, and practices that they can just roll and know they work. And so that philosophy started way back in 2009 with the first generation which was called Crowbar. Some of your audience might even remember this from the OpenStack days. It was the first OpenStack installer built around Chef. And it had all sorts of challenges in it, but it showed us the way. And then we iterated up to where Digital Rebar is today. Really fully infrastructure as code, building infrastructure pipelines, and a lot of philosophical pieces we've learned along the way. CHAD: So you were at Dell working on this thing. How did you decide to leave Dell and start something new? ROB: Dell helped me with that decision. [laughs] So the challenge of being a software person inside of Dell especially at the time, Crowbar was open-source which did make it easier for us to say, "Hey, we want to part ways but keep the IP." And the funny thing is there's not a scrap of Crowbar in Digital Rebar except one or two naming conventions that we pulled forward and the nod of the name, that Rebar is a nod to Crowbar. But what happened was Dell when it went private, really did actually double down on the hardware and the more enterprise packaged things. They didn't want to invest in DevOps and that conversation that you need to have with your customers on how they operate, the infrastructure you sold them. And that made Dell not a very good place for me and the team. And so we left Dell, looked at the opportunity to take what we'd been building with Crowbar and then make it into a product. That's been a long journey. CHAD: Now, did you bootstrap, or did you take investment? ROB: We took [laughs] a little bit of investment. We raised some seed funding. Certainly not what was in hindsight was going to be sufficient for us to do it. But we thought at the time that we had something that was much more product-ready for customers than it was. CHAD: And what was the challenge that you found? What was the surprise there that it wasn't as ready as you thought? ROB: So what we've learned in our space specifically...and there are some things that I think apply to everybody, and there are some things that you might be able to throw on the floor and ignore. I was a big fan of Minimum Viable Product. And it turned out that the MVP strategy was not at all workable with customers in data centers. 
Our product is for people running production data centers. And nobody's going to put in software to run a data center that is MVP. It has to be resilient. It has to be robust. It has to be simple enough that they can understand it and solve some core problems, which is still sort of an MVP idea. But it can't be oops. [laughs] You can't have a lot of oops moments when you're dealing with enterprise infrastructure automation software. It has to work. And importantly, and as a design note, this has been a lesson for us. If it does break, it has to break in very transparent, obvious ways. And I can't emphasize that enough. There's so much that when we build it, we come back and like, was it obvious that it broke? Is it obvious that it broke in a way that you can fix? CHAD: And it's part of the culture too to do detailed post mortems with explanations and be as transparent as possible or at least find the root cause so that you can address it. That's part of the culture of the space too, right? ROB: You'd like to hope so. [laughs] CHAD: Okay. [laughs] In my experience, that's the culture of the space. ROB: You're looking more at a developer experience. But even with a developer, you've got to be in a post mortem or something. And it's like everybody's pointing to the person to the left and the right sort of by human nature. You don't walk into that room assuming that it was your fault, and you should, but that's not how it usually is approached. And what we find in the ops space, and I would tell people to work around this pattern if they can, is that if you're the thing doing the automation, you're always the first cause of the problem. So we run into situations where we're doing a configuration, and we find a vendor bug or a glitch or there's something, and we found it. It's our problem whether we were the cause or not. And that's super hard. I think that people on every side of any type of issue need to look through and figure out what the...the blameless post mortem is a really important piece in all this. At the end of the day, it's always a human system that made a mistake or has something like that. But it's not as simple as the thing that told you the bad news that the messenger was at fault. And there's a system design element to that. That's what we're talking about here is that when you're exposing errors or when something's not behaving the way you expect, our philosophy is to stop. And we've had some very contentious arguments with customers who were like, "Just retry until it fixes itself," or vendors who were like, "Yeah, if you retry that thing three times, [laughs] then it'll magically go away." And we're like, that's not good behavior. Fix the problem. It actually took us years to put a retry element into the tasks so that you can say, yeah, this does need three retries. Just go do it. We've resisted for a long time for this reason. CHAD: So you head out into the market. And did you get initial customers, or was there so much resistance to the product that you had that you struggled to get even first customers? ROB: We had first customers. We had a nice body of code. The problem is actually pretty well understood even by our customers. And so it wasn't hard for them to get a trial going. So we actually had a very profitable customer doing...it was in object storage, public object storage space. And they were installing us. They wanted to move us into all their data centers. 
But for it to work, we ended up having an engineer who basically did consulting and worked with them for the better part of six months and built a whole bunch of stuff, got it working. They could plug in servers, and everything would set itself up. And they could hit a button and reset all the servers, and they would talk to the switches. It was an amazing amount of automation. But, and this happens a lot, the person we'd been working with was an SRE. And when they went to turn it over to the admins in the ops team, they said, [laughs] "We can't operate. There's too much going on, too complex." And we'd actually recognized...and this is a really serious challenge. It's a challenge now that we're almost five years into the generation that came after that experience. And we recognized there was a problem. And that this wasn't going to create that repeatable experience that we were looking for if it took that much. At the same time, we had been building what is now Digital Rebar in this generation that was a single Golang binary. All the services were bundled into the system. So it listened on different ports but provide all the services, very easy to install, really, really simple. We literally stripped everything back to the basics and restarted. And we had this experience where we sat down with a customer who had...I'm going to take a second and tell the story because this is such a compelling story from a product experience. So we took our first product. We were in a bake-off with another bare metal focus provisioning at the time. And they were in a lab, and they set our stuff up. And they turned it on, and they provisioned. And they set up the competitor, and they turned it on and provisioned. And both products worked. Our product took 20 minutes to go through the cycle and the competitor took 3. And the customer came back and said, "I can't use this. I like your product better. It has more controls with all this stuff." But it took 20 minutes instead of 3. We actually logged into the system, looked at it and we were like, "Well, that's because it recognized that your BIOS was out of date, patched your BIOS, updated the system, checked that it was right, and then rebooted the systems and then continued on its way because it recognized your systems were outdated automatically. And he said, "I didn't want it to do that. I needed it to boot as quickly as possible." And literally, [laughs] we were in the middle of a team retreat. So it's like, the CTO is literally excusing himself on the table to talk to the guy to make this stuff, try and make it right. And he's like, "Well, we've got this new thing. Why don't you install this, what's now Digital Rebar, on the system and repeat the experiment?" And he did and Digital Rebar was even faster than the competitor. And it did exactly just install, booted, and was done. And he came back to the table, and it took 15 minutes to have the whole conversation to make everything work. It was that much of a simpler design. And he sat down and told the story. And I was in the middle of it. I'm just like, "We're going to have to pivot and put everything into the new version," which is what we did. And we just ripped out the complexity. And then over the last couple of years now, we've built the complexity back into the system to do all those additional but much more customer-driven from that perspective. 
CHAD: How did you make sure that as you were changing your focus, putting all of your energy into the new version that you [laughs] didn't introduce too much risk in that process or didn't take too long? ROB: [laughs] We did take too long and introduced too much risk, and we did run out of money. [laughs] All those things happened. This was a very difficult decision. We thought we could get it done much faster. The challenge of the simpler product was that it was too simple to be enough in customers' data centers. And so yeah, we almost went out of business in the middle of all this cycle. We had a time where RackN went back down to just the two founders. And at this point, we'd gotten far enough with the product that we knew it was the right thing. And we'd also embedded a degree...with the way we do the UX, we have this split. The UX runs on a hosted system. It doesn't have to but by default, it does. And then we have the back end. So we were very careful about how we collected metrics because you really need to know who's downloading and using your products. And we had enough data from that to realize that we had some very committed early users and early customers, just huge brand names that were playing around. So we knew that we'd gotten this mix right, that we were solving a problem in a unique way. But it was going to take time because big companies don't make decisions quickly. We have a joke. We call it the reorg half-life problem. So the half-life of a reorg in any of our customers is about nine months. And either you're successful inside of that reorg half-life, or you have to be resilient across this reorg half. And so initially, it was taking more than nine months. We had to be able to get the product in play. And once we did, we had some customers who came in with very big checks and let us come back and basically build back up. And we've been adding some really nice names into our customer roster. Unfortunately, it's all private. I can tell you their industries and their scale, but I can't name them. But that engagement helped drive us towards the feature set and the capabilities and building things up along that process. But it was frustrating. And some of them, especially at the time we were open-source, were very happy to say, "No, we are a super big brand name. We don't pay for software." I'm like, "Most profitable, highest valued companies in the world you don't want to pay for this operational software?" And they're like, "No, we don't have to." And that didn't sit very well with us. Very hard, as a starting startup, it was hard. CHAD: At the time, everything you were doing was open source. ROB: So in the Digital Rebar era, we were trying to do Open Core. Digital Rebar itself was open. And then we were trying to hold back the BIOS patches, integrate enterprise single sign-on. So there was a degree of integration pieces that we held back as RackN and then left the core open. So you could use Digital Rebar and run it, which we had actually had a lot of success with people downloading, installing, and running Digital Rebar, not as much success in getting them to pay us for that privilege. CHAD: So, how did you adjust to that reality? ROB: We inverted the license. After we landed a couple of big banks and we had several others and some hyperscalers too who were like, "This is really good software. We love it. We're embedding it in our service, but we're not going to pay you." And then they would show up with bugs and complaints and issues and all sorts of stuff still. 
And what happened is we started seeing them replicating the closed pieces. The APIs were open. We actually looked at it and listening to our communities, they wanted to see what was in the closed pieces. That was actually operationally important for them to understand how that stuff worked. They never contributed or asked to see anything in the core. And, there's an important and here, and they needed performance improvements in the core that were radically different. So the original open-source stuff went to maybe 500 machines, and then it started to cap out. And we were like, all right, we're going to have to really rewrite the data store mechanisms that go with this. And the team looked at each other and were like, "We're not going to open source that. That's really complex and challenging IP." And so we said the right model for us is going to be to make the core closed and then allow our community and users to see all the things that they are actually using to interact with their environment. And it ends up being a little bit of a filter. There are people who only use open-source software. But those companies also don't necessarily want to pay. When I was an open-source evangelist, this was always a problem. You're pounding on the table saying, "If you're using open-source software, you need to understand who to pay for that service, that software that you're getting. If you're not paying for it, that software is going to go away." In a lot of cases, we're a walking example of that. And it's funny, more of the codebase is open today than it was then. [chuckles] But the challenge is that it's really an open ecosystem now because none of that software is particularly useful without the core to run it and glue everything together. CHAD: Was that a difficult decision to make? Was it controversial? ROB: Incredibly difficult. It was something I spent a lot of time agonizing about. My CTO is much clear-eyed on this. From his perspective, he and the other engineers are blood, sweat, and tears putting this in. And it was very frustrating for them to see it running people's production data centers who told us, and this is I think the key, who just said to us, "You know, we're not going to pay money for that." And so for them, it was very clear-eyed it's their work, their sweat equity, very gut feeling for that. For me, I watched communities with open-source routes, you know, the Kubernetes community. I was in OpenStack. I was on the board for that. And there is definitely a lift that you get from having free software and not having the strings. And I also like the idea that from a support perspective, if you're using open-source software, you could conceivably not care for the vendor that went away. You could find another life for it. But years have gone by and that's not actually a truism that when you are using open-source software if you're getting it from a vendor, you're not necessarily protected from that vendor making decisions for you. CentOS is a great...the whole we're about to hit the CentOS deadlines, which is the Streams, and you can't get other versions. And we now have three versions of CentOS, at least three versions of CentOS with Rocky, and Alma, and CentOs Streams. Those are very challenging decisions for people running enterprise data centers, not that simple. And nobody in our communities is running charity data centers. There's no goodwill charity. I'm running a data center out of the goodness of my heart. [laughs] They are all production systems, enterprise. 
They're doing real production work. And that's a commercial engagement. It's not a feel-good thing. CHAD: So what did you do in your decision-making process? What pushed you, or what did you come to terms with in order to make that change? ROB: I had to admit I was wrong. [laughter] I had to think back on statements I'd made and the enthusiasm that I'd had and give up some really hard beliefs. Being a CEO or a founder is the same process. So I wish I could say this was the only time [laughs] I had to question, you know, hard-made assumptions, or some core beliefs in what I thought. I've had to get really good at questioning when am I projecting this is the way I want the world to be therefore it will be? That's a CEO skill set and a founder skill set...and when that projection is having you on thin ice. And so you constantly have to make that balance. And this was one of those ones where I'm like, all right, let's do it. And I still wake up some mornings and look at people who are open source only and see how much press they get or how easy it is for them to get mentions and things like that. And I'm like, ah, God, that'd be great. It feels like it's much harder for us because we're commercial to get the amplification. There are conferences that will amplify open-source TerraForm, great example. It gets tons of amplification for being a single vendor project that's really tightly controlled by HashiCorp. But nobody is afraid to go talk about TerraForm and mention TerraForm and do all this stuff, the amazing use of open source by that company. But they could turn it and twist it, and they could change it. It's not a guarantee by any stretch of the imagination. CHAD: Well, one of the things that I've come to terms with, and maybe this is a very positive way of looking at it, instead of that you were wrong, [laughter] is to realize that well, you weren't necessarily wrong. It got you to where you were at that point. But maybe in order to go to the next level, you need to do something different. And that's how I come to terms with some things where I need to change my thinking. ROB: [laughs] I like that. It's good. Sometimes you can look back and be like, yeah, that wasn't the right thing and just own it. But yeah, it does help you to know the path. Part of the reason why I love talking about it with you like this is it's not just Rob was wrong; we're actually walking the path through that decision. And it's easy to imagine us sitting in...we're in a tiny, little shared office listening to calls where...I'll tell you this as a story to make it incredibly concrete because it's exactly how this happened. We were on a call. Everybody was in the room. And we were talking to a major bank saying, "We love your software." We're like, "Great, we're looking forward to working with you," all this stuff. And they're like, "Yeah, we need you to show us how you built this plugin because we want to write our own version of it." CHAD: [chuckles] ROB: We're like, "If you did that, you wouldn't need to buy our software." And they're like, "That's right. We're not going to buy your software." CHAD: Exactly. [laughs] ROB: And we're like, "Well, we won't show you how to use it. Then we won't show you how to do that." And they're like, "Well, okay. We'll figure it out ourselves." And so I'm the cheerful, sunny, positive, sort of managing the call, and I'm not just yelling at them. My CTO is sitting next to me literally tearing his hair. This was literally a tearing his hair out moment. 
And we hung up the call, and we went on a walk around the neighborhood. And he was just like, "What more do you need to hear for you to understand?" And so it's moments like that. But instead of being like, no, you're wrong, we got to do it this way, I was ready to say, "Okay, what do you think we can do? How do we think we can do it?" And then he left me with a big pile of PR messaging to explain what we're doing, conversations like this. Two years ago when we made this change, almost three, I felt like I was being handed a really hard challenge. As it turns out, it hasn't been as big a deal. The market has changed about how they perceive open source. And for enterprise customers, they're like, "All right, how do we deal with the licensing for this stuff?" And we're like, "You just buy it from us." And they're like, "That's it?" And I'm like, "Yes." And you guarantee every..." "Yes." They're like, "Oh. Well, that's pretty straightforward. I don't have to worry about..." We could go way down an open-source rabbit hole and the consulting pieces and who owns the IP, and I used to deal with all that stuff. Now it's very straightforward. [laughs] Like, "You want to buy and use the software to run your data center?" "Yes, I do." "Great." CHAD: Well, I think this is generally applicable even beyond your specific product but to products in general. It's like, when you're not talking to people who are good customers or who are even going to be your customers who are going to pay for what you want, you can spend a lot of time and energy trying to please them. But you're not going to be successful because they're not going to be your customers no matter what you do. ROB: And that ends up being a bit of a filter with the open-source pieces is that there are customers who were dyed in the wool open source. And this used to be more true actually as the markets moved a lot. We ended up just not talking to many. But they do, they want a lot. They definitely would ask for features or things and additions and help, things like that. And it's hard to say no. Especially as a startup founder, you want to say yes a lot. We try to not say yes to things that we don't...and this puts us at a disadvantage I feel like from a marketing perspective. If we don't do something, we tend to say we don't do it, or we could do it, but it would take whatever. I wish more people in the tech space were as disciplined about this does work, this doesn't work, this is a feature. This is something we're working on. It's not how tech marketing typically works sadly. That's why we focus on self-trials so people can use the product. Mid-roll Ad I wanted to tell you all about something I've been working on quietly for the past year or so, and that's AgencyU. AgencyU is a membership-based program where I work one-on-one with a small group of agency founders and leaders toward their business goals. We do one-on-one coaching sessions and also monthly group meetings. We start with goal setting, advice, and problem-solving based on my experiences over the last 18 years of running thoughtbot. As we progress as a group, we all get to know each other more. And many of the AgencyU members are now working on client projects together and even referring work to each other. 
Whether you're struggling to grow an agency, taking it to the next level and having growing pains, or a solo founder who just needs someone to talk to, in my 18 years of leading and growing thoughtbot, I've seen and learned from a lot of different situations, and I'd be happy to work with you. Learn more and sign up today at thoughtbot.com/agencyu. That's A-G-E-N-C-Y, the letter U. CHAD: So you have the core and then you have the ecosystem. And you also mentioned earlier that it is an actual software package that people are buying and installing in their data center. But then you have the UI which is in the cloud and what's in the data center is reporting up to that. ROB: Well, this is where I'm going to get very technical [laughs] so hang on for a second. We actually use a cross-domain approach. So the way this works...and our UX is written in React. And everything's...boy, there's like three or four things I have to say all at once. So forgive me as I circle. Everything we do at Digital Rebar is API-first, really API only, so the Golang service with an API, which is amazing. It's the right way to do software. So for our UX, it is a React application that can talk to that what we call an endpoint, that Digital Rebar endpoint. And so the UX is designed to talk directly to the Digital Rebar endpoint, and all of the information that it gets comes from that Digital Rebar endpoint. We do not have to relay it. Like, you have to be inside that network to get access to that endpoint. And the UX just talks to it. CHAD: Okay. And so the UX is just being served from your centralized servers, but you're just delivering the React for the JavaScript app. And that is talking to the local APIs. ROB: Right. And so we do use that browser as a bridge. And so when you want to download new content packs...so Digital Rebar is a platform. So you have to download content and automation and pieces into it. The browser is actually your bridge to do that. So the browser can connect to our catalog, pull down our catalog, and then send things into that browser. So it's super handy for that. But yeah, it's fundamentally...it's all behind your firewall software except...and this is where people get confused because you're downloading it from rackn.io. That download or the URL on the browser looks like it's a RackN URL even though all the traffic is network local. CHAD: Do your customers tend to stay up to date? Are they updating to the latest version right away all the time? ROB: [laughs] No, of course not. CHAD: I figured that was the answer. ROB: And we maintain patches on old versions and things like that. I wish they were a little faster. I'm not always sad that they're...I'm actually very glad when we do a release like we did yesterday...And in that release, I don't expect any of our production customers to go patch everything. So in a SaaS, you might actually have to deal with the fact that you've got...and we're back to our heterogeneity story. And this is why it's important that we don't do this. If we were to push that, if we didn't handle every situation for every customer exactly right, there would be chaos. And it would all come back to our team. The way we do it means that we don't have to deal with that. Customers are in control of when they upgrade and when they migrate, except in the UX case. CHAD: So how do you manage that if someone goes to the UI and their local thing is an old version? Are you detecting that and doing things differently? 
ROB: Yes, one of the decisions we made that I'm really happy with is we embedded feature flags into the API. When you log in, it will pull back. We know what the versions are. But versions are really problematic as a way to determine what's in software, not what's not in software. So instead, we get an array back that has feature flags as we add features into the core. And we've been doing this for years. And it's an amazingly productive process. And so what the UX does is as we add new things into the UX, it will look for those feature flags. And if the feature flag isn't there, it will show you a message that says, "This feature is not available for your endpoint," or show you the thing appropriate without that. And so the UX has gone through years of this process. And so there are literally just places where the UX changes behavior based on what you've installed on your system. And remember, our customers it's multi-site. So our customers do have multiple versions of Digital Rebar installed across there. So this behavior is really important also for them to be able to do it. And it goes back to LaunchDarkly. I was talking to Edith back in the early days of LaunchDarkly and feature flags, and I got really excited about that. And that's why we embedded it into the product. Everybody should do it. It's amazing. CHAD: One of the previous episodes a few ago was with actually the thoughtbot CTO, Joe Ferris. And we're on a project together where it's a different way of working but especially when you need it... so much of what I had done previously was versioned APIs. Maybe that works at a certain scale. But you get to a certain scale of software and way of working and wanting to do continuous deployment and continually update features and all that stuff. And it's a really good way of working when instead you are communicating on the level of feature availability. ROB: And from an ops person's perspective, and this was true with OpenStack, they were adding feature flags down at the metadata for the...it was incredible. They went deep into the versioned API hellscape. It's the only way I can describe it [laughs] because we don't do that. But the thing that that does not help you with is a lot of times the changes that you're looking at from an API perspective are behavior changes, not API changes. Our API over years now has been additive. And as long as you're okay with new objects showing up, new fields showing up in an object, you could go back to four-year-old software, talk to our API, and it would still work just fine. So all your integrations are going to be good, but the behavior might change. And that's what people don't...they're like, oh, I can make my API version, and everything's good. But the behavior that you're putting behind the scenes might be different. You need a way to express that even more than the APIs in my opinion. CHAD: I do think you really see that when you...if you're just building a monolithic web app, it's harder to see. But once you separate your UI from your back end...and where I first hit this was with mobile applications. The problem becomes more obvious to you as a developer I think. ROB: Yes. CHAD: Because you have some people out there who are actually running different versions of your UI too. So your back end is the same for everybody but your UI is different. ROB: [laughs] CHAD: And so you need a back end that can respond to different clients. 
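(The feature-flag handshake Rob describes boils down to a pattern like the following. This is a minimal sketch only: the endpoint path, response shape, and flag name are hypothetical rather than RackN's actual API. It shows the idea of gating client behavior on an additive list of flags instead of a version string.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Info mirrors the idea of an endpoint describing itself: a version string
// (informational only) plus a list of feature flags that grows additively as
// capabilities are added to the core.
type Info struct {
	Version  string   `json:"version"`
	Features []string `json:"features"`
}

// HasFeature is the check a client makes before enabling a behavior.
func (i Info) HasFeature(name string) bool {
	for _, f := range i.Features {
		if f == name {
			return true
		}
	}
	return false
}

func main() {
	// Hypothetical local endpoint; a real client would also handle auth.
	resp, err := http.Get("https://endpoint.local/api/v0/info")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var info Info
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		panic(err)
	}

	// Gate behavior on the flag, not on the version string.
	if info.HasFeature("multi-site-sync") {
		fmt.Println("showing multi-site sync controls")
	} else {
		fmt.Println("this feature is not available for your endpoint")
	}
}
```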
And a better way to do that rather than versioning your API is to have the clients tell you what they're capable of while they're making the requests and to respond differently. It's much more of a flexible way. ROB: We do track what UX. We have customers who don't want to use that. They don't even want us changing the UX...or actually normal enterprise. And so they will run...the nice thing about a React app is you can just run it. The Digital Rebar can host its UX, and that's perfectly reasonable. We have customers who do that. But every core adds more operational complexity. And then if they don't patch the UX, they can fall behind or not get features. So we see that it's...you're describing a real, you know, the more information you're exchanging between the clients and the servers, the better for you to track what's really going on. CHAD: And I think overall once you can get a little...in my experience, especially people who haven't worked that way, joining the team, it can take a little bit for them to get comfortable with that approach and the flexibility you need to be building into your system. But once people are comfortable with it and the team is comfortable, it really starts to hum. In my experience, a lot of what we've advocated for in terms of the way software should be built and deployed and that kind of thing is it actually makes it so that you can leave that even easier. And you can really be agile because you can roll things out in a very agile way. ROB: So are you thinking like an actual rolling deployment where the deployed software has multiple versions coming through? CHAD: Yep. And you can also have different users seeing different things at different times as well. You can say, "We're going to be doing continual deployment and have code continually deployed." But that doesn't mean that it's part of the release yet, that it's available to users to use. ROB: Yeah, that ability to split and feature flag is a huge deal. CHAD: Yeah. What I'm trying to figure out is does this apply to every project even the small like, this just changes the way you should build software? Or is there a time in a product to start introducing that thing? ROB: I am a big fan of doing it first and fast. There are decisions that we made early that have proven out really well. Feature flags is one of them. We started right away knowing that this would be an important thing for us to do. And same thing with tracking dependencies and being able to say, "I need..." actually, it's helpful because you can write automation that says, "I need this feature in the product." This flag and the product it's not just a version thing. That makes the automation a little bit more portable, easier to maintain. The other thing we did that I really like is all of our objects have documentation embedded in them. So as I write a parameter or an ask or really anything in the system, everything has a documentation field. And so I can write the documentation for that component right there. And then we modified our build scripts so that they will pull in all of that documentation and create an aggregated view. And so the ability to do just-in-time documentation is very, very high. And so I'm a huge fan of that. 
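(A minimal sketch of the embedded-documentation idea Rob just described. The field and object names are hypothetical, not Digital Rebar's actual schema; the point is that a build step can aggregate per-object doc strings into one view.)

```go
package main

import (
	"fmt"
	"strings"
)

// Param is a sketch of an object that carries its own documentation.
type Param struct {
	Name          string `json:"Name"`
	Documentation string `json:"Documentation"`
}

// RenderDocs is the kind of build-time step Rob describes: pull the embedded
// Documentation off every object and aggregate it into a single document.
func RenderDocs(params []Param) string {
	var b strings.Builder
	for _, p := range params {
		b.WriteString("### " + p.Name + "\n\n")
		b.WriteString(p.Documentation + "\n\n")
	}
	return b.String()
}

func main() {
	params := []Param{
		{Name: "ntp/servers", Documentation: "List of NTP servers applied during provisioning."},
		{Name: "access-keys", Documentation: "SSH public keys injected into provisioned machines."},
	}
	fmt.Print(RenderDocs(params))
}
```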
Because then you have the burden of like, oh, I need to go back and write up a whole bunch of documentation really lessened when you can be like, okay, for this parameter, I can explain its behavior, or I can tell you what it does and know that it's going to show up as part of a documentation set that explains it. That's been something I've been a big fan of in what we build. And not everybody [laughs] is as much a fan. And you can see people writing stuff without particularly crisp documentation behind it. But at least we can go back and add that documentation or lessons learned or things like that. And it's been hugely helpful to have a place to do that. From a design perspective, one other thing I would say that we did that...and you can imagine the conversation. I have a UX usability focus. I'm out selling the product. So for me, it's how does it demo? How does it show? What's that first experience like? And so for me having icons and colors in the UX, in the experience is really important. Because there's a lot of semantic meaning that people get just looking down a list of icons and seeing that they are different colors and different shapes. But from the CTO's perspective, that's window dressing. Who cares? It doesn't have functional purpose. And we're both right. There's a lot of times when to me, both people can be right. So we added that as a metafield into all of our objects. And so we have the functional part of the definition of the API. And then we have these metaobjects that you can add in or meta definitions that you can add in behind the scenes to drive icons and colors. But sometimes UX rendering hints and things like that that from an API perspective, you're like, I don't care, not really an API thing. But from a do I show...this is sensitive information. Do I turn it into a password field? Or should this have a clipboard so I can clipboard icon it, or should I render it in this type of viewer or a plain text viewer? And all that stuff we have a place for. CHAD: And so it's actually being delivered by the API that's saying that. ROB: Correct. CHAD: That's cool. ROB: It's been very helpful. You can imagine the type of stuff we have, and it's easy to influence UX behaviors without asking for UI change. CHAD: Now, are these GraphQL APIs? ROB: No. We looked at doing that. That's probably a whole nother...I might get our CTO on the line for that. CHAD: [laughs] It's a whole nother episode for that. ROB: But we could do that. But we made some decisions that it wasn't going to provide a lot of lift for us in navigation at the moment. It's funny, there's stuff that we think is a really cool idea, but we've learned not to jump on them without having really specific customer use cases or validations. CHAD: Well, like you said, you've got to say no. You've got to make decisions about what is important, and what isn't important now, and what you'll get to later, and that requires discipline. ROB: This may be a way to bring it full circle. If you go back to the stories of every customer having a unique data center, there's this heterogeneity and multi-vendor pieces that are really important. The unicycle we have to ride for this is we want our customers to have standard operating processes, standard infrastructure pipelines for this and use those and follow that process. Because we know if they do, then they'll keep improving as we improve the pipelines. And they're all unique. 
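(The icon-and-color meta fields Rob mentioned a moment ago look roughly like this. The keys are made up for illustration, not RackN's actual object schema: functional fields the API cares about, plus a Meta map of rendering hints that a UI may use and the backend is free to ignore.)

```go
package main

import "fmt"

// Object sketches the split between functional fields and UI hints.
type Object struct {
	Name   string            `json:"Name"`
	Secure bool              `json:"Secure"`
	Meta   map[string]string `json:"Meta"`
}

// render shows how a client can use the hints without the API depending on them.
func render(o Object) {
	icon := o.Meta["icon"]
	color := o.Meta["color"]
	if o.Meta["render"] == "password" {
		fmt.Printf("[%s/%s] %s: ********\n", icon, color, o.Name)
		return
	}
	fmt.Printf("[%s/%s] %s\n", icon, color, o.Name)
}

func main() {
	render(Object{
		Name:   "aws-credential",
		Secure: true,
		Meta:   map[string]string{"icon": "key", "color": "red", "render": "password"},
	})
}
```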
So there has to be a way in those infrastructure pipelines to do extensions that allow somebody to say, "I need to make this call here in the middle of this pipeline." And we have ways to do that address those needs. The challenge becomes providing enough opinionated like, this is how you should do things. And it's okay if you have to extend it or change it a little bit or tweak it without it just becoming an open-ended tool where people show up and they're like, "Oh, yeah, I get how to build something." And we have people do this, but they run out of gas in the long journey. They end up writing bespoke workflows. They write their own pipelines; they do their own integrations. And for them, it's very hard to support them. It's very hard to upgrade them. It's very hard for them to survive the reorg, your nine-month reorg windows. And so yeah, there's a balance between go do whatever you want, which you have to enable and do it our way because these processes are going to let your teams collaborate, let you reuse software. And we've actually over time been erring more and more on the side of you really need to do it the way we want you to do; reinforce the infrastructure as code processes. And this is the key, right? I mean, you're coming from a development mindset. You want your tooling to reinforce good behavior, CICD, infrastructure as code, all these things. You need those to be easier to do [laughs] than writing it yourself. And over time, we've been progressing more and more towards the let's make it easier to do it within the opinionated way that we have and less easy to do it within the Wild West pattern. CHAD: Cool. Well, I think with that, we'll start to wrap up. So if people want to find out more, where are some places that they could do that or get in touch with you? ROB: The simplest thing is of course rackn.com is the website. We encourage people to just, if this is interesting, download and try the software. If they have a cloud account, it's super easy to play with it, all things RackN through that. I am very active on Twitter under the handle @zehicle Z-E-H-I-C-L-E. And I'm happy to have conversations around these topics and data center and operations and even the future of cloud and edge computing. So please look me up. I'm excited to have conversations like that. CHAD: Awesome. And you can subscribe to the show and find notes and transcripts for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you can find me on Twitter @cpytel. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening and see you next time. Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

925 SPORTS
Fantasy Golf - Houston Open - Core Plays

925 SPORTS

Play Episode Listen Later Nov 8, 2021 16:51


Recapping the Mayakoba Championship, then getting into the tournament preview for the Houston Open. Lastly, we touch on the core plays this week.

TFIR: Open Source & Emerging Technologies
Isn't Open Core Better Than Proprietary Software? | Dirk Hohndel, VMware

TFIR: Open Source & Emerging Technologies

Play Episode Listen Later Aug 25, 2021 35:49


In this episode of Dirk & Swap: Conversations on Open Source, we talk about Open Core. What is it? How different is it from Open Source or Closed Source software? What are the pros and cons of Open Core? Is it better than proprietary software, since at least something is open there? How do you do Open Core the right way (if there is one)? We tackle many such complicated questions in this show. I hope you will enjoy it. We have also published a transcript of the recording, so you can read it if you want to!

Screaming in the Cloud
Open Core, Real-Time Observability Born in the Cloud with Martin Mao

Screaming in the Cloud

Play Episode Listen Later Jun 22, 2021 41:41


About Martin
Martin Mao is the co-founder and CEO of Chronosphere. He was previously at Uber, where he led the development and SRE teams that created and operated M3. Prior to that, he was a technical lead on the EC2 team at AWS and has also worked for Microsoft and Google. He and his family are based in our Seattle hub and he enjoys playing soccer and eating meat pies in his spare time.
Links:
Chronosphere: https://chronosphere.io/
Email: contact@chronosphere.io
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, "Wow, that is an amazing idea. I love it." That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks. Corey: If your mean time to WTF for a security alert is more than a minute, it's time to look at Lacework. Lacework will help you get your security act together for everything from compliance service configurations to container app relationships, all without the need for PhDs in AWS to write the rules. If you're building a secure business on AWS with compliance requirements, you don't really have time to choose between antivirus or firewall companies to help you secure your stack. That's why Lacework is built from the ground up for the Cloud: low effort, high visibility and detection. To learn more, visit lacework.com. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I've often talked about observability, or as I tend to think of it when people aren't listening, hipster monitoring. Today, we have a promoted episode from a company called Chronosphere, and I'm joined today by Martin Mao, their CEO and co-founder.
Martin, thank you for coming on the show and suffering my slings and arrows.Martin: Thanks for having me on the show, Corey, and looking forward to our conversation today.Corey: So, before we dive into what you're doing now, I'm always a big sucker for origin stories. Historically, you worked at Microsoft and Google, but then you really sort of entered my sphere of things that I find myself having to care about when I'm lying awake at night and the power goes out by working on the EC2 team over at AWS. Tell me a little bit about that. You've hit the big three cloud providers at this point. What was that like?Martin: Yeah, it was an amazing experience, I was a technical lead on one of the EC2 teams, and I think when an opportunity like that comes up on such a core foundational project for the cloud, you take it. So, it was an amazing opportunity to be a part of leading that team at a fairly early stage of AWS and also helping them create a brand new service from scratch, which was AWS Systems Manager, which was targeted at fleet-wide management of EC2 instances, so—Corey: I'm a tremendous fan of Systems Manager, but I'm still looking for the person who named Systems Manager Session Manager because, at this point, I'm about to put a bounty out on them. Wonderful service; terrible name.Martin: That was not me. So, yes. But yeah, no, it was a great experience, for sure, and I think just seeing how AWS operated from the inside was an amazing learning experience for me. And being able to create foundational pieces for the cloud was also an amazing experience. So, only good things to say about my time at AWS.Corey: And then after that, you left and you went to Uber where you led development and SRE teams that created and operated something called M3. Alternately, I'm misreading your bio, and you bought an M3 from BMW and went to drive for Uber. Which is it?Martin: I wish it was the second one, but unfortunately, it is the first one. So yes, I did leave AWS and joined Uber in 2015 to lead a core part of their monitoring and eventually larger observability team. And that team did go on to build open-source projects such as M3—which perhaps we should have thought about the name and the conflict with the car when we named it at the time—and other projects such as Jaeger for distributed tracing as well, and a logging backend system, too. So, yeah, definitely spent many years there building out their observability stack.Corey: We're going to tie a theme together here. You were at Microsoft, you were at Google, you were at AWS, you were at Uber, and you look at all of this and decide, “All right. My entire career has been spent in large companies doing massive globally scaled things. I'm going to go build a small startup.” What made you decide that, all right, this is something I'm going to pursue?Martin: So, definitely never part of the plan. As you mentioned, a lot of big tech companies, and I think I always got a lot of joy building large distributed systems, handling lots of load, and solving problems at a really grand scale. And I think the reason for doing a startup was really the situation that we were in. So, at Uber as I mentioned, myself and my co-founder led the core part of the observability team there, and we were lucky to happen to solve the problem, not just for Uber but for the broader community, especially the community adopting cloud-native architecture. 
And it just so happened that we were solving the problem of Uber in 2015, but the rest of the industry has similar problems today.So, it was almost the perfect opportunity to solve this now for a broader range of companies out there. And we already had a lot of the core technology built-in open-source as well. So, it was more of an opportunity rather than a long-term plan or anything of that sort, Corey.Corey: So, before we dive into the intricacies of what you've built, I always like to ask people this question because it turns out that the only thing that everyone agrees on is that everyone else is wrong. What is the dividing line, if any, between monitoring and observability?Martin: That's a great question, and I don't know if there's an easy answer.Corey: I mean, my cynical approach is that, “Well, if you call it monitoring, you don't get to bring in SRE-style salaries. Call it observability and no one knows what the hell we're talking about, so sure, it's a blank check at that point.” It's cynical, and probably not entirely correct. So, I'm curious to get your take on it.Martin: Yeah, for sure. So, you know, there's definitely a lot of overlap there, and there's not really two separate things. In my mind at least, monitoring, which has been around for a very long time, has always been around notification and having visibility into your systems. And then as the system's got more complex over time, being able to understand that and not just have visibility into it but understand it a little bit more required, perhaps, additional new data types to go and solve those problems. And that's how, in my mind, monitoring sort of morphed into observability. So, perhaps one is a subset of the other, and they're not competing concepts there. But at least that's my opinion. I'm sure there are plenty out there that would, perhaps, disagree with that.Corey: On some level, it almost hits to the adage of, past a certain point of scale with distributed systems, it's never a question of is the app up or down, it's more a question of how down is it? At least that's how it was explained to me at one point, and it was someone who was incredibly convincing, so I smiled and nodded and never really thought to question it any deeper than that. But I look back at the large-scale environments I've been in, and yeah, things are always on fire, on some level, and ideally, there are ways to handle and mitigate that. Past a certain point, the approach of small-scale systems stops working at large scale. I mean, I see that over in the costing world where people will put tools up on GitHub of, “Hey, I ran this script, and it works super well on my 10 instances.”And then you try and run the thing on 10,000 instances, and the thing melts into the floor, hits rate limits left and right because people don't think in terms of those scales. So, it seems like you're sort of going from the opposite end. Well, this is how we know things work at large scale; let's go ahead and build that out as an initially smaller team. Because I'm going to assume, not knowing much about Chronosphere yet, that it's the sort of thing that will help a company before they get to the hyperscaler stage.Martin: A hundred percent, and you're spot on there, Corey. And it's not even just a company going from small-stage, small-scale simple systems to more complicated ones, actually, if you think about this shift in the cloud right now, it's really going from cloud to cloud-native. 
So, going from VMs to container on the infrastructure tier, and going from monoliths to microservices. So, it's not even the growth of the company, necessarily, or the growth of the load that the system has to handle, but this shift to containers and microservices heavily accelerates the growth of the amount of data that gets produced, and that is causing a lot of these problems.Corey: So, Uber was famous for disrupting, effectively, the taxi market. What made you folks decide, “I know. We're going to reinvent observability slash monitoring while we're at it, too.” What was it about existing approaches that fell down and, I guess, necessitated you folks to build your own?Martin: Yeah, great question, Corey. And actually, it goes to the first part; we were disrupting the taxi industry, and I think the ability for Uber to iterate extremely fast and respond as a business to changing market conditions was key to that disruption. So, monitoring and observability was a key part of that because you can imagine it was providing all of the real-time visibility to not only what was happening in our infrastructure and applications, but the business as well. So, it really came out of a necessity more than anything else. We found that in order to be more competitive, we had to adopt what is probably today known as cloud-native architecture, adopt running on containers and microservices so that we can move faster, and along with that, we found that all of the existing monitoring tools we were using, weren't really built for this type of environment. And it was that that was the forcing function for us to create our own technologies that were really purpose-built for this modern type of environment that gave us the visibility we needed to, to be competitive as a company and a business.Corey: So, talk to me a little bit more about what observability is. I hear people talking about it in terms of having three pillars; I hear people talking about it, to be frank, in a bunch of ways so that they're trying to, I guess, appropriate the term to cover what they already are doing or selling because changing vocabulary is easier than changing an entire product philosophy. What is it?Martin: Yeah, we actually had a very similar view on observability, and originally we thought that it is a combination of metrics, logs, and traces, and that's a very common view. You have the three pillars, it's almost like three checkboxes; you tick them off, and you have, quote-unquote, “Observability.” And that's actually how we looked at the problem at Uber, and we built solutions for each one of those and we checked all three boxes. What we've come to realize since then is perhaps that was not the best way to look at it because we had all three, but what we realized is that actually just having all three doesn't really help you with the ultimate goal of what you want from this platform, and having more of each of the types of data didn't really help us with that, either. So, taking a step back from there and when we really looked at it, the lesson that we learned in our view on observability is really more from an end-user perspective, rather than a data type or data input perspective.And really, from an end-user perspective, if you think about why you want to use your monitoring tool or your observability tool, you really want to be notified of issues and remediate them as quickly as possible. And to do that, it really just comes down to answering three questions. “Can I get notified when something is wrong? Yes or no? 
Do I even know something is wrong?”The second question is, “Can I triage it quickly to know what the impact is? Do I know if it's impacting all of my customers or just a subset of them, and how bad is the issue? Can I go back to sleep if I'm being paged at two o'clock in the morning?”And the third one is, “Can I figure out the underlying root cause to the problem and go and actually fix it?” So, this is how we think about the problem now, is from the end-user perspective. And it's not that you don't need metrics, logs, or distributed traces to solve the problem, but we are now orienting our solution around solving the problem for the end-user, as opposed to just orienting our solution around the three data types, per se.Corey: I'm going to self-admit to a fun billing experience I had once with a different monitoring vendor whom I will not name because it turns out, you can tell stories, you can name names, but doing both gets you in trouble. It was a more traditional approach in a simpler time, and they wound up sending me a message saying, “Oh, we're hitting rate limits on CloudWatch. Go ahead and open a ticket asking for them to raise it.” And in a rare display of foresight, AWS respond to my ticket with a, “We can do this, but understand at this level of concurrency, it will cost something like $90,000 a month on increased charges, with that frequency, for that many metrics.” And that was roughly twice what our AWS bill was in those days, and, “Oh.” So, I'm curious as to how you can offer predictable pricing when you can have things that emit so much data so quickly. I believe you when you say you can do it; I'm just trying to understand the philosophy of how that works.Martin: As I said earlier, we started to approach this by trying to solve it in a very engineering fashion where we just wanted to create more efficient backend technology so that it would be cheaper for the increased amount of data. What we realized over time is that no matter how much cheaper we make it, the amount of data being produced, especially from monitoring and observability, kept increasing, and not even in a linear fashion but in an exponential fashion. And because of that, it really switched the problem not to how efficiently can we store this, it really changed our focus of the problem to how our users using this data, and do they even understand the data that's being produced? So, in addition to the couple of properties I mentioned earlier, around cost accounting and rate-limiting—those are definitely required—the other things we try to make available for our end-users is introspection tools such that they understand the type of data that's being produced. It's actually very easy in the monitoring and observability world to write a single line of code that actually produces a lot of data, and most developers don't understand that that single line of code produces so much data.So, our approach to this is to provide a tool so that developers can introspect and understand what is produced on the backend side, not what is being inputted from their code, and then not only have an understanding of that but also dynamic ways to deal with it. So that again, when they hit the rate limit, they don't just have to monitor it less, they understand that, “Oh, I inserted this particular label and now I have 20 times the amount of data that I needed before. 
Do I really need that particular label in there? And if not, perhaps dropping it dynamically on the server-side is a much better way of dealing with that problem than having to roll back your code and change your metric instrumentation.” So, for us, the way to deal with it is not to just make the backend even more efficient, but really to have end-users understand the data that they're producing, and make decisions on which parts of it are really useful and which parts of it they perhaps don't want, or perhaps want to retain for shorter periods of time, for example, and then allow them to actually implement those changes on that data on the backend. And that is really how the end-users control the bills and the cost themselves. Corey: So, there are a number of different companies in the observability space that have different approaches to what they solve for. In some cases, to be very honest, it seems like, well, I have 15 different observability and monitoring tools. Which ones do you replace? And the answer is, "Oh, we're number 16." And it's easy to be cynical and down on that entire approach, but then you start digging into it and they're actually right. I didn't expect that to be the case. What was your perspective that made you look around the, let's be honest, fairly crowded landscape of observability companies' tools that gave insight into the health status and well-being of various applications in different ways, and say, "You know, no one's quite gotten this right, yet. I have a better idea." Martin: Yeah, you're completely correct, and perhaps the previous environments that everybody was operating in, there were a lot of different tools for different purposes. A company would purchase an infrastructure monitoring tool, or perhaps even a network monitoring tool, and then they would have, perhaps, an APM solution for the applications, and then perhaps BI tools for the business. So, there was always historically a collection of different tools to go and solve this problem. And I think, again, what has really happened with this shift to cloud-native recently is that the need for a lot of this data to be in a single tool has become more important than ever. So, you think about your microservices running on a single container today: if a single container dies in isolation, without knowing, perhaps, which microservice was running on it, that doesn't mean very much, and just having that visibility is not going to be enough, just like if you don't know which business use case that microservice was serving, that's not going to be very useful for you, either. So, with cloud-native architecture, there is more of a need to have all of this data and visibility in a single tool, which hasn't historically happened. And also, none of the existing tools today—so if you think about both the existing APM solutions out there and the existing hosted solutions that exist in the world today, none of them were really built for a cloud-native environment because you can think about even the timing that these companies were created at, you know, back in the early 2010s, Kubernetes and containers weren't really a thing. So, a lot of these tools weren't really built for the modern architecture that we see most companies shifting towards. So, the opportunity was really to build something for where we think the industry and everyone's technology stack was going to be as opposed to where the technology stack has been in the past.
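(The server-side label dropping Martin describes is roughly this idea. This is a simplified sketch, not Chronosphere's actual implementation: strip a high-cardinality label from incoming series and merge whatever becomes identical after the drop.)

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Series is one incoming time series identified by its label set.
type Series struct {
	Labels map[string]string
	Value  float64
}

// dropLabels removes high-cardinality labels server-side (Martin's example:
// a label someone added that multiplied their data 20x). Series that become
// identical after dropping are merged, here by summing their values.
func dropLabels(in []Series, drop ...string) map[string]float64 {
	dropped := map[string]bool{}
	for _, d := range drop {
		dropped[d] = true
	}
	out := map[string]float64{}
	for _, s := range in {
		keys := make([]string, 0, len(s.Labels))
		for k := range s.Labels {
			if !dropped[k] {
				keys = append(keys, k)
			}
		}
		sort.Strings(keys)
		parts := make([]string, 0, len(keys))
		for _, k := range keys {
			parts = append(parts, k+"="+s.Labels[k])
		}
		out[strings.Join(parts, ",")] += s.Value
	}
	return out
}

func main() {
	in := []Series{
		{Labels: map[string]string{"service": "api", "request_id": "a1"}, Value: 1},
		{Labels: map[string]string{"service": "api", "request_id": "b2"}, Value: 1},
	}
	// Dropping request_id collapses two series into one.
	fmt.Println(dropLabels(in, "request_id"))
}
```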
And that was really the opportunity there, and it just so happened that we had built a lot of these solutions for a similar type environment for Uber many years before. So, leveraging a lot of our lessons learned there put us in a good spot to build a new solution that we believe is fairly different from everything else that exists today in the market, and it's going to be a good fit for companies moving forward.Corey: So, on your website, one of the things that you, I assume, put up there just to pick a fight—because if there's one thing these people love, it's fighting—is a use case is outgrowing Prometheus. The entire story behind Prometheus is, “Oh, it scales forever. It's what the hyperscalers would use. This came out of the way that Google does things.” And everyone talks about Google as if it's this mythical Valhalla place where everything is amazing and nothing ever goes wrong. I've seen the conference talks. And that's great. What does outgrowing Prometheus look like?Martin: Yeah, that's a great question, Corey. So, if you look at Prometheus—and it is the graduated and the recommended monitoring tool for cloud-native environments—if you look at it and the way it scales, actually, it's a single binary solution, which is great because it's really easy to get started. You deploy a single instance, and you have ingestion, storage, and visibility, and dashboarding, and alerting, all packaged together into one solution, and that's definitely great. And it can scale by itself to a certain point and is definitely the recommended starting point, but as you really start to grow your business, increase your cluster sizes, increase the number of applications you have, actually isn't a great fit for horizontal scale. So, by default, there isn't really a high availability and horizontal scale built into Prometheus by default, and that's why other projects in the CNCF, such as Cortex and Thanos were created to solve some of these problems.So, we looked at the problem in a similar fashion, and when we created M3, the open-source metrics platform that came out of Uber, it was also approaching it from this different perspective where we built it to be horizontally scalable, and highly reliable from the beginning, but yet, we don't really want it to be a, let's say, competing project with Prometheus. So, it is actually something that works in tandem with Prometheus, in the sense that it can ingest Prometheus metrics and you can issue Prometheus query language queries against it, and it will fulfill those. But it is really built for a more scalable environment. And I would say that once a company starts to grow and they run into some of these pain points and these pain points are surrounding how reliable a Prometheus instance is, how you can scale it up beyond just giving it more resources on the VM that it runs on, vertical scale runs out at a certain point. Those are some of the pain points that a lot of companies do run into and need to solve eventually. And there are various solutions out there, both in open-source and in the commercial world, that are designed to solve those pain points. M3 being one of the open-source ones and, of course, Chronosphere being one of the commercial ones.Corey: This episode is sponsored in part by Salesforce. Salesforce invites you to “Salesforce and AWS: Whats Ahead for Architects, Admins and Developers” on June 24th at 10AM, Pacific Time. 
It's a virtual event where you'll get a first look at the latest innovations of the Salesforce and AWS partnership, and have an opportunity to have your questions answered. Plus you'll get to enjoy an exclusive performance from Grammy Award-winning artist The Roots! I think they're talking about a band, not people with super user access to a system. Registration is free at salesforce.com/whatsahead. Corey: Now, you've also gone ahead and more or less dangled raw meat in front of a tiger in some respects here because one of the things that you wind up saying on your site of why people would go with Chronosphere is, “Ah, this doesn't allow for bill spike overages as far as what the Chronosphere bill is.” And that's awesome. I love predictable pricing. It's sort of the antithesis of cloud bills. But there is the counterargument, too, which is with many approaches to monitoring, I don't actually care what my monitoring vendor is going to charge me because they wind up costing me five times more, just in terms of CloudWatch charges. How does your billing work? And how do you avoid causing problems for me on the AWS side, or other cloud provider? I mean, again, GCP and Azure are not immune from this. Martin: So, if you look at the built-in solutions by the cloud providers, a lot of those metrics and monitoring you get from those like CloudWatch or Stackdriver, a lot of it you get included for free with your AWS bill already. It's only if you want additional data and additional retention that you choose to pay more there. So, I think a lot of companies do use those solutions for the default set of monitoring that they want, especially for the AWS services, but generally, a lot of companies have custom monitoring requirements outside of that in the application tier, or even more detailed monitoring in the infrastructure that is required, especially if you think about Kubernetes. Corey: Oh, yeah. And then I see people using CloudWatch as basically a monitoring, or metric, or log router, which at its price point, don't do that. [laugh]. It doesn't end well for anyone involved. Martin: A hundred percent. So, our solution and our approach is a little bit different. So, it doesn't actually go through CloudWatch or any of these other inbuilt cloud-hosted solutions as a router because, to your point, there's a lot of cost there as well. It actually goes and collects the data from the infrastructure tier or the applications. And what we have found is that not only does the bill for monitoring climb exponentially—and not just as you grow; especially as you shift towards cloud-native architecture—our very first take on solving that problem was to make the backend a lot more efficient than before so it just is cheaper overall. And we approached it that way at Uber, and we had great results there. So, originally, before M3, 8% of Uber's infrastructure bill was spent on monitoring all the infrastructure and the application. And by the time we were done with M3, the cost was a little over 1%. So, the very first solution was just to make it more efficient. And that worked for a while, but what we saw is that over time, this grew again. And there wasn't any more efficiency we could crank out of the backend storage system. There's only so much optimization you can do to the compression algorithms in the backend and how much you can get there.
So, what we realized the problem shifted towards was not, can we store this data more efficiently because we're already reaching limitations there, and what we noticed was more towards getting the users of this data—so individual developers themselves—to start to understand what data is being produced, how they're using it, whether it's even useful, and then taking control from that perspective. And this is not a problem isolated to the SRE team or the observability team anymore; if you think about modern DevOps practices, every developer needs to take control of monitoring their own applications. So, this responsibility is really in the hands of the developers.And the way we approached this from a Chronosphere perspective is really in four steps. The first one is that we have cost accounting so that every developer, and every team, and the central observability team know how much data is being produced. Because it's actually a hard thing to measure, especially in the monitoring world. It's—Corey: Oh, yeah. Even AWS bills get this wrong. Like if you're sending data between one availability zone to another in the same region, it charges a penny to leave an AZ and a penny to enter an AZ in that scenario. And the way that they reflect this on the bill is they double it. So, if you're sending one gigabyte across AZ link in a month, you'll see two gigabytes on the bill and that's how it's reflected. And that is just a glimpse of the monstrosity that is the AWS billing system. But yeah, exposing that to folks so they can understand how much data their application is spitting off? Forget it. That never happens.Martin: Right. Right. And it's not even exposing it to the company as a whole, it's to each use case, to each developer so they know how much data they are producing themselves. They know how much of the bill is being consumed. And then the second step in that is to put up bumper lanes to that so that once you hit the limit, you don't just get a surprise bill at the end of the month.When each developer hits that limit, they rate-limit themselves and they only impact their own data; there is no impact to the other developers or to the other teams, or to the rest of the company. So, we found that those two were necessary initial steps, and then there were additional steps beyond that, to help deal with this problem.Corey: So, in order for this to work within a multi-day lag, in some cases, it's a near certainty that you're looking at what is happening and the expense that is being incurred in real-time, not waiting for it to pass its way through the AWS billing system and then do some tag attribution back.Martin: A hundred percent. It's in real-time for the stream of data. And as I mentioned earlier, for the monitoring data we are collecting, it goes straight from the customer environment to our backend so we're not waiting for it to be routed through the cloud providers because, rightly so, there is a multi-day or multi-hour delay there. So, as the data is coming straight to our backend, we are actively in real-time measuring that and cost accounting it to each individual team. And in real-time, if the usage goes above what is allocated, will actually limit that particular team or that particular developer, and prevent them by default from using more. And with that mechanism, you can imagine that's how the bill is controlled and controlled in real-time.Corey: So, help me understand, on some level; is your architecture then agent-based? 
Is it a library that gets included in the application code itself? All of the above and more? Something else entirely? Or is this just such a ridiculous question that you can't believe that no one has ever asked it before?Martin: No, it's a great question, Corey, and would love to give some more insight there. So, it is an agent that runs in the customer environment because it does need to be something there that goes and collects all the data we're interested in to send it to the backend. This agent is unlike a lot of APM agents out there where it does, sort of, introspection, things like that. We really believe in the power of the open-source community, and in particular, open-source standards like the Prometheus format for metrics. So, what this agent does is it actually goes and discovers Prometheus endpoints exposed by the infrastructure and applications, and scrapes those endpoints to collect the monitoring data to send to the backend.And that is the only piece of software that runs in our customer environments. And then from that point on, all of the data is in our backend, and that's where we go and process it and get visibility into the end-users as well as store it and make it available for alerting and dashboarding purposes as well.Corey: So, when did you found Chronosphere? I know that you folks recently raised a Series B—congratulations on that, by the way; that generally means, at least if I understand the VC world correctly, that you've established product-market fit and now we're talking about let's scale this thing. My experience in startup land was, “Oh, we've raised a Series B, that means it's probably time to bring in the first DevOps hire.” And that was invariably me, and I wound up screaming and freaking out for three months, and then things were better. So, that was my exposure to Series B.But it seems like, given what you do, you probably had a few SRE folks kicking around, even on the product team because everything you're saying so far absolutely resonates with the experiences someone who has run these large-scale things in production. No big surprise there. Is that where you are? I mean, how long have you been around?Martin: Yeah, so we've been around for a couple of years thus far—so still a relatively new company, for sure. A lot of the core team were the team that both built the underlying technology and also ran it in production the many years at Uber, and that team is now here at Chronosphere. So, you can imagine from the very beginning, we had DevOps and SREs running this hosted platform for us. And it's the folks that actually built the technology and ran it for years running it again, outside of Uber now. And then to your first question, yes, we did establish fairly early on, and I think that is also because we could leverage a lot of the technology that we had built at Uber, and it sort of gave us a boost to have a product ready for the market much faster.And what we're seeing in the industry right now is the adoption of cloud-native is so fast that it's sort of accelerating a need of a new monitoring solution that historical solutions, perhaps, cannot handle a lot of the use cases there. It's a new architecture, it's a new technology stack, and we have the solution purpose-built for that particular stack. 
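(The agent Martin describes scrapes standard Prometheus endpoints exposed by applications and infrastructure. What such an endpoint looks like from the application side is plain Prometheus instrumentation, nothing Chronosphere-specific; for example, with the official Go client library:)

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal is a counter exposed in the standard Prometheus text format.
// Any collector that speaks that format (the Prometheus server itself, or an
// agent like the one described here) can scrape it from /metrics.
var requestsTotal = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "http_requests_total",
		Help: "Total HTTP requests handled.",
	},
	[]string{"path"},
)

func main() {
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.WithLabelValues("/hello").Inc()
		w.Write([]byte("hello\n"))
	})
	// Expose the /metrics endpoint that a scraper discovers and polls.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```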
So, we are seeing fairly fast acceleration and adoption of our product right now.Corey: One problem that an awful lot of monitoring slash observability companies have gotten into in the last few years—at least it feels this way, and maybe I'm wildly incorrect—is that it seems that the target market is the Ubers of the world, the hyperscalers where once you're at that scale, then you need a tool like this, but if you're just building a standard three-tier web app, oh, you're nowhere near that level of scale. And the problem with go-to-market in those stories inherently seems that by the time you are a hyperscalers, you have already built a somewhat significant observability apparatus, otherwise you would not have survived or stayed up long enough to become a hyperscalers. How do you find that the on-ramp looks? I mean, your website does talk about, “When you outgrow Prometheus.” Is there a certain point of scale that customers should be at before they start looking at things like Chronosphere?Martin: I think if you think about the companies that are born in the cloud today and how quickly they are running and they are iterating their technology stack, monitoring is so critical to that. It's the real-time visibility of these changes that are going out multiple times a day is critical to the success and growth of a lot of new companies. And because of how critical that piece is, we're finding that you don't have to be a giant hyperscalers like Uber to need technology like this. And as you rightly pointed out, you need technology like this as you scale up. And what we're finding is that while a lot of large tech companies can invest a lot of resources into hiring these teams and building out custom software themselves, generally, it's not a great investment on their behalf because those are not companies that are selling monitoring technology as their core business.So generally, what we find is that it is better for companies to perhaps outsource or purchase, or at least use open-source solutions to solve some of these problems rather than custom-build in-house. And we're finding that earlier and earlier on in a company's lifecycle, they're needing technology like this.Corey: Part of the problem I always ran into was—again, I come from the old world of grumpy Unix sysadmins—for me, using Nagios was my approach to monitoring. And that's great when you have a persistent stateful, single node or a couple of single nodes. And then you outgrow it because well, now everything's ephemeral and by the time you realize that there's an outage or an issue with a container, the container hasn't existed for 20 minutes. And you better have good telemetry into what's going on and how your application behaves, especially at scale because at that point, edge cases, one-in-a-million events happen multiple times a second, depending upon scale, and that's a different way of thinking. I've been somewhat fortunate in that, in my experience at least, I've not usually had to go through those transformative leaps.I've worked with Prometheus, I've worked with Nagios, but never in the same shop. That's the joy of being a consultant. You go into one environment, you see what they're doing and you take notes on what works and what doesn't, you move on to the next one. And it's clear that there's a definite defined benefit to approaching observability in a more modern way. But I despair the idea of trying to go from one to the other. 
And maybe that just speaks to a lack of vision for me.Martin: No, I don't think that's the case at all, Corey. I think we are seeing a lot of companies do this transition. I don't think a lot of companies go and ditch everything that they've done. And things that they put years of investment into, there's definitely a gradual migration process here. And what we're seeing is that a lot of the newer projects, newer environments, newer efforts that have been kicked off are being monitored and observed using modern technology like Prometheus.And then there's also a lot of legacy systems which are still going to be around and legacy processes which are still going to be around for a very long time. It's actually something we had to deal with that at Uber as well; we were actually using Nagios and a StatsD Graphite stack for a very long time before switching over to a more modern tag-like system like Prometheus. So—Corey: Oh, modern Nagios. What was it, uh… that's right, Icinga. That's what it was.Martin: Yes, yes. It was actually the system that we were using Uber. And I think for us, it's not just about ditching all of that investment; it's really about supporting this migration as well. And this is why both in the open-source technology M3, we actually support both the more legacy data types, like StatsD and the Graphite query language, as well as the more modern types like Prometheus and PromQL. And having support for both allows for a migration and a transition.And not even a complete transition; I'm sure there will always be StatsD, Graphite data in a lot of these companies because they're just legacy applications that nobody owns or touches anymore, and they're just going to be lying around for a long time. So, it's actually something that we proactively get ahead of and ensure that we can support both use cases even though we see a lot of companies and trending towards the modern technology solutions, for sure.Corey: The last point I want to raise has always been a personal, I guess, area of focus for me. I allude to it, sometimes; I've done a Twitter thread or two on it, but on your website, you say something that completely resonates with my entire philosophy, and to be blunt is why in many cases, I'm down on an awful lot of vendor tooling across a wide variety of disciplines. On the open-source page on your site, near the bottom, you say, and I quote, “We want our end-users to build transferable skills that are not vendor or product-specific.” And I don't think I've ever seen a vendor come out and say something like that. Where did that come from?Martin: Yeah. If you look at the core of the company, it is built on top of open-source technology. So, it is a very open core company here at Chronosphere, and we really believe in the power of the open-source community and in particular, perhaps not even individual projects, but industry standards and open standards. So, this is why we don't have a proprietary protocol, or proprietary agent, or proprietary query language in our product because we truly believe in allowing our end-users to build these transferable skills and industry-standard skills. And right now that is using Prometheus as the client library for monitoring and PromQL as the query language.And I think it's not just a transferable skill that you can bring with you across multiple companies, it is also the power of that broader community. So, you can imagine now that there is a lot more sharing of, “Hey, I am monitoring, for example, MongoDB. 
How should I best do that?” Those skills can be shared because the common language that they're all speaking, the queries that everybody is sharing with each other, the dashboards everybody is sharing with each other, are all, sort of, open-source standards now. And we really believe in the power that and we really do everything we can to promote that. And that is why in our product, there isn't any proprietary query language, or definitions of dashboarding, or [learning 00:35:39] or anything like that. So yeah, it is definitely just a core tenant of the company, I would say.Corey: It's really something that I think is admirable, I've known too many people who wind up, I guess, stuck in various environments where the thing that they work on is an internal application to the company, and nothing else like it exists anywhere else, so if they ever want to change jobs, they effectively have a black hole on their resume for a number of years. This speaks directly to the opposite. It seems like it's not built on a lock-in story; it's built around actually solving problems. And I'm a little ashamed to say how refreshing that is [laugh] just based upon what that says about our industry.Martin: Yeah, Corey. And I think what we're seeing is actually the power of these open-source standards, let's say. Prometheus is actually having effects on the broader industry, which I think is great for everybody. So, while a company like Chronosphere is supporting these from day one, you see how pervasive the Prometheus protocol and the query language are that actually all of these probably more traditional vendors providing proprietary protocols and proprietary query languages all actually have to have Prometheus—or not ‘have to have,' but we're seeing that more and more of them are having Prometheus compatibility as well. And I think that just speaks to the power of the industry, and it really benefits all of the end-users and the industry as a whole, as opposed to the vendors, which we are really happy to be supporters of.Corey: Thank you so much for taking the time to speak with me today. If people want to learn more about what you're up to, how you're thinking about these things, where can they find you? And I'm going to go out on a limb and assume you're also hiring.Martin: We're definitely hiring right now. And you can find us on our website at chronosphere.io or feel free to shoot me an email directly. My email is martin@chronosphere.io. Definitely massively hiring right now, and also, if you do have problems trying to monitor your cloud-native environment, please come check out our website and our product.Corey: And we will, of course, include links to that in the [show notes 00:37:41]. Thank you so much for taking the time to speak with me today. I really appreciate it.Martin: Thanks a lot for having me, Corey. I really enjoyed this.Corey: Martin Mao, CEO and co-founder of Chronosphere. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment speculating about how long it took to convince Martin not to name the company ‘Observability Manager Chronosphere Manager.'Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. 
The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
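
The exchange above rests on Prometheus client libraries and PromQL being industry standards rather than vendor-specific skills. As a minimal illustrative sketch (not taken from the episode; the metric name, labels, ports, and the localhost:9090 Prometheus address are placeholder assumptions), the Python below instruments a counter with the open-source prometheus_client library and then reads it back with a plain PromQL query against the standard /api/v1/query HTTP endpoint:

# Minimal sketch: standard Prometheus instrumentation plus a PromQL query over
# the standard HTTP API. Metric names, labels, ports, and the Prometheus
# address below are hypothetical placeholders, not details from the episode.
import time

import requests  # pip install requests
from prometheus_client import Counter, start_http_server  # pip install prometheus-client

# Vendor-neutral instrumentation: any Prometheus-compatible scraper can read this.
REQUESTS = Counter(
    "http_requests_total",
    "Total HTTP requests handled by this service",
    ["service", "status"],
)

def handle_request() -> None:
    # Pretend to serve a request and record it with standard labels.
    REQUESTS.labels(service="checkout", status="200").inc()

def main() -> None:
    start_http_server(8000)  # exposes /metrics for scraping; the port is arbitrary
    for _ in range(5):
        handle_request()
        time.sleep(1)

    # The read side is the same transferable skill: plain PromQL against the
    # standard query endpoint of whichever backend speaks it.
    resp = requests.get(
        "http://localhost:9090/api/v1/query",
        params={"query": 'rate(http_requests_total{service="checkout"}[5m])'},
        timeout=5,
    )
    print(resp.json())

if __name__ == "__main__":
    main()

Because both the exposition format and the query API are open standards, the same snippet should work unchanged against Prometheus itself or any Prometheus-compatible backend, which is the transferable-skills point Martin makes above; the speakers also note that M3 additionally accepts legacy StatsD and Graphite data during a migration.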

925 SPORTS
Waste Management Phoenix Open Core Plays

925 SPORTS

Play Episode Listen Later Feb 2, 2021 7:49


Giving you the top picks in PGA DFS this week for the Waste Management Phoenix Open played at TPC Scottsdale. Including course horses: Jon Rahm, Webb Simpson, and our boy John Huh.

925 SPORTS
PGA DFS - Houston Open Core Plays 2020

925 SPORTS

Play Episode Listen Later Nov 3, 2020 15:50


PGA DFS PREMIUM CONTENT: 925 Podcast
Follow us on Twitter: https://twitter.com/925_Sports
Follow us on Instagram: instagram.com/925sports
Follow us on Facebook: https://www.facebook.com/925Sports
925 Membership Overview: https://youtu.be/KAyDwjJGy3c
Visit the website: ninetofivesports.com
NFL DFS: https://www.ninetofivesports.com/nfl-dfs

The Appalachian Way
Keep Trails Open: CORE and Jeep Conservation

The Appalachian Way

Play Episode Listen Later Jul 26, 2020 59:47


Marcus Trusty is a third-generation Colorado native who spent his childhood in the mountains. As he grew older, he began to notice that something needed to be done to ensure the backcountry Jeep trails he loved were maintained and stayed accessible to the public. This is the CORE story and its mission to Keep Trails Open.

The Changelog
From open core to open source

The Changelog

Play Episode Listen Later Mar 2, 2020 70:09 Transcription Available


Frank Karlitschek joined us to talk about Nextcloud - a self-hosted, free and open source, community-driven productivity platform that’s a safe home for all your data. We talk about how Nextcloud was forked from ownCloud, successful ways to run community-driven open source projects, open core vs open source, aligned incentives, and the challenges Nextcloud is facing to increase adoption and grow.

Changelog Master Feed
From open core to open source (The Changelog #383)

Changelog Master Feed

Play Episode Listen Later Mar 2, 2020 70:09 Transcription Available


Frank Karlitschek joined us to talk about Nextcloud - a self-hosted, free and open source, community-driven productivity platform that’s a safe home for all your data. We talk about how Nextcloud was forked from ownCloud, successful ways to run community-driven open source projects, open core vs open source, aligned incentives, and the challenges Nextcloud is facing to increase adoption and grow.

Founder Real Talk
Aghi Marietti, Co-founder & CEO of Kong, on Scaling an Open Core Offering for the Enterprise

Founder Real Talk

Play Episode Listen Later Oct 24, 2019 55:01


Aghi Marietti is an inventor, technology entrepreneur and angel investor. As the CEO and Co-founder of Kong — the API company on a mission to intelligently broker information across all services — he drives the company’s vision, strategy and long-term growth. Prior to Kong, he was the CEO and Co-founder of Mashape, the largest API marketplace, which was acquired by RapidAPI in 2017. Before that, he founded MemboxX, the first European cloud service for storing documents and sensitive personal data. Augusto holds a B.S. in Economics from the Catholic University of Milan. He is the lead inventor on five U.S. patents and an angel investor in more than 10 startups. In this episode, we learn from Aghi how he and his Co-founder & CTO Marco Palladino decided to move to an open core model. Kong Inc. was born from the first API marketplace, previously known as Mashape. Today, Kong has 75k downloads for its open source API gateway and more than 40k community members. Aghi and Glenn discussed the changing face of the enterprise software market, and how selling developer tools has changed as developers have more control over the software they use, even within larger organizations. Aghi’s advice for founders: “We tend to overestimate what we can do in the short term and underestimate what we can do in the long term.” This episode was recorded live at Heavybit in San Francisco.


Linux Headlines
2019-09-20

Linux Headlines

Play Episode Listen Later Sep 20, 2019 2:50


The first Open Core Summit, an activist programmer takes aim at Chef, a French court disagrees with Valve's licensing model, and Lennart Poettering wants to rethink the Home directory.

Open Source Underdogs
Episode 30: Open Core Summit – The Conference for COSS with Joseph Jacks

Open Source Underdogs

Play Episode Listen Later Sep 6, 2019 16:23


Joseph Jacks is the Founder and Organizer of the Open Core Summit (OCS), the world’s first commercial open source software (COSS) event. Joseph is also the Founder and GP of OSS Capital, a venture fund that invests exclusively in early-stage Commercial Open-Source Software (COSS) companies. In this episode, Joseph discusses the exciting origins of the...

Reality 2.0
Episode 24: A Chat About Redis Labs

Reality 2.0

Play Episode Listen Later Aug 2, 2019 50:11


Doc Searls and Katherine Druckman talk to Yiftach Shoolman, CTO and Co-founder of Redis Labs, about Redis, Open Source licenses, company culture and more.
Download ogg format
Links mentioned:
Time for Net Giants to Pay Fairly for the Open Source on Which They Depend
Redis Labs and the "Common Clause"
Redis Labs Changing Its Licensing for Redis Modules Again...
Redis Labs’ Modules License Changes
Special Guest: Yiftach Shoolman.

Linux Action News
Linux Action News 100

Linux Action News

Play Episode Listen Later Apr 7, 2019 27:07


Chef goes 100% open source, and this recipe has an old twist, plus the real cost of abandoning the VMware lawsuit. A new way to run Android apps on Linux using Wayland, Sailfish and Mer merge, and more.

DiscoPosse Podcast
Ep 73 - Open Source, Closed Minds - A Discussion on Open Core and Open Source Monetization with Rob Hirschfeld (@zehicle) of @RackN

DiscoPosse Podcast

Play Episode Listen Later Apr 5, 2019 56:18


As more open source projects become commercially driven (rightly so), there is a challenge among open source community folks as to when it is right to use open core, open source, or the dreaded fork. Rob Hirschfeld joins us for a great discussion that started on the news of AWS launching Open Distro for Elasticsearch, and we take on some challenging topics around open source, monetization, and commercialization of open platforms and products. Rob refers to a great GitLab talk on open source business models, which is also a must-watch here: https://www.youtube.com/watch?v=G6ZupYzr_Zg

L8ist Sh9y Podcast
Redis Lab Licensing Change to Common Clause

L8ist Sh9y Podcast

Play Episode Listen Later Aug 29, 2018 21:14


VM Brasseur from the Open Source Initiative, along with Rob Hirschfeld and Stephen Spector, talks about the license announcement made by Redis Labs to add the Common Clause (https://redislabs.com/community/licenses/) to some of their software. Also discussed is the limited success of the Open Core business model.

The Cloudcast
The Cloudcast #358 - Exploring the Business Side of Open Source Software

The Cloudcast

Play Episode Listen Later Aug 15, 2018 34:16


Brian talks with Joseph Jacks (@asynchio, advisor to CNCF, creator of KubeCon, co-founder of Kismatic, entrepreneur) about different ways to view OSS business models, and how he's been tracking commercial OSS companies through his COSSCI Index.
Show Links:
Open Consensus - Data Driven Perspectives on Open Source Software
$100M+ Revenue - Commercial Open Source Software Company Index (COSSCI)
Bessemer Venture Partners Cloud Index
[PODCAST] @PodCTL - Containers | Kubernetes | OpenShift - RSS Feed, iTunes, Google Play, Stitcher, TuneIn and all your favorite podcast players
[A CLOUD GURU] Get The Cloudcast Alexa Skill
[A CLOUD GURU] A Cloud Guru Membership - Start your free trial. Unlimited access to the best cloud training and new series to keep you up-to-date on all things AWS.
[A CLOUD GURU] FREE access to AWS Certification Exam Prep Guide - At A Cloud Guru, the #1 question received from students is "I want to pass the AWS cert exam, so where do I start?" This course is your answer.
[FREE] eBook from O'Reilly
Show Notes:
Topic 1 - Welcome to the show. Give us some of your background and some of the things you're working on these days.
Topic 2 - You created a tracking of Commercial Open Source Software Company Information (COSSCI, "cozy index"). How did you go about collecting the information, some of which isn't public?
Topic 3 - Let's talk about "open core" and how you're thinking about the thickness of the core.
Topic 4 - To a certain extent, OSS velocity relies on VC funding to gain traction. VC funding typically wants an exit. Do you think there are enough successful OSS exits to keep seeing VC funding flow in, or do you think public cloud making OSS-as-a-service will slow that down?
Topic 5 - Looking out a few years, you obviously have some models in your mind about how commercialization of OSS could play out. What are some of the "truths" that you believe are not up for debate, and what are some areas you think might significantly change?
Feedback? Email: show at thecloudcast dot net. Twitter: @thecloudcastnet and @ServerlessCast

Free as in Freedom
Episode 0x03: i Don't Store

Free as in Freedom

Play Episode Listen Later Nov 23, 2010 45:04


Karen and Bradley discuss the debates regarding Apple's online store restrictions that make it impossible to distribute GPL'd software via Apple's store. Then, they question the usefulness of the term “Open Core”.
Note: Bradley's audio was too low compared to Karen's on this episode. We're still sorting out our recording issues, and apologize for this. This is completely Bradley's fault: don't blame Producer Dan. :)
Show Notes:
Segment 0 (00:34) Karen first mentioned Brett's statement on the VLC mailing list, although that is toward the end of the story that was covered last month. (05:30) Bradley mentioned that the story started with FSF's enforcement regarding Apple's distribution of GNU Go in Apple's application store. (05:54) Don't confuse GNU Go (the game) with Google Go (the programming language). Bradley pointed out that Google did assign some of its copyright on the language Go, for the GCC frontend for the Go language. (06:51) Bradley mentioned that the game Go has been around for thousands of years, although according to the Go Wikipedia entry, it's been around for approximately 2,500 years. (08:21) Bradley pointed out that the primary goal of GPL enforcement is to get compliance, not to get companies to cease distribution, but sometimes the companies prefer to cease distribution rather than complying with the license. (09:57) There was disagreement in the VLC community about the enforcement action (11:50). There's an original thread on the VLC mailing list that discussed this (12:35), and then Brett's response on that list. (13:25) GPLv2 requires in § 6 that you cannot impose terms that restrict the downstream more than GPL otherwise does. (15:40) FSF made a statement that linked this issue to the DRM issue, which caused some confusion. It's our view that what Apple is doing against GPL software is part of their initiative to put DRM (both for software and more traditional content) onto devices. (17:20) Bradley mentioned that Apple lawyers have a pathological hatred of GPL, which he believes comes directly down from Steve Jobs, who began his dislike of GPL when he tried, while at NeXT, to distribute a proprietary front-end for GCC for Objective-C. (RMS discussed the story briefly in his essay Copyleft: Pragmatic Idealism.) (23:45)
Segment 1 (27:40) Bradley has decided that the term “Open Core” is so confusing that it's now useless. The Gnus IMAP backend is being rewritten, and Joel Adamson mentioned that he's using Emacs development mainline and the new IMAP implementation is working well. (29:58) Alexandre Oliva started a project called Linux Libre, to remove proprietary software from Linux. (31:31) There is a file called WHENCE in Linux that is a long list of proprietary software included inside Linux. Fontana linked the WHENCE file on identi.ca (31:02) Alexandre made an announcement calling Linux an “Open Core” project. (32:56) Bradley mentioned that Alexandre appears to have been convinced that Open Core is a problematic term in this context (during this identica conversation). Alexandre seems to be favoring the term “Free Bait” now. (35:16) Karen mentioned Nina Paley's intellectual pooperty cartoon. (38:39) Bradley mentioned the softer side of Sears marketing campaign, which was used as a cruel joke by Cordelia in the pilot of Buffy the Vampire Slayer to make fun of Willow's clothes. Sears apparently dropped the campaign in 1999. (40:23) Join us on #faif on freenode and the !FaiFCast group on identi.ca (43:47) Send feedback and comments on the cast to . 
You can keep in touch with Free as in Freedom on our IRC channel, #faif on irc.freenode.net, and by following Conservancy on identi.ca and Twitter. Free as in Freedom is produced by Dan Lynch of danlynch.org. Theme music written and performed by Mike Tarantino with Charlie Paxson on drums. The content of this audcast, and the accompanying show notes and music are licensed under the Creative Commons Attribution-Share-Alike 4.0 license (CC BY-SA 4.0).
